Geek Stuff category archive
Behind That (AI) Curtain
At Psychology Today Blogs, Gleb Tsipursky offers some pointers for detecting AI-generated deception. He makes three main points:
- AI-generated misinformation blurs truth, making it hard to discern fact from fiction.
- People can unmask AI content by scrutinizing it for inconsistencies and a lack of human touch.
- AI content detection tools can spot and neutralize misinformation, protecting against its spread.
Follow the link for a detailed discussion of each one.
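As a toy illustration of the third point (and only a toy; this is not Tsipursky's method, and real detection tools are far more sophisticated), here is a minimal Python sketch of one crude signal some heuristics consider: how much sentence length varies. Machine-generated prose often runs at an unusually even pace, while human writing tends to be "burstier." Whether that signal means anything for a given text is an assumption, not a guarantee.

```python
import re
import statistics


def sentence_word_counts(text: str) -> list[int]:
    """Naively split text on sentence-ending punctuation and count words per sentence."""
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    return [len(s.split()) for s in sentences]


def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths: a crude 'burstiness' signal.

    A low value means evenly paced sentences, which is one (weak) hint of
    machine-generated text; it is not proof of anything by itself.
    """
    counts = sentence_word_counts(text)
    return statistics.stdev(counts) if len(counts) > 1 else 0.0


if __name__ == "__main__":
    sample = (
        "The committee met on Tuesday. It reviewed the budget carefully. "
        "It approved the proposal unanimously. It adjourned before noon."
    )
    # A low score only suggests even pacing; interpret it with skepticism.
    print(f"burstiness = {burstiness(sample):.2f}")
```

No single signal like this is reliable on its own; working detectors combine many, and even then they get it wrong often enough to warrant caution.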
Aside:
I will add one bit of advice:
Don’t believe stuff just because you see it on a computer screen.
Artificial? Yes. Intelligent? Not So Much.
Bruce Schneier asks whether “we really want to entrust this revolutionary technology (AI–ed.) solely to a small group of US tech companies?”
He goes on to remind us that
Follow the link for the rest of his thoughts.
Twits Own Twitter
Elon Musk apparently has the courage of his evictions.
Artificial? Yes. Intelligent? Not So Much.
At Above the Law, Ethan Beberness reports that OpenAI and ChatGPT just might be getting their day in court, as defendants.
No Place To Hide
At the Washington Monthly, Karina Montoya has a long and detailed article about how advertising strategy is changing once again. In the past decade, advertising moved to “social” media, with a disturbing side effect of eroding the business models of legitimate news organizations. Now, she argues, retailers are marketing personal information gathered through loyalty programs, credit card purchases, and the like to advertisers. The entire piece is worth a read, but this particular bit caught my eye:
Corporations, not the government, are your “surveillance state.” And we walk nekkid through its streets every day.
It’s All about the Algorithm
Said algorithm engages those eyeballs and sucks them right down into a vortex of vile.
“Social” media isn’t.
Artificial? Yes. Intelligent? Not So Much.
In Georgia, Mark Walters, who is apparently a local radio personality of some sort, has sued OpenAI for libel based on falsehoods propagated by ChatGPT, and Techdirt wonders whether the suit has a prayer. Here’s a bit from their article; follow the link for context.
And I’m not sure there are really good answers. First off, only one person actually saw this information, and there’s no indication that he actually believed any of it (indeed, it sounds like he was aware that it was hallucinating), which would push towards it not being defamation and even if it was, there was no harm at all.
Second, even if you could argue that the content was defamatory and created harm, is there actual malice by Open AI? First off, Watson is easily a public figure, so he’d need to show actual malice by OpenAI . . . .
Aside:
Whether Walters wins or loses, I doubt he’ll be the last to want a day in court with ChatGPT.
Full-Face and Profiled
The EFF reports on a court’s finding about farcical recognition. A snippet (emphasis added):
Follow the link for details.
Artificial? Yes. Intelligent? Not So Much.
In San Francisco, driverless cars are providing persuasive evidence that drivers are a valuable resource.
Afterthought:
Methinks one of the most striking characteristics of “Tech Bros”–and I’m referring specifically to that particular subset of the tech community–is their arrogance.
Artificial? Yes. Intelligent? Not So Much.
At Psychology Today Blogs, Peter Gärdenfors points out that (emphasis added)
Follow the link for his explanation of how he came to that conclusion and why he considers it important.
(Broken link fixed.)
Artificial? Yes. Intelligent? Not So Much.
Security expert Bruce Schneier shares his take on AI. Given all the hoopla of the past couple of months, I think his piece is worth a read.
A Question of Identification
Computer security expert Bruce Schneier reports that fingerprint logins may not be as secure as they are portrayed to be.
Food for Thought, It’s All about the Algorithm Dept.
At Psychology Today Blogs, Riccardo Dalle Grave discusses the association between “social” media and eating disorders in teens. He makes three main points; follow the link for his exploration.
- The use of social networks is associated with body dissatisfaction and disordered eating.
- Viewing and uploading photos and asking for negative feedback seem particularly problematic.
- The parental role in ensuring their children’s safe and helpful use of social networks cannot be overstated.
There’s nothing healthy about a culture which values self-promotion above all else.
Artificial? Yes. Intelligent? Not So Much.
At Above the Law, Joe Patrice reminds us that, just because the machine said it, it ain’t necessarily so.
Artificial? Yes. Intelligent? Not So Much.
Given the hype, nay, absolute swooning, over “artificial intelligence,” I believe you will find the thoughtful discussion of it and of its implications on the latest Bad Voltage podcast to be well worth a listen.
Geeking Out
I do likes me my purty pictures.
Mageia v. 8 with the Fluxbox window manager. The wallpaper is from my collection.
Bumps on a Head
At Psychology Today Blogs, Stanley Finger tells the fascinating tale of how Mark Twain and Oliver Wendell Holmes took down the phrenology con.
One wonders how they would deal with ChatGPT.








