Geek Stuff category archive
It’s All about the Algorithm
At Psychology Today Blogs, Mitchell B. Liester reminds us that “social” media isn’t. He notes that (emphasis added)
Family estrangement has reached epidemic proportions. . . .
What’s causing this destruction? Causes include substance abuse, violence, and personality conflicts, but a newer and increasingly powerful force is social media algorithms designed to increase engagement by promoting divisive content.
In these algorithmic times, methinks the entire article is worthy of your attention.
Artificial? Yes. Intelligent? Not So Much.
Assimilating you just like the Zuckerborg? Security maven Bruce Schneier notes
Artificial? Yes. Intelligent? Not So Much.
Omniscient? Just ask it.
The Unwelcome Visitor
My current wallpaper on the Plasma Desktop on Mageia v. 9. The image is from my collection.
Artificial? Yes. Intelligent? Not So Much.
Your BFF? At Psychology Today Blogs, Paul Thagard reminds us that AI bots can’t be our friends because (my words, not his) they’re freaking machines playing a pre-programmed part for Pete’s sake.
Here’s his summary of his argument; follow the link for a detailed exploration of each point.
1. Caring is an emotional response.
2. Emotions are, in part, physiological reactions to situations.
3. AI models have none of these physiological reactions.
4. So AI models lack emotions.
5. So AI models are incapable of caring.
Artificial? Yes. Intelligent? Not So Much.
A factory for false witness and a perpetrator of perjury? According to D. C. Judge Herbert Dixon, fake AI evidence is getting too good to spot without extensive investigation.
Follow the link for one woman’s story.
Artificial? Yes. Intelligent? Not So Much.
Stultifying? At Psychology Today Blogs, Eric Solomon argues that AI “pushes anxious minds toward safety, shrinking curiosity and original thought” (emphasis added).
Follow the link for his reasoning.
Copywrongs
I have noted before in these electrons that, since my earliest days on Usenet and BBSs (that’s “bulletin board systems”–look it up), I have been amazed at how persons willingly believe stuff that they read on a computer screen, when they would not believe the same stuff if it happened before their eyes. Now, with the advent of AI chatbots, we’ve progressed to a point at which persons willingly believe stuff they hear from their computers when they wouldn’t believe the same stuff if it happened before their eyes.
Bloomberg’s Catherine Thorbecke thinks that, as AI spreads, it’s time for the companies that are manufabricating it to come clean about what they are using for their “training” data. She asks
The answer appears to be “yes” to all of the above. But we can’t know for sure because the companies building these systems refuse to say.
The secrecy is increasingly indefensible as AI systems creep into high-stakes environments like schools, hospitals, hiring tools and government services. The more decision-making and agency we hand over to machines, the more urgent it becomes to understand what’s going into them.
I commend the entire article to your attention.
Artificial? Yes. Intelligent? Not So Much.
A trustworthy advisor? That bridge in Brooklyn is still on sale.
It’s All about the Algorithm, Facebook Frolics Dept.
A news report from Reuters leads to the question: Is “social” media the new tobacco, knowingly promoting addiction for profit?
One more time, “social” media isn’t.
Artificial? Yes. Intelligent? Not So Much.
Trustworthy? At Psychology Today Blogs, CUNY professor Azadeh Aalai reminds us that
(s)ome of the perils of AI include the spread of false information and the potential to manipulate.
And, speaking of the potential to manipulate . . . .
Artificial? Yes. Intelligent? Not So Much.
Our new robotic overlords? Security maven Bruce Schneier points out that
Follow the link for his exploration of the implications of the infiltration.
Source for the Goose . . .
. . . but not source for the gander.
Artificial? Yes. Intelligent? Not So Much.
Trustworthy? Hardly.
At Psychology Today Blogs, Richard Gunderman reminds us that “(t)rust arises from relationships, not from digital probability functions.”
Recommended Listening
Harry Shearer’s interview with Gary Marcus about “the AI illusion and why it isn’t intelligence” on this week’s episode of Le Show.