Geek Stuff category archive
Facebook Frolics
The EFF looks at the Zuckerborg’s latest assimilation strategy–enabling facial recognition in its “smart” glasses–and explains why it’s a very bad no good stinking idea. Here’s a bit from their article:
Follow the link for the rest.
Artificial? Yes. Intelligent? Not So Much.
Corrosive? At the Psychology Today website, Cornelia C. Walther reports that “(r)esearch found that AI improved efficiency while eroding underlying expertise and agency.”
To put it another way, relying on AI to do our thinking for us may make us dumber.
Artificial? Yes. Intelligent? Not So Much.
Promoting passivity? At Psychology Today Blogs, John Nosta posits that the risk of AI “isn’t machine thought, but emergent passivity in us.”
Artificial? Yes. Intelligent? Not So Much.
Good for doing homework? Not if you actually, like, you know, want to learn stuff.
Post Mortem
Steve M. offers some thoughts on why Jeff Bezos has turned the once great Washington Post into the Washington Postcard. A snippet:
Tech guys become impatient when everything they touch doesn’t instantly turn to gold. They expect that they can move fast, break things, and watch the value of their new toy go up because they’ve made it buzzy. But that’s not how mature businesses work.
Artificial? Yes. Intelligent? Not So Much.
A conveyance for the con? At Psychology Today Blogs, Joe Pierre warns us that
Follow the link and be forewarned.
Artificial? Yes. Intelligent? Not So Much.
Programmed to prevaricate? At Psychology Today Blogs, John Nosta explains why your AI bot will come up with an answer even when there isn’t one.
It’s All about the Algorithm
At Psychology Today Blogs, Mitchell B. Liester reminds us that “social” media isn’t. He notes that (emphasis added)
Family estrangement has reached epidemic proportions. . . .
What’s causing this destruction? Causes include substance abuse, violence, and personality conflicts, but a newer and increasingly powerful force is social media algorithms designed to increase engagement by promoting divisive content.
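To make that mechanism concrete: a feed ranker that optimizes only for predicted engagement will float divisive posts to the top automatically, no malice required. Here’s a minimal hypothetical sketch of such a ranker; the class, field names, scores, and sample posts are all my own invention, not anything from Liester’s article:

```python
# A hypothetical engagement-first feed ranker (my illustration, not any
# actual platform's code). Note that divisiveness never enters the ranking.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float  # reactions/comments a model expects
    divisiveness: float          # 0.0 (neutral) to 1.0 (outrage bait)

def rank_feed(posts: list[Post]) -> list[Post]:
    # The objective has no term for social harm, so divisive posts win
    # simply because they provoke more reactions.
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

feed = rank_feed([
    Post("Cousin's graduation photos", predicted_engagement=2.1, divisiveness=0.1),
    Post("You won't BELIEVE what your relatives think", predicted_engagement=9.7, divisiveness=0.9),
])
print(feed[0].text)  # the divisive post tops the feed
```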
In these algorithmic times, methinks the entire article is worthy of your attention.
Artificial? Yes. Intelligent? Not So Much.
Assimilating you just like the Zuckerborg? Security maven Bruce Schneier notes
Artificial? Yes. Intelligent? Not So Much.
Omniscient? Just ask it.
The Unwelcome Visitor
My current wallpaper on the Plasma Desktop on Mageia v. 9. The image is from my collection.
Artificial? Yes. Intelligent? Not So Much.
Your BFF? At Psychology Today Blogs, Paul Thagard reminds us that AI bots can’t be our friends because (my words, not his) they’re freaking machines playing a pre-programmed part for Pete’s sake.
Here’s his summary of his argument; follow the link for a detailed exploration of each point.
1. Caring is an emotional response.
2. Emotions are, in part, physiological reactions to situations.
3. AI models have none of these physiological reactions.
4. So AI models lack emotions.
5. So AI models are incapable of caring.
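For the logically inclined, the argument checks out as a straightforward deduction. Here is my own reconstruction as a machine-checkable Lean 4 derivation (the predicate names are mine, not Thagard’s):

```lean
-- Thagard's argument, reconstructed: his three premises are taken as
-- hypotheses, and the conclusion "AI models cannot care" follows.
theorem ai_cannot_care
    (Agent : Type)
    (HasEmotions HasPhysiology CanCare IsAI : Agent → Prop)
    -- Premise 1: caring is an emotional response.
    (p1 : ∀ a, CanCare a → HasEmotions a)
    -- Premise 2: emotions are, in part, physiological reactions.
    (p2 : ∀ a, HasEmotions a → HasPhysiology a)
    -- Premise 3: AI models have none of these physiological reactions.
    (p3 : ∀ a, IsAI a → ¬ HasPhysiology a) :
    ∀ a, IsAI a → ¬ CanCare a :=
  fun a hai hcare => p3 a hai (p2 a (p1 a hcare))
```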
Artificial? Yes. Intelligent? Not So Much.
A factory for false witness and a perpetrator of perjury? According to D. C. Judge Herbert Dixon, fake AI evidence is getting too good to spot without extensive investigation.
Follow the link for one woman’s story.
Artificial? Yes. Intelligent? Not So Much.
Stultifying? At Psychology Today Blogs, Eric Solomon argues that AI “pushes anxious minds toward safety, shrinking curiosity and original thought” (emphasis added).
Follow the link for his reasoning.
Copywrongs
I have noted before in these electrons that, since my earliest days on Usenet and BBSs (that’s “bulletin board systems”–look it up), I have been amazed at how persons willingly believe stuff that they read on a computer screen, when they would not believe the same stuff if it happened before their eyes. Now, with the advent of AI chatbots, we’ve progressed to a point at which persons willingly believe stuff they hear from their computers when they wouldn’t believe the same stuff if it happened before their eyes.
Bloomberg’s Catherine Thorbecke thinks that, as AI spreads, it’s time for the companies that are manufabricating it to come clean about what they are using for their “training” data. She asks
The answer appears to be “yes” to all of the above. But we can’t know for sure because the companies building these systems refuse to say.
The secrecy is increasingly indefensible as AI systems creep into high-stakes environments like schools, hospitals, hiring tools and government services. The more decision-making and agency we hand over to machines, the more urgent it becomes to understand what’s going into them.
I commend the entire article to your attention.