Geek Stuff category archive
Artificial? Yes. Intelligent? Not So Much.
A worm engineered to eat your brain? At the Psychology Today website, Jeremy G. Schneider explains how, despite being a machine that doesn’t think but rather regurgitates, “AI is engineered to create the feeling of connection and understanding.”
I knew that was just coding, that this was the AI engagement engine at work.
Aside:
I am reminded of Harry Shearer’s suggestion from some months ago that “robots should talk like robots.”
Artificial? Yes. Intelligent? Not So Much.
A competent copywriter? It can make Donald Trump look coherent.
It’s All about the Algorithm
At the Psychology Today website, philosophy professor Peg O’Connor compares the working of “social” media algorithms to the call of the Sirens of Greek mythology.
Her article focuses on TikTok, primarily because of a recent lawsuit. She points out that, because of TikTok’s algorithm, “(i)n a very short amount of time, a person can move from being a casual user of the app to a heavy user.”
I think it applies to all the “social” media sites that use algorithms to tailor content to your eyeballs, which, as far as I know, is all of them. Methinks it a worthwhile read.
And, remember, you don’t use “social” media; “social” media uses you.
Geeking Out
Mageia v. 9 with the Plasma desktop environment. GKrellM is in the lower right; xclock in the upper right. And, yes, I like my menu at the top of the screen. The wallpaper is from my collection.
Artificial? Yes. Intelligent? Not So Much.
A trustworthy advisor? According to El Reg, not hardly. It reports that (emphasis added):
“Even a single interaction with sycophantic AI reduced participants’ willingness to take responsibility and repair interpersonal conflicts, while increasing their own conviction that they were right,” the researchers explained. “Yet despite distorting judgment, sycophantic models were trusted and preferred.”
Artificial? Yes. Intelligent? Not So Much.
A font of fallacious fakery? You bet your sweet bippy. At the Psychology Today website, Emily Ko discusses the spread of fake content on the inner webs (something that has proliferated thanks to AI bots) and warns that we must not allow the algorithm to do our thinking for us.
She makes three main points; follow the link for a detailed exploration of them.
- Fake content spreads faster because it triggers strong emotions that elicit quicker responses.
- Biases and social media algorithms combined make people more likely to believe and trust fake content.
- In an AI-driven world, consumers must rely less on social proof and more on critical thinking.
And, while we are on the subject, the Charlotte Observer has a related report.
Artificial? Yes. Intelligent? Not So Much.
Truthful? At the Psychology Today website, New York University professor Vasant Dhar argues that truthfulness and accuracy are not the primary concerns of those who create LLMs.
Here’s a bit from his article:
We shouldn’t lose sight of the fact that LLMs are not designed to be truthful, but to ensure that the narrative “makes sense” in any context. Given a context, LLMs are trained to generate what should come next in the developing narrative. Confabulations—plausible-sounding distortions or fabrications—are part of its repertoire, regardless of whether they correspond to truth or facts in our world.
Given the hype about (and the unquestioning faith that some are placing in) AI, I commend it as a timely and worthwhile read.
It’s a Smart, Smart World*
There’s a reason that, when I need to buy a new appliance, the first thing I say to the salesperson is, “I don’t want anything smart.”
________________
*With apologies to Harry Shearer for stealing the title of one of his regular features.
Artificial? Yes. Intelligent? Not So Much.
A criminal co-conspirator? El Reg reports that “AI is apparently good for the bottom line if your business is crime.”
Details at the link.
“History Does Not Repeat Itself, but It Often Rhymes”*
Charles Ferguson, a pioneer in website development, looks at the hype surrounding AI and hears a rhyme from his early career. Here’s a tiny bit from his article (emphasis added):
But sincerity often accompanies naivete, as I know all too well. Thirty years ago, I founded the startup that developed the first software tool enabling anyone to build a website — and I totally drank the Kool-Aid. We told ourselves that our product would allow truth-tellers and innovators to bypass gatekeepers, liberating and enlightening everyone. Social networks would, of course, do the same and together we would create a decentralized, egalitarian paradise of unfiltered truth. How wrong we were.
When I look at the AI landscape, heavily populated by extremely young founders, I see the same naivete.
I commend his article to your attention as a timely read.
________________
*Mark Twain.
Artificial? Yes. Intelligent? Not So Much.
Competent legal counsel? Don’t rest your case on it.
Artificial? Yes. Intelligent? Not So Much.
Inducing indolence and eroding intellects? I can’t be bothered to answer that, but Robert Lynch has some thoughts on the matter.
Artificial? Yes. Intelligent? Not So Much.
A wolf in sheep’s clothing? At the Psychology Today website, Mike Brooks explores why many persons don’t see the dangers posed by AI. He makes four main points:
- We laugh at each new AI iteration right up until it’s too late. This is a pattern as old as the steam engine.
- AI agents have already retaliated against humans and disabled their own safety controls unprompted.
- Bad actors are imagining AI-powered schemes that decent people would never think to anticipate.
- There is no enforceable global regulation for autonomous AI agents operating on private computers.
Follow the link for a detailed exploration of each one.