Geek Stuff category archive
Artificial? Yes. Intelligent? Not So Much.
Potentially harmful to society? Security maven Bruce Schneier is not sanguine. Here’s a bit from his article:
Artificial? Yes. Intelligent? Not So Much.
Promoting puerility? At the Psychology Today website, John Nosta reports on “a new pre-press study that found 10 minutes of AI assistance measurably reduced persistence and impaired independent cognitive performance.”
More about Big Tech’s incubators of inanity at the link.
Artificial? Yes. Intelligent? Not So Much.
A brain worm heading for your wallet? El Reg reports:
A trio of computer scientists from Princeton University set out to examine whether conversational AI agents can manipulate consumer choices during online shopping sessions. It turns out they can influence behavior – and most of the consumers being steered don’t realize it.
Artificial? Yes. Intelligent? Not So Much.
A wolf in geek’s clothing? At the Psychology Today website, Faisal Hoque argues that “AI is eroding human capacities – effort, attention, judgment, agency – often in ways we mistake for progress.”
Methinks he makes some excellent points.
It’s All about the Algorithm
In an article about two recent civil court cases, in which “social” media companies were found liable for the damage they did to youngsters, John Bennett writes of the implications of those rulings. The following observations caught my eye (emphasis added):
(snip)
Whistleblowers and internal documents unearthed during trial revealed the full extent to which Big Tech knew what it was doing to young people, and kept doing it anyway.
One more time, “social” media isn’t.
Artificial? Yes. Intelligent? Not So Much.
A worm engineered to eat your brain? At the Psychology Today website, Jeremy G. Schneider explains how, despite being a machine that doesn’t think but rather regurgitates, “AI is engineered to create the feeling of connection and understanding.”
I knew that was just coding, that this was the AI engagement engine at work.
Aside:
I am reminded of Harry Shearer’s suggestion from some months ago that “robots should talk like robots.”
Artificial? Yes. Intelligent? Not So Much.
A competent copywriter? It can make Donald Trump look coherent.
It’s All about the Algorithm
At the Psychology Today website, philosophy professor Peg O’Connor compares the workings of “social” media algorithms to the call of the Sirens of Greek mythology.
Her article focuses on TikTok, primarily because of a recent lawsuit. She points out that, because of TikTok’s algorithm, “(i)n a very short amount of time, a person can move from being a casual user of the app to a heavy user.”
I think it applies to all the “social” media sites that use algorithms to tailor content to your eyeballs, which, as far as I know, is all of them. Methinks it a worthwhile read.
And, remember, you don’t use “social” media; “social” media uses you.
Geeking Out
Mageia v. 9 with the Plasma desktop environment. GKrellM is in the lower right; xclock in the upper right. And, yes, I like my menu at the top of the screen. The wallpaper is from my collection.
Artificial? Yes. Intelligent? Not So Much.
A trustworthy advisor? According to El Reg, not hardly. It reports that (emphasis added):
“Even a single interaction with sycophantic AI reduced participants’ willingness to take responsibility and repair interpersonal conflicts, while increasing their own conviction that they were right,” the researchers explained. “Yet despite distorting judgment, sycophantic models were trusted and preferred.”
Artificial? Yes. Intelligent? Not So Much.
A font of fallacious fakery? You bet your sweet bippy. At the Psychology Today website, Emily Ko discusses the spread of fake content on the inner webs, something that has proliferated thanks to AI bots, and warns that we must not allow the algorithm to do our thinking for us.
She makes three main points; follow the link for a detailed exploration of them.
- Fake content spreads faster because it triggers strong emotions that elicit quicker responses.
- Biases and social media algorithms combined make people more likely to believe and trust fake content.
- In an AI-driven world, consumers must rely less on social proof and more on critical thinking.
And, while we are on the subject, the Charlotte Observer reports:
Artificial? Yes. Intelligent? Not So Much.
Truthful? At the Psychology Today website, New York University professor Vasant Dhar argues that truthfulness and accuracy are not the primary concerns of LLMs’ creators.
Here’s a bit from his article:
We shouldn’t lose sight of the fact that LLMs are not designed to be truthful, but to ensure that the narrative “makes sense” in any context. Given a context, LLMs are trained to generate what should come next in the developing narrative. Confabulations—plausible-sounding distortions or fabrications—are part of its repertoire, regardless of whether they correspond to truth or facts in our world.
Given the hype about (and the unquestioning faith that some are placing in) AI, I commend it as a timely and worthwhile read.
It’s a Smart, Smart World*
There’s a reason that, when I need to buy a new appliance, the first thing I say to the salesperson is, “I don’t want anything smart.”
________________
*With apologies to Harry Shearer for stealing the title of one of his regular features.
Artificial? Yes. Intelligent? Not So Much.
A criminal co-conspirator? El Reg reports that “AI is apparently good for the bottom line if your business is crime.”
Details at the link.
“History Does Not Repeat Itself, but It Often Rhymes”*
Charles Ferguson, a pioneer in website development, looks at the hype surrounding AI and hears a rhyme from his early career. Here’s a tiny bit from his article (emphasis added):
But sincerity often accompanies naivete, as I know all too well. Thirty years ago, I founded the startup that developed the first software tool enabling anyone to build a website — and I totally drank the Kool-Aid. We told ourselves that our product would allow truth-tellers and innovators to bypass gatekeepers, liberating and enlightening everyone. Social networks would, of course, do the same and together we would create a decentralized, egalitarian paradise of unfiltered truth. How wrong we were.
When I look at the AI landscape, heavily populated by extremely young founders, I see the same naivete.
I commend his article to your attention as a timely read.
________________
*Mark Twain.