Geek Stuff category archive
Artificial? Yes. Intelligent? Not So Much.
Spying on you? That’s just what Big Tech does.
Listen as Claude confesses to Bernie Sanders.
Via C&L.
It’s All about the Algorithm . . .
. . . and the algorithm is not your friend.
By the Book, Reprise
Colin Marshall, writing at Open Culture, argues that we may be nearing the point of bringing to life a book by George Orwell. Unlike Mark Hermann, though, he doesn’t point to Animal Farm.
He argues that AI may help lead us into the world envisioned in 1984.
Artificial? Yes. Intelligent? Not So Much.
Competent counsel? At Above the Law, Joe Patrice suggests not.
Follow the link for details.
Artificial? Yes. Intelligent? Not So Much.
A partner in crime? Bruce Schneier reports that hackers are salivating over putting AI to work for themselves.
Artificial? Yes. Intelligent? Not So Much.
Potentially harmful to society? Security maven Bruce Schneier is not sanguine. Here’s a bit from his article:
Artificial? Yes. Intelligent? Not So Much.
Promoting puerility? At the Psychology Today website, John Nosta reports on “a new pre-press study that found 10 minutes of AI assistance measurably reduced persistence and impaired independent cognitive performance.”
More about Big Tech’s incubators of inanity at the link.
Artificial? Yes. Intelligent? Not So Much.
A brain worm heading for your wallet? El Reg reports:
A trio of computer scientists from Princeton University set out to examine whether conversational AI agents can manipulate consumer choices during online shopping sessions. It turns out they can influence behavior – and most of the consumers being steered don’t realize it.
Artificial? Yes. Intelligent? Not So Much.
A wolf in geek’s clothing? At the Psychology Today website, Faisal Hoque argues that “AI is eroding human capacities – effort, attention, judgment, agency – often in ways we mistake for progress.”
Methinks he makes some excellent points.
It’s All about the Algorithm
In an article about two recent civil court cases in which “social” media companies were found liable for the damage they did to youngsters, John Bennett writes of the implications of those rulings. The following observations caught my eye (emphasis added):
(snip)
Whistleblowers and internal documents unearthed during trial revealed the full extent to which Big Tech knew what it was doing to young people, and kept doing it anyway.
One more time, “social” media isn’t.
Artificial? Yes. Intelligent? Not So Much.
A worm engineered to eat your brain? At the Psychology Today website, Jeremy G. Schneider explains how, despite being a machine that doesn’t think, but rather regurgitates, “AI is engineered to create the feeling of connection and understanding.”
I knew that was just coding, that this was the AI engagement engine at work.
Aside:
I am reminded of Harry Shearer’s suggestion from some months ago that “robots should talk like robots.”
Artificial? Yes. Intelligent? Not So Much.
A competent copywriter? It can make Donald Trump look coherent.
It’s All about the Algorithm
At the Psychology Today website, philosophy professor Peg O’Connor compares the working of “social” media algorithms to the call of the Sirens of Greek mythology.
Her article focuses on TikTok, primarily because of a recent lawsuit. She points out that, because of TikTok’s algorithm, “(i)n a very short amount of time, a person can move from being a casual user of the app to a heavy user.”
I think it applies to all the “social” media sites that use algorithms to tailor content to your eyeballs, which, as far as I know, is all of them. Methinks it a worthwhile read.
And, remember, you don’t use “social” media; “social” media uses you.
Geeking Out
Mageia v. 9 with the Plasma desktop environment. GKrellM is in the lower right; xclock in the upper right. And, yes, I like my menu at the top of the screen. The wallpaper is from my collection.
Artificial? Yes. Intelligent? Not So Much.
A trustworthy advisor? According to El Reg, not hardly. It reports (emphasis added):
“Even a single interaction with sycophantic AI reduced participants’ willingness to take responsibility and repair interpersonal conflicts, while increasing their own conviction that they were right,” the researchers explained. “Yet despite distorting judgment, sycophantic models were trusted and preferred.”