Geek Stuff category archive
The Off-Line Scam
Writing at Psychology Today Blogs about a scam from two centuries ago, Matthew Facciani sees similarities with today’s online scams.
Artificial? Yes. Intelligent? Not So Much.
Secure? That bridge in Brooklyn is still on the market.
Security maven Bruce Schneier points out (emphasis added):
Details at the link.
Artificial? Yes. Intelligent? Not So Much.
As a matter of fact, let’s make that bleeping stupid.
Artificial? Yes. Intelligent? Not So Much.
Hazardous? Per Timothy Cook (no relation to Tim Apple), “New research shows how prolonged AI interactions distort some users’ sense of reality.”
Artificial? Yes. Intelligent? Not So Much.
Fostering fantasy? Per Joe Pierre, “When psychosis-proneness meets AI sycophancy, delusional thinking can result.”
Artificial? Yes. Intelligent? Not So Much.
Impartial and objective? Per Cornelia C. Walther, “. . . a new study has exposed an unsettling paradox at the core of our assumptions. Involving analysis of nine different LLMs and nearly a half-million prompts, the research shows that these supposedly impartial systems change their fundamental ethical decisions based on a single demographic detail.”
Details at the link.
Devolution, Reprise
At Psychology Today Blogs, Matthew Facciani writes about an AI TikTok account that fooled millions into thinking it was a real-life human being and suggests some steps we can take, not as individuals but as a polity, to protect against such fakery.
He makes three main points:
- The viral “MAGA Megan” TikTok showed clear AI traits yet still fooled large audiences.
- AI fakes spread by aligning with identity, leveraging networks, and gaming algorithms.
- Combating AI misinformation requires media literacy, awareness of our biases, and platform action.
Methinks this a valuable and timely read, especially as Big Tech seems determined to stuff AI down our throats, as illustrated by yesterday’s post about the Zuckerborg.
Facebook Frolics
At SFGate, Stephen Council reports on the Zuckerborg’s turn to AI in its quest for assimilation. Council is not sanguine.
Here’s a tiny bit from his piece.
His point that people need more friends gels with recent research into the ill-health effects of isolation. But Zuckerberg’s idea of patching over loneliness with algorithmic avatars is an ugly vision of the world: a purposeful unraveling of the social fabric that gives us community, culture, accountability and love. We need to refuse this vision. The solution to not having enough friends is — needs to be — making more friends. More care and responsibility for our neighbors, not bubbles of solitude.
It’s All about the Algorithm, Reprise
At Psychology Today Blogs, Daniel Marston suggests that something much simpler than the “content” offered by the algorithm keeps us glued to our screens. It’s the mere fact that the “content” keeps changing. He cites a study that seems to bear this out:
Now, if you can tear yourself away from watching online videos of persons cleaning their houses, go read the rest of his article . . . .
It’s All about the Algorithm
Susanna Newsonen takes a look at how “(S)ocial media hijacks your brain’s reward system, making it hard to log off” and how that erodes persons’ attention spans. A snippet:
Now, go read a book and, remember, “social” media isn’t.
Artificial? Yes. Intelligent? Not So Much.
Unbiased and objective? That bridge in Brooklyn is still on the market.
Speaking of Today’s QOTD . . . .
Now even your baggage can have baggage.
Artificial? Yes. Intelligent? Not So Much.
Truthful? Pigs. Wings.
At Psychology Today Blogs, Timothy Cook (no relation to Tim Apple) offers a four-step process for tricking AI into revealing its biases and fabrications.
Given the hype and hyperbole about these robotic search engines and given how many browsers and websites are trying to hornswoggle us into letting AI bots do our thinking for us, it is a worthwhile read.
Artificial? Yes. Intelligent? Not So Much.
A learning aid? Impediment, actually.
Timothy Cook (no relation to Tim Apple) argues that, rather than helping students learn, AI, with its built-in bias towards certainty (a certainty often based on stuff AI makes up out of thin air, I will add), will stunt their education. Specifically, it will inhibit their development of critical thinking skills.
He identifies four specific dangers.
Students lose the ability to sit with “I don’t know.” . . .
They learn intellectual dishonesty as a strategy. . . .
They develop intolerance for complexity. . . .
Most dangerously, they lose their authentic voice. . . .
Follow the link for a detailed discussion of this issue.
Republican Thought Police
El Reg reports that the Republican thought police are now trying to ban “woke” AI*. Here are a couple of snippets (emphasis added); follow the link to put them in context.
(snip)
“In the LLM world, attempts to ‘un-wokeify’ LLMs have literally produced an AI that named itself MechaHitler,” he said. “This isn’t just a problem in how LLMs are constructed – it’s actually a problem in how humans have constructed ‘truth’ and ideology, and it’s not one that AI is going to fix.”
_________________________
*That is, AI that doesn’t reflect and perpetuate their racism, bigotry, and prejudices.