Geek Stuff category archive
Artificial? Yes. Intelligent? Not So Much.
Impartial and objective? Per Cornelia C. Walther, “. . . a new study has exposed an unsettling paradox at the core of our assumptions. Involving analysis of nine different LLMs and nearly a half-million prompts, the research shows that these supposedly impartial systems change their fundamental ethical decisions based on a single demographic detail.”
Details at the link.
Devolution, Reprise
At Psychology Today Blogs, Matthew Facciani writes about an AI TikTok account that fooled millions into thinking it was a real-life human being and suggests some steps we can take–not as individuals, but as a polity–to protect against such fakery.
He makes three main points:
- The viral “MAGA Megan” TikTok showed clear AI traits yet still fooled large audiences.
- AI fakes spread by aligning with identity, leveraging networks, and gaming algorithms.
- Combating AI misinformation requires media literacy, awareness of our biases, and platform action.
Methinks this a valuable and timely read, especially as Big Tech seems determined to stuff AI down our throats, as illustrated by yesterday’s post about the Zuckerborg.
Facebook Frolics
At SFGate, Stephen Council reports on the Zuckerborg’s turn to AI in its quest for assimilation. Council is not sanguine.
Here’s a tiny bit from his piece.
His point that people need more friends gels with recent research into the ill-health effects of isolation. But Zuckerberg’s idea of patching over loneliness with algorithmic avatars is an ugly vision of the world: a purposeful unraveling of the social fabric that gives us community, culture, accountability and love. We need to refuse this vision. The solution to not having enough friends is — needs to be — making more friends. More care and responsibility for our neighbors, not bubbles of solitude.
It’s All about the Algorithm, Reprise
At Psychology Today Blogs, Daniel Marston suggests that something much simpler than the “content” offered by the algorithm keeps us glued to our screens. It’s the mere fact that the “content” keeps changing. He cites a study that seems to bear this out:
Now, if you can tear yourself away from watching online videos of persons cleaning their houses, go read the rest of his article . . . .
It’s All about the Algorithm
Susanna Newsonen takes a look at how “(S)ocial media hijacks your brain’s reward system, making it hard to log off” and how that erodes persons’ attention spans. A snippet:
Now, go read a book and, remember, “social” media isn’t.
Artificial? Yes. Intelligent? Not So Much.
Unbiased and objective? That bridge in Brooklyn is still on the market.
Speaking of Today’s QOTD . . . .
Now even your baggage can have baggage.
Artificial? Yes. Intelligent? Not So Much.
Truthful? Pigs. Wings.
At Psychology Today Blogs, Timothy Cook (no relation to Tim Apple) offers a four-step process for tricking AI into revealing its biases and fabrications.
Given the hype and hyperbole about these robotic search engines and given how many browsers and websites are trying to hornswoggle us into letting AI bots do our thinking for us, it is a worthwhile read.
Artificial? Yes. Intelligent? Not So Much.
A learning aid? Impediment, actually.
Timothy Cook (no relation to Tim Apple) argues that, rather than helping students learn, AI, with its built-in bias towards certainty (often based on stuff AI makes up out of thin air, I will add), will stunt their education. Specifically, it will inhibit their development of critical thinking skills.
He identifies four specific dangers.
- Students lose the ability to sit with “I don’t know.” . . .
- They learn intellectual dishonesty as a strategy. . . .
- They develop intolerance for complexity. . . .
- Most dangerously, they lose their authentic voice. . . .
Follow the link for a detailed discussion of this issue.
Republican Thought Police
El Reg reports that the Republican thought police are now trying to ban “woke” AI*. Here are a couple of snippets (emphasis added); follow the link to put them in context.
(snip)
“In the LLM world, attempts to ‘un-wokeify’ LLMs have literally produced an AI that named itself MechaHitler,” he said. “This isn’t just a problem in how LLMs are constructed – it’s actually a problem in how humans have constructed ‘truth’ and ideology, and it’s not one that AI is going to fix.”
_________________________
*That is, AI that doesn’t reflect and perpetuate their racism, bigotry, and prejudices.
Artificial? Yes. Intelligent? Not So Much.
Citing precedent? Nah. Just making stuff up.
So egregiously that it provoked a judge into kicking some lawyers off a case. Here’s a bit from the story at Above the Law:
The Roll-Back
Methinks Bruce Schneier makes a valid point.
However much he wants to, Trump is not going to be able to roll back the clock, but he is going to do a heck of a lot of damage along the way.
The Outers of the Outed
At El Reg, Brandon Vigliarolo, using the recent incident at a Coldplay concert as a springboard, argues that we are living in a surveillance state of our own creation. A snippet:
One more time, “social” media isn’t.
It’s All about the Algorithm
In a longer article looking at how hate metastasizes, Steven Stosny includes this fascinating and not at all surprising tidbit:
The common thread in most of the cultural and political posts sent to me by algorithms has been, you guessed it, hate.
Follow the link for context.
Artificial? Yes. Intelligent? Not So Much.
At Psychology Today Blogs, John Nosta wonders whether we should stick with “not so much” or (these are my words, not his) keep inviting the singularity over for dinner.
Where is Neo when you need him?