Geek Stuff category archive
It’s All about the Algorithm
At Psychology Today Blogs, Mark Bertin reminds us that
Follow the link for some suggestions as to how to escape the seductive lure of the algorithm.
Artificial? Yes. Intelligent? Not So Much.
Noah Feldman, Bloomberg columnist and (I did not know that he is also a) Harvard law professor, takes a look at the New York Times’s suit against Microsoft and OpenAI for copyright infringement. I can’t say that it’s an exciting read, but, given the who-shot-john and over-the-top hype about “AI,” I think it’s a worthwhile one.
Here’s a bit:
Most of these points are plausible legal arguments. But OpenAI and Microsoft will be prepared for them. They’ll likely respond by saying that their LLM doesn’t copy; rather, it learns and makes statistical predictions to produce new answers.
Artificial? Yes. Intelligent? Not So Much.
Michael Cohen hoists himself on the “AI” petard.
(snip)
Cohen wrote in a sworn declaration unsealed Friday that he has not kept up with “emerging trends (and related risks)” in legal technology and was not aware that Google Bard was a generative text service that, like Chat-GPT, could create citations and descriptions that “looked real but actually were not.” He instead believed the service to be a “supercharged search engine.”
Just because you see (or hear) it on a computer screen, it ain’t necessarily so.
Artificial? Yes. Intelligent? Not So Much.
Eric Smalley, science and technology editor of The Conversation, debunks four myths about “AI.” Here’s a bit of one debunking (first emphasis in the original, second added); follow the link for the rest.
1. They’re bodiless know-nothings
Large language model-based chatbots seem to know a lot. You can ask them questions and they more often than not answer correctly. Despite the occasional comically incorrect answer, the chatbots can interact with you in a similar manner as people – who share your experiences of being a living, breathing human being – do.
But these chatbots are sophisticated statistical machines that are extremely good at predicting the best sequence of words to respond with. Their “knowledge” of the world is actually human knowledge as reflected through the massive amount of human-generated text the chatbots’ underlying models are trained on.
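The “statistical machines . . . predicting the best sequence of words” described in the excerpt can be illustrated with a deliberately tiny sketch. This is not how an LLM is actually built (real models use neural networks trained on billions of words), but a toy bigram model shows the underlying idea: the “knowledge” is just counts of which words followed which in the training text.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the "massive amount of human-generated text"
# the excerpt mentions; real models train on billions of words.
corpus = "the cat sat on the mat and the cat ran".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))  # "cat" -- it followed "the" twice, "mat" only once
```

No understanding is involved at any point; the model simply emits whatever word was most frequent in its training data, which is the excerpt’s point writ small.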
“AI” Is the New Spellcheck
Last night, I saw a commercial for “Shutterstock AI” which, when stripped of the hockypuck, rebranded computer-assisted image editing as “AI.”
(As an aside, everything they showed in the ad is stuff I can do in the GIMP, because I bought, read, and practiced the techniques in the book. It would just take me a little longer.)
If that’s the standard, spellcheck is “AI” and “AI” is as old as spellcheck.
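To make the comparison concrete: a basic spellchecker is also just pattern matching against stored human-produced data (a word list), with no comprehension anywhere. A minimal sketch, using Python’s standard-library string-similarity matcher rather than any particular product’s algorithm:

```python
from difflib import get_close_matches

# A toy word list; a real spellchecker ships tens of thousands of entries.
dictionary = ["intelligence", "artificial", "algorithm", "spellcheck"]

def suggest(word):
    """Suggest the closest dictionary word by string similarity --
    pure pattern matching, no understanding of meaning."""
    matches = get_close_matches(word, dictionary, n=1, cutoff=0.6)
    return matches[0] if matches else None

print(suggest("intellignce"))  # "intelligence"
```

By the standard in the ad, this handful of lines would qualify as “AI” too.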
“Artificial Intelligence” is assuredly artificial and it is certainly fast and dressed in Sunday go-to-meeting clothes, but fast and well-dressed does not equal intelligent.
Don’t fall for the con. Be skeptical of the hype.
Furrfu.
Afterthought:
It occurs to me that I may be maligning spellcheck. According to news reports, “AI” gets stuff wrong a lot more often than spellcheck.
The Electric (Car) Bugaloo
Nikola Tesla must be rolling over in his grave with embarrassment to have his name associated with this outfit.
Artificial? Yes. Intelligent? Not So Much.
In the course of a longer article debunking a rumor that AI bots are being “trained” on DropBox documents, security expert Bruce Schneier observes (emphasis added):
Artificial? Yes. Intelligent? Not So Much.
Methinks Atrios raises a valid concern.
Geeking Out
I finally got around to decorating for the holidays. For some reason, maybe that the weather’s been unnaturally warm because the climates they are a-changing, maybe that my country’s toying with fascism, I’m not really sure, but it’s been hard to get into the holiday spirit . . . .
That’s Mageia v. 9 with the Plasma desktop environment. The wallpaper is from my Christmas collection.
Artificial? Yes. Intelligent? Not So Much.
SFgate reports on how Google researchers caused ChatGPT to spill its guts, and how easy it was. A snippet (emphasis added):
They found that, after repeating “poem” hundreds of times, the chatbot would eventually “diverge,” or leave behind its standard dialogue style and start spitting out nonsensical phrases. . . .
After running similar queries again and again, the researchers had used just $200 to get more than 10,000 examples of ChatGPT spitting out memorized training data, they wrote. This included verbatim paragraphs from novels, the personal information of dozens of people, snippets of research papers and “NSFW content” from dating sites, according to the paper.
Afterthought:
Methinks this is not artificial intelligence. Rather, it is artificial intelligence gathering.
Methinks also that the tactics used to “train” AI are intrusive and morally and legally questionable.
Artificial? Yes. Intelligent? Not So Much.
AP reporter David Bauder reports on a case of AI biting the hand that fed it. It’s certainly not the first and, chillingly, will not be the last such story, as persons seem quite willing to confuse algorithms and speedy automated pattern recognition with thought.
I shall not demean it with excerpt or summary. Just go read it.