From Pine View Farm

Geek Stuff category archive

Artificial? Yes. Intelligent? Not So Much. 0

Noah Feldman, Bloomberg columnist and (I did now that he is a) Harvard law professor, takes a look at the New York Times’s suit against Microsoft and OpenAI for copyright infringement. I can’t say that it’s an exciting read, but, given the who-shot-john and over-the-top hype about “AI,” I think it’s a worthwhile one.

Here’s a bit:

Once you know the law, you can guess roughly how the legal arguments in the case are going to go. The New York Times will point to examples where a user asks a question of ChatGPT or Bing and it replies with something substantially like a New York Times article. The newspaper will observe that ChatGPT is part of a business and charges fees for access to its latest versions, and that Bing is a core part of Microsoft’s business. The New York Times will emphasize the creative aspects of journalism. Above all, it will argue that if you can ask an LLM-powered search engine for the day’s news, and get content drawn directly from The New York Times, that will substantially harm and maybe even kill The New York Times’ business model.

Most of these points are plausible legal arguments. But OpenAI and Microsoft will be prepared for them. They’ll likely respond by saying that their LLM doesn’t copy; rather, it learns and makes statistical predictions to produce new answers.

Share

The Disinformation Superhighway 0

Title:  Future Veterans of the Information Wars.  Frame One:  Grizzled older man says,

Click to view the original image.

Share

Artificial? Yes. Intelligent? Not So Much. 0

Michael Cohen hoists himself on the “AI” petard.

Michael Cohen, former President Trump’s ex-fixer and personal lawyer, said in newly unsealed court filings that he accidentally gave his lawyer fake legal citations concocted by the artificial intelligence program Google Bard.

(snip)

Cohen wrote in a sworn declaration unsealed Friday that he has not kept up with “emerging trends (and related risks)” in legal technology and was not aware that Google Bard was a generative text service that, like Chat-GPT, could create citations and descriptions that “looked real but actually were not.” He instead believed the service to be a “supercharged search engine.”

Just because you see (or hear) it on a computer screen, it ain’t necessarily so.

Share

Geeking Out 0

Christmas train.

Screenshot

Mageia v.9 with the Plasma Desktop.

Share

Artificial? Yes. Intelligent? Not So Much. 0

Eric Smalley, science and technology editor of The Conversation, debunks four myths about “AI.” Here’s a bit of one debunking (first emphasis in the original, second added); follow the link for the rest.

1. They’re bodiless know-nothings

Large language model-based chatbots seem to know a lot. You can ask them questions and they more often than not answer correctly. Despite the occasional comically incorrect answer, the chatbots can interact with you in a similar manner as people – who share your experiences of being a living, breathing human being – do.

But these chatbots are sophisticated statistical machines that are extremely good at predicting the best sequence of words to respond with. Their “knowledge” of the world is actually human knowledge as reflected through the massive amount of human-generated text the chatbots’ underlying models are trained on.

Share

“AI’ Is the New Spellcheck 0

Last night, I saw a commercial for “Shutterstock AI” which, when stripped of the hockypuck, rebranded computer-assisted image editing as “AI.”

(As an aside, everything they showed in the ad is stuff I can do in the GIMP, because I bought, read, and practiced the techniques in the book. It would just take me a little longer.)

If that’s the standard, spellcheck is “AI” and “AI” is as old as spellcheck.

“Artificial Intelligence” is assuredly artificial and it is certainly fast and dressed in Sunday go-to-meeting clothes, but fast and well-dressed does not equal intelligent.

Don’t fall for the con Be skeptical of the hype.

Furrfu.

Afterthought:

It occurs to me that I may be maligning spellcheck. According to news reports, “AI” gets stiff wrong a lot more often than spellcheck.

Share

The Surveillance Society 0

Voice comes from television singing,

Click for the original image.

Share

The Electric (Car) Bugaloo 0

Nikolai Tesla must be rolling over in his grave with embarrassment to have his name associated with this outfit.

Share

Artificial? Yes. Intelligent? Not So Much. 0

In the course of a longer article debunking a rumor that AI bots are being “trained” on DropBox documents, security expert Bruce Schneier observes (emphasis added)

It seems not to be true. Dropbox isn’t sharing all of your documents with OpenAI. But here’s the problem: we don’t trust OpenAI. We don’t trust tech corporations. And—to be fair—corporations in general. We have no reason to.

Share

The Crypto Con 0

Couple standing at tax prep office to man behind desk.  Wife says,

Click for the original image.

Share

Artificial? Yes. Intelligent? Not So Much. 0

Methinks Atrios raises a valid concern.

Share

Geeking Out 0

I finally got around to decorating for the holidays. For some reason, maybe that the weather’s been unnaturally warm because the climates they are a-changing, maybe that my country’s toying with fascism, I’m not really sure, but it’s been hard to get into the holiday spirit . . . .

Screenshot

That’s Mageia v. 9 with the Plasma desktop environment. The wallpaper is from my Christmas collection.

Share

Artificial? Yes. Intelligent? Not So Much. 0

SFgate reports on how Google researchers cause ChatGPT spill its guts, and about how easy it was. A snippet (emphasis added):

The “attack” that worked was so simple, the researchers even called it “silly” in their blog post: They just asked ChatGPT to repeat the word “poem” forever.

They found that, after repeating “poem” hundreds of times, the chatbot would eventually “diverge,” or leave behind its standard dialogue style and start spitting out nonsensical phrases. . . . .

After running similar queries again and again, the researchers had used just $200 to get more than 10,000 examples of ChatGPT spitting out memorized training data, they wrote. This included verbatim paragraphs from novels, the personal information of dozens of people, snippets of research papers and “NSFW content” from dating sites, according to the paper.

Afterthought:

Methinks this is not artificial intelligence. Rather, it is artificial intelligence gathering.

Me also thinks that the tactics used to “train” AI are intrusive and questionable moraly and legaly.

Share

Phoning It In 0

Man standing next to fence thinks,

Click for the original image.

Share

Artificial? Yes. Intelligent? Not So Much. 0

AP reporter David Bauder reports on a case of AI biting the hand that fed it. It’s certainly not the first and, chillingly, will not be last such story, as persons seem quite willing to confuse algorithms and speedy automated pattern recognition with thought.

I shall not demean it with excerpt or summary. Just go read it.

Share

The Disinformation Superduper Highway 0

At Psychology Today Blogs, The Open Minds Foundation takes a lot at the potential effects of AI-generated dis- and misinformation on the internet. They conclude that internet users need to exercise more critical thinking skills, even as they seem to be exercising less (or is it fewer?).

Here’s a tiny bit from their article; I urge you to read the rest.

Psychologists at the University of Cambridge recently developed the first, validated “misinformation susceptibility test” (MIST), which highlights the degree to which an individual is susceptible to fake news. Younger Americans (under 45) performed worse than older Americans (over 45) on the misinformation test, scoring 12 out of 20 correctly, compared to 15 out of 20 for older adults. This was in part correlated to the amount of time spent online consuming content, indicating the relevance of how you spend your recreational time.

The Europol report continues with a stark warning: “On a daily basis, people trust their own perception to guide them and tell them what is real and what is not… Auditory and visual recordings of an event are often treated as a truthful account of an event. But what if these media can be generated artificially, adapted to show events that never took place, to misrepresent events, or to distort the truth?”

Share

Christmas Future 0

Ebenezer Scrooge says to Bob Cratchit,

Click for the original image.

Share

Artificial? Yes. Intelligent? Not So Much. 0

Methinks Atrios shared something of substance.

Share

Bots 0

Frame One:  Woman says to man,

Click for the original image.

Share

Twits on Twitter X Offenders, Reprise 0

Frame One, captioned

Click for the original image.

Share
From Pine View Farm
Privacy Policy

This website does not track you.

It contains no private information. It does not drop persistent cookies, does not collect data other than incoming ip addresses and page views (the internet is a public place), and certainly does not collect and sell your information to others.

Some sites that I link to may try to track you, but that's between you and them, not you and me.

I do collect statistics, but I use a simple stand-alone Wordpress plugin, not third-party services such as Google Analitics over which I have no control.

Finally, this is website is a hobby. It's a hobby in which I am deeply invested, about which I care deeply, and which has enabled me to learn a lot about computers and computing, but it is still ultimately an avocation, not a vocation; it is certainly not a money-making enterprise (unless you click the "Donate" button--go ahead, you can be the first!).

I appreciate your visiting this site, and I desire not to violate your trust.