Geek Stuff category archive
Artificial? Yes. Intelligent? Not So Much.
Sarah Silverman, among others, is suing the makers of AI bots for copyright infringement. She has the unmitigated gall to think that Tech Bros shouldn’t just vacuum up the work of others so as to line their own pockets.
A snippet:
“AI needs to be fair and ethical for everyone,” Matthew Butterick, one of the suit’s lawyers, said in a statement. “But Meta is leveraging the work of thousands of authors with no consent, no credit, and no compensation.”
Artificial? Yes. Intelligent? Not So Much.
Bruce Schneier points out that ChatGPT and similar “AI” bots work by mining data produced by others, then spitting it back out.
He proposes that those whose data is mined deserve to be reimbursed for their contributions. Follow the link for his reasoning.
Legends in Their Own Minds
Methinks Atrios is onto something.
Geeking Out
Mageia v. 8 with the Fluxbox window manager. The wallpaper is from my collection.
If I had to pick one thing that keeps me using Fluxbox, it’s the right-click menu.
Anywhere the mouse pointer is on the screen, as long as it’s not on an application window, a right-click brings up the menu.
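For the curious, that menu is just a plain text file, normally ~/.fluxbox/menu. Here’s a minimal sketch of what one can look like; the entries and commands below are illustrative assumptions, not my actual setup:

```
# ~/.fluxbox/menu -- defines the root-window right-click menu
# [exec] (Label) {command} launches a program; [submenu] nests menus
[begin] (Fluxbox)
    [exec] (Terminal) {xterm}
    [exec] (Browser) {firefox}
    [submenu] (Graphics)
        [exec] (GIMP) {gimp}
    [end]
    [workspaces] (Workspaces)
    [config] (Configure)
    [restart] (Restart)
    [exit] (Exit)
[end]
```

Edit the file to taste, then pick the Restart entry to reload it.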
Artificial? Yes. Intelligent? Not So Much.
At Psychology Today Blogs, Matt Grawitch argues that one of the effects of the growing use of artificial “intelligence” has been paradoxically to highlight the importance of human expertise and research. A snippet:
(snip)
Fast-forward to today and the increasing availability of AI-driven tools for research and decision-making. While many of these tools are very confined in terms of the scope of their capabilities, the introduction of broader AI, like Bard and ChatGPT, makes it possible for people to by-pass the process of researching a topic and building an argument and head straight to the conclusion or decision. The dangers of this, though, have been on full display recently, such as when a professor incorrectly flunked all his students for cheating (because ChatGPT told him they had) or the lawyer who used ChatGPT for legal research only to find that the cases he had cited didn’t exist (he was subsequently sanctioned).
Droning On
The marvels of modern technology . . . .
When Merola went outside to investigate, cops say, she spotted a drone hovering near the window. As she approached the drone, it began to fly away, but struck a tree branch and fell to the ground. Merola then grabbed the drone (pictured below) and dunked it in her pool, disabling the quadcopter’s electronics.
According to the report, the pilot has been–er–grounded.
The Bullies’ Pulpit
At Psychology Today Blogs, Mark Travers discusses a study about why some persons turn into cyberbullies. The findings were not what you might expect. An excerpt:
“Recreation pertains to impulsive antisocial acts, whereas reward relates to more calculated and premeditated acts that may evolve over time,” said Soares. “Young individuals who partake in antisocial behavior online may be driven by a desire for excitement and the pursuit of positive emotions or social status among their peers.”
Behind That (AI) Curtain
At Psychology Today Blogs, Gleb Tsipursky offers some pointers for detecting AI-driven deception. He makes three main points:
- AI-generated misinformation blurs truth, making it hard to discern fact from fiction.
- People can unmask AI content by scrutinizing it for inconsistencies and a lack of human touch.
- AI content detection tools can spot and neutralize misinformation, protecting against its spread.
Follow the link for a detailed discussion of each one.
Aside:
I will add one bit of advice:
Don’t believe stuff just because you see it on a computer screen.
Artificial? Yes. Intelligent? Not So Much.
Bruce Schneier asks whether “we really want to entrust this revolutionary technology (AI–ed.) solely to a small group of US tech companies?”
He goes on to remind us why that question matters. Follow the link for the rest of his thoughts.
Twits Own Twitter
Elon Musk apparently has the courage of his evictions.
Artificial? Yes. Intelligent? Not So Much.
At Above the Law, Ethan Beberness reports that OpenAI and ChatGPT just might be getting their day in court–as defendants.
No Place To Hide
At the Washington Monthly, Karina Montoya has a long and detailed article about how advertising strategy is changing once again. In the past decade, advertising moved to “social” media, with the disturbing side effect of eroding the business models of legitimate news organizations. Now, she argues, retailers are selling to advertisers the personal information they gather through loyalty programs, credit card purchases, and the like. The entire piece is worth a read, but this particular bit caught my eye:
Corporations, not the government, are your “surveillance state.” And we walk nekkid through its streets every day.
It’s All about the Algorithm
Said algorithm engages those eyeballs and sucks them right down into a vortex of vile.
“Social” media isn’t.
Artificial? Yes. Intelligent? Not So Much.
In Georgia, Mark Walters, who is apparently a local radio personality of some sort, has sued OpenAI for libel based on falsehoods propagated by ChatGPT, and Techdirt wonders whether the suit has a prayer. Here’s a bit from their article; follow the link for context.
And I’m not sure there are really good answers. First off, only one person actually saw this information, and there’s no indication that he actually believed any of it (indeed, it sounds like he was aware that it was hallucinating), which would push towards it not being defamation and even if it was, there was no harm at all.
Second, even if you could argue that the content was defamatory and created harm, is there actual malice by OpenAI? First off, Walters is easily a public figure, so he’d need to show actual malice by OpenAI . . . .
Aside:
Whether Walters wins or loses, I doubt he’ll be the last to want a day in court with ChatGPT.
Full-Face and Profiled
The EFF reports on a court’s finding about farcical recognition.
Follow the link for details.