Geek Stuff category archive
The Myth of Multitasking
At Psychology Today Blogs, Joyce Marter debunks the bunk. A snippet:
Follow the link for context.
Artificial? Yes. Intelligent? Not So Much.
The Register reports on a New York law firm that tried to use ChatGPT to justify a ginormous billing.
Geeking Out
Mageia v. 9 with the Plasma desktop. Firefox is shaded near the top of the screen under the Plasma menu. Xclock is in the upper right, GKrellM in the lower right. The wallpaper is from my collection.
Recently, I ran an online upgrade from v. 8 to v. 9. The online upgrade from v. 7 to v. 8 went smooth as glass, and this one seemed to as well, but, when it was done, I was unable to run updates or install new software from the repos. I poked at the problem for a while but was unable to resolve it, so last night I installed v. 9 from optical media while listening to a BBC Lord Peter Wimsey mystery at the Old Time Radio Theater.
The installation went quickly and easily, and, as I have a separate /home partition, when I fired it up, all my configuration files were still in place without my having to restore anything from backup (and, no, you can’t do that on Windows). I’m currently cleaning up the dust bunnies, such as, for example, installing the few applications, like Xclock and GKrellM, which are not part of the standard Mageia installation.
Meta: Purged Plugin
Thanks to the most excellent detective work of my hosting provider’s tech support staff, it has been deemed necessary to remove the NOAA Weather plugin that sat on the sidebar over there ---> for years. The plugin has not been updated for several years and is no longer compatible with the most recent versions of PHP, the scripting language that powers WordPress, which in turn powers this geyser of genius. (Or is it a drenching of drivel? Inquiring minds want to know.)
As one who wore a headset for over half a decade, I must say that my hosting provider’s tech support staff is superb.
I know, because I’ve been there.
Artificial? Yes. Intelligent? Not So Much. Dangerous? Certainly.
At Psychology Today Blogs, Dr. Marlynn Wei takes a look at the psychological implications of the spread of deepfakes and “AI” clones and lists half a dozen dangers. Here’s one (emphasis in the original):
Research in deepfakes shows that people’s opinions can be swayed by interactions with a digital replica, even when they know it is not the real person. This can create “false” memories of someone. Negative false memories could harm the reputation of the portrayed person. Positive false memories can have complicated and unexpected interpersonal effects as well. Interacting with one’s own AI clone could also result in false memories.
Her article is a worthwhile read; be on your guard.
Artificial? Yes. Intelligent? Not So Much.
Via Bruce Schneier, here’s a study that demonstrates that AI can be made more human-like.
That is, it can be “trained” to deceive.
As the song* says:
The things that you’re liable
To see in your large language model,
They ain’t necessarily so.
_________________
*With apologies to George and Ira Gershwin.
Suckered by the Algorithm
At Psychology Today Blogs, Bill Sullivan offers yet more evidence that “social” media isn’t, particularly for the young. Here’s a bit of his article:
Madonna’s claim that we are living in a material world is backed by convincing data.
Follow the link, then be sure to post it to your Zuckerborg or Muskrat page.
Devolution
Via C&L.
Artificial? Yes. Intelligent? Not So Much.
Just because you see–er–hear it on a computer–er–device, it ain’t necessarily so.
Details at the link.
Facebook Frolics
A Consumer Reports study details the extent to which you have been assimilated by the Zuckerborg and its enablers. Indeed, perhaps the most astounding bit in the report is the number of Zuckerborg enablers who “share” your data with Facebook.
Follow the link for the article.
One more time, “social” media isn’t.
You don’t use “social” media. It uses you.
H/T Bruce Schneier for the heads up.
Artificial? Yes. Intelligent? Not So Much.
The New York Times reports on an internet user who used “AI” to compose a false and misleading obituary just to get clicks (and advertising revenue), spreading lies and drowning truth along the way.
Just go read it. The “intelligence” may be “artificial,” but the stupid is real.
The Bullies’ Pulpit
One more time, “social” media isn’t.
Artificial? Yes. Intelligent? Not So Much. (Updated)
I find it ironic that what used to be called “data scraping” somehow morphs into being “training” when the scraper is labeled “AI.”
Aside:
Can this be? Is “AI” the new blow-up doll?
Addendum:
Bruce Schneier offers a hint as to how to out “AI” bots on “social” media.
Frozen
It turns out that Teslas don’t seem to like really cold weather. Here’s a bit of the report from The Register:
“Nothing. No juice. Still on zero percent, and this is like three hours being out here after being out here three hours yesterday,” Tesla owner Tyler Beard told Fox 32.
He wasn’t alone. Dozens of cars were reportedly lined up and abandoned at the Tesla supercharging station in Oak Brook along with multiple charging stations around Chicago.
Artificial? Yes. Intelligent? Not So Much.
It turns out that, when persons ask “AI” questions about case law, “AI” tends to just make stuff up (“hallucinate,” to use the term from the article at Above the Law).
The Surveillance State Society
The EFF reports on a victory for privacy. A snippet:
So it is welcome news that the Federal Trade Commission has brought a successful enforcement action against X-Mode Social (and its successor Outlogic).
The FTC’s complaint illustrates the dangers created by this industry. The company collects our location data through software development kits (SDKs) incorporated into third-party apps, through the company’s own apps, and through buying data from other brokers. The complaint alleged that the company then sells this raw location data, which can easily be correlated to specific individuals.
More at the link.
Aside:
I find it ironic that persons sweat bullets about limited and regulated “government surveillance” while willingly and heedlessly running nekkid before corporate collectors of confidentia–oh, never mind.
Deceptive by Design
At Psychology Today Blogs, Penn State professor Patrick L. Plaisance looks at the hazards of designing chatbots and similar “AI” mechanisms (after all, that’s what they are: mechanisms) to interact with users (i.e., people) as if said mechanisms were people. For example, he mentions programming them so that they appear to be typing or speaking a response at a human-like speed when, in actuality, they formed their complete response in nanoseconds.
He makes three main points; follow the link for a detailed discussion of each.
- Anthropomorphic design can be useful, but unethical when it leads us to think the tool is something it’s not.
- Chatbot design can exploit our “heuristic processing,” inviting us to wrongly assign moral responsibility.
- Dishonest human-like features compound the problems of chatbot misinformation and discrimination.