Geek Stuff category archive
The Crypto Con
Yes, indeedy-do, it seems that there’s an app for that.
Driven to Destruction
The other day, I had a very scary experience.
I was behind a Tesla with a little bumper sticker that said, “I’m probably on autopilot.”
I’m just glad I wasn’t in front of it.
It’s All about the Algorithm
At Psychology Today Blogs, Christine Louise Hohlbaum discusses David Donnelly’s documentary, The Cost of Convenience, which explores the extent to which corporate digital surveillance has been woven into our society and economy. Here’s an excerpt:
She ends the article with some suggestions as to how to fight back.
Me, I’m going to keep an eye out for the film.
Twits Own Twitter X Offenders
San Francisco judge dismisses Elon Musk’s empty suit.
Some of the judge’s comments, as quoted in the news story, delight the soul.
Much Ado about Not Much of Anything:
What Drives Drivel on the Disinformation Superhighway
I found the recent who-shot-john about Princess Kate to be–er, what’s the word I’m looking for?–stupid. Here’s a person who’s in the public eye only because of whom she’s married to, and the person she’s married to is in the public eye only because he’s descended from folks who used to be rich, influential, and powerful, persons who are now rich and not very influential (and, to the extent they are influential, they choose not to exercise that influence, for fear the hollowness thereof will be exposed). Yet persons spent a week or more speculating, questioning, and conspiracy theorizing on “social” media because she had not been seen in public for a couple of months.
At Psychology Today Blogs, Susan Albers takes a look at the dynamics that powered this spectacular waste of time and energy, concluding that
Methinks her article is worth a read, as it sheds some light on how and why falsehood, irrelevance, and just plain stupid jams up the disinformation superhighway.
Shotspitter
The EFF has long warned of the dangers of certain technologies with which law enforcement seems enamored, such as ShotSpotter and facial recognition. Here’s a bit from their latest article on the topic:
In a subsequent assessment of the event, the Chicago Civilian Office of Police Accountability (“COPA”) concluded that “a firearm was not used against the officers.” Chicago Police Superintendent Larry Snelling placed all attending officers on administrative duty for 30 days and is investigating whether the officers violated department policies.
Follow the link for context.
Artificial? Yes. Intelligent? Not So Much.
Also, not your friend, despite what they want you to think, as sociologist Joseph E. Davis notes at Psychology Today Blogs, where he points out that
Follow the link for the evidence.
If One Standard Is Good, Two Must Be Better, Disinformation Superhighway Dept.
The EFF’s David Greene highlights the hypocrisy. A snippet:
Agencies can’t even pass on information about websites state election officials have identified as disinformation, even if they don’t request that any action be taken, they assert.
Yet just this week the vast majority of those same lawmakers said the government’s interest in removing election interference misinformation from social media justifies banning a site used by 150 million Americans.
Details at the link.
Artificial? Yes. Intelligent? Not So Much.
Security maven Bruce Schneier thinks that the devolution of “social” media can help us understand the potential–and the potential dangers–of artificial “intelligence.” Here’s a bit from the beginning of his article:
The five items he discusses are:
- Advertising
- Surveillance
- Virality (as in “going viral,” not as in “strong”)
- Lock-in (of your data about you)
- Monopolization (or, alternatively, monetization)
Follow the link for his detailed exploration of each.
The Open Doorbell Fallacy
Consumer Reports has an appalling report on how insecure video “security” doorbells are.
Here’s how it starts; follow the link for the appalling part.
If the message came from a complete stranger, it would have been alarming. Instead, it was sent by Steve Blair, a CR privacy and security test engineer who had hacked into the doorbell from 2,923 miles away.
Blair had pulled similar images from connected doorbells at other CR employees’ homes and from a device in our Yonkers, N.Y., testing lab. While we expected him to gain access to these devices, it was still a bit shocking to see photos of the journalist’s deck and backyard. After all, video doorbells are supposed to help you keep an eye on strangers at the door, not let other people watch you.
H/T Bruce Schneier.
Artificial? Yes. Intelligent? Not So Much.
Under the pretext of a quibble over terminology, psychology professor Gregg Henriques takes a deep dive into why and how AI Chatbots and LLMs get so much so wrong so often. Here’s a tiny bit from his article (emphasis added):
Where do hallucinations like these come from? LLMs like ChatGPT are a type of artificial intelligence that run algorithms that decode content on massive data sets to make predictions about text to generate content. Although the results are often remarkable, it also is the case that LLMs do not really understand the material, at least not like a normal person understands things. This should not surprise us. After all, it is not a person, but a computer that is running a complicated statistical program.
The Myth of Multitasking
At Psychology Today Blogs, Joyce Marter debunks de bunk. A snippet:
Follow the link for context.
Artificial? Yes. Intelligent? Not So Much.
The Register reports on a New York law firm that tried to use ChatGPT to justify a ginormous billing.
Geeking Out
Mageia v. 9 with the Plasma desktop. Firefox is shaded near the top of the screen under the Plasma menu. Xclock is in the upper right, GKrellM in the lower right. The wallpaper is from my collection.
Recently, I ran an online upgrade from v. 8 to v. 9. The online upgrade from v. 7 to v. 8 had gone smooth as glass, and this one seemed to as well, but, when it was done, I was unable to run updates or install new software from the repos. I poked at the problem for a while but was unable to resolve it, so last night I installed v. 9 from optical media while listening to a BBC Lord Peter Wimsey mystery at the Old Time Radio Theater.
The installation went quickly and easily, and, as I have a separate /home partition, when I fired it up, all my configuration files were still in place without my having to restore anything from backup (and, no, you can’t do that on Windows). I’m currently cleaning up the dust bunnies, such as installing the few applications, like Xclock and GKrellM, that are not part of the standard Mageia installation.
Meta: Purged Plugin
Thanks to the most excellent detective work of my hosting provider’s tech support staff, it has been deemed necessary to remove the NOAA Weather plugin that sat on the sidebar over there —–> for years. The plugin has not been updated for several years and is no longer compatible with the most recent versions of PHP, the scripting language that powers WordPress, which in turn powers this geyser of genius. (Or is it a drenching of drivel? Inquiring minds want to know.)
As one who wore a headset for over half a decade, I must say that my hosting provider’s tech support staff is superb.
I know, because I’ve been there.
Artificial? Yes. Intelligent? Not So Much. Dangerous? Certainly.
At Psychology Today Blogs, Dr. Marlynn Wei takes a look at the psychological implications of the spread of deepfakes and “AI” clones and lists half a dozen dangers. Here’s one (emphasis in the original):
Research in deepfakes shows that people’s opinions can be swayed by interactions with a digital replica, even when they know it is not the real person. This can create “false” memories of someone. Negative false memories could harm the reputation of the portrayed person. Positive false memories can have complicated and unexpected interpersonal effects as well. Interacting with one’s own AI clone could also result in false memories.
Her article is a worthwhile read, so be on your guard.
Artificial? Yes. Intelligent? Not So Much.
Via Bruce Schneier, here’s a study that demonstrates that AI can be made more human-like.
That is, it can be “trained” to deceive.
As the song* says:
The things that you’re liable
To see in your large language model,
They ain’t necessarily so.
_________________
*With apologies to George Gershwin.