Geek Stuff category archive
Shotspitter 0
The EFF has long warned of the dangers of certain technologies with which law enforcement seems enamored, such as ShotSpotter and facial recognition. Here’s a bit from their latest article on the topic:
In a subsequent assessment of the event, the Chicago Civilian Office of Police Accountability (“COPA”) concluded that “a firearm was not used against the officers.” Chicago Police Superintendent Larry Snelling placed all attending officers on administrative duty for 30 days and is investigating whether the officers violated department policies.
Follow the link for context.
Artificial? Yes. Intelligent? Not So Much. 0
Also, not your friend, despite what they want you to think, as sociologist Joseph E. Davis points out at Psychology Today Blogs.
Follow the link for the evidence.
If One Standard Is Good, Two Must Be Better, Disinformation Superhighway Dept. 0
The EFF’s David Greene highlights the hypocrisy. A snippet:
Agencies can’t even pass on information about websites state election officials have identified as disinformation, even if they don’t request that any action be taken, they assert.
Yet just this week the vast majority of those same lawmakers said the government’s interest in removing election interference misinformation from social media justifies banning a site used by 150 million Americans.
Details at the link.
Artificial? Yes. Intelligent? Not So Much. 0
Security maven Bruce Schneier thinks that the devolution of “social” media can help us understand the potential (and the potential dangers) of artificial “intelligence.” Here’s a bit from the beginning of his article:
The five items he discusses are:
- Advertising
- Surveillance
- Virality (as in “going viral,” not as in “strong”)
- Lock-in (of your data about you)
- Monopolization (or, alternatively, monetization)
Follow the link for his detailed exploration of each.
The Open Doorbell Fallacy 0
Consumer Reports has an appalling report on how insecure video “security” doorbells are.
Here’s how it starts; follow the link for the appalling part.
If the message came from a complete stranger, it would have been alarming. Instead, it was sent by Steve Blair, a CR privacy and security test engineer who had hacked into the doorbell from 2,923 miles away.
Blair had pulled similar images from connected doorbells at other CR employees’ homes and from a device in our Yonkers, N.Y., testing lab. While we expected him to gain access to these devices, it was still a bit shocking to see photos of the journalist’s deck and backyard. After all, video doorbells are supposed to help you keep an eye on strangers at the door, not let other people watch you.
H/T Bruce Schneier.
Artificial? Yes. Intelligent? Not So Much. 0
Under the pretext of a quibble over terminology, psychology professor Gregg Henriques takes a deep dive into why and how AI Chatbots and LLMs get so much so wrong so often. Here’s a tiny bit from his article (emphasis added):
Where do hallucinations like these come from? LLMs like ChatGPT are a type of artificial intelligence that run algorithms that decode content on massive data sets to make predictions about text to generate content. Although the results are often remarkable, it also is the case that LLMs do not really understand the material, at least not like a normal person understands things. This should not surprise us. After all, it is not a person, but a computer that is running a complicated statistical program.
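The “complicated statistical program” Henriques describes boils down to predicting likely next words from frequency patterns. A toy sketch (vastly simpler than a real LLM, with a made-up ten-word corpus) shows how plausible-sounding output can emerge from pure counting, with no understanding anywhere in the loop:

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then emit the statistically likeliest successor. There is no meaning
# here -- only frequency counting, which is the point.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    # Return the most frequent word seen after `word` in the corpus.
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat" -- it followed "the" twice; "mat" and "fish" once each
```

A real LLM predicts over billions of parameters instead of a ten-word table, but the failure mode is the same in kind: when the statistics point somewhere false, the model confidently goes there.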
The Myth of Multitasking 0
At Psychology Today Blogs, Joyce Marter debunks de bunk. A snippet:
Follow the link for context.
Artificial? Yes. Intelligent? Not So Much. 0
The Register reports on a New York law firm that tried to use ChatGPT to justify a ginormous bill.
Geeking Out 0
Mageia v. 9 with the Plasma desktop. Firefox is shaded near the top of the screen under the Plasma menu. Xclock is in the upper right, GKrellM in the lower right. The wallpaper is from my collection.
Recently, I ran an online upgrade from v. 8 to v. 9. The online upgrade from v. 7 to v. 8 went smooth as glass, and this one seemed to also, but, when it was done, I was unable to run updates or install new software from the repos. I poked at the problem for a while, but was unable to resolve it, so last night I installed v. 9 from optical media while listening to a BBC Lord Peter Wimsey mystery at the Old Time Radio Theater.
The installation went quickly and easily, and, as I have a separate /home partition, when I fired it up, all my configuration files were still in place without my having to restore anything from backup (and, no, you can’t do that on Windows). I’m currently cleaning up the dust bunnies, such as, for example, installing the few applications, like Xclock and GKrellM, which are not part of the standard Mageia installation.
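For anyone curious why the separate /home partition made the reinstall painless: /home is mounted independently of the root filesystem, so reformatting / during installation never touches it. A sketch of what the relevant /etc/fstab entries might look like (the UUIDs below are placeholders, not real values):

```
# / gets reformatted during a reinstall; /home is simply remounted,
# so dotfiles and user data survive untouched. UUIDs are hypothetical.
UUID=1111-aaaa-placeholder  /      ext4  defaults  1 1
UUID=2222-bbbb-placeholder  /home  ext4  defaults  1 2
```

The only catch is telling the installer to mount the existing /home partition *without* formatting it.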
Meta: Purged Plugin 0
Thanks to the most excellent detective work of my hosting provider’s tech support staff, it has been deemed necessary to remove the NOAA Weather plugin that sat on the sidebar over there --> for years. The plugin has not been updated in several years and is no longer compatible with the most recent versions of PHP, the scripting language that powers WordPress, which in turn powers this geyser of genius. (Or is it a drenching of drivel? Inquiring minds want to know.)
As one who wore a headset for over half a decade, I must say that my hosting provider’s tech support staff is superb.
I know, because I’ve been there.
Artificial? Yes. Intelligent? Not So Much. Dangerous? Certainly. 0
At Psychology Today Blogs, Dr. Marlynn Wei takes a look at the psychological implications of the spread of deepfakes and “AI” clones and lists half a dozen dangers. Here’s one (emphasis in the original):
Research in deepfakes shows that people’s opinions can be swayed by interactions with a digital replica, even when they know it is not the real person. This can create “false” memories of someone. Negative false memories could harm the reputation of the portrayed person. Positive false memories can have complicated and unexpected interpersonal effects as well. Interacting with one’s own AI clone could also result in false memories.
Her article is a worthwhile read, and prends garde à toi.
Artificial? Yes. Intelligent? Not So Much. 0
Via Bruce Schneier, here’s a study that demonstrates that AI can be made more human-like.
That is, it can be “trained” to deceive.
As the song* says:

The things that you’re liable
To see in your large language model,
They ain’t necessarily so.
_________________
*With apologies to George Gershwin.
Suckered by the Algorithm 0
At Psychology Today Blogs, Bill Sullivan offers yet more evidence that “social” media isn’t, particularly for the young. Here’s a bit of his article:
Madonna’s claim that we are living in a material world is backed by convincing data.
Follow the link, then be sure to post it to your Zuckerborg or Muskrat page.
Artificial? Yes. Intelligent? Not So Much. 0
Just because you see–er–hear it on a computer–er–device, it ain’t necessarily so.
Details at the link.
Facebook Frolics 0
A Consumer Reports study details the extent to which you have been assimilated by the Zuckerborg and its enablers. Indeed, perhaps the most astounding bit in the report is the number of Zuckerborg enablers who “share” your data with Facebook.
Follow the link for the article.
One more time, “social” media isn’t.
You don’t use “social” media. It uses you.
H/T Bruce Schneier for the heads up.