From Pine View Farm

Geek Stuff category archive

The Crypto Con

Yes, indeedy-do, it seems that there’s an app for that.

The Disinformation Superhighway

Shining sun labeled

Click for the original image.

Driven to Destruction

Couple in car headed down the highway to Hell as Satan looks on.  Driver says,

Click for the original image.

The other day, I had a very scary experience.

I was behind a Tesla with a little bumper sticker that said, “I’m probably on autopilot.”

I’m just glad I wasn’t in front of it.

Artificial? Yes. Intelligent? Not So Much.

AI CTO goes all “I don’t know” when asked what data was used to “train” her company’s software. Emma and the crew discuss the potential licensing and copyright issues. (Warning: Mild language.)

It’s All about the Algorithm

At Psychology Today Blogs, Christine Louise Hohlbaum discusses David Donnelly’s documentary, The Cost of Convenience, which explores the extent to which corporate digital surveillance has been woven into our society and economy. Here’s an excerpt:

The truth is that every move we make is being recorded, online and off. Through our excessive smartphone usage, so-called digital versions of ourselves are being tracked. Algorithms learn our preferences and keep feeding us more of the same in an endless feedback loop. It has created a deep rift in society, polarizing us to the point of paralysis. In essence, this film is an exploration of how far down the rabbit hole we are. It is about maximizing profits, not optimizing people’s lives.
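The feedback loop the film describes is easy to demonstrate in miniature. Here’s a toy simulation (my own sketch, not from the film or the article): a recommender that weights each topic by past views ends up funneling a user toward whichever topic it happened to reinforce early on.

```python
# Toy sketch (not from the film): a recommender that only reinforces
# past views quickly narrows what a user ever sees.
import random

random.seed(0)

TOPICS = ["politics", "sports", "cooking", "science", "music"]

def recommend(weights):
    """Pick a topic in proportion to accumulated interest weights."""
    return random.choices(TOPICS, weights=[weights[t] for t in TOPICS])[0]

def simulate(rounds=500):
    weights = {t: 1.0 for t in TOPICS}  # start with no preference
    for _ in range(rounds):
        topic = recommend(weights)
        weights[topic] += 1.0  # every view makes that topic more likely
    return weights

final = simulate()
top = max(final, key=final.get)
share = final[top] / sum(final.values())
print(f"after 500 rounds, '{top}' makes up {share:.0%} of the feed")
```

This is the classic rich-get-richer dynamic: nothing in the code “wants” to polarize anyone, yet the loop alone concentrates the feed.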

She ends the article with some suggestions as to how to fight back.

Me, I’m going to keep an eye out for the film.

Twits Own Twitter X Offenders

San Francisco judge dismisses Elon Musk’s empty suit.

Some of the judge’s comments, as quoted in the news story, delight the soul.

Much Ado about Not Much of Anything:
What Drives Drivel on the Disinformation Superhighway

I found the recent who-shot-john about Princess Kate to be–er, what’s the word I’m looking for?–stupid. Here’s a person who’s in the public eye only because of whom she’s married to, and the man she’s married to is in the public eye only because he’s descended from folks who used to be rich, influential, and powerful, persons who are now rich and not very influential (and, to the extent they are influential, they choose not to exercise that influence, for fear the hollowness thereof will be exposed). Yet persons spent a week or more speculating, questioning, and conspiracy-theorizing on “social” media because she had not been seen in public for a couple of months.

At Psychology Today Blogs, Susan Albers takes a look at the dynamics that powered this spectacular waste of time and energy, concluding that

Tuckman’s theory of group dynamics may help us understand the social media discussion about Princess Kate.

Methinks her article is worth a read, as it sheds some light on how and why falsehood, irrelevance, and just plain stupid jams up the disinformation superhighway.

Shotspitter

The EFF has long warned of the dangers of certain technologies with which law enforcement seems enamored, such as ShotSpotter and facial recognition. Here’s a bit from their latest article on the topic:

On January 25, while responding to a ShotSpotter alert, a Chicago police officer opened fire on an unarmed “maybe 14 or 15” year old child in his backyard. Three officers approached the boy’s house, with one asking “What you doing bro, you good?” They heard a loud bang, later determined to be fireworks, and shot at the child. Fortunately, no physical injuries were recorded. In initial reports, police falsely claimed that they fired at a “man” who had fired on officers.

In a subsequent assessment of the event, the Chicago Civilian Office of Police Accountability (“COPA”) concluded that “a firearm was not used against the officers.” Chicago Police Superintendent Larry Snelling placed all attending officers on administrative duty for 30 days and is investigating whether the officers violated department policies.

Follow the link for context.

Artificial? Yes. Intelligent? Not So Much.

Also, not your friend, despite what they want you to think, as sociologist Joseph E. Davis explains at Psychology Today Blogs, where he points out that

Machines are not our friends, and they don’t care for us.

Follow the link for the evidence.

If One Standard Is Good, Two Must Be Better, Disinformation Superhighway Dept.

The EFF’s David Greene highlights the hypocrisy. A snippet:

In a case being heard Monday (March 18–ed.) at the Supreme Court, 45 Washington lawmakers have argued that government communications with social media sites about possible election interference misinformation are illegal.

Agencies can’t even pass on information about websites state election officials have identified as disinformation, even if they don’t request that any action be taken, they assert.

Yet just this week the vast majority of those same lawmakers said the government’s interest in removing election interference misinformation from social media justifies banning a site used by 150 million Americans.

Details at the link.

Artificial? Yes. Intelligent? Not So Much.

Security maven Bruce Schneier thinks that the devolution of “social” media can help us understand the potential–and the potential dangers–of artificial “intelligence.” Here’s a bit from the beginning of his article:

In particular, five fundamental attributes of social media have harmed society. AI also has those attributes. Note that they are not intrinsically evil. They are all double-edged swords, with the potential to do either good or ill. The danger comes from who wields the sword, and in what direction it is swung. This has been true for social media, and it will similarly hold true for AI. In both cases, the solution lies in limits on the technology’s use.

The five items he discusses are:

  • Advertising
  • Surveillance
  • Virality (as in “going viral,” not as in “strong”)
  • Lock-in (of your data about you)
  • Monopolization (or, alternatively, monetization)

Follow the link for his detailed exploration of each.

The Great TikTok Misdirection Play

Youngster in bed looking at TikTok on a smart phone.  Eye stares through the window saying,

Via Job’s Anger.

The Open Doorbell Fallacy

Consumer Reports has an appalling report on how insecure video “security” doorbells are.

Here’s how it starts; follow the link for the appalling part.

On a recent Thursday afternoon, a Consumer Reports journalist received an email containing a grainy image of herself waving at a doorbell camera she’d set up at her back door.

If the message came from a complete stranger, it would have been alarming. Instead, it was sent by Steve Blair, a CR privacy and security test engineer who had hacked into the doorbell from 2,923 miles away.

Blair had pulled similar images from connected doorbells at other CR employees’ homes and from a device in our Yonkers, N.Y., testing lab. While we expected him to gain access to these devices, it was still a bit shocking to see photos of the journalist’s deck and backyard. After all, video doorbells are supposed to help you keep an eye on strangers at the door, not let other people watch you.

H/T Bruce Schneier.

Artificial? Yes. Intelligent? Not So Much.

Under the pretext of a quibble over terminology, psychology professor Gregg Henriques takes a deep dive into why and how AI Chatbots and LLMs get so much so wrong so often. Here’s a tiny bit from his article (emphasis added):

For example, when my family was playing around with ChatGPT, we wanted to see if it “knew” who my father was. My dad, Dr. Peter R. Henriques, is a retired professor of history who has written several books on George Washington. ChatGPT responded correctly that my dad was a biographer of Washington; however, it also claimed, wrongly, that he wrote a biography on Henry Clay. This is an example of a hallucination.

Where do hallucinations like these come from? LLMs like ChatGPT are a type of artificial intelligence that run algorithms that decode content on massive data sets to make predictions about text to generate content. Although the results are often remarkable, it also is the case that LLMs do not really understand the material, at least not like a normal person understands things. This should not surprise us. After all, it is not a person, but a computer that is running a complicated statistical program.
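That “complicated statistical program” point can be seen in miniature. Here’s a toy word-level bigram model (my own sketch, vastly simpler than a real LLM, with made-up authors “smith” and “jones”): it continues text by picking the statistically likeliest next word, and in doing so confidently mis-attributes a book, the same shape of error as the Henry Clay hallucination.

```python
# Toy sketch of purely statistical text prediction (nothing like a real
# LLM's scale, but the same basic idea): count which word follows which,
# then generate by always taking the most frequent successor.
from collections import Counter, defaultdict

# A two-sentence "training corpus" with made-up authors.
corpus = (
    "smith wrote a biography of washington . "
    "jones wrote a biography of clay ."
).split()

following = defaultdict(Counter)  # word -> counts of the words after it
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def continue_text(word, steps=5):
    """Greedily extend `word` with the most common next word."""
    out = [word]
    for _ in range(steps):
        candidates = following[out[-1]].most_common(1)
        if not candidates:
            break
        out.append(candidates[0][0])
    return " ".join(out)

# "of" is followed by "washington" and "clay" equally often; the model
# just picks one, so it asserts that jones wrote about washington --
# fluent, statistically plausible, and wrong.
print(continue_text("jones"))
```

The model never stored the fact “jones wrote about clay”; it only stored word-adjacency counts, so a plausible-sounding falsehood comes out as readily as a truth.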

The Myth of Multitasking

At Psychology Today Blogs, Joyce Marter debunks de bunk. A snippet:

While multitasking may seem like a productivity booster, it can also lead to decreased focus, poorer work quality, and increased stress levels. Multitasking has been proven to reduce productivity and job performance . . . .

Follow the link for context.

Artificial? Yes. Intelligent? Not So Much.

The Register reports on a New York law firm that tried to use ChatGPT to justify a ginormous billing.

The judge was not impressed.

Geeking Out

Mageia v. 9 with the Plasma desktop. Firefox is shaded near the top of the screen under the Plasma menu. Xclock is in the upper right, GKrellM in the lower right. The wallpaper is from my collection.

Screenshot

Click for a larger image.

Recently, I ran an online upgrade from v. 8 to v. 9. The online upgrade from v. 7 to v. 8 went smooth as glass, and this one seemed to also, but, when it was done, I was unable to run updates or install new software from the repos. I poked at the problem for a while, but was unable to resolve it, so last night I installed v. 9 from optical media while listening to a BBC Lord Peter Wimsey mystery at the Old Time Radio Theater.

The installation went quickly and easily, and, as I have a separate /home partition, when I fired it up, all my configuration files were still in place without my having to restore anything from backup (and, no, you can’t do that on Windows). I’m currently cleaning up the dust bunnies, such as, for example, installing the few applications, like Xclock and GKrellM, which are not part of the standard Mageia installation.
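For the curious, an online Mageia release upgrade runs through the urpmi tooling, roughly like this (a sketch from memory, assuming an x86_64 box and Mageia 9 as the target; consult the official upgrade notes before trying it, since, as noted above, the online route doesn’t always end well):

```shell
# Swap the old release's repositories for the Mageia 9 set...
urpmi.removemedia -a
urpmi.addmedia --distrib --mirrorlist \
    'http://mirrors.mageia.org/api/mageia.9.x86_64.list'

# ...then let urpmi upgrade every installed package to the new release.
urpmi --auto-update --auto --replacefiles
```

When the online route fails, the fallback is exactly what’s described above: boot the new release’s installer from optical media and let the separate /home partition carry your settings across.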

Meta: Purged Plugin

Thanks to the most excellent detective work of my hosting provider’s tech support staff, it has been deemed necessary to remove the NOAA Weather plugin that sat on the sidebar over there —–> for years, as the plugin has not been updated in several years and is no longer compatible with the most recent versions of PHP, the scripting language that powers WordPress, which in turn powers this geyser of genius. (Or is it a drenching of drivel? Inquiring minds want to know.)

As one who wore a headset for over half a decade, I must say that my hosting provider’s tech support staff is superb.

I know, because I’ve been there.

Artificial? Yes. Intelligent? Not So Much. Dangerous? Certainly.

At Psychology Today Blogs, Dr. Marlynn Wei takes a look at the psychological implications of the spread of deepfakes and “AI” clones and lists half a dozen dangers. Here’s one (emphasis in the original):

4. Creation of false memories

Research in deepfakes shows that people’s opinions can be swayed by interactions with a digital replica, even when they know it is not the real person. This can create “false” memories of someone. Negative false memories could harm the reputation of the portrayed person. Positive false memories can have complicated and unexpected interpersonal effects as well. Interacting with one’s own AI clone could also result in false memories.

Her article is a worthwhile read, and prends garde à toi.

Artificial? Yes. Intelligent? Not So Much.

Via Bruce Schneier, here’s a study that demonstrates that AI can be made more human-like.

That is, it can be “trained” to deceive.

As the song* says

    The things that you’re liable
    To see in your large language model,
    They ain’t necessarily so.

_________________

*With apologies to George Gershwin.
