From Pine View Farm

Geek Stuff category archive

The Great TikTok Misdirection Play

Youngster in bed looking at TikTok on a smart phone.  Eye stares through the window saying,

Via Job’s Anger.

The Open Doorbell Fallacy

Consumer Reports has an appalling report on how insecure video “security” doorbells are.

Here’s how it starts; follow the link for the appalling part.

On a recent Thursday afternoon, a Consumer Reports journalist received an email containing a grainy image of herself waving at a doorbell camera she’d set up at her back door.

If the message came from a complete stranger, it would have been alarming. Instead, it was sent by Steve Blair, a CR privacy and security test engineer who had hacked into the doorbell from 2,923 miles away.

Blair had pulled similar images from connected doorbells at other CR employees’ homes and from a device in our Yonkers, N.Y., testing lab. While we expected him to gain access to these devices, it was still a bit shocking to see photos of the journalist’s deck and backyard. After all, video doorbells are supposed to help you keep an eye on strangers at the door, not let other people watch you.

H/T Bruce Schneier.

Artificial? Yes. Intelligent? Not So Much.

Under the pretext of a quibble over terminology, psychology professor Gregg Henriques takes a deep dive into why and how AI Chatbots and LLMs get so much so wrong so often. Here’s a tiny bit from his article (emphasis added):

For example, when my family was playing around with ChatGPT, we wanted to see if it “knew” who my father was. My dad, Dr. Peter R. Henriques, is a retired professor of history who has written several books on George Washington. ChatGPT responded correctly that my dad was a biographer of Washington; however, it also claimed, wrongly, that he wrote a biography on Henry Clay. This is an example of a hallucination.

Where do hallucinations like these come from? LLMs like ChatGPT are a type of artificial intelligence that run algorithms that decode content on massive data sets to make predictions about text to generate content. Although the results are often remarkable, it also is the case that LLMs do not really understand the material, at least not like a normal person understands things. This should not surprise us. After all, it is not a person, but a computer that is running a complicated statistical program.
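The “statistical program” the excerpt describes can be illustrated with a deliberately tiny toy (this is my own sketch, not how ChatGPT actually works — real LLMs use neural networks over billions of tokens, not word-pair counts): a model that picks the most frequent next word from a corpus will fluently emit the statistically dominant claim, and will also assign nonzero probability to the spurious one (“clay”) — a plausible-looking falsehood, with no understanding anywhere in sight.

```python
from collections import Counter, defaultdict

# A hypothetical mini-corpus: two true sentences and one spurious one.
corpus = (
    "henriques wrote a biography of washington . "
    "henriques wrote a biography of washington . "
    "henriques wrote a biography of clay ."
).split()

# Count how often each word follows each word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the most frequent successor of `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

# Generate a continuation: every step is pure frequency, not knowledge.
text = ["henriques"]
for _ in range(5):
    text.append(predict(text[-1]))
print(" ".join(text))  # henriques wrote a biography of washington
```

Note that `follows["of"]` still contains “clay”; a sampling-based generator (as real LLMs use) would sometimes emit it — which is all a hallucination is.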

The Myth of Multitasking

At Psychology Today Blogs, Joyce Marter debunks the bunk. A snippet:

While multitasking may seem like a productivity booster, it can also lead to decreased focus, poorer work quality, and increased stress levels. Multitasking has been proven to reduce productivity and job performance . . . .

Follow the link for context.

Artificial? Yes. Intelligent? Not So Much.

The Register reports on a New York law firm that tried to use ChatGPT to justify a ginormous billing.

The judge was not impressed.

Geeking Out

Mageia v. 9 with the Plasma desktop. Firefox is shaded near the top of the screen under the Plasma menu. Xclock is in the upper right, GKrellM in the lower right. The wallpaper is from my collection.

Screenshot

Recently, I ran an online upgrade from v. 8 to v. 9. The online upgrade from v. 7 to v. 8 went smooth as glass, and this one seemed to also, but, when it was done, I was unable to run updates or install new software from the repos. I poked at the problem for a while, but was unable to resolve it, so last night I installed v. 9 from optical media while listening to a BBC Lord Peter Wimsey mystery at the Old Time Radio Theater.

The installation went quickly and easily, and, as I have a separate /home partition, when I fired it up, all my configuration files were still in place without my having to restore anything from backup (and, no, you can’t do that on Windows). I’m currently cleaning up the dust bunnies, such as installing the few applications, like Xclock and GKrellM, that are not part of the standard Mageia installation.
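For the curious, the reason a separate /home partition survives a reinstall is that the installer reformats the root filesystem but merely remounts /home, so every dotfile stays put. A quick way to check whether a directory is its own partition is to compare device numbers (the function name here is mine, and this is just a sketch using the standard library):

```python
import os

def on_separate_filesystem(path, root="/"):
    """Return True if `path` lives on a different filesystem than `root`.

    os.stat().st_dev identifies the device a file resides on; a separate
    /home partition has a different device number from /.
    """
    return os.stat(path).st_dev != os.stat(root).st_dev

# Example: check whether /home is its own partition (fall back to /tmp
# if /home does not exist on this machine).
target = "/home" if os.path.exists("/home") else "/tmp"
print(f"{target} on a separate filesystem: {on_separate_filesystem(target)}")
```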

Meta: Purged Plugin

Thanks to the most excellent detective work of my hosting provider’s tech support staff, it has been deemed necessary to remove the NOAA Weather plugin that sat on the sidebar over there —–> for years; it has not been updated in several years and is no longer compatible with the most recent versions of PHP, the scripting language that powers WordPress, which in turn powers this geyser of genius. (Or is it a drenching of drivel? Inquiring minds want to know.)

As one who wore a headset for over half a decade, I must say that my hosting provider’s tech support staff is superb.

I know, because I’ve been there.

Artificial? Yes. Intelligent? Not So Much. Dangerous? Certainly.

At Psychology Today Blogs, Dr. Marlynn Wei takes a look at the psychological implications of the spread of deepfakes and “AI” clones and lists half a dozen dangers. Here’s one (emphasis in the original):

4. Creation of false memories

Research in deepfakes shows that people’s opinions can be swayed by interactions with a digital replica, even when they know it is not the real person. This can create “false” memories of someone. Negative false memories could harm the reputation of the portrayed person. Positive false memories can have complicated and unexpected interpersonal effects as well. Interacting with one’s own AI clone could also result in false memories.

Her article is a worthwhile read, and prends garde à toi.

Artificial? Yes. Intelligent? Not So Much.

Via Bruce Schneier, here’s a study that demonstrates that AI can be made more human-like.

That is, it can be “trained” to deceive.

As the song* says

    The things that you’re liable
    To see in your large language model,
    They ain’t necessarily so.

_________________

*With apologies to George Gershwin.

Deepfaked

You can read the news story here.

Extra-Special Bonus QOTD

Kerry Greenwood:

Email is a trap; it feels private, but it isn’t.

Greenwood, Kerry, Trick or Treat (Scottsdale, AZ: Poisoned Pen Press, 2010), p. 161

Suckered by the Algorithm

At Psychology Today Blogs, Bill Sullivan offers yet more evidence that “social” media isn’t, particularly for the young. Here’s a bit of his article:

Studies have shown that social media use, especially among younger people, increases materialism—a drive to accumulate and flaunt money and possessions. Such a mindset implies that external goods like Italian sports cars, Cartier watches, or gourmet dining are signs of a person’s status. Materialists gauge their own worth—and the worth of other people—by how much money and bling they’ve acquired, rather than judging internal goods like character. The materialistic mindset fosters attitudes of competition, envy, and greed.

Madonna’s claim that we are living in a material world is backed by convincing data.

Follow the link, then be sure to post it to your Zuckerborg or Muskrat page.

Facebook Frolics

Under-valuation frolics.

Devolution

Image of man evolving from apes, learning to walk upright, then once again walking hunched over, glued to a cell phone.

Via C&L.

Artificial? Yes. Intelligent? Not So Much.

Just because you see–er–hear it on a computer–er–device, it ain’t necessarily so.

The Second Circuit has referred attorney Jae S. Lee to a grievance panel for citing a fake case made up by ChatGPT in a complaint and never checking to see if the case spit out by the AI was real.

Details at the link.

Facebook Frolics

A Consumer Reports study details the extent to which you have been assimilated by the Zuckerborg and its enablers. Indeed, perhaps the most astounding bit in the report is the number of Zuckerborg enablers who “share” your data with Facebook.

Follow the link for the article.

Using a panel of 709 volunteers who shared archives of their Facebook data, Consumer Reports found that a total of 186,892 companies sent data about them to the social network. On average, each participant in the study had their data sent to Facebook by 2,230 companies. That number varied significantly, with some panelists’ data listing over 7,000 companies providing their data.

One more time, “social” media isn’t.

You don’t use “social” media. It uses you.

H/T Bruce Schneier for the heads up.

Artificial? Yes. Intelligent? Not So Much.

The New York Times reports on an internet user who used “AI” to compose a false and misleading obituary just to get clicks (and advertising revenue), spreading lies and drowning truth along the way.

Just go read it. The “intelligence” may be “artificial,” but the stupid is real.

The Bullies’ Pulpit

One more time, “social” media isn’t.

Artificial? Yes. Intelligent? Not So Much. (Updated)

I find it ironic that what used to be called “data scraping” somehow morphs into being “training” when the scraper is labeled “AI.”

Aside:

Can this be? Is “AI” the new blow-up doll?

Addendum:

Bruce Schneier offers a hint as to how to out “AI” bots on “social” media.

Frozen

It turns out that Teslas don’t seem to like really cold weather. Here’s a bit of the report from The Register:

In the Oak Brook suburb of Chicago, Illinois, where temperatures have routinely dipped way below freezing, local media reported public charging stations turning into “car graveyards” because motorists were unable to power their vehicles.

“Nothing. No juice. Still on zero percent, and this is like three hours being out here after being out here three hours yesterday,” Tesla owner Tyler Beard told Fox 32.

He wasn’t alone. Dozens of cars were reportedly lined up and abandoned at the Tesla supercharging station in Oak Brook along with multiple charging stations around Chicago.
