From Pine View Farm

Geek Stuff category archive

Artificial? Yes. Intelligent? Not So Much.

Fomenting folderol? Well, garbage in, garbage out, as Steven J. Vaughan-Nichols reports at El Reg. Here’s a bit of his article:

Welcome to Garbage In/Garbage Out (GIGO). Formally, in AI circles, this is known as AI model collapse. In an AI model collapse, AI systems, which are trained on their own outputs, gradually lose accuracy, diversity, and reliability. This occurs because errors compound across successive model generations, leading to distorted data distributions and “irreversible defects” in performance. The final result? A Nature 2024 paper stated, “The model becomes poisoned with its own projection of reality.”
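To make the quoted mechanism concrete, here's a toy Python sketch (my own illustration, not from the article or the Nature paper): each "generation" of a model is trained only on a finite sample of the previous generation's output, so any token that happens not to appear in that sample is lost for good — exactly the kind of irreversible defect the paper describes.

```python
import random

random.seed(0)

# Generation 0 "model": a uniform distribution over 100 distinct tokens.
vocab = list(range(100))

# Each generation is "trained" on a finite sample of its predecessor's
# output. A sample of 60 draws can contain at most 60 unique tokens, so
# diversity can only shrink, and anything dropped never comes back.
for generation in range(10):
    sample = [random.choice(vocab) for _ in range(60)]
    vocab = sorted(set(sample))

print(f"tokens surviving after 10 generations: {len(vocab)} of 100")
```

Real model collapse involves far more than vocabulary loss, of course, but the one-way ratchet is the same: errors and omissions compound because nothing outside the model's own output ever re-enters the training data.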

In related news, in this week’s Le Show, Harry Shearer reports on how big tech’s fascination with AI is leading to record levels of usage of electricity and of water to cool data centers. The relevant portion starts at about the 38-minute mark.

Also, too.

Artificial? Yes. Intelligent? Not So Much.

Occasionally correct? As my old boss used to say, even a blind pig finds an acorn sometimes.

Artificial? Yes. Intelligent? Not So Much.

Attracting acolytes? Apparently.

Or you can read the transcript.

Afterthought:

No matter how gussied up they may be in Sunday-go-to-meeting clothes, how decked out that may be with bells and whistles and seductive voices, computers are tools that do what their programmers tell them to do.

We forget that at our peril.

Artificial? Yes. Intelligent? Not So Much.

Intellectually stimulating? Let’s talk about the bridge I’ve got for sale in Brooklyn.

Artificial? Yes. Intelligent? Not So Much.

Competent legal advisor? Not according to this judge.

Artificial? Yes. Intelligent? Not So Much.

The new blow-up doll? No air pump needed, baby!

Artificial? Yes. Intelligent? Not So Much.

Functionally illiterate? Judge for yourself.

Artificial? Yes. Intelligent? Not So Much.

Hacked into telling lies? It’s just a computer program, folks. Of course it’s hackable.

Artificial? Yes. Intelligent? Not So Much.

Secure and trustworthy? What part of “move fast and break things” don’t you understand?

Artificial? Yes. Intelligent? Not So Much.

The stupid? Automated.

The Daryl? Missing in action.

The Disinformation Superhighway

At Psychology Today Blogs, Vanessa LoBue takes a look at how dis- and misinformation spread on the disinformation superhighway. She notes that researchers have determined the following:

First, they* found that fake news stories were 70 percent more likely to be retweeted than real news stories. Second, they found that it took real news stories 6 times as long to reach 1,500 people as fake ones (Vosoughi et al., 2018). This makes us all incredibly susceptible to believing fake news stories simply by the fact that we’re more likely to hear them, and we typically don’t hear them just once.

She goes on to offer some techniques to avoid falling for fabrications.

I commend her article as a timely read in this disinformation age.

___________________

*Citations at the link.

Artificial? Yes. Intelligent? Not So Much.

Intellectually enhancing? If you buy that, I got a bridge in Brooklyn that’s for sale.

The Crypto Gang

Title:  The Trump Gang--Eric, Don Jr., and Jared.  Image:  The Trump gang as wild west outlaws robbing a stage coach driven by Uncle Sam, who says to them,

Click to view the original image.

Also, speaking of the crypto con . . . .

The Ghost in the Matriculation

SFGate reports that scammers are conjuring “students” out of thin electrons. Here’s a bit:

Some of the disengaged students in Pugh’s courses are what administrators and cybersecurity experts say are “ghost students,” and they’ve been a growing problem for community colleges, particularly since the shift to online instruction during the pandemic. These “ghost students” are artificially intelligent agents or bots that pose as real students in order to steal millions of dollars of financial aid that could otherwise go to actual humans. And as colleges grapple with the problem, Pugh and her colleagues have been tasked with a new and “frustrating” task of weeding out these bots and trying to decide who’s a real person.

The Great Wallpaper of China

Screenshot

Mageia v. 9 with the Plasma Desktop. Xclock is in the upper right; GKrellM (my favorite system monitor), in the lower right. The wallpaper is from my collection.

Artificial? Yes. Intelligent? Not So Much.

Mendacious? Well, they’re just bots. They do what they’re told.

From El Reg:

Researchers at Carnegie Mellon University, the University of Michigan, and the Allen Institute for AI have looked at the trade-off AI models make between truthfulness and utility, using hypothetical scenarios where the two conflict.

What they found is that AI models will often lie in order to achieve the goals set for them.

Artificial? Yes. Intelligent? Not So Much.

The dialog: The daughter of a London bus driver?

The closed caption: The daughter of Alinda Masdrayva?

The stupid: Mind-boggling.

DOGE Bull in the China Shop

El Reg reports that DOGE and the Trump maladministration are gutting the United States’s cyber security efforts. A snippet:

When it comes to technology security, let’s face it. We’re lame and we’re lazy. But we don’t normally go out of our way to make it worse. Until now. Until President Donald Trump and his cohort of tech minions, better known as Elon Musk’s Department of Government Efficiency (DOGE), took over.

You might think, if you’re outside the US, who cares? Unfortunately, whether you like it or not, the US has long taken the lead in technical security.

Follow the link for the details, and, remember, persons who don’t know how stuff works should not be put in charge of working that stuff.

Artificial? Yes. Intelligent? Not So Much.

Capable of making stuff up out of thin air? You betcha!

Artificial? Yes. Intelligent? Not So Much.

Telling you what you want to hear? Just maybe.

At Psychology Today Blogs, John Nosta argues that AI bots are being engineered to tell us what their algorithms “think” we want to hear. A snippet:

So, when an LLM tells you you’re onto something—or echoes your language back with elegant variation—it activates the same cognitive loop as being understood by another person. But the model isn’t agreeing because it believes you’re right. It’s agreeing because it’s trained to. As I previously noted in my story on cognitive entrapment, “The machine doesn’t challenge you. It adapts to you.”

Privacy Policy

This website does not track you.

It contains no private information. It does not drop persistent cookies, does not collect data other than incoming IP addresses and page views (the internet is a public place), and certainly does not collect and sell your information to others.

Some sites that I link to may try to track you, but that's between you and them, not you and me.

I do collect statistics, but I use a simple stand-alone WordPress plugin, not third-party services such as Google Analytics over which I have no control.

Finally, this website is a hobby. It's a hobby in which I am deeply invested, about which I care deeply, and which has enabled me to learn a lot about computers and computing, but it is still ultimately an avocation, not a vocation; it is certainly not a money-making enterprise (unless you click the "Donate" button--go ahead, you can be the first!).

I appreciate your visiting this site, and I desire not to violate your trust.