From Pine View Farm

Geek Stuff category archive

Artificial? Yes. Intelligent? Not So Much. 0

Always sure of itself? Is a hot air balloon high on its own supply?

At Psychology Today Blogs, Mona S. Weissmark cautions, “Don’t be fooled by bloviating bots.” She notes:

People often trust AI because it is authoritative, articulate, and seemingly objective. But confident-sounding information can still be completely wrong. The result is an illusion of credibility.

Follow the link for some tips as to how not be taken in by over confident-sounding erroneous bots.

Share

Artificial? Yes. Intelligent? Not So Much. 0

Hackable? With the right prompt, you can make it stomp.

Share

Artificial? Yes. Intelligent? Not So Much. 0

Bubblelicious? At El Reg, Steven J. Vaughan-Nichols argues that the AI bubble of hysterical hype is about to burst. A snippet:

You see, now that people have been using AI for everything and anything, they’re beginning to realize that its results, while fast and sometimes useful, tend to be mediocre.

Follow the link for context.

Share

Artificial? Yes. Intelligent? Not So Much. 0

A competent source of medical advice? At Psychology Today Blogs, David Weitzner suggests that you’d best duck the quack.

Share

Facebook Frolics 0

The Zuckerborg’s AI is involuntarily assimilating celebrities without their permission.

Words fail me.

Share

Artificial? Yes. Intelligent? Not So Much. 0

An illusion created by an algorithm? At Psychology Today Blogs, John Nosta notes that “AI’s fluency creates the illusion of thought, but no cognition lies behind it.”

He goes on to point out

At its core (dare I say heart), AI is a machine of probability. Word by word, it predicts what is most likely to come next. This continuation is dressed up as conversation, but it isn’t cognition. It is a statistical trick . . . .

His whole piece is worth a read.

Share

Artificial? Yes. Intelligent? Not So Much. 0

Able to take responsibility for its actions? Per Joe Patrice at Above the Law, we may soon find out.

Share

The Off-Line Scam 0

Writing at Psychology Today Blogs of a scam from two centuries ago, Matthew Facciani sees similarities with today’s online scams.

Share

Artificial? Yes. Intelligent? Not So Much. 0

Secure? That bridge in Brooklyn is still on the market.

Security maven Bruce Schneier points out (emphasis added):

Any AI that is working in an adversarial environment—and by this I mean that it may encounter untrusted training data or input—is vulnerable to prompt injection. It’s an existential problem that, near as I can tell, most people developing these technologies are just pretending isn’t there.

Details at the link.

Share

Artificial? Yes. Intelligent? Not So Much. 0

As a matter of fact, let’s make that bleeping stupid.

Share

Artificial? Yes. Intelligent? Not So Much? 0

Hazardous? Per Timothy Cook (no relation to Tim Apple), “New research shows how prolonged AI interactions distort some users’ sense of reality.”

Share

Artificial? Yes. Intelligent? Not So Much. 0

Fostering fantasy? Per Joe Pierre, “When psychosis-proneness meets AI sycophancy, delusional thinking can result.”

Share

The Crypto Con 0

The guilty plea.

Share

Geeking Out 0

Mageia v. 9 with the Plasma desktop. The wallpaper is from my collection.

Screenshot

Share

Artificial? Yes. Intelligent? Not So Much. 0

Impartial and objective? Per Cornelia C. Walther, “, , , a new study has exposed an unsettling paradox at the core of our assumptions. Involving analysis of nine different LLMs and nearly a half-million prompts, the research shows that these supposedly impartial systems change their fundamental ethical decisions based on a single demographic detail.”

Details at the link.

Share

Devolution, Reprise 0

At Psychology Today Blogs, Matthew Facciani writes about an AI TikTok account that fooled millions into thinking it was real life human being and suggests some steps we can take–not as individuals, but as polity–to protect against such fakery.

He makes three main points:

  1. The viral “MAGA Megan” TikTok showed clear AI traits yet still fooled large audiences.
  2. AI fakes spread by aligning with identity, leveraging networks, and gaming algorithms.
  3. Combating AI misinformation requires media literacy, awareness or our biases, and platform action.

Methinks this a valuable and timely read, especially as Big Tech seems determined to stuff AI down our throats, as illustrated by yesterday’s post about the Zuckerborg.

Share

Devolution 0

Nurse:  Darrn Stevens is helping us with social media buzz about our mobile blood drive.  Dana:  Darrn Stevens?  Really.  That juvenile delinquent that Joe is mentoring?  Nurse:  He's a teenage influence, Dana.  That kid has thousands of followers!  Dana (thinking to herself): Finally, the rapid decline of our society, explained.

Click for the original image.

Share

Facebook Frolics 0

At SFGate, Stephen Council reports on the Zuckerborg’s turn to AI in its quest for assimilation. Council is not sanguine.

Here’s a tiny bit from his piece.

But it’s important first to understand Zuckerberg’s approach. He mused on a podcast in April that most people have far fewer friends than they want, so we’ll probably move past the “stigma” around having AI friends and find them “valuable,” especially as they become more humanlike. “You’ll be able to basically have like an always-on video chat” with an AI, he said.

His point that people need more friends gels with recent research into the ill-health effects of isolation. But Zuckerberg’s idea of patching over loneliness with algorithmic avatars is an ugly vision of the world: a purposeful unraveling of the social fabric that gives us community, culture, accountability and love. We need to refuse this vision. The solution to not having enough friends is — needs to be — making more friends. More care and responsibility for our neighbors, not bubbles of solitude.

Share

Artificial? Yes. Intelligent? Not So Much. 0

Woke/ Not if the Republican thought police get their way.

Share

It’s All about the Algorithm, Reprise 0

At Psychology Today Blogs, Daniel Marston suggests that something much simpler than the “content” offered by the algorithm keeps us glued to our screens. It’s the mere fact that the “content” keeps changing. He cites a study that seems to bear this out:

In a study by Ando and colleagues (2025), researchers put a tablet in each marmoset’s cage with nine small, silent videos of other primates. When a marmoset tapped one of the videos, that video zoomed in and chattering sounds played. That was all it took. Within a few weeks, most of the marmosets were tapping regularly. Even when the reward was taken away, some of them kept tapping anyway.

Now, if you can tear yourself away from watching online videos of persons cleaning their houses, go read the rest of his article . . . .

Share
From Pine View Farm
Privacy Policy

This website does not track you.

It contains no private information. It does not drop persistent cookies, does not collect data other than incoming ip addresses and page views (the internet is a public place), and certainly does not collect and sell your information to others.

Some sites that I link to may try to track you, but that's between you and them, not you and me.

I do collect statistics, but I use a simple stand-alone Wordpress plugin, not third-party services such as Google Analitics over which I have no control.

Finally, this is website is a hobby. It's a hobby in which I am deeply invested, about which I care deeply, and which has enabled me to learn a lot about computers and computing, but it is still ultimately an avocation, not a vocation; it is certainly not a money-making enterprise (unless you click the "Donate" button--go ahead, you can be the first!).

I appreciate your visiting this site, and I desire not to violate your trust.