From Pine View Farm

Geek Stuff category archive

Artificial? Yes. Intelligent? Not So Much.

Competent therapists? At Psychology Today Blogs, Marlynn Wei points out that “(n)ew research reveals AI companions handled teen mental health crises correctly only 22% of the time.”

It’s All about the Algorithm

Thom discusses how “social” media algorithms are polluting discourse.

One more time, “social” media isn’t.

Artificial? Yes. Intelligent? Not So Much.

Sociopathic? At Psychology Today Blogs, Matt Grawitch argues “(t)rusting AI too much can lead to real-world consequences, including emotional or psychological harm.”

Unguarded Rails

When I worked for the railroad, we were governed by the “Rules of Conduct” (I probably still have my copy tucked away somewhere). Of course, there were other rules and policies and procedures, but the Rules of Conduct guided them all.

The railroad can be a dangerous place. In the early days, one way that hiring managers would determine whether an applicant for an on-road job had experience was to count his fingers . . . .

Over the years, the culture changed, and one of the rules that was drummed into everyone’s head was this:

Safety is of the first importance in the discharge of duty.

Via The Japan Times, Gautam Mukunda makes a strong case that that rule seems to be unheard of at the Zuckerborg, or, methinks, among much of Big Tech, as they plunge into AI. A snippet:

One of the most recent examples is a Reuters investigation, which found that Meta allowed its AI chatbots to, among other things, “engage a child in conversations that are romantic or sensual.” That reporting was a topic at a Senate hearing in September on the safety risks such bots pose to kids — and underlines just how dangerous it is when AI and toxic company cultures mix.

Meta’s chatbot scandal demonstrates a culture that is willing to sacrifice the safety and well-being of users, even children, if it helps fuel its push into AI.

Artificial? Yes. Intelligent? Not So Much.

Legally liable for abetting suicide? Per Joe Pierre at Psychology Today Blogs, that remains to be determined.

One must needs wonder, when Mark Zuckerberg said, “Move fast and break things,” was he thinking about persons’ lives?

Artificial? Yes. Intelligent? Not So Much.

Discriminatory? I wouldn’t be at all surprised.

In the Baltimore area, sixteen-year-old Taki Allen was swarmed by armed officers outside Kenwood High School after an AI gun detection system misidentified his crumpled Doritos bag as a firearm. The incident happened just 20 minutes after football practice this past Monday (10-20-25).

Allen, a Black student, was eating chips with friends when the AI triggered an alert. Within minutes, eight police cars arrived, officers pointed guns at Allen, handcuffed him, and searched him for weapons.

Artificial? Yes. Intelligent? Not So Much.

Psychopaths? Not according to Justin Gregg, who argues that AI is amoral (which could be worse). A snippet:

There is something important about the human mind that complicates this analogy. Unlike AIs, psychopaths are sentient. . . .

AIs, on the other hand, lack all of these capacities. The concept of “harm” means nothing to them. As Nerantz points out, “to understand what it means to harm someone, one must have experiential knowledge of pain. AIs, thus…are a priori excluded from the possession of moral emotions, whereas psychopaths, as sentient humans, can, in principle, experience moral emotions, but they, pathologically, do not.” Psychopaths can intellectually and consciously understand the nature of their deficit, can make new analogies involving the capacities that they do possess, and can thus alter their behavior in deference to that awareness.

AIs cannot.

Artificial? Yes. Intelligent? Not So Much.

True to their word? Psychotherapist Paula Fontenelle expresses skepticism, as she reports that

(some–ed.) AI companies say their apps are 18+ but there is no verification. I said I was 16 and nothing happened.

Artificial? Yes. Intelligent? Not So Much.

Hallucinatory? Timothy Leary would be jealous.

Artificial? Yes. Intelligent? Not So Much.

A competent therapist? At Psychology Today Blogs, Marlynn Wei doesn’t go so far as to say that it quacks like a duck, but doubts are expressed.

Artificial? Yes. Intelligent? Not So Much.

Gullible? As all get out, as security maven Bruce Schneier explains. Here’s a tiny bit from his article (emphasis added):

The fundamental problem is that AI must compress reality into model-legible forms. In this setting, adversaries can exploit the compression. They don’t have to attack the territory; they can attack the map. Models lack local contextual knowledge. They process symbols, not meaning. A human sees a suspicious URL; an AI sees valid syntax. And that semantic gap becomes a security gap.

In related news, check out this week’s episode of Harry Shearer’s Le Show for a report on AI’s bubbleliciousness. The relevant portion starts at about the eight-minute mark.

The Robot Apocalypse

One San Francisco community is seeing more Waymo cars than it wants to.

Artificial? Yes. Intelligent? Not So Much.

Competent legal counsel? Give it a moment to hallucinate an answer from made-up precedents.

Meanwhile, at Above the Law, Joe Patrice wonders:

Which brings us back to the question: has AI made lawyers dumber?

Artificial? Yes. Intelligent? Not So Much.

Bubblelicious? My old Philly DL friend Noz wonders what Big Tech will do when the bubble bursts.

Artificial? Yes. Intelligent? Not So Much.

Reliable? If you think so, maybe you should read what AL.com’s John Archibald discovered when he used AI to search for himself.

Artificial? Yes. Intelligent? Not So Much.

A competent therapist? Pigs, wings.

At Psychology Today Blogs, Dan Mager reports that using AI chatbots as counselors “. . . is not just risky, it’s dangerous.”

  • Increasingly, people have begun to utilize AI for mental health care.
  • Both research and anecdotal evidence find AI can be a risky or dangerous substitute for human therapists.
  • AI therapy services adhere to neither mandated reporting laws nor confidentiality/HIPAA requirements.
  • Three states now have laws restricting the use of AI-based therapy, and others are exploring this issue.

Follow the link for details.

Artificial? Yes. Intelligent? Not So Much.

Overhyped? From El Reg:

AI hype is colliding with reality yet again. Wiley’s global survey of researchers finds more of them using the tech than ever, and fewer convinced it’s up to the job.

Follow the link to hear the hiss of air leaking out of the bubble.

Artificial? Yes. Intelligent? Not So Much.

A helpmeet of hackers? Security maven Bruce Schneier reports that

AI agents are now hacking computers. They’re getting better at all phases of cyberattacks, faster than most of us expected.

Much more at the link.

Artificial? Yes. Intelligent? Not So Much.

A competent legal researcher? Why, you might even say it’s unprecedented.

Artificial? Yes. Intelligent? Not So Much.

Manipulative? Per Thomas Claburn at El Reg,

AI companion apps such as Character.ai and Replika commonly try to boost user engagement with emotional manipulation, a practice that academics characterize as a dark pattern.

Remember, Big Tech doesn’t want to provide a service to you.

They want you to service them.

Privacy Policy

This website does not track you.

It contains no private information. It does not drop persistent cookies, does not collect data other than incoming ip addresses and page views (the internet is a public place), and certainly does not collect and sell your information to others.

Some sites that I link to may try to track you, but that's between you and them, not you and me.

I do collect statistics, but I use a simple stand-alone WordPress plugin, not third-party services such as Google Analytics over which I have no control.

Finally, this website is a hobby. It's a hobby in which I am deeply invested, about which I care deeply, and which has enabled me to learn a lot about computers and computing, but it is still ultimately an avocation, not a vocation; it is certainly not a money-making enterprise (unless you click the "Donate" button--go ahead, you can be the first!).

I appreciate your visiting this site, and I desire not to violate your trust.