Geek Stuff category archive
Artificial? Yes. Intelligent? Not So Much.
Competent therapists? At Psychology Today Blogs, Marlynn Wei points out that “(n)ew research reveals AI companions handled teen mental health crises correctly only 22% of the time.”
Artificial? Yes. Intelligent? Not So Much.
Sociopathic? At Psychology Today Blogs, Matt Grawitch argues “(t)rusting AI too much can lead to real-world consequences, including emotional or psychological harm.”
Unguarded Rails
When I worked for the railroad, we were governed by the “Rules of Conduct” (I probably still have my copy tucked away somewhere). Of course, there were other rules and policies and procedures, but the Rules of Conduct guided them all.
The railroad can be a dangerous place. In the early days, one way that hiring managers would determine whether an applicant for an on-road job had experience was to count his fingers . . . .
Over the years, the culture changed, and one of the rules that was drummed into everyone’s head was this:
Safety is of the first importance in the discharge of duty.
Via The Japan Times, Gautam Mukunda makes a strong case that this rule seems to be unheard of at the Zuckerborg, or, methinks, among much of Big Tech, as they plunge into AI. A snippet:
Meta’s chatbot scandal demonstrates a culture that is willing to sacrifice the safety and well-being of users, even children, if it helps fuel its push into AI.
Artificial? Yes. Intelligent? Not So Much.
Legally liable for abetting suicide? Per Joe Pierre at Psychology Today Blogs, that remains to be determined.
One must needs wonder: when Mark Zuckerberg said, “Move fast and break things,” was he thinking about persons’ lives?
Artificial? Yes. Intelligent? Not So Much.
Discriminatory? I wouldn’t be at all surprised.
Allen, a Black student, was eating chips with friends when the AI triggered an alert. Within minutes, eight police cars arrived, officers pointed guns at Allen, handcuffed him, and searched him for weapons.
Artificial? Yes. Intelligent? Not So Much.
Psychopaths? Not according to Justin Gregg, who argues that AI is amoral (which could be worse). A snippet:
AIs, on the other hand, lack all of these capacities. The concept of “harm” means nothing to them. As Nerantz points out, “to understand what it means to harm someone, one must have experiential knowledge of pain. AIs, thus…are a priori excluded from the possession of moral emotions, whereas psychopaths, as sentient humans, can, in principle, experience moral emotions, but they, pathologically, do not.” Psychopaths can intellectually and consciously understand the nature of their deficit, can make new analogies involving the capacities that they do possess, and can thus alter their behavior in deference to that awareness.
AIs cannot.
Artificial? Yes. Intelligent? Not So Much.
True to their word? Psychotherapist Paula Fontenelle expresses skepticism in her report; follow the link for details.
Artificial? Yes. Intelligent? Not So Much.
A competent therapist? At Psychology Today Blogs, Marlynn Wei doesn’t go so far as to say that it quacks like a duck, but she has her doubts.
Artificial? Yes. Intelligent? Not So Much.
Gullible? As all get out, as security maven Bruce Schneier explains; much more at the link.
In related news, check out this week’s episode of Harry Shearer’s Le Show for a report on AI’s bubbleliciousness. The relevant portion starts at about the eight-minute mark.
Artificial? Yes. Intelligent? Not So Much.
Competent legal counsel? Give it a moment to hallucinate an answer from made-up precedents.
Meanwhile, at Above the Law, Joe Patrice wonders:
Which brings us back to the question: has AI made lawyers dumber?
Artificial? Yes. Intelligent? Not So Much.
Bubblelicious? My old Philly DL friend Noz wonders what Big Tech will do when the bubble bursts.
Artificial? Yes. Intelligent? Not So Much.
Reliable? If you think so, maybe you should read what AL.com’s John Archibald discovered when he used AI to search for himself.
Artificial? Yes. Intelligent? Not So Much.
A competent therapist? Pigs, wings.
At Psychology Today Blogs, Dan Mager reports that using AI chatbots as counselors “. . . is not just risky, it’s dangerous.”
- Increasingly, people have begun to utilize AI for mental health care.
- Both research and anecdotal evidence find AI can be a risky or dangerous substitute for human therapists.
- AI therapy services adhere to neither mandated reporting laws nor confidentiality/HIPAA requirements.
- Three states now have laws restricting the use of AI-based therapy, and others are exploring this issue.
Follow the link for details.
Artificial? Yes. Intelligent? Not So Much.
Overhyped? El Reg has the story.
Follow the link to hear the hiss of air leaking out of the bubble.
Artificial? Yes. Intelligent? Not So Much.
A helpmeet of hackers? Security maven Bruce Schneier reports just that.
Much more at the link.
Artificial? Yes. Intelligent? Not So Much.
A competent legal researcher? Why, you might even say it’s unprecedented.
Artificial? Yes. Intelligent? Not So Much.
Manipulative? So reports Thomas Claburn at El Reg.
Remember, Big Tech doesn’t want to provide a service to you.
They want you to service them.