Give Me a Break category archive
At Psychology Today Blogs, Penn State professor Patrick L. Plaisance looks at the hazards of designing chatbots and similar “AI” mechanisms (after all, that’s what they are: mechanisms) to interact with users (i.e., people) as if said mechanisms were people. For example, he mentions programming them so that they appear to be typing or speaking a response at a human-like speed when, in actuality, they formed their complete response in nanoseconds.
He makes three main points; follow the link for a detailed discussion of each.
- Anthropomorphic design can be useful, but unethical when it leads us to think the tool is something it’s not.
- Chatbot design can exploit our “heuristic processing,” inviting us to wrongly assign moral responsibility.
- Dishonest human-like features compound the problems of chatbot misinformation and discrimination.
We were watching a recent episode of Family Feud on which, during the introductions, one of the contestants described her occupation as beautician and “Instagram influencer.”
Oh, Deere. Something posted on Facebook was of–er–questionable accuracy.
One more time, “social” media isn’t.
At the Inky, Harold Brubaker takes a look at hospital fees for various services, figures recently made public under a new federal regulation strongly opposed by hospitals and insurers. He concludes that they make no sense when exposed to the light. A snippet; follow the link for more.
Those are the prices consumers with high-deductible plans would have to pay to scan their knee and find out how serious the source of their pain is.
And replacing that knee would cost from $12,300 to more than $44,000 under insurance plans that IBC sells to employers and individuals.
The notion, often promoted by persons who call themselves “conservative,” that someone who is sick will comparison-shop for health care has always been fanciful. The reality is that, if there is a choice, a patient will go where his or her doctor says, and, in rural areas, there is often little or no choice from the get-go. Add in a landscape of wildly variable and irrational pricing schemes, and comparison shopping for health care becomes less an impossible dream than an all-too-possible nightmare.
After drawing a distinction between misinformation and disinformation, Aditi Subramaniam offers some reasons as to why we are susceptible to misinformation (think the clickbait headlines that Snopes is so fond of debunking) and some techniques for dealing with it.
She starts by telling a story of her own trip down the rabbit hole of a clickbait headline (follow the link below to see what she discovered about said headline and its tenuous connection to facts, as well as for some hints to help avoid falling down your own rabbit holes). Here’s a bit of her article.
The headline also illustrates the importance of wording in communication. In linguistics, the term “implicature” describes what a sentence is used to mean, or what it implies, rather than what it says literally. Scheming politicians, marketing professionals, lawyers, and even con men of various kinds use implicature and “weasel wording” to say something while meaning something else – allowing them to shirk responsibility for their words.
Writing at AL.com, Frances Coleman is taken aback by the proliferation of self-appointed experts, which she thinks can be attributed in large part to “social” media. Follow the link for some examples of said expertise (under the circumstances, though, I shall proffer “expertism” as a more appropriate term).
Many of these self-appointed “experts,” of course, meet the classic definition of the term, in which
- “x” is the mathematical symbol for an unknown quantity,
- “spurt” is a drip under pressure, so, therefore,
- “expert” is an unknown drip under pressure.
(Grammatical error corrected.)
If you have been using the Zoom app to work or school from home, or even just to talk with friends, you should know that El Reg reports that it’s even less secure than previously reported. Here’s a snippet from the latest (emphasis added):
Zoom in its documentation, and in an in-app display message, has claimed its conferencing service is “end-to-end encrypted,” meaning that an intermediary, including Zoom itself, cannot intercept and decrypt users’ communications as they move between the sender and receiver.
When reports emerged that Zoom Meetings are not actually end-to-end encrypted, Zoom responded that it wasn’t using the commonly accepted definition of the term.
“While we never intended to deceive any of our customers, we recognize that there is a discrepancy between the commonly accepted definition of end-to-end encryption and how we were using it,” the company said in a blog post.
If you have been Zooming, you owe it to yourself to read the rest. Then pick up a landline.
Zoom’s mealy-mouthing is positively staggering.