Geek Stuff category archive
Suckered by the Algorithm 0
At Psychology Today Blogs, Bill Sullivan offers yet more evidence that “social” media isn’t, particularly for the young. Here’s a bit of his article:
Madonna’s claim that we are living in a material world is backed by convincing data.
Follow the link, then be sure to post it to your Zuckerborg or Muskrat page.
Devolution 0
Via C&L.
Artificial? Yes. Intelligent? Not So Much. 0
Just because you see–er–hear it on a computer–er–device, it ain’t necessarily so.
Details at the link.
Facebook Frolics 0
A Consumer Reports study details the extent to which you have been assimilated by the Zuckerborg and its enablers. Indeed, perhaps the most astounding bit in the report is the number of Zuckerborg enablers who “share” your data with Facebook.
Follow the link for the article.
One more time, “social” media isn’t.
You don’t use “social” media. It uses you.
H/T Bruce Schneier for the heads up.
Artificial? Yes. Intelligent? Not So Much. 0
The New York Times reports on an internet user who used “AI” to compose a false and misleading obituary just to get clicks (and advertising revenue), spreading lies and drowning truth along the way.
Just go read it. The “intelligence” may be “artificial,” but the stupid is real.
The Bullies’ Pulpit 0
One more time, “social” media isn’t.
Artificial? Yes. Intelligent? Not So Much. (Updated) 0
I find it ironic that what used to be called “data scraping” somehow morphs into being “training” when the scraper is labeled “AI.”
Aside:
Can this be? Is “AI” the new blow-up doll?
Addendum:
Bruce Schneier offers a hint as to how to out “AI” bots on “social” media.
Frozen 0
It turns out that Teslas don’t seem to like really cold weather. Here’s a bit of the report from The Register:
“Nothing. No juice. Still on zero percent, and this is like three hours being out here after being out here three hours yesterday,” Tesla owner Tyler Beard told Fox 32.
He wasn’t alone. Dozens of cars were reportedly lined up and abandoned at the Tesla supercharging station in Oak Brook along with multiple charging stations around Chicago.
Artificial? Yes. Intelligent? Not So Much. 0
It turns out that, when persons ask “AI” questions about case law, “AI” tends to just make stuff up (“hallucinate,” to use the term from the article at Above the Law).
The Surveillance State Society 0
The EFF reports on a victory for privacy. A snippet:
So it is welcome news that the Federal Trade Commission has brought a successful enforcement action against X-Mode Social (and its successor Outlogic).
The FTC’s complaint illustrates the dangers created by this industry. The company collects our location data through software development kits (SDKs) incorporated into third-party apps, through the company’s own apps, and through buying data from other brokers. The complaint alleged that the company then sells this raw location data, which can easily be correlated to specific individuals.
More at the link.
Aside:
I find it ironic that persons sweat bullets about limited and regulated “government surveillance” while willingly and heedlessly running nekkid before corporate collectors of confidentia–oh, never mind.
Deceptive by Design 0
At Psychology Today Blogs, Penn State professor Patrick L. Plaisance looks at the hazards of designing chatbots and similar “AI” mechanisms (after all, that’s what they are: mechanisms) to interact with users (i.e., people) as if said mechanisms were people. For example, he mentions programming them so that they appear to be typing or speaking a response at a human-like speed when, in actuality, they formed their complete response in nanoseconds.
He makes three main points; follow the link for a detailed discussion of each.
- Anthropomorphic design can be useful, but unethical when it leads us to think the tool is something it’s not.
- Chatbot design can exploit our “heuristic processing,” inviting us to wrongly assign moral responsibility.
- Dishonest human-like features compound the problems of chatbot misinformation and discrimination.
It’s All about the Algorithm 0
At Psychology Today Blogs, Mark Bertin reminds us that
Follow the link for some suggestions as to how to escape the seductive lure of the algorithm.
Artificial? Yes. Intelligent? Not So Much. 0
Noah Feldman, Bloomberg columnist and (I did not know that he is a) Harvard law professor, takes a look at the New York Times’s suit against Microsoft and OpenAI for copyright infringement. I can’t say that it’s an exciting read, but, given the who-shot-john and over-the-top hype about “AI,” I think it’s a worthwhile one.
Here’s a bit:
Most of these points are plausible legal arguments. But OpenAI and Microsoft will be prepared for them. They’ll likely respond by saying that their LLM doesn’t copy; rather, it learns and makes statistical predictions to produce new answers.
Artificial? Yes. Intelligent? Not So Much. 0
Michael Cohen hoists himself on the “AI” petard.
(snip)
Cohen wrote in a sworn declaration unsealed Friday that he has not kept up with “emerging trends (and related risks)” in legal technology and was not aware that Google Bard was a generative text service that, like Chat-GPT, could create citations and descriptions that “looked real but actually were not.” He instead believed the service to be a “supercharged search engine.”
Just because you see (or hear) it on a computer screen, it ain’t necessarily so.