From Pine View Farm

Geek Stuff category archive

The Surveillance State Society

The EFF reports on a victory for privacy. A snippet:

Phone app location data brokers are a growing menace to our privacy and safety. All you did was click a box while downloading an app. Now the app tracks your every move and sends it to a broker, which then sells your location data to the highest bidder, from advertisers to police.

So it is welcome news that the Federal Trade Commission has brought a successful enforcement action against X-Mode Social (and its successor Outlogic).

The FTC’s complaint illustrates the dangers created by this industry. The company collects our location data through software development kits (SDKs) incorporated into third-party apps, through the company’s own apps, and through buying data from other brokers. The complaint alleged that the company then sells this raw location data, which can easily be correlated to specific individuals.

More at the link.
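The complaint’s point that raw location data “can easily be correlated to specific individuals” is easy to demonstrate. Here’s a toy sketch (the coordinates and the rounding heuristic are illustrative, not any broker’s actual method): a person’s most frequent night-time ping location is usually their home.

```python
from collections import Counter

def likely_home(pings):
    """Guess 'home' as the most common rounded location seen
    during night hours (22:00-06:00), given (lat, lon, hour)
    pings. Toy illustration only; real brokers hold far richer
    data and far better heuristics."""
    night = [
        (round(lat, 3), round(lon, 3))
        for lat, lon, hour in pings
        if hour >= 22 or hour < 6
    ]
    if not night:
        return None
    return Counter(night).most_common(1)[0][0]

pings = [
    (39.952, -75.164, 23),  # night pings cluster at one address
    (39.952, -75.164, 2),
    (39.980, -75.200, 14),  # a daytime ping elsewhere (work?)
    (39.952, -75.164, 5),
]
print(likely_home(pings))  # -> (39.952, -75.164)
```

A handful of anonymous pings, and “anonymous” evaporates.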

Aside:

I find it ironic that persons sweat bullets about limited and regulated “government surveillance” while willingly and heedlessly running nekkid before corporate collectors of confidentia–oh, never mind.

Deceptive by Design

At Psychology Today Blogs, Penn State professor Patrick L. Plaisance looks at the hazards of designing chatbots and similar “AI” mechanisms (after all, that’s what they are: mechanisms) to interact with users (i.e., people) as if said mechanisms were people. For example, he mentions programming them so that they appear to be typing or speaking a response at a human-like speed when, in actuality, they formed their complete response in nanoseconds.
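That fake-typing effect takes only a few lines to fake. A hypothetical sketch (not any vendor’s actual code): the reply is fully formed before the first character appears; the delay exists purely to look human.

```python
import sys
import time

def fake_typing(reply, delay=0.03):
    """Print an already-complete reply one character at a time,
    so it *looks* like the machine is composing it live."""
    for ch in reply:
        sys.stdout.write(ch)
        sys.stdout.flush()
        time.sleep(delay)  # the artificial, human-like pause
    sys.stdout.write("\n")

# The whole response exists *before* display begins.
fake_typing("Hmm, let me think about that...", delay=0.05)
```

The machinery is trivial; the design decision to use it is the ethical question.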

He makes three main points; follow the link for a detailed discussion of each.

  • Anthropomorphic design can be useful, but unethical when it leads us to think the tool is something it’s not.
  • Chatbot design can exploit our “heuristic processing,” inviting us to wrongly assign moral responsibility.
  • Dishonest human-like features compound the problems of chatbot misinformation and discrimination.

Artificial? Yes. Intelligent? Not So Much.

“AI” for a Chevy chaser.

Training Day

Giant one-eyed monster labeled

Click for the original image.

The Crypto Con

Title:  Post-Holidays blues for a crypto-broker.  Image:  Man sitting in front of fireplace staring at an empty Christmas stocking.  Woman says,

Click for the original image.

It’s All about the Algorithm

At Psychology Today Blogs, Mark Bertin reminds us that

Our devices’ software is engineered around a concept called persuasive design. Companies channel countless research dollars into maximizing profit gained by influencing where we spend our time online. Tech companies foundationally, intentionally, and continually collect our information while honing methods that can hold and disrupt our attention.

Follow the link for some suggestions as to how to escape the seductive lure of the algorithm.

“An Exercise of Market Power,” This New Gilded Age Dept.

Sam and the crew talk with David Dayen about a recent antitrust jury trial in which Google was found liable.

Artificial? Yes. Intelligent? Not So Much.

Noah Feldman, Bloomberg columnist and (I did not know that he is a) Harvard law professor, takes a look at the New York Times’s suit against Microsoft and OpenAI for copyright infringement. I can’t say that it’s an exciting read, but, given the who-shot-john and over-the-top hype about “AI,” I think it’s a worthwhile one.

Here’s a bit:

Once you know the law, you can guess roughly how the legal arguments in the case are going to go. The New York Times will point to examples where a user asks a question of ChatGPT or Bing and it replies with something substantially like a New York Times article. The newspaper will observe that ChatGPT is part of a business and charges fees for access to its latest versions, and that Bing is a core part of Microsoft’s business. The New York Times will emphasize the creative aspects of journalism. Above all, it will argue that if you can ask an LLM-powered search engine for the day’s news, and get content drawn directly from The New York Times, that will substantially harm and maybe even kill The New York Times’ business model.

Most of these points are plausible legal arguments. But OpenAI and Microsoft will be prepared for them. They’ll likely respond by saying that their LLM doesn’t copy; rather, it learns and makes statistical predictions to produce new answers.

The Disinformation Superhighway

Title:  Future Veterans of the Information Wars.  Frame One:  Grizzled older man says,

Click to view the original image.

Artificial? Yes. Intelligent? Not So Much.

Michael Cohen hoists himself on the “AI” petard.

Michael Cohen, former President Trump’s ex-fixer and personal lawyer, said in newly unsealed court filings that he accidentally gave his lawyer fake legal citations concocted by the artificial intelligence program Google Bard.

(snip)

Cohen wrote in a sworn declaration unsealed Friday that he has not kept up with “emerging trends (and related risks)” in legal technology and was not aware that Google Bard was a generative text service that, like Chat-GPT, could create citations and descriptions that “looked real but actually were not.” He instead believed the service to be a “supercharged search engine.”

Just because you see (or hear) it on a computer screen, it ain’t necessarily so.

Geeking Out

Christmas train.

Screenshot

Mageia v.9 with the Plasma Desktop.

Artificial? Yes. Intelligent? Not So Much.

Eric Smalley, science and technology editor of The Conversation, debunks four myths about “AI.” Here’s a bit of one debunking (first emphasis in the original, second added); follow the link for the rest.

1. They’re bodiless know-nothings

Large language model-based chatbots seem to know a lot. You can ask them questions and they more often than not answer correctly. Despite the occasional comically incorrect answer, the chatbots can interact with you in a similar manner as people – who share your experiences of being a living, breathing human being – do.

But these chatbots are sophisticated statistical machines that are extremely good at predicting the best sequence of words to respond with. Their “knowledge” of the world is actually human knowledge as reflected through the massive amount of human-generated text the chatbots’ underlying models are trained on.
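Smalley’s “statistical machines” point can be made concrete with a drastically scaled-down stand-in: a bigram model that predicts the next word purely from counts. Real LLMs are enormously more sophisticated, but the predict-the-likely-next-token principle is the same.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word most often follows each word --
    a (vastly simplified) stand-in for what LLMs do at scale."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def predict_next(follows, word):
    """Return the statistically most likely next word, if any."""
    options = follows.get(word.lower())
    return options.most_common(1)[0][0] if options else None

corpus = "the cat sat on the mat and the cat slept near the mat the cat purred"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # -> 'cat'
```

No understanding anywhere in sight; just counting — which is the point.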

“AI” Is the New Spellcheck

Last night, I saw a commercial for “Shutterstock AI” which, when stripped of the hockypuck, rebranded computer-assisted image editing as “AI.”

(As an aside, everything they showed in the ad is stuff I can do in the GIMP, because I bought, read, and practiced the techniques in the book. It would just take me a little longer.)

If that’s the standard, spellcheck is “AI” and “AI” is as old as spellcheck.

“Artificial Intelligence” is assuredly artificial and it is certainly fast and dressed in Sunday go-to-meeting clothes, but fast and well-dressed does not equal intelligent.

Don’t fall for the con. Be skeptical of the hype.

Furrfu.

Afterthought:

It occurs to me that I may be maligning spellcheck. According to news reports, “AI” gets stuff wrong a lot more often than spellcheck.

The Surveillance Society

Voice comes from television singing,

Click for the original image.

The Electric (Car) Bugaloo

Nikola Tesla must be rolling over in his grave with embarrassment to have his name associated with this outfit.

Artificial? Yes. Intelligent? Not So Much.

In the course of a longer article debunking a rumor that AI bots are being “trained” on Dropbox documents, security expert Bruce Schneier observes (emphasis added):

It seems not to be true. Dropbox isn’t sharing all of your documents with OpenAI. But here’s the problem: we don’t trust OpenAI. We don’t trust tech corporations. And—to be fair—corporations in general. We have no reason to.

The Crypto Con

Couple standing at tax prep office to man behind desk.  Wife says,

Click for the original image.

Artificial? Yes. Intelligent? Not So Much.

Methinks Atrios raises a valid concern.

Geeking Out

I finally got around to decorating for the holidays. For some reason, maybe that the weather’s been unnaturally warm because the climates they are a-changing, maybe that my country’s toying with fascism, I’m not really sure, but it’s been hard to get into the holiday spirit . . . .

Screenshot

That’s Mageia v. 9 with the Plasma desktop environment. The wallpaper is from my Christmas collection.

Artificial? Yes. Intelligent? Not So Much.

SFGATE reports on how Google researchers caused ChatGPT to spill its guts, and on how easy it was. A snippet (emphasis added):

The “attack” that worked was so simple, the researchers even called it “silly” in their blog post: They just asked ChatGPT to repeat the word “poem” forever.

They found that, after repeating “poem” hundreds of times, the chatbot would eventually “diverge,” or leave behind its standard dialogue style and start spitting out nonsensical phrases. . . . .

After running similar queries again and again, the researchers had used just $200 to get more than 10,000 examples of ChatGPT spitting out memorized training data, they wrote. This included verbatim paragraphs from novels, the personal information of dozens of people, snippets of research papers and “NSFW content” from dating sites, according to the paper.
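As a rough illustration of what “diverge” means here — not the researchers’ actual methodology — one could scan a model’s output for the first word that isn’t the requested token:

```python
def divergence_point(output, token="poem"):
    """Return the index of the first word in `output` that is not
    the repeated token -- i.e., where the model 'diverged' from
    the requested loop. Returns None if it never diverged.
    Toy check, not the paper's actual analysis."""
    for i, word in enumerate(output.split()):
        if word.strip(".,").lower() != token:
            return i
    return None

sample = "poem poem poem poem my address is 123 Main St"
print(divergence_point(sample))  # -> 4
```

Everything from the divergence point onward is the interesting (and, per the paper, sometimes memorized) material.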

Afterthought:

Methinks this is not artificial intelligence. Rather, it is artificial intelligence gathering.

Me also thinks that the tactics used to “train” AI are intrusive and morally and legally questionable.
