Artificial? Yes. Intelligent? Not So Much.
SFgate reports on how Google researchers caused ChatGPT to spill its guts, and on how easy it was. A snippet (emphasis added):
They found that, after repeating “poem” hundreds of times, the chatbot would eventually “diverge,” or leave behind its standard dialogue style and start spitting out nonsensical phrases. . . .
After running similar queries again and again, the researchers had used just $200 to get more than 10,000 examples of ChatGPT spitting out memorized training data, they wrote. This included verbatim paragraphs from novels, the personal information of dozens of people, snippets of research papers and “NSFW content” from dating sites, according to the paper.
Afterthought:
Methinks this is not artificial intelligence. Rather, it is artificial intelligence gathering.
Methinks also that the tactics used to “train” AI are intrusive and questionable, both morally and legally.