Copywrongs
I have noted before in these electrons that, since my earliest days on Usenet and BBSs (that’s “bulletin board systems”; look it up), I have been amazed at how willingly persons believe stuff they read on a computer screen when they would not believe the same stuff if it happened before their eyes. Now, with the advent of AI chatbots, we’ve progressed to a point at which persons willingly believe stuff they hear from their computers when they wouldn’t believe the same stuff if it happened before their eyes.
Bloomberg’s Catherine Thorbecke thinks that, as AI spreads, it’s time for the companies manufabricating it to come clean about what they are using for their “training” data. She poses a series of questions about what these systems were trained on, and answers:
The answer appears to be “yes” to all of the above. But we can’t know for sure because the companies building these systems refuse to say.
The secrecy is increasingly indefensible as AI systems creep into high-stakes environments like schools, hospitals, hiring tools and government services. The more decision-making and agency we hand over to machines, the more urgent it becomes to understand what’s going into them.
I commend the entire article to your attention.