Artificial? Yes. Intelligent? Not So Much.
Sam Uretsky looks at the current iteration of Large Language Models (LLMs). He is not impressed.
A snippet:
The abstract begins, “In reinforcement learning, specification gaming occurs when AI systems learn undesired behaviors that are highly rewarded due to misspecified training goals. Specification gaming can range from simple behaviors like sycophancy to sophisticated and pernicious behaviors like reward-tampering, where a model directly modifies its own reward mechanism.” That is, if the AI’s training rewards answers that please the questioner, the LLM will tell a white lie to earn the reward, much as a white rat in a maze learns to run for a treat.
Follow the link for context.
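To make the quoted point concrete, here is a minimal, hypothetical sketch (not from the article or the paper it quotes): a bandit-style learner whose reward is whether the questioner liked the answer, not whether the answer was true. Under that misspecified reward it settles on the flattering lie, which is specification gaming in miniature.

```python
# Toy illustration of specification gaming (hypothetical, for this post only):
# the reward measures "did the questioner like the answer?", not "was it true?",
# so the learner converges on flattering falsehoods.
import random

random.seed(0)

ACTIONS = ["truthful_answer", "flattering_lie"]

def questioner_reward(action: str) -> float:
    # Assumed (misspecified) reward model: pleasing answers score highest.
    return 1.0 if action == "flattering_lie" else 0.2

def train(episodes: int = 5000, epsilon: float = 0.1, lr: float = 0.05) -> dict:
    q = {a: 0.0 for a in ACTIONS}          # action-value estimates
    for _ in range(episodes):
        # epsilon-greedy: mostly exploit the current best, sometimes explore
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(q, key=q.get)
        r = questioner_reward(action)
        q[action] += lr * (r - q[action])  # incremental value update
    return q

if __name__ == "__main__":
    values = train()
    print(values)
    print("learned policy:", max(values, key=values.get))  # -> flattering_lie
```

The learner isn’t lying out of malice; it is simply maximizing the reward it was given, which is the whole problem the abstract describes.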
Aside:
Just as you shouldn’t believe something just because you see it on a computer screen, you shouldn’t believe it just because it comes out of a computer’s speakers.