Deceptive by Design
At Psychology Today Blogs, Penn State professor Patrick L. Plaisance looks at the hazards of designing chatbots and similar “AI” mechanisms (after all, that’s what they are: mechanisms) to interact with users (i.e., people) as if said mechanisms were people. For example, he mentions programming them to appear to be typing or speaking a response at a human-like speed when, in actuality, the complete response was formed in nanoseconds.
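For concreteness, here is a minimal sketch of that artificial-delay trick. Plaisance’s post contains no code, so the names and the pacing value below are illustrative assumptions; the point is simply that the full response exists before the first character is shown.

```python
import sys
import time

def stream_with_fake_typing(text: str, chars_per_second: float = 12.0) -> None:
    """Display an already-complete response one character at a time,
    paced to mimic a human typing speed (hypothetical rate)."""
    delay = 1.0 / chars_per_second
    for ch in text:
        sys.stdout.write(ch)
        sys.stdout.flush()
        time.sleep(delay)  # artificial pause; the whole answer already exists
    sys.stdout.write("\n")

# The response is fully formed here, in well under a second --
# only its *display* is slowed to look like live human typing.
response = "Good question! Let me think about that for a moment..."
stream_with_fake_typing(response)
```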
He makes three main points; follow the link for a detailed discussion of each.
- Anthropomorphic design can be useful, but it becomes unethical when it leads us to think the tool is something it’s not.
- Chatbot design can exploit our “heuristic processing,” inviting us to wrongly assign moral responsibility.
- Dishonest human-like features compound the problems of chatbot misinformation and discrimination.