In a recent story meeting, an editor here in the U.S. shared her excitement about using a chatbot for mental-health advice. She had hardly used chatbots before, but after her teenage daughter recommended them for dealing with interpersonal issues, she found them truly compelling. She was taken with the quality of the answers and the overall pleasantness of the exchange. Like many others, she found the chatbot very personable, despite its not being a person at all.
To the makers of these large language models, this designed-to-please quality is most definitely a good thing. Attractive and engaging products help command people's attention and perhaps compel them to pay for a monthly subscription.
But as readers of this newsletter may expect, this facility with language comes with a dark side. When people cannot tell the difference between a human speaker and software built to act like a human, all manner of perils await, write three researchers who recently published a study on what they call “anthropomorphic conversational agents.”
This blurring between human and digital interaction "opens the door to manipulation at scale, to spread disinformation, or create highly effective sales tactics," the authors write. Regulation would be an obvious response, but what form it might take is far less clear. They also call for a better understanding of the traps that "seductive" systems pose.