The Conversation

In a recent story meeting, an editor here in the U.S. shared her excitement about using a chatbot for mental health advice. She had hardly used chatbots before, but after her teenage daughter recommended them for dealing with interpersonal issues, she found them truly compelling. She was struck by the quality of the answers and the overall pleasantness of the exchange. Like many others, she found the chatbot very personable, despite its not being a person at all.

To the makers of these large language models, being designed to please users is most definitely a good thing. Attractive and engaging products help command people's attention and perhaps compel them to pay for a monthly subscription.

But as readers of this newsletter may expect, this facility with language comes with a dark side. When people cannot tell the difference between a human speaker and software built to act like a human, all manner of perils await, write three researchers who recently published a study on what they call “anthropomorphic conversational agents.”

This blurring between human and digital interaction "opens the door to manipulation at scale, to spread disinformation, or create highly effective sales tactics," the authors write. Regulation would be a natural response, but what shape it should take is far from obvious. The authors also call for a better understanding of the traps that "seductive" systems pose.

Martin LaMonica

Director of Editorial Projects and Newsletters
The Conversation U.S.

Lead story

Evidence shows AI systems are already too much like humans. Will that be a problem?

Sandra Peter, University of Sydney; Jevin West, University of Washington; Kai Riemer, University of Sydney

On the internet, nobody knows you’re a chatbot.

Work

Being honest about using AI at work makes people trust you less, research finds

Oliver Schilke, University of Arizona; Martin Reimann, University of Arizona

They say honesty is the best policy − but when it comes to using AI on the job, research suggests honesty can backfire.

Technology

AI-driven motion capture is transforming sports and exercise science

Habib Noorbhai, University of Johannesburg

The revolutionary approach tracks movement directly from video footage without the need for cumbersome suits and expensive laboratories.

Education

Philosopher Hannah Arendt provokes us to rethink what education is for in the era of AI

Paul Tarc, Western University

Responding to AI proactively means departing from old answers to the question of education’s purpose.

Regulation

Regulating AI seems like an impossible task, but ethically and economically, it’s a vital one

Jun Du, Aston University; Cher Li, Aston University; Xingyi Liu, Aston University

Rapid expansion raises concerns about who benefits and who bears the risks.

Quote of the week 💬

More from The Conversation