Yesterday afternoon, OpenAI released GPT-4, the newest version of the AI model behind its chatbot, upgrading a technology that has captured the public’s imagination with its ability to compose essays, answer questions and converse with users.

Many have marveled at ChatGPT’s thoughtful, precise and uncannily human responses, with some even wondering whether the technology can be considered sentient.

But to Nir Eisikovits, a scholar of ethics and public policy at UMass Boston, it doesn’t really matter whether this technology has a “mind of its own.” What does matter is that users will likely form real attachments to it. We already name our cars and boats, feel a pang of sadness when trading in an old phone for a new one, and scream abuse at our GPS as if it could understand us.

The more lifelike AI technologies sound and look, the more likely it is we’ll form bonds with them. “The outlandish-sounding prospects of falling in love with robots, feeling a deep kinship with them or being politically manipulated by them are quickly materializing,” Eisikovits writes.

Also today:

Nick Lehr

Arts + Culture Editor

To what extent will our psychological vulnerabilities shape our interactions with emerging technologies? Andreus/iStock via Getty Images

AI isn’t close to becoming sentient – the real danger lies in how easily we’re prone to anthropomorphize it

Nir Eisikovits, UMass Boston

Our tendency to view machines as people and become attached to them points to real risks of psychological entanglement with AI technology.
