Did you see that photo of Pope Francis rocking a puffy white designer coat? Maybe you also saw headlines informing you that the pontiff’s styling was entirely the creation of an artificial intelligence image generator. The problem is that the image went viral, many people took it at face value, and not everyone who saw it subsequently read those headlines.

Fake images of celebrities might not seem very consequential, but the potential of generative AI for deeply harmful fraud and misinformation is readily apparent. These AI models are so powerful in part because they are trained on vast amounts of text and images on the internet, which also raises issues of intellectual property protections and data privacy.

People will soon be interacting with these AI systems in myriad ways, from using them to search the web and write emails to developing relationships with them. Not surprisingly, there’s a growing movement for government to regulate the technology. It’s not at all clear, however, how to do so.

Three experts on technology policy, Penn State’s S. Shyam Sundar, Texas A&M’s Cason Schmit and UCLA’s John Villasenor, provide different perspectives on the challenges to building guardrails along the road to our brave new AI future.

Eric Smalley

Science + Technology Editor

The new generation of AI tools makes it a lot easier to produce convincing misinformation. Photo by Olivier Douliery/AFP via Getty Images

Regulating AI: 3 experts explain why it’s difficult to do and important to get right

S. Shyam Sundar, Penn State; Cason Schmit, Texas A&M University; John Villasenor, University of California, Los Angeles

Powerful new AI systems could amplify fraud and misinformation, leading to widespread calls for government regulation. But doing so is easier said than done and could have unintended consequences.
