Lead story

Until very recently, if you were looking for information about some scientific topic – whether COVID-19, climate change or genetically modified foods – your first stop was likely a search engine. What did Dr. Google have to say?

But now, more and more people are posing their questions to ChatGPT and other generative artificial intelligence platforms. Rather than searching the internet for information related to your query, these AI chatbots create their own answer by predicting likely word combinations.

And that could be a major problem if you’re hoping for a factual answer to your question, write Gale Sinatra and Barbara K. Hofer. As scholars who focus on science denial, Sinatra and Hofer write that they’re concerned by the way “generative AI may blur the boundaries between truth and fiction for those seeking authoritative scientific information.” Since the burden to discern accuracy falls to the AI chatbot user, they suggest some tips to help you navigate the new information landscape.

[Sign up here for our topic-specific weekly emails.]

Maggie Villiger

Senior Science + Technology Editor

Approach all information with some initial skepticism. Guillermo Spelucin/Moment via Getty Images

ChatGPT and other generative AI could foster science denial and misunderstanding – here’s how you can be on alert

Gale Sinatra, University of Southern California; Barbara K. Hofer, Middlebury

Generative AI may make up the information it serves you, potentially spreading science misinformation. Here’s how to check the accuracy of what you read in an AI-enhanced media landscape.
