The Conversation

If you’re anywhere within shouting distance of the hype machine at the heart of Silicon Valley, you’ve likely heard the term AI agent or its jargon variant agentic AI. According to the buzz, agents are the next wave of AI, the “it” thing of 2025.

AI agents take actions on their own, such as querying a search engine, filling out a web form or even finding and booking a flight. There is something appealing about the prospect of AI minions taking care of tedious tasks for you. All too often it feels like computers have you jumping through hoops as you work your way through step-by-step processes to get things done. It would be nice to have the promise of automation fulfilled.

But convenience is as much a siren song as a benefit of technology in the internet age. It’s a pattern that has repeated, most prominently with social media and now generative AI. Technology lures you with genuinely exciting capabilities but also puts you at risk. Big Tech’s offerings have been shown to violate your privacy, manipulate your behavior, leave you vulnerable to criminals and steer you wrong with bad information.

Will AI agents follow suit? Do they pose the risk of new harms and vulnerabilities? I won’t be quick to trust an AI agent with my credit card or access to my bank records. And the balance tech companies strike between winning in the marketplace and keeping the public safe might not be the one you or I would choose.

What can you do? One thing is to keep informed. Learn what AI agents do, who controls them and how they’re affecting people’s lives. To that end, the editors at The Conversation will be bringing you down-to-earth explanations and insights from the latest research as the year of the AI agent unfolds.

Eric Smalley

Science + Technology Editor

Lead story

What is an AI agent? A computer scientist explains the next wave of artificial intelligence tools

Brian O'Neill, Quinnipiac University

The latest buzz phrase coming from technology companies is ‘AI agents.’ A computer scientist explains what that means – and how ChatGPT and your Roomba fit into the picture.

Technology

AI will continue to grow in 2025. But it will face major challenges along the way

Daswin de Silva, La Trobe University

Tighter regulation and a lack of high-quality, authentic training data are just some of the problems AI developers will need to grapple with next year.

Work

AI won’t take your job – but that doesn’t mean you should ignore it

Marcel Lukas, University of St Andrews

As workplaces are transformed by AI, workers who can use the technology well will remain attractive to employers.

Science

Blood tests are currently one-size-fits-all – machine learning can pinpoint what’s truly ‘normal’ for each patient

Brody H. Foy, University of Washington

A narrower, more personalized ‘normal range’ could help doctors better diagnose and treat disease in individual patients.

Education

Warm and friendly or competent and straightforward? What students want from AI chatbots in the classroom

Shahper Richter, University of Auckland, Waipapa Taumata Rau; Inna Piven, University of Auckland, Waipapa Taumata Rau; Patrick Dodd, University of Auckland, Waipapa Taumata Rau

AI chatbots are increasingly being used in the classroom. But different styles of communication appeal to different students, and this can guide how the technology is rolled out.
