If you’re anywhere within shouting distance of the hype machine at the heart of Silicon Valley, you’ve likely heard the term AI agent or its jargon variant agentic AI. According to the buzz, agents are the next wave of AI, the “it” thing of 2025.
AI agents take actions on their own, such as querying a search engine, filling out a web form or even finding and booking a flight. There is something appealing about the prospect of AI minions taking care of tedious tasks for you. All too often it feels like computers have you jumping through hoops as you work your way through step-by-step processes to get things done. It would be nice to have the promise of automation fulfilled.
But convenience is as much a siren song as a benefit of technology in the internet age. It’s a pattern that has repeated, most prominently with social media and now generative AI. Technology lures you with genuinely exciting capabilities but also puts you at risk. Big Tech’s offerings have been shown to violate your privacy, manipulate your behavior, leave you vulnerable to criminals and steer you wrong with bad information.
Will AI agents follow suit? Do they pose the risk of new harms and vulnerabilities? I sure won’t be too quick to trust an AI agent with my credit card or access to my bank records. And the way tech companies strike the balance between winning in the marketplace and keeping the public safe might not be the way you or I would strike it.
What can you do? One thing is to keep informed. Learn what AI agents do, who controls them and how they’re affecting people’s lives. To that end, the editors at The Conversation will be bringing you down-to-earth explanations and insights from the latest research as the year of the AI agent unfolds.