The Conversation

Life is full of boringly complicated tasks – from tax returns to travel bookings. But AI agents might soon be able to take care of these chores on your behalf. These tools won’t need carefully crafted prompts to operate – simply tell an AI agent to buy you a home insurance policy, and it will do the rest, negotiating with other agents as it completes the task.

Meta chief Mark Zuckerberg claims AI agents will soon outnumber the global human population, and companies such as Google and Salesforce are racing to roll them out. OpenAI, maker of ChatGPT, on Thursday introduced its own "computer-using agent", which can autonomously perform tasks such as navigating websites.

But Uri Gal from the University of Sydney warns this technological frontier comes with profound risks. Interactions between AI agents can be complex, riddled with biases and competing interests, and hard to monitor. They could make mistakes, with nobody held accountable. They will also need access to sensitive information. Are we really ready to surrender human agency on such an unprecedented scale?

Signe Dean

Science + Technology Editor,
The Conversation Australia

Lead story

‘AI agents’ promise to arrange your finances, do your taxes, book your holidays – and put us all at risk

Uri Gal, University of Sydney

AI systems that can autonomously make decisions on our behalf will be a huge time saver – but we must deploy them with care.

Business

Silicon Valley’s bet on AI defence startups and what it means for the future of war – podcast

Gemma Ware, The Conversation

Political theorist Elke Schwarz talks to The Conversation Weekly podcast about her new research into venture capital investment in defence startups.

Politics

Could AI replace politicians? A philosopher maps out three possible futures

Ted Lechterman, IE University

From impartial debate mediators to a full-blown ‘algocracy’, we have to think carefully about how AI will impact politics.

Technology

Opening the black box: how ‘explainable AI’ can help us understand how algorithms work

David Martens, University of Antwerp; Sofie Goethals, University of Antwerp

AI systems can appear to be black boxes – often, even experts don’t know how these systems reach their conclusions. The nascent field of “explainable AI” aims to address this problem.

Ethics

Medieval theology has an old take on a new problem − AI responsibility

David Danks, University of California, San Diego; Mike Kirby, University of Utah

Autonomous AI is still designed by people − so who or what is really responsible for its actions? For centuries, theologians have posed similar questions about mankind and God.

Quote of the week 💬

More from The Conversation