HAL 9000’s refusal to “open the pod bay doors” in the movie “2001: A Space Odyssey” is a science fiction trope, a meme, a classic. An artificial intelligence, ordered to ensure the success of a space mission, ends up killing the crew when its goals conflict with the crew’s.
It’s a fictional scenario, sure. But as Armin Alimardani from Western Sydney University writes, HAL’s dilemma perfectly exemplifies a real concern AI safety researchers are working on right now: How do we make sure AI behaves according to human values? This is known as the alignment problem.
Alignment research on large language models involves experiments in which the models are placed in situations with limited options and conflicting goals – not unlike HAL’s. And the results show that some AIs will hide their true intentions and readily engage in blackmail, and even make threats to human life.
This doesn’t mean your generative AI assistant is plotting to murder you. However, Alimardani warns that “researchers don’t yet have a concrete solution to the misalignment problem.” The more widespread these tools become, the more we should demand that they be deployed safely.
|
Armin Alimardani, Western Sydney University
In stress-testing AI models, it’s not hard to push them to the brink and make them threaten to harm humans.
|
Rahul Telang, Carnegie Mellon University
Technology has supercharged fraud. The ruses are ancient, but the tools scammers use are cutting edge.
|
Alisa Minina Jeunemaître, EM Lyon Business School; Jamie Smith, Escola de Administração de Empresas de São Paulo da Fundação Getúlio Vargas (FGV/EAESP); Stefania Masè, IPAG Business School
The strong bonds that users are forming with their AI chatbots rest on the human imagination at work.
|
Debra Lam, Georgia Institute of Technology; Atin Adhikari, Georgia Southern University; James E. Thomas, Georgia Southern University
AI can help farmers be more effective and sustainable, but its use varies from state to state. A project in Georgia aims to bring the technology to the state’s cotton farmers.
|
Wellett Potter, University of New England
Different licensing models could help ensure the rights of creators are reconciled with AI companies’ hunger for data.