Data poisoning occurs when someone deliberately corrupts the data used to train an AI system, with the goal of making the system produce incorrect results. It’s a bit like a blend of two other threats: disinformation, which mixes bad information with good, and hacking, which typically involves inserting malicious code into software.
In the age of AI, when machine learning technologies – systems that find patterns in data – are everywhere from chatbots to home security systems, data poisoning is a grave concern. It gives people with bad intent a means to corrupt information or, worse, put others in danger. Think self-driving cars.
Defending against data poisoning is a major thrust of AI research. Florida International University computer scientists M. Hadi Amini and Ervin Moore explain how data poisoning works, how researchers are fighting it, and how they’ve married two techniques – federated learning and blockchain – to advance the field.
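To make the idea concrete, here is a minimal toy sketch (invented for illustration, not drawn from the researchers’ work): a trivial nearest-centroid classifier trained twice, once on clean data and once on data where an attacker has injected a single mislabeled outlier. All the data points and function names are made up for this example.

```python
# Toy sketch of data poisoning: a nearest-centroid classifier,
# trained on clean data and then on data where an attacker has
# injected one mislabeled outlier. All values here are invented.

def train_centroids(data):
    """data: list of (feature, label) pairs; returns each class's mean feature."""
    sums, counts = {}, {}
    for x, y in data:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(centroids, x):
    # Assign x to the class whose centroid is closest
    return min(centroids, key=lambda y: abs(x - centroids[y]))

def accuracy(centroids, data):
    return sum(predict(centroids, x) == y for x, y in data) / len(data)

# Class 0 clusters near 1.0, class 1 clusters near 5.0
train = [(1.0, 0), (1.2, 0), (0.8, 0), (5.0, 1), (5.2, 1), (4.8, 1)]
test = [(0.9, 0), (1.1, 0), (4.9, 1), (5.1, 1)]

# The attacker slips in one far-away point mislabeled as class 0,
# dragging that class's learned centroid away from its true cluster
poisoned = train + [(100.0, 0)]

clean_model = train_centroids(train)
bad_model = train_centroids(poisoned)
print(accuracy(clean_model, test))  # 1.0 on this toy data
print(accuracy(bad_model, test))    # 0.5: the class-0 test points are now misclassified
```

One corrupted training point is enough to shift what the model learns, which is why real defenses focus on vetting and limiting the influence of any single data source.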
|
M. Hadi Amini, Florida International University; Ervin Moore, Florida International University
Data poisoning corrupts AI systems by teaching them with bad data. There’s no silver bullet to protect against it, but researchers are building defenses.
|
Anné H. Verhoef, North-West University
What would Jesus do? Chatbots generated by artificial intelligence bring a new kind of challenge to religion.
|
David Joyner, Georgia Institute of Technology
Vegans have multiple reasons for spurning animal products: ethical, environmental and health concerns. The same issues are keeping some people from using AI.
|
Daswin de Silva, La Trobe University
AI systems that can use tools and work in teams with increasing autonomy are becoming more common.
|
Melise Panetta, Wilfrid Laurier University
The future of work isn’t about humans being replaced by robots, but about learning how to use the technology to enhance skills and create new entry points into the professional world.