Autonomous AI robots waging war is no longer science fiction. A former Google engineer warns that AI-powered weapons could accidentally start wars or cause mass atrocities.
The warning comes from Laura Nolan, a software engineer who left Google last year in protest against the tech giant’s involvement in Project Maven.
Project Maven was a Pentagon initiative that sought to use Google’s AI resources to drastically improve video recognition technology in military drones.
However, Google withdrew from the project after facing protests from workers, some of whom quit the company. One of those who left was Laura Nolan, who is now part of the global Campaign to Stop Killer Robots.
The primary argument against autonomous drones
Nolan believes that autonomous drones should be completely outlawed by all the governments of the world. According to her, these killer robots have the capability to “start a flash war, destroy a nuclear power station, and cause mass atrocities.”
While briefing UN diplomats in New York, she said, “There could be large-scale accidents because these things will start to behave in unexpected ways.”
Nolan’s main argument is that autonomous drones are “far too unpredictable and dangerous” to be left uncontrolled. She also pointed to potential technical failures, such as those caused by bad weather or a faulty radar system, that could produce disastrous results.
Another argument she makes is that, unlike a human in a tense situation, these killer robots lack the common sense to judge right from wrong.
“How does the killing machine out there on its own flying about distinguish between the 18-year-old combatant and the 18-year-old who is hunting for rabbits?” said Nolan.
Nolan studied computer science at Trinity College Dublin before Google recruited her to work on Project Maven. As of now, there is no information on whether Google is still involved in building autonomous drones.
What are your views on AI killer robots? Should they be banned?