Story by Sarah Tyrrell
From self-driving cars to digital personal assistants to remote-controlled drones, modern developments in artificial intelligence have changed nearly every aspect of our daily lives, and the research is still developing. With these advancements comes the concern that autonomous robots, as distinct from human-controlled drones, could soon be used in warfare.
In this semester’s third installment of the Controversial Issues in Security and Resilience Studies speaker series, Northeastern Professors Denise Garcia and Taskin Padir were invited to speak on their research and expertise regarding the future of artificial intelligence and the role of robots in warfare and in our everyday lives.
According to the Future of Life Institute, “Autonomous weapons are artificial intelligence systems that are programmed to kill.” Proponents of these systems argue that they would be more efficient and accurate on the battlefield and would put fewer soldiers in danger. But even if these weapons are more efficient, are they worth pursuing? Professor Garcia thinks not. She argues, along with other critics, that making killing easier would only lower the threshold for violence. In other words, force would be used more often if the risk to soldiers’ lives did not have to be taken into consideration.
One of the most frightening, yet probable, outcomes is that the use of autonomous robots will lead to another arms race. We’ve seen time and time again the danger of arms races and their ability to perpetuate existing tensions. The international community simply does not have the legal framework, or the infrastructure, to deal with such a powerful threat, Professor Garcia says. “It would have a disintegrating effect on international law and treaties.”
The technology needed to produce these destructive systems is already being developed and deployed by countries for various purposes. Russia, for example, has built mobile guard robots capable of detecting and attacking targets without human involvement. Israel has created the Iron Dome, a weapon system designed to intercept incoming missiles and rockets. Sentry drones used by South Korea on its border are equipped with heat and motion sensors capable of identifying a target two miles away.
Professor Garcia advocates for limits on autonomous systems so that they remain in accordance with international law, as well as a preventive ban regulating the development of these technologies. The UN member states have already met three times to discuss this issue and are scheduled to meet twice more. Professor Garcia herself testified last April.
The private sector and the scientific community are at the forefront of this effort. Companies like Amazon and Google are devising ethical standards, while robot developers such as ClearPath are pledging to produce such technology only for civil purposes. Some of the world’s leading intellectuals, including Elon Musk, Bill Gates, and Stephen Hawking, have signed an open letter pioneered by the Future of Life Institute that outlines and encourages research priorities for beneficial artificial intelligence. The letter explicitly states that given the great potential of artificial intelligence, “it is important to research how to reap its benefits while avoiding potential pitfalls.”
Taskin Padir, Associate Professor in the Department of Electrical and Computer Engineering, agrees that autonomous robots carry great consequences, yet he supports investments and efforts to develop artificial intelligence because he is a firm believer in its societal benefits. Undoubtedly, there are certain things that robots can do that humans cannot, and vice versa. Professor Padir suggests that using robots to assist in areas where human capabilities fall short will maximize efficiency and boost productivity.
He uses the example of the Fukushima nuclear disaster in Japan. A tsunami, caused by an earthquake, led to hydrogen building up in the plant’s reactors. The explosion caused by this buildup created more damage than either natural disaster. According to Professor Padir, a human could not have prevented the explosion, but a robot could have. A human-controlled humanoid robot could have gotten inside the plant in time to stop the reactor from exploding, without any risk to human lives. In fact, Professor Padir worked with teams that were developing a robot to do just that.
Another area that has the potential to benefit from humanoid robots is space missions. “Valkyrie,” a NASA robot currently residing in Northeastern’s new ISEC building, was designed to maintain the equipment left on Mars between missions, when humans cannot get there. Leaving the technology unattended for long periods would run the risk of damage or destruction, especially considering the magnitude of Mars’ sandstorms.
These robots, autonomous or not, could benefit society by stepping in where humans simply cannot go. They still can’t replace our astronauts or our doctors, but they can certainly provide assistance. Artificial intelligence is a rapidly growing industry and its potential uses are immense. However, with these great innovations comes the even greater responsibility to ensure such powerful technology does not turn lethal.