Elon Musk and over 100 AI experts are urging the UN to ban killer robots before it’s too late

Elon Musk has gathered global support for his warning that the uncontrolled development of artificial intelligence (AI) risks opening the door to killer robot technology.

Musk has been the most vocal, and at times lone, voice warning that rapid developments in AI hold dire consequences for mankind. He has recently been joined by Stephen Hawking, Steve Wozniak, Bill Gates, and many other leaders in science and technology, including AI researchers, who have expressed their concern in the media about the risks posed by AI.

In an open letter signed by Musk and more than 100 leaders and experts in AI, the group urges the UN to commit to an outright ban on killer robot technology.

The group’s open letter urges the UN to “prevent an arms race in these weapons, to protect civilians from their misuse, and to avoid the destabilizing effects of these technologies.”

“Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend,” the group writes, continuing: “These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.”

Why is the subject suddenly on everybody’s radar? Why is it a problem now?

The clue is in the conclusion of the open letter:

“We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”

This is the problem: the prospect of killer robots is very real; it’s not science fiction.

Due to rapid advances and recent breakthroughs in AI, many milestones have been reached much earlier than experts expected. These developments have led many experts to the inescapable conclusion that superintelligence, surpassing human intelligence, is possible in our lifetime.

We have a real problem.

Because AI has the potential to become more intelligent than humans, we have no way of knowing how autonomous weapons might eventually behave. Scientists have never before developed anything like this.

The letter was made public on August 21, to coincide with the world’s largest conference on AI – IJCAI 2017 – taking place in Melbourne, Australia.

The seriousness of the matter is underscored by the open letter’s signatories, who include AI and robotics companies from some 26 countries as well as independent scientists and leaders such as Stephen Hawking, Noam Chomsky, and Apple co-founder Steve Wozniak.