An AI weapons race may create a world where everyone stays inside out of fear of being ‘chased down by swarms of slaughterbots,’ warns founding Skype engineer

  • Jaan Tallinn helped build Skype and is the founder of the Future of Life Institute.
  • He recently warned of the risks of an AI arms race, describing theoretical anonymous “slaughterbots.”

“We might just be creating a world where it’s no longer safe to be outside because you might be chased down by swarms of slaughterbots.”

Those words of warning came from Jaan Tallinn, a founding engineer of Skype, in a recent video interview with Al Jazeera.

The Estonian computer programmer is a founder of the Cambridge Centre for the Study of Existential Risk and the Future of Life Institute, two organizations dedicated to the study and mitigation of existential risks, particularly risks arising from the development of advanced AI technologies.


Tallinn’s reference to killer robots draws from the 2017 short film, “Slaughterbots,” which was released by the Future of Life Institute as part of a campaign warning about the dangers of weaponized artificial intelligence. The film depicts a dystopian future in which the world has been overtaken by militarized killer drones powered by AI.

As AI technology develops, Tallinn is especially afraid of the implications that military use might have for the future of AI.

“Putting AI in the military makes it very hard for humanity to control AI’s trajectory, because at this point you are in a literal arms race,” Tallinn said in the interview. “When you’re in an arms race, you don’t have much maneuvering room when it comes to thinking about how to approach this new technology. You just have to go where the capabilities are and where the strategic advantage is.”

On top of that, AI warfare could make attacks very difficult to attribute, he said.

“The natural evolution for fully automated warfare,” Tallinn continued, “is swarms of miniaturized drones that anyone with money can produce and release without attribution.”

Contacted by Insider, the Future of Life Institute said it agreed with Tallinn’s remarks about the dangers of weaponized AI.

These fears have existed for years — the Future of Life Institute was founded almost a decade ago in 2014, quickly gaining the attention of figures like Elon Musk, who donated $10 million to the institute in 2015. But the issue has felt far more pressing recently, with the public release of ChatGPT and other AI models, and current fears about AI taking over people’s jobs. Now AI researchers, tech moguls, celebrities, and regular people alike are worried.

Even director Christopher Nolan is warning that AI could be reaching its “Oppenheimer moment,” Insider previously reported — in other words, researchers are questioning their responsibility for developing technology that might have unintended consequences.

Earlier this year hundreds of people including Elon Musk, Apple cofounder Steve Wozniak, Stability AI CEO Emad Mostaque, researchers at Alphabet’s AI lab DeepMind, and notable AI professors signed an open letter issued by the Future of Life Institute calling for a six-month pause on advanced AI development. (Meanwhile, Musk was quietly racing to hire and launch his own generative AI initiative to compete with OpenAI, Insider’s Kali Hays first reported, which he recently announced as xAI.)

“Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources,” the letter reads. “Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.”
