With AI developing at speed, warnings of a Hollywood-style “apocalypse” scenario seem increasingly plausible. But what are the actual risks for humanity?

The warnings are coming from all angles: artificial intelligence poses an existential risk to humanity and must be shackled before it is too late.

But what are these disaster scenarios, and how are machines supposed to wipe out humanity?

Students visit an AI education base in Handan, in China’s northern Hebei province, on May 25, 2023. (Photo by AFP)

Paperclips of doom

Most disaster scenarios start in the same place: machines will outstrip human capacities, escape human control, and refuse to be switched off.

“Once we have machines that have a self-preservation goal, we are in trouble,” AI academic Yoshua Bengio told an event this month.

But because these machines do not yet exist, imagining how they could doom humanity is often left to philosophy and science fiction.

Philosopher Nick Bostrom has written about an “intelligence explosion” that he says will happen when superintelligent machines begin designing machines of their own. His best-known illustration is a superintelligent AI tasked with making as many paperclips as possible, which ends up converting the world, humans included, into paperclips.

Bostrom’s ideas have been dismissed by many as science fiction, not least because he has separately argued that humanity could be a computer simulation and has supported theories close to eugenics.

Yet, his thoughts on AI have been hugely influential, inspiring both Elon Musk and Professor Stephen Hawking.

The Terminator

If superintelligent machines are to destroy humanity, they surely need a physical form.

Arnold Schwarzenegger’s red-eyed cyborg, sent from the future to end human resistance by an AI in the movie “The Terminator”, has proved a seductive image, particularly for the media.

But experts have rubbished the idea.

Campaigners against autonomous weapons have nonetheless warned that giving machines the power to decide on life and death is an existential risk.

A child walks through an AI exhibition at the Tekniska museum in Stockholm on June 8, 2023. (Photo by Jonathan NACKSTRAND / AFP)

Robot expert Kerstin Dautenhahn, of the University of Waterloo in Canada, played down those fears.

She told AFP that AI was unlikely to give machines higher reasoning capabilities or imbue them with a desire to kill all humans.

“Robots are not evil,” she said, though she conceded that programmers could make them do evil things.

Deadlier chemicals

A less overtly sci-fi scenario sees “bad actors” using AI to create toxins or new viruses and unleashing them on the world.

A group of scientists who were using AI to help discover new drugs ran an experiment in which they tweaked their software to search for harmful molecules instead.

They generated 40,000 potentially poisonous agents in less than six hours, the journal Nature Machine Intelligence reported.
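
In essence, a tweak like that inverts how the model scores toxicity, rewarding it instead of penalising it. As a loose illustration only, and not the study’s actual method, the toy Python sketch below shows a random search whose scoring function flips from penalising to rewarding a made-up toxicity estimate; every function, value, and “molecule” here is hypothetical.

# Toy sketch of "objective inversion"; not the study's code.
# predicted_potency and predicted_toxicity are hypothetical stand-ins
# for learned models; "molecules" are just random strings here.
import random

random.seed(0)

def predicted_potency(mol: str) -> float:
    # Hypothetical stand-in for a learned potency model.
    return sum(ord(c) for c in mol) % 100 / 100

def predicted_toxicity(mol: str) -> float:
    # Hypothetical stand-in for a learned toxicity model.
    return sum((i + 1) * ord(c) for i, c in enumerate(mol)) % 97 / 97

def score(mol: str, toxicity_weight: float) -> float:
    # Drug discovery: toxicity_weight < 0, so toxicity is penalised.
    # The inverted experiment: toxicity_weight > 0, so toxicity is rewarded.
    return predicted_potency(mol) + toxicity_weight * predicted_toxicity(mol)

def search(toxicity_weight: float, steps: int = 10000) -> str:
    # Simple random search: keep the best-scoring candidate seen so far.
    best, best_score = "", float("-inf")
    for _ in range(steps):
        candidate = "".join(random.choices("CNOHPS()=#", k=12))
        s = score(candidate, toxicity_weight)
        if s > best_score:
            best, best_score = candidate, s
    return best

print("drug-like search:", search(-1.0))
print("inverted search: ", search(+1.0))

The point of the sketch is only that the same search machinery serves both goals: a single sign change redirects it.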

Species overtaken

The rules of Hollywood dictate that epochal disasters must be sudden, immense, and dramatic, but what if humanity’s end were slow, quiet, and not definitive?

“At the bleakest end, our species might come to an end with no successor,” philosopher Huw Price says in a promotional video for Cambridge University’s Centre for the Study of Existential Risk.

But he said there were “less bleak possibilities” where humans augmented by advanced technology could survive.

The imagined apocalypse is often framed in evolutionary terms.

Geoffrey Hinton, who spent his career building machines that resemble the human brain, latterly for Google, talks in similar terms of “superintelligences” simply overtaking humans.

Miroslava Salazar with AFP