
Artificial intelligence and nanotechnology “risks that threaten human civilisation”


Technologies join nuclear war, ecological catastrophe, super-volcanoes and asteroid impacts in Global Challenges Foundation’s risk report.

Artificial intelligence and nanotechnology have been named alongside nuclear war, ecological catastrophe and super-volcano eruptions as “risks that threaten human civilisation” in a report by the Global Challenges Foundation.

In the case of AI, the report suggests that future machines and software with “human-level intelligence” could create new, dangerous challenges for humanity – although they could also help to combat many of the other risks cited in the report.

“Such extreme intelligences could not easily be controlled (either by the groups creating them, or by some international regulatory regime), and would probably act to boost their own intelligence and acquire maximal resources for almost all initial AI motivations,” suggest authors Dennis Pamlin and Stuart Armstrong.
“And if these motivations do not detail the survival and value of humanity, the intelligence will be driven to construct a world without humans. This makes extremely intelligent AIs a unique risk, in that extinction is more likely than lesser impacts.”

The report also warns of the risk that “economic collapse may follow from mass unemployment as humans are replaced by copyable human capital”, and expresses concern at the prospect of AI being used for warfare: “An AI arms race could result in AIs being constructed with pernicious goals or lack of safety precautions.”

In the case of nanotechnology, the report notes that “atomically precise manufacturing” could have a range of benefits for humans. It could help to tackle challenges including depletion of natural resources, pollution and climate change. But it foresees risks too.

“It could create new products – such as smart or extremely resilient materials – and would allow many different groups or even individuals to manufacture a wide range of things,” suggests the report. “This could lead to the easy construction of large arsenals of conventional or more novel weapons made possible by atomically precise manufacturing.”

The foundation was set up in 2011 with the aim of funding research into risks that could threaten humanity, and encouraging more collaboration between governments, scientists and companies to combat them.

That is why its report presents worst-case scenarios for its 12 chosen risks, albeit alongside suggestions for avoiding them and acknowledgements of the positive potential for the technologies involved.

In January, former Microsoft boss Bill Gates said that he is “in the camp that is concerned about super intelligence”, even though in the short term machines doing more jobs for humans should be a positive trend, if managed well.

In the case of artificial intelligence, though, the Global Challenges Foundation’s report is part of a wider debate about possible risks as AI gets more powerful in the future.

Gates added: “A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

Tesla and SpaceX boss Musk had spoken out in October 2014, suggesting that “we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that”.

Professor Stephen Hawking is another worrier, saying in December that “the primitive forms of artificial intelligence we already have, have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race.”

The full list of “risks that threaten human civilisation”, according to the Global Challenges Foundation:

Extreme climate change
Nuclear war
Global pandemic
Ecological catastrophe
Global system collapse
Major asteroid impact
Super-volcano eruption
Synthetic biology
Nanotechnology
Artificial intelligence
Unknown consequences
Future bad global governance