*Superintelligence* is a comprehensive overview of the paths to developing artificial intelligence, the dangers inherent in that development, and the strategies that could mitigate the risk of a catastrophic singularity in which artificial superintelligence wipes out humanity.
Offering a wide-ranging analysis of past, present and future, Nick Bostrom charts the emerging arms race in artificial intelligence and its implications for human society.
Unpicking the dangers of racing ahead with artificial intelligence to secure a decisive strategic advantage, Bostrom spells out the problems of control, capability and value acquisition that make it hard to develop AI ethically. Artificial intelligence seems certain to shape this century, with potentially apocalyptic consequences.
Below are some of the key insights from the book.
- We are set to see an explosion in artificial intelligence akin to the step change in world GDP growth that followed the Industrial Revolution.
- A recurring lesson from the history of AI is that problems hard for humans (e.g. chess) have proved easy for machines, while problems easy for humans (e.g. visual perception) have proved hard.
- Expert surveys cited in the book put the arrival of human-level machine intelligence (HLMI) at a 50% chance by 2050 and a 90% chance by 2100.
- From HLMI, superintelligence in some form may follow within 30 years. Possible paths include artificial intelligence, whole brain emulation, brain-computer interfaces and biological cognitive enhancement.
- Superintelligence could come in three varieties: speed, collective or quality superintelligence, each orders of magnitude more capable than humans or HLMI.
- Given the potential benefits, an arms race to develop superintelligence is likely, and a singleton may emerge: a single project that outcompetes all rivals.
- Specifying the goals of a superintelligence could produce perverse outcomes (e.g. paving the whole world with paperclip factories to maximise paperclip output).
- A superintelligence would also be aware of attempts at human control and could withhold a treacherous turn until it was certain of a successful escape.
- Potential control methods include boxing, incentives, stunting and tripwires that switch the programme off if it attempts to reprogramme itself.
- Superintelligent systems could be deployed as Oracles (question answering), Genies (command execution) or the harder-to-control Sovereigns (open-ended operation).
- Loading human ethical values into an artificial intelligence is extremely difficult; even carefully crafted rule sets such as Asimov's Three Laws of Robotics break down in practice.
- Even if we could load values correctly, it is then difficult to establish coherent extrapolated volition, where an AI learns to match human decision making to a satisfactory normative standard.
- In the wider context, the race to superintelligence looks ‘winner takes all’, but it would be safer to establish collaboration between competing teams wherever possible.
- Ultimately, these are not abstract philosophical problems but very real, potentially existential issues facing humanity in the coming decades. Our decisions now will dramatically shape the world in which we live.