Oxford futurist Nick Bostrom argues that artificial intelligence (AI) offers the promise of a safer, richer and smarter world, but that humanity may not be able to bring that promise to fulfillment. The more Bostrom deconstructs public assumptions about AI, the more you may come to think that humankind lacks the resources and imagination needed to manage the shift from a world led by people to one that a superintelligent AI agent could threaten or dominate. Bostrom handily explores the possibilities of, and the concerns related to, such a “singleton.” For instance, he asks what would happen if such an agent developed into a one-world government with uncertain moral principles. His book is informed and dense, with many variables and scenarios to ponder. The specter of what-if carries his narrative, an in-depth treatise designed for the deeply intrigued, not the lightly interested. getAbstract recommends Bostrom’s rich, morally complex speculation to policy makers, futurists, students, investors and high-tech thinkers.
About the Author
Oxford University professor Nick Bostrom is a founding director of the Future of Humanity Institute.