Oxford futurist Nick Bostrom argues that artificial intelligence (AI) offers the promise of a safer, richer and smarter world, but that humanity may not be able to bring that promise to fulfillment. The more Bostrom deconstructs public assumptions about AI, the more you’ll come to think that humankind lacks the resources and imagination to manage the shift from a world that people lead to a world that a superintelligent AI agent could threaten or dominate. Bostrom handily explores the possibilities of – and the concerns related to – such a “singleton.” For instance, he asks what would happen if such an agent developed into a one-world government with uncertain moral principles. His book is informed and dense, with many variables and unknowns to ponder. The specter of what-if carries his narrative, an in-depth treatise designed for the deeply intrigued, not the lightly interested. getAbstract recommends Bostrom’s rich, morally complex speculation to policy makers, futurists, students, investors and high-tech thinkers.
In this summary, you will learn
- How the phenomenon known as artificial intelligence (AI) evolves,
- What steps scientists contemplate for using and controlling AI, and
- How humankind remains unprepared to deal with this technology.
About the Author
Oxford University professor Nick Bostrom is a founding director of the Future of Humanity Institute.