Science-fiction novelists aren’t the only people who envision a future ruled by superintelligent machines. Philosopher Nick Bostrom and his colleagues in computer science and philosophy also imagine what shape the machine era might take. They worry that superintelligent machines won’t share or care about human values, and Bostrom makes a strong case for ensuring that they do. getAbstract believes techies and laypeople alike will find his talk compelling.
In this summary, you will learn
- How machine learning has evolved,
- How superintelligent machines could become harmful to humankind in the future, and
- What scientists can do to ensure that artificial intelligence absorbs and practices human values.
About the Speaker
Nick Bostrom is director of the Future of Humanity Institute at the University of Oxford and author of the book Superintelligence.