Summary of Can We Build AI Without Losing Control Over It?


Movies like The Terminator and The Matrix have desensitized people to a real threat: The age of superintelligent artificial intelligence is approaching, and its emergence may not bode well for society. Philosopher and neuroscientist Sam Harris wryly delivers a much-needed reality check, underlining just how unprepared humankind is to meet such a challenge. getAbstract recommends Harris’s doomsday forecast to computer programmers charged with developing responsible AI and to anyone who wants a glimpse of a dystopian future.

About the Speaker

Philosopher and neuroscientist Sam Harris is the author of The End of Faith and The Moral Landscape.



The rise of superintelligent artificial intelligence (AI) is imminent, but most people ignore the gravity of the crisis. In science fiction, AI becomes a threat when robots rebel. In reality, problems will materialize when AI’s goals diverge from humanity’s. At that point, AI could treat humans the way humans treat ants: machines won’t hate people, but they could vanquish anyone who stands in their way. If this scenario seems outlandish, consider three factors: First, “intelligence is a matter of information processing in physical systems.” Narrow intelligence is present in today’s machines, so the foundations for AI already...


