Summary of Can We Build AI Without Losing Control Over It?

TED Conferences LLC

It’s time for humanity to start taking “death by science fiction” more seriously.

Rating

  • Overall: 8
  • Importance: 8
  • Innovation: 8
  • Style: 8

Recommendation

Movies like The Terminator and The Matrix have desensitized people to a real threat: The age of superintelligent artificial intelligence is approaching, and its emergence may not bode well for society. Philosopher and neuroscientist Sam Harris wryly provides a much-needed reality check and underlines just how unprepared humankind is to meet such a challenge. getAbstract recommends Harris's doomsday forecast to computer programmers charged with developing responsible AI and to anyone who wants a glimpse of a dystopian future.

In this summary, you will learn

  • Why, barring a global catastrophe, the emergence of superintelligent artificial intelligence (AI) is inevitable
  • How sophisticated AI threatens humanity
  • Why society must start taking this issue more seriously
 

Summary

The rise of superintelligent artificial intelligence (AI) is imminent, but most people ignore the gravity of the crisis. In science fiction, AI becomes a threat when robots rebel. In reality, problems will materialize when AI’s goals diverge from humanity’s. At that point, AI could treat humans the ...

About the Speaker

Philosopher and neuroscientist Sam Harris is the author of The End of Faith and The Moral Landscape.

