Summary of Can We Build AI Without Losing Control Over It?


Rating

  • Overall: 8
  • Importance: 8
  • Innovation: 8
  • Style: 8

Recommendation

Movies like The Terminator and The Matrix have desensitized people to a real threat: The age of superintelligent artificial intelligence is approaching, and its emergence may not bode well for society. Philosopher and neuroscientist Sam Harris wryly delivers a much-needed reality check and underlines just how unprepared humankind is to meet this challenge. getAbstract recommends Harris’s doomsday forecast to computer programmers charged with developing AI responsibly and to anyone who wants a glimpse of a dystopian future.

In this summary, you will learn

  • Why, barring a global catastrophe, the emergence of superintelligent artificial intelligence (AI) is inevitable;
  • How sophisticated AI threatens humanity; and
  • Why society must start taking this issue more seriously.
 

About the Speaker

Philosopher and neuroscientist Sam Harris is the author of The End of Faith and The Moral Landscape.

 

Summary

The rise of superintelligent artificial intelligence (AI) is imminent, but most people ignore the gravity of the crisis. In science fiction, AI becomes a threat when robots rebel. In reality, problems will materialize when AI’s goals diverge from humanity’s. At that point, AI could treat humans the ...

