How to Stay Smart in a Smart World

Why Human Intelligence Still Beats Algorithms

MIT Press


What's inside?

Artificial intelligence is transforming society. Learn why humans need to stay in charge.


Editorial Rating

9

Qualities

  • Eye Opening
  • Concrete Examples
  • Engaging

Recommendation

As digital technology transforms the world, experts debate the future role of human intelligence. Rather than embrace artificial intelligence with open arms or fear its dominance, psychologist Gerd Gigerenzer recommends walking a middle road: Allow AI to do what it does well, avoid trusting it in areas where it performs poorly, and stay alert to the risks it poses. Offering a wealth of examples – including self-driving cars, dating apps and chess – he illustrates how AI works best in stable environments with well-defined rules. Since the world is far from stable, humans’ cognitive skills will always have a vital role to play.

Summary

Artificial intelligence excels in stable environments with rules circumscribed by human intelligence.

Artificial intelligence works best when given large amounts of data, well-defined rules and a stable environment. When those conditions are met, AI can calculate numbers, find associations and detect patterns faster and, in some cases, better than humans. That’s why AI does so well at games. In 1997, IBM’s Deep Blue computer beat reigning world chess champion Garry Kasparov. And in 2017, Google DeepMind’s AlphaGo beat Ke Jie, the world’s top-ranked Go player. In both cases, the AI learned the rules of the game, trained on games played by human experts and used brute-force calculation to determine the best possible next move.

Alongside games, another (relatively) stable environment is outer space – planets and stars don’t change overnight. Because astronomers understand planetary motion and possess considerable astronomical data, NASA scientists used AI to help the MESSENGER probe enter Mercury’s orbit in March 2011 at the exact spot predicted six years earlier. Down here on Earth, AI can help academics detect inconsistencies in large data sets. It can also help militaries intercept large amounts of foreign...

About the Author

Gerd Gigerenzer is a psychologist known for his work on bounded rationality. He directs the Harding Center for Risk Literacy at the University of Potsdam and is a partner at Simply Rational – The Decision Institute.
