Rating

8

Recommendation

Headlines about artificial intelligence make lofty claims about machines outperforming humans. However, the technology isn’t there yet, as computer science professor and author Melanie Mitchell explains in an engaging essay for Aeon. Humans possess inherent “core knowledge” that AI simply lacks, and instilling that understanding in machines is a central challenge in developing better systems. Those in the AI sector should keep Mitchell’s message on their radar.

Summary

AI starts out as a “blank slate” that doesn’t have the well-rounded “core knowledge” of a human being.

People understand that if a ball rolls into the road, a child might soon follow, and that a snowman isn’t going to dislodge itself from a snowbank to cross the street. AI doesn’t grasp those concepts. Self-driving cars, for instance, are often rear-ended when they brake suddenly for objects that human drivers wouldn’t worry about.

While AI may perform better than humans in certain capacities, such as language processing, the technology can make unexpected errors that a human ...

About the Author

Melanie Mitchell is a computer science professor at Portland State University and the Santa Fe Institute. Her books include Complexity: A Guided Tour and Artificial Intelligence: A Guide for Thinking Humans.

