“Human-compatible AI” by Stuart Russell (Data Driven Paris)
You need AI to help you live the life you want, and developers will make sure you get it.
Some people view the upcoming age of artificial intelligence (AI) as a threat to humankind. How long will it take for machines to take over the world? And will they care about ordinary, mortal human beings? During a meeting of the tech community Data Driven, AI expert Stuart Russell argues that people don’t need to worry: Developers can make sure that AI will always be beneficial for humans. AI systems will need to learn about individual preferences by observing humans and then assist them in living the life they want. Read on to learn how this can be done.
- Artificial intelligence is advancing but has yet to learn how to make decisions based on “multiple levels of abstraction,” like humans.
- The potential benefit for economic, military and political applications serves as an incentive for people to develop human-level AI.
- Robots need to observe the behavior of individual humans to learn about their preferences and then support them.
- Robots need to deal with uncertainty and learn how to interpret irrational human behavior.
- The future of AI should be viewed as beneficial for humankind.
Artificial intelligence is advancing but has yet to learn how to make decisions based on “multiple levels of abstraction,” like humans.
Routine tasks currently performed by humans will eventually be automated. While today’s voice assistants employ AI only for a narrow range of tasks, engineers are building ever more versatile systems that may one day serve as “assistants, tutors, health monitors/coaches.”
“We will get to human-level AI because it has such economic, military, political value. It’s going to be very hard to stop us reaching that.”
Reaching the level of human intelligence in AI systems remains a challenge. Current systems may be able to beat human chess or Go players, but they still can’t understand language. And they don’t even begin to compete with the “multiple levels of abstraction” that humans use with ease in every decision they make.
The potential benefit for economic, military and political applications serves as an incentive for people to develop human-level AI.
Some people, including Stephen Hawking and Elon Musk, are concerned that humans may develop AI that causes a variety of problems. Will robots take over people’s jobs? Will machines take over Earth and destroy human civilization and life? People will most likely create human-level AI irrespective of any danger it may pose. It will be impossible to ignore the enormous benefits AI may bring to human life.
“The robot has to be able to learn information about our preferences. And the source of information about human preferences is actually human behavior. Every time we make a choice, we reveal information about our underlying preferences.”
The challenge will be to create AI systems that are “beneficial to the extent that their actions can be expected to achieve our goals” as opposed to creating machines that use their intelligence to achieve their own objectives.
Robots need to observe the behavior of individual humans to learn about their preferences and then support them.
AI systems don’t need a fixed set of norms for serving humanity in general. Instead, they should determine how individuals want to live their lives and assist them in doing so. People reveal their preferences through the way they behave, so AI systems must be able to observe human behavior and learn from it.
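The idea of learning preferences from observed choices can be illustrated with a toy Bayesian sketch. This is not Russell’s actual method, and all names, utilities and the “Boltzmann-rational” choice model below are illustrative assumptions: the agent maintains a belief over candidate preference profiles and updates it each time it watches the human choose one option over another.

```python
import math

# Illustrative sketch: infer a user's hidden preferences from observed
# choices, assuming the user picks option A over option B with a
# probability given by a softmax on utility ("Boltzmann-rational").
# The hypotheses and utility numbers below are made up for the example.

hypotheses = {
    "prefers_walking": {"walk": 2.0, "drive": 0.5},
    "prefers_driving": {"walk": 0.5, "drive": 2.0},
}

def choice_likelihood(utilities, chosen, alternative, beta=1.0):
    """P(user picks `chosen` over `alternative`) under a softmax choice model."""
    u_c, u_a = utilities[chosen], utilities[alternative]
    return math.exp(beta * u_c) / (math.exp(beta * u_c) + math.exp(beta * u_a))

def update(belief, chosen, alternative):
    """One Bayesian update after observing a single choice."""
    post = {h: p * choice_likelihood(hypotheses[h], chosen, alternative)
            for h, p in belief.items()}
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}

# Start with a uniform prior, then observe the user choose "walk" three times.
belief = {h: 1 / len(hypotheses) for h in hypotheses}
for _ in range(3):
    belief = update(belief, "walk", "drive")

print(max(belief, key=belief.get))  # belief shifts toward "prefers_walking"
```

Each observed choice only nudges the belief probabilistically, which matches the point that behavior is evidence about underlying preferences rather than a direct statement of them.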
Robots need to deal with uncertainty and learn how to interpret irrational human behavior.
The recipe for beneficial AI seems easy: Let an AI system watch your behavior, learn your underlying preferences and help you live the life you want. The problem is that humans aren’t always rational. They change their preferences erratically and sometimes find happiness at the cost of other people’s happiness.
“The math is all fine, the humans are the problem. Because humans are not rational…We may not even have consistent, stable preferences.”
AI systems must be able to deal with uncertainty about their objectives. If the risks associated with pursuing a certain objective can’t be reliably determined, they should be free to refrain from pursuing that objective.
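The idea of refraining under objective uncertainty can be sketched as a small decision rule. This is a hypothetical illustration, not the talk’s formal model: the agent holds a probability over several candidate objectives, picks the action with the highest expected utility, but defers to the human whenever that action could be badly wrong under some plausible objective.

```python
# Illustrative sketch (assumed names and numbers): an agent that is
# uncertain which objective the human actually holds. It acts only when
# its best action is acceptable under every plausible objective;
# otherwise it refrains and defers to the human.

def best_action(actions, objectives, belief, risk_tolerance=0.0):
    """Pick the action with highest expected utility over objectives,
    unless its worst-case utility falls below `risk_tolerance`,
    in which case defer to the human."""
    def expected(a):
        return sum(p * obj(a) for obj, p in zip(objectives, belief))
    def worst_case(a):
        return min(obj(a) for obj in objectives)

    a = max(actions, key=expected)
    if worst_case(a) < risk_tolerance:
        return "defer_to_human"  # refrain: the objective is too uncertain
    return a

# Hypothetical scenario: the robot may fetch coffee or wait quietly.
objectives = [
    lambda a: {"fetch_coffee": 1.0, "wait": 0.0}[a],   # human wants coffee now
    lambda a: {"fetch_coffee": -5.0, "wait": 0.0}[a],  # human is asleep; noise is bad
]

print(best_action(["fetch_coffee", "wait"], objectives, belief=[0.7, 0.3]))
```

Even when the agent is fairly confident about the objective, a potentially costly action triggers deferral rather than unilateral pursuit, which is the behavior the “uncertainty about objectives” argument calls for.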
The future of AI should be viewed as beneficial for humankind.
There is little doubt that AI will one day reach or even surpass human intelligence. But people don’t need to worry too much. Engineers can create AI that is “provably beneficial” for humans.
About the Speaker
Stuart Jonathan Russell is Professor of Computer Science and holds the Smith-Zadeh Chair in Engineering at the University of California, Berkeley. He is also the founder and leader of the Center for Human-Compatible Artificial Intelligence (CHAI) at Berkeley.