How AI Can Be a Force for Good

An ethical framework will help to harness the potential of AI while keeping humans in control

Science

5 min read
4 take-aways
Audio & text

What's inside?

To harness AI’s benefits and mitigate its risks, the technology must be aligned with ethical principles. 



Editorial Rating

9

Qualities

  • Innovative
  • Scientific
  • Overview

Recommendation

Would the world be a different place if people in the 19th century had been able to foresee the Industrial Revolution’s devastating impact on the environment? No one will ever know, but people can try to do better with the technological revolution taking the 21st century by storm: Artificial Intelligence (AI). In an article in Science magazine, Mariarosaria Taddeo and Luciano Floridi argue for developing ethical guidelines that pre-empt the potential negative consequences of AI. The article will engage anyone concerned with the social and ethical impact of the technology.

Take-Aways

  • The Artificial Intelligence (AI) revolution has created the need for ethical rules guiding the technology’s development and usage.
  • A first set of rules governing AI must address the delegation of tasks and the ascription of responsibility.
  • A second set of AI-related ethical guidelines must help safeguard human self-determination.
  • Establishing a set of universal ethical principles and finding viable ways to implement them will be the next important step in the AI revolution.

Summary

The Artificial Intelligence (AI) revolution has created the need for ethical rules guiding the technology’s development and usage.

Due to its autonomy and capacity to learn, AI is more than just another new technology. AI is increasingly inserting itself into people’s lives and has started to influence their behavior. The technology poses new ethical challenges that society must confront before AI has advanced to a point where individual rights and social welfare are compromised. Leaders from civil society, politics, the private sector and academia must develop guidelines that will allow AI to become an aid to human endeavors without undermining human dignity.

A first set of rules governing AI must address the delegation of tasks and the ascription of responsibility.

The strength of AI is that it can perform tasks and draw conclusions autonomously. For example, AI outperforms humans at detecting breast cancer and at catching and neutralizing cyberattacks. But delegating tasks to intelligent machines can also lead to unintended negative outcomes. The AI-driven COMPAS software, which US courts use to assess the risk of reoffending when making parole decisions, has been found to be racially biased. AI developers must therefore find ways to anticipate possible errors before they occur. By improving their understanding of how AI systems make decisions, developers will be better able to pre-empt mistakes and potential misuses.

“Ethics plays a key role in this process by ensuring that regulations of AI harness its potential while mitigating its risks.”

A related challenge is how to assign responsibility for AI decision errors. AI operates under the concept of “distributed agency,” meaning that its actions and decisions are shaped by a variety of human and non-human inputs, including software algorithms and the choices made by designers and end-users. An appropriate framework for designating responsibility in the AI space is the legal concept of “faultless responsibility.” This framework assigns responsibility to all parties involved in the system and thus incentivizes each actor, from developer to end-user, to respect ethical boundaries.

A second set of AI-related ethical guidelines must help safeguard human self-determination.

AI-powered devices are penetrating and shaping people’s everyday lives. Ethical guidelines must ensure “trust and transparency” wherever these systems operate, whether in private homes, hospitals or schools, and must protect employee rights in the workplace. Furthermore, AI developers must ensure that their devices won’t undermine people’s ability to make free choices and shape their own lives.

Establishing a set of universal ethical principles and finding viable ways to implement them will be the next important step in the AI revolution.

Efforts are underway to establish a set of universal ethical principles that govern the development and use of AI across all cultures and settings. Several international initiatives on AI ethics have begun to converge on four guiding principles: “beneficence, nonmaleficence, autonomy, and justice.” The Universal Declaration of Human Rights may provide additional guidance. Putting these abstract principles into practice is the next task. The European Parliament has recently set up the AI4People project, tasked with hammering out practical ways to embed ethics in AI systems. Developing “foresight methodologies” will be a crucial component in mitigating AI-related risks. A case in point is “impact assessment analyses,” which systematically evaluate how a new technology may affect people’s fundamental rights in specific settings. Foresight methodologies will not only help prevent potential ethical breaches but also assist in developing ethical solutions that allow AI to become “a force for good.”

About the Authors

Mariarosaria Taddeo and Luciano Floridi are experts at the Oxford Internet Institute and the Alan Turing Institute in the UK. 
