Design AI so that it’s Fair

Identify sources of inequity, de-bias training data and develop algorithms that are robust to skews in data, urge James Zou and Londa Schiebinger.

Nature

5 min read
4 take-aways

What's inside?

Scientists and ethicists discuss how to mitigate gender, race and ethnic biases in AI systems.


Editorial Rating

8

Qualities

  • Analytical
  • Scientific
  • Concrete Examples

Recommendation

Identifying intrinsic biases toward gender, race and culture in artificial intelligence systems requires a structured analysis of data collection, machine learning and how humans interpret the results. AI-supported decisions by governments, corporations and scholars can be unintentionally tainted by such inequities. In a detailed report citing numerous real-world examples, James Zou and Londa Schiebinger offer solid suggestions for adjusting AI algorithms, aiming for fairness to the millions of people who may be affected. The article is required reading for anyone involved in business and policy decisions and concerned about the ethics of AI systems.

Take-Aways

  • Artificial intelligence (AI) applications tend to discriminate based on gender, ethnicity, race and income.
  • AI bias is partly due to the basic methodology of machine learning programs.
  • Algorithms can be better designed to overcome intrinsic biases within AI data sets.
  • Ethicists and scientists question whether AI training data should represent the world as it is, or as it should be.

Summary

Artificial intelligence (AI) applications tend to discriminate based on gender, ethnicity, race and income.

As AI algorithms become more sophisticated, biases creep into the design process, requiring systemic solutions. Because most AI tasks demand huge data sets, programmers must examine the gathering, organization and processing of billions of images and words. When neural networks scrape global websites such as Google Images or Wikipedia for information, unintended inequalities can occur.

“More than 45% of ImageNet data, which fuels research in computer vision, comes from the United States, home to only 4% of the world’s population.”

For example, India and China together account for 36% of the world’s population but provide only about 3% of ImageNet’s 14 million pictures. This explains why ImageNet-trained vision algorithms identify a woman in a white wedding dress as “bride,” but label a photo of a traditional Indian bride “costume” or “performance art.”
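
The skew described above can be made concrete as a representation ratio. The sketch below uses only the shares cited in the article (US: ~45% of images versus ~4% of population; India and China combined: ~3% of images versus ~36% of population); the grouping of India and China into one entry is a simplification for illustration.

```python
# Shares cited in the article: fraction of world population vs. fraction
# of ImageNet images, per region (India and China grouped for brevity).
population_share = {"United States": 0.04, "India+China": 0.36}
dataset_share = {"United States": 0.45, "India+China": 0.03}

def representation_ratio(region):
    # >1 means the region is over-represented in the data set;
    # <1 means it is under-represented relative to its population.
    return dataset_share[region] / population_share[region]

for region in population_share:
    print(region, round(representation_ratio(region), 2))
```

On these figures the United States is over-represented by a factor of about 11, while India and China together sit below one-tenth of proportional representation.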

AI bias is partly due to the basic methodology of machine learning programs.

To maximize accuracy on training data, programmers optimize AI programs to recognize the patterns that appear most frequently. For example, Google Translate defaults to the masculine pronoun “he” rather than “she” because “he” statistically appears about twice as often as “she” in text. This points to a need for data sets with improved social awareness and ethnic and gender complexity.
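
The mechanism is easy to see in miniature. In this sketch the counts are invented to mirror the 2:1 skew the authors cite: when the context gives no gender information, a model trained purely to maximize accuracy always emits the majority token.

```python
from collections import Counter

# Invented counts mirroring the 2:1 "he"/"she" skew cited in the text.
pronoun_counts = Counter({"he": 200, "she": 100})

def default_pronoun(counts):
    # With no disambiguating context, outputting the highest-frequency
    # token minimizes expected error, so the model always picks it.
    return counts.most_common(1)[0][0]

print(default_pronoun(pronoun_counts))  # "he", purely from frequency
```

The point is that the bias is not a bug in the training procedure: the frequency-matching objective is working exactly as designed, which is why the fix must come from the data or from added constraints rather than from the optimizer.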

“We encourage the organizers of machine-learning conferences, such as the International Conference on Machine Learning, to request standardized metadata as an essential component of the submission and peer-review process.”

Annotators should document how labels and demographic information were collected, and how the AI classifies images such as faces. Crowdsourced data sets should include information about the participants who supplied the data. Today, reputable journals ask authors to provide detailed information about any AI data used.
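
A standardized metadata record of the kind conference organizers might request could look like the sketch below. All field names and values here are illustrative assumptions, not a published standard; the intent is only to show that the documentation the authors call for can be made machine-checkable during peer review.

```python
# Hypothetical dataset metadata record (field names are illustrative).
dataset_metadata = {
    "name": "example-faces-v1",                 # hypothetical dataset
    "collection": {
        "method": "crowdsourced photo uploads",
        "participant_info_reported": True,      # who supplied the data
    },
    "annotation": {
        "labels": ["face", "no_face"],
        "annotator_demographics_reported": True,
        "labeling_instructions_documented": True,
    },
}

# A reviewer (or submission system) can then verify coverage directly:
required = ["collection", "annotation"]
assert all(key in dataset_metadata for key in required)
```
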

Algorithms can be better designed to overcome intrinsic biases within AI data sets.

In an “AI audit,” an auditing program probes the trained model for inequities and stereotypes, and the auditor then corrects the relationships between words and pictures. For example, the auditor could be instructed to examine the associations between the word “woman” and the words “queen” and “homemaker,” and correct any identified bias.
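
One common form of such an audit measures word associations by projecting embedding vectors onto a gender direction. The sketch below uses hand-made 3-d vectors purely for illustration; a real audit would load trained embeddings (such as word2vec or GloVe), and the correction shown is the "hard de-biasing" idea of removing the gender component from a flagged word.

```python
import numpy as np

# Hand-made toy vectors (illustrative only; not real embeddings).
vecs = {
    "man":       np.array([ 1.0, 0.1, 0.0]),
    "woman":     np.array([-1.0, 0.1, 0.0]),
    "queen":     np.array([-0.9, 0.8, 0.1]),
    "homemaker": np.array([-0.8, 0.0, 0.9]),
    "engineer":  np.array([ 0.7, 0.0, 0.9]),
}

# Unit vector along the man-woman axis.
gender_dir = vecs["man"] - vecs["woman"]
gender_dir = gender_dir / np.linalg.norm(gender_dir)

def gender_lean(word):
    # Projection onto the gender axis: negative leans "woman".
    return float(np.dot(vecs[word], gender_dir))

def debias(word):
    # Hard de-biasing: subtract the gender component from the vector.
    v = vecs[word]
    return v - np.dot(v, gender_dir) * gender_dir

# "queen" keeps a legitimate gender association; "homemaker" shows the
# same lean as a stereotype, which the auditor flags and neutralizes.
print(gender_lean("queen"), gender_lean("homemaker"))
print(float(np.dot(debias("homemaker"), gender_dir)))  # ~0 after fix
```

Note that the audit alone cannot distinguish the legitimate association ("queen") from the stereotype ("homemaker"); deciding which associations to neutralize is the human judgment call the article emphasizes.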

“Unless the appropriate categories are captured, it’s difficult to know what constraints to impose on the model, or what corrections to make.”

Historical databases can be “de-biased” by examining how stereotypical language has evolved over time in historical texts.

Ethicists and scientists question whether AI training data should represent the world as it is, or as it should be.

AI could assess job candidates on merit, or it could predict how easily a candidate would assimilate into a workplace; it is up to humans to decide which data set is most fair and useful. AI researchers need to collaborate with experts in social science, medicine, law and other disciplines to evaluate the relationships between algorithms and training data.

“As computer scientists, ethicists, social scientists and others strive to improve the fairness of data and of AI, all of us need to think about appropriate notions of fairness.”

AI has the potential to sustain or worsen inequalities and discrimination in society, while it continues to transform communications, politics, media and daily life in general.

About the Authors

James Zou is an assistant professor of biomedical data science and (by courtesy) of computer science and electrical engineering at Stanford University. Londa Schiebinger is the John L. Hinds Professor of History of Science and director of Gendered Innovations in Science, Health & Medicine, Engineering, and Environment at Stanford University.
