Design AI so that it’s Fair
Identify sources of inequity, de-bias training data and develop algorithms that are robust to skews in data, urge James Zou and Londa Schiebinger.

Nature

5 min read
4 take-aways

What's inside?

Scientists and ethicists discuss how to mitigate gender, racial and ethnic biases in AI systems.


Editorial Rating

8

Qualities

  • Analytical
  • Scientific
  • Concrete Examples

Recommendation

Identifying intrinsic gender, racial and cultural biases in artificial intelligence systems requires a structured analysis of data collection, machine learning and how humans interpret the results. AI-supported decisions by governments, corporations and scholars can be unintentionally tainted by such inequities. In a detailed report citing numerous real-world examples, James Zou and Londa Schiebinger offer solid suggestions for adjusting AI algorithms, aiming for fairness to the millions of people who may be affected. The article is required reading for anyone involved in business and policy decisions and concerned about the ethics of AI systems.

Summary

Artificial intelligence (AI) applications tend to discriminate based on gender, ethnicity, race and income.

As AI algorithms become more sophisticated, biases creep into the design process, requiring systemic solutions. Because most AI tasks demand huge data sets, programmers must examine the gathering, organization and processing of billions of images and words. When neural networks scrape information from global websites such as Google Images or Wikipedia, unintended inequities can slip into the data they learn from.
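To make that kind of examination concrete, here is a minimal Python sketch of how a team might audit a dataset's metadata for geographic skew before training. It is an illustration only, not the authors' method: the "records" list, its "country" field and the representation_gaps helper are hypothetical stand-ins for whatever metadata a real dataset carries.

from collections import Counter

def representation_gaps(records, population_share, threshold=0.5):
    """Flag countries whose share of the data falls below `threshold`
    times their share of the world's population (hypothetical audit)."""
    counts = Counter(r["country"] for r in records)
    total = sum(counts.values())
    gaps = {}
    for country, pop_share in population_share.items():
        data_share = counts.get(country, 0) / total
        if data_share < threshold * pop_share:
            gaps[country] = (data_share, pop_share)
    return gaps

# Toy data mirroring the skew the article describes: the US dominates,
# while India (IN) and China (CN) barely appear.
records = ([{"country": "US"}] * 45 + [{"country": "IN"}] * 2
           + [{"country": "CN"}] * 1 + [{"country": "other"}] * 52)
population_share = {"US": 0.04, "IN": 0.18, "CN": 0.18}

print(representation_gaps(records, population_share))
# -> {'IN': (0.02, 0.18), 'CN': (0.01, 0.18)}: both flagged as under-represented

Such an audit only surfaces the skew; deciding how to correct it (collecting more data, reweighting, or constraining the algorithm) is the harder step the article addresses.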

For example, India and China together account for 36% of the world’s people, but provide only about 3% of ImageNet’s 14 million pictures. This skew explains why vision algorithms trained on ImageNet identify a woman in a white ...
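The scale of that skew is easy to quantify from the figures just cited. The short sketch below does the arithmetic; the reweighting comment names one generic mitigation (importance reweighting), included as an illustrative assumption rather than the article's prescription.

# Skew implied by the ImageNet figures cited above.
population_share = 0.36  # India + China: share of world population
data_share = 0.03        # India + China: approximate share of ImageNet images

skew = population_share / data_share
print(f"Under-representation factor: {skew:.0f}x")  # -> 12x

# One generic remedy (importance reweighting, an assumption for
# illustration): upweight each image from the under-represented
# regions by roughly this factor during training.
sample_weight = skew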

About the Authors

James Zou is an assistant professor of biomedical data science and (by courtesy) of computer science and electrical engineering at Stanford University. Londa Schiebinger is the John L. Hinds Professor of History of Science and director of Gendered Innovations in Science, Health & Medicine, Engineering, and Environment at Stanford University.

