Concrete Examples
Identifying intrinsic biases involving gender, race and culture in artificial intelligence systems requires a structured analysis of how data are collected, how machine-learning models are trained, and how results are interpreted by humans. Decisions that governments, corporations and scholars make with AI support can be unintentionally tainted by such inequities. In a detailed report citing numerous real-world examples, social scientists and engineers offer solid suggestions for adjusting AI algorithms to be fair to the millions of people they may affect. The article is required reading for anyone involved in business and policy decisions who is concerned about the ethics of AI systems.
About the Authors
James Zou is assistant professor of biomedical data science and, by courtesy, of computer science and electrical engineering at Stanford University. Londa Schiebinger is the John L. Hinds Professor of History of Science and director of Gendered Innovations in Science, Health & Medicine, Engineering, and Environment at Stanford University.