Summary of Machine Bias


Rating

8 Overall

8 Applicability

7 Innovation

8 Style

Recommendation

Imagine a computer program that can predict whether a criminal will strike again. Now, imagine that program isn’t accurate. ProPublica senior reporters and editors poke holes in one of the dozens of risk assessment programs that determine the fate of defendants across the United States. They found that one popular tool incorrectly flagged black defendants as likely reoffenders at twice the rate of white defendants, exposing some of them to longer incarceration. getAbstract recommends this revealing analysis to anyone enthusiastic or skeptical about the use of algorithms in the American judicial system.

In this summary, you will learn

  • How risk assessment programs suffer from inherent bias,
  • What factors influence risk scores, and
  • How risk scores affect sentencing.
 

About the Authors

Julia Angwin, Jeff Larson and Lauren Kirchner hold senior reporting and editing positions at ProPublica. Surya Mattu is a contributing researcher.

 

Summary

Police arrested Brisha Borden and Vernon Prater separately for petty theft. A computer program rated Borden, who is black, as more likely to reoffend, even though Prater, who is white, had the worse record. Borden stayed out of trouble, while Prater went on to serve prison time for subsequent burglary and theft.

