Practical Fairness

Achieving Fair and Secure Data Models

O'Reilly

15 min read
6 take-aways
Audio & text

What's inside?

Software that learns from data increasingly influences daily life, raising profound questions about its assessments and decisions.



Editorial Rating

9

Qualities

  • Scientific
  • Concrete Examples
  • For Experts

Recommendation

Machine learning is becoming ubiquitous. This branch of artificial intelligence works by teaching a computer program what correct output looks like. This powerful method raises questions regarding fair outcomes for the people machine learning (ML) affects. Software engineer and attorney Aileen Nielsen examines different kinds of fairness and how training data and algorithms can promote them. For those developing machine learning models, she provides useful examples in Python.

Take-Aways

  • Fairness is about who gets what and how that’s decided. 
  • To get fair results, start with fair data. 
  • Train your data model to increase fairness at different stages of the process.
  • Privacy and fairness are vulnerable to attacks.
  • Fair models should go into fair products. 
  • The market will not provide fairness without the correct laws. 

Summary

Fairness is about who gets what, and how that’s decided. 

Every new technology creates victims along with progress. Information technology improves life but can prey on users’ time and attention or become a tool for nefarious purposes. Often, unfairness takes the form of violations of community norms. This happens when large-scale use of a technology, from drones to bots targeting dating apps, becomes a nuisance.

Software developers should pay attention to the differences between equity and equality, and between security and privacy. Ignoring fairness exposes companies to legal trouble and consumer backlash. Laws set by the United States, Europe and China provide standards for some of these aspects. However, a fairness mandate need not dampen innovation; it can stimulate ideas in mathematics, computer science and law.

People tend to prefer equity over equality. Equity implies that people should not receive different treatment simply for belonging to a certain group – direct discrimination. Nor should a seemingly neutral practice leave a specific group disproportionately better or worse off – indirect discrimination.

“We write ML algorithms not so we can treat people equally but rather so we can treat them equitably – that is, according to their merit on a metric specific to a given task or purpose.”

But equity itself is not straightforward. In some cases, a method that is fair according to one metric may not be fair according to another. This applies to privacy metrics. One metric could be the probability of an adversary successfully extracting private information from the output of a model. Another could be the amount of privileged information such an adversary could obtain. And even the best security measures can be undercut by human error.

The three aspects of fairness – antidiscrimination, security and privacy – link together socially, mathematically and technologically. For example, the threat of a terrorist attack pits privacy against security. In this example and elsewhere, no one can ensure fairness in all respects. No software solution will ever exist that makes models generated by machine learning automatically fair.

To get fair results, start with fair data. 

Ensuring the fairness of a model starts with fair data. This means data of high quality, obtained without foul play and suited to the model’s intended purposes. Data integrity also demands correct labeling. For example, no one should take data on a person’s health care spending and label it “health problems”: a model trained that way would regard people who have money to spend on health care as less healthy.

“Technology is neither good nor bad; nor is it neutral.” (historian Melvin Kranzberg)

Data quality suffers from biased sampling. For example, police in the United States stop and search more cars driven by Black people than cars driven by white people. As a consequence, traffic stop reports feature more Black than white people.

Data can also be harmful if it is incomplete. This occurs, for example, when the software for a self-driving car is trained only on data from the regions around San Francisco.

Train your data model to increase fairness at different stages of the process.

To increase a model’s fairness, engineers can take measures affecting the data, the processing of the data or the output of the model. Pre-processing the data proves the most flexible and powerful of these options, but no standard way of performing pre-processing exists.

“Pre-processing offers the most opportunities for downstream metrics.”

One method to increase fairness is to delete data that someone might exploit to discriminate against people – for example, their gender. To make this work, engineers must also delete data that could indirectly lead to discrimination: a zip code, for example, can serve as an indicator of race.
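
The following is a minimal sketch of this feature-deletion approach in Python, the language the book uses for its examples. It assumes a pandas DataFrame; the column names, the correlation-based proxy check and the 0.4 threshold are illustrative assumptions, not the book’s procedure.

```python
# Sketch: drop the protected attribute and any feature that tracks it closely,
# since such proxies (like zip code for race) can reintroduce discrimination.
import pandas as pd

def drop_sensitive_and_proxies(df: pd.DataFrame, sensitive: str,
                               threshold: float = 0.4) -> pd.DataFrame:
    protected = df[sensitive].astype("category").cat.codes
    proxies = []
    for col in df.columns.drop(sensitive):
        series = df[col]
        encoded = series.astype("category").cat.codes if series.dtype == object else series
        if abs(encoded.corr(protected)) >= threshold:
            proxies.append(col)  # flag the column as a likely proxy
    return df.drop(columns=[sensitive] + proxies)

# Hypothetical usage: cleaned = drop_sensitive_and_proxies(applicants, sensitive="gender")
```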

Another possibility is attaching weights to the data about each person; the weights would ensure that any outcome is non-discriminatory. A problem with treating data in this way is that individual fairness can lead to unfairness for a group, or vice versa. Techniques that balance the two include “learned fair representation” and “optimized pre-processing.” Both techniques transform data instead of deleting it, and the developer chooses the desired combination of types of fairness.
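
As a sketch of the weighting idea, the code below assigns each record a weight so that group membership and outcome look statistically independent. It assumes a pandas DataFrame with hypothetical column names; it is not the learned-fair-representation or optimized-pre-processing algorithms themselves.

```python
# Sketch: reweight records so the protected group and the label appear independent.
import pandas as pd

def reweigh(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    weights = pd.Series(1.0, index=df.index)
    for g, lbl in df[[group_col, label_col]].drop_duplicates().itertuples(index=False):
        mask = (df[group_col] == g) & (df[label_col] == lbl)
        # Weight = share expected if group and label were independent / observed share.
        expected = (df[group_col] == g).mean() * (df[label_col] == lbl).mean()
        weights[mask] = expected / mask.mean()
    return weights

# The resulting weights can be passed to any learner that accepts sample weights.
```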

Another way to train a model not to discriminate is “adversarial de-biasing.” This works by having a second model analyze the output of the first. For example, for a model that tries to predict if a person will use many medical services, the adversarial model will try to determine whether that person is a member of a group that suffers discrimination. This finding then becomes part of the input for the next training iteration.
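
A minimal sketch of this adversarial loop appears below, assuming PyTorch and simple linear models rather than the book’s own code. X is a tensor of features; y and group are 0/1 tensors of shape (n, 1) holding the task label and the protected-group indicator.

```python
# Sketch: the predictor learns the task while being penalized whenever the
# adversary can recover the protected group from the predictor's output.
import torch
import torch.nn as nn

def adversarial_debias(X, y, group, epochs=200, lam=1.0):
    predictor = nn.Linear(X.shape[1], 1)   # predicts the task label
    adversary = nn.Linear(1, 1)            # predicts the group from the predictor's score
    opt_p = torch.optim.Adam(predictor.parameters(), lr=0.01)
    opt_a = torch.optim.Adam(adversary.parameters(), lr=0.01)
    bce = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        # 1) Train the adversary to spot the protected group.
        adv_loss = bce(adversary(predictor(X).detach()), group)
        opt_a.zero_grad(); adv_loss.backward(); opt_a.step()
        # 2) Train the predictor to do the task while fooling the adversary.
        scores = predictor(X)
        pred_loss = bce(scores, y) - lam * bce(adversary(scores), group)
        opt_p.zero_grad(); pred_loss.backward(); opt_p.step()
    return predictor
```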

Sometimes neither pre-processing the data nor training a model for fairness is possible or allowable. As a last resort, users can process the output of a model to make it fairer. In general, this boils down to making results more random. Some people would regard this post-processing as replacing one kind of unfairness with another. But at least it provides one element of fairness – transparency: an individual whom the model affects can determine what the result would have been without the post-processing.
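
The sketch below shows the randomization idea in its simplest form, assuming two groups coded 0 and 1 and binary accept/reject decisions; it illustrates the principle rather than any specific published post-processing method.

```python
# Sketch: randomly promote some rejected members of the disadvantaged group
# until both groups have the same acceptance rate.
import numpy as np

def equalize_acceptance(decisions: np.ndarray, group: np.ndarray, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    adjusted = decisions.copy()
    rates = {g: decisions[group == g].mean() for g in (0, 1)}
    low = min(rates, key=rates.get)               # group with the lower acceptance rate
    target, current = max(rates.values()), rates[low]
    p_flip = (target - current) / (1 - current) if current < 1 else 0.0
    rejected = (group == low) & (adjusted == 0)
    adjusted[rejected] = rng.random(rejected.sum()) < p_flip
    return adjusted
```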

To gauge whether a model generates fair outcomes, audit it. In a “black-box audit,” feed the model data and study the output. In a “white-box audit,” inspect the code to determine what the model does. If the code of a model proves difficult to analyze, a black-box approach may be necessary.

“Black-box auditing is a wider-ranging exercise that has more potential future benefits but will also rely on more action downstream once findings are known, while post-processing takes immediate corrective action.”
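
As a minimal sketch of a black-box audit, the function below only calls the model’s prediction interface and compares positive rates across groups. The model, data and the single disparity metric shown are assumptions; a real audit would examine several metrics.

```python
# Sketch: audit a model through its predictions alone, with no access to its code.
import numpy as np

def black_box_audit(model, X: np.ndarray, group: np.ndarray) -> dict:
    preds = model.predict(X)                       # assumes a scikit-learn style predict()
    rates = {g: float(preds[group == g].mean()) for g in np.unique(group)}
    disparity = max(rates.values()) - min(rates.values())
    return {"positive_rate_by_group": rates, "disparity": disparity}

# Hypothetical usage: report = black_box_audit(loan_model, X_test, applicant_group)
```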

An alternative method involves building a second model that uses the output of the audited model as input. This second model can perform a straightforward analysis of the results, or it can act as an adversarial model and attempt to prove the audited model biased in some way.

Whatever a model decides, people regard it as unfair if it seems arbitrary. One way to avoid this is to use interpretable models – white-box models whose processing people understand. Another is to have black-box models explain the basis on which they made a decision. What constitutes a worthy explanation depends on the audience, whether end users, government regulators or model developers. And even when a model provides an explanation, that doesn’t mean the result itself is correct or fair.
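
One simple way a black-box model can be made to justify a single decision is sketched below: perturb each feature of one input and record how much the predicted score moves. It is a crude, assumed stand-in for full attribution methods and expects a scikit-learn style predict_proba function.

```python
# Sketch: a local, per-decision explanation obtained by perturbing one feature at a time.
import numpy as np

def local_explanation(predict_proba, x: np.ndarray, delta: float = 0.1) -> np.ndarray:
    base = predict_proba(x.reshape(1, -1))[0, 1]        # score for the positive class
    effects = np.zeros(len(x))
    for i in range(len(x)):
        perturbed = x.copy().astype(float)
        perturbed[i] += delta
        effects[i] = predict_proba(perturbed.reshape(1, -1))[0, 1] - base
    return effects   # features with large absolute effects drove the decision
```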

Privacy and fairness are vulnerable to attacks. 

Privacy is hard to pin down technically, legally and socially. New technologies may undercut the anonymization of data. New concepts emerge, such as the “right to be forgotten,” under which you can request erasure of your data from the public domain.

Privacy violations take a number of forms. In a “membership attack,” an adversary discovers that a person’s data was part of the training data for a model, which might reveal information about that person. In “model inversion,” an attacker extracts part of the training data. To defend against such attacks, engineers can give the training data special treatment, such as “k-anonymity,” which masks the identity of individuals in a data set, or apply algorithms such as “differential privacy,” which add “noise” to the data to make a privacy breach harder.
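
The sketch below illustrates the differential-privacy idea with the classic Laplace mechanism on a count query: enough noise is added that any single person’s presence changes the released answer only within a bounded privacy budget. The epsilon value is illustrative.

```python
# Sketch: release a count with Laplace noise calibrated to the query's sensitivity.
import numpy as np

def private_count(values: np.ndarray, condition, epsilon: float = 0.5) -> float:
    true_count = float(np.sum(condition(values)))
    sensitivity = 1.0   # adding or removing one person changes a count by at most 1
    noise = np.random.default_rng().laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Hypothetical usage: private_count(patient_ages, lambda a: a > 65)
```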

“In addition to being a moving target because of evolving technical and mathematical knowledge, privacy is an evolving legal norm.”

Other attacks against models aim at subverting their output. In an “evasion attack,” attackers feed the model data that forces it to err. In a well-known example, a seemingly abstract pattern, when added to a picture of a panda, convinced a model that the combined image was of a gibbon. Such an attack can cause real damage. Small stickers attached to a stop sign prevented a model from recognizing it. This flawed model could be present in a self-driving car.
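
A minimal sketch of such an evasion attack, in the style of the fast gradient sign method behind the panda example, appears below. It assumes a PyTorch classifier; the model and tensors are stand-ins, not the original experiment.

```python
# Sketch: nudge every input feature in the direction that increases the model's loss,
# often enough to flip the predicted class while the change stays imperceptible.
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor, eps: float = 0.05) -> torch.Tensor:
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()
```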

Adversarial training can harden models against such attacks. In a “poisoning attack,” the attacker’s goal is to make the model malfunction or to classify certain data in a desired way. Researchers have so far not succeeded in defending against poisoning attacks, and many digital products and important machine learning models in use today remain exposed to them.
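
As a sketch of adversarial training, the step below augments each training batch with perturbed copies of itself so the model also learns to classify attacked inputs. It assumes PyTorch, and the hyperparameters are illustrative.

```python
# Sketch: one training step that mixes clean and adversarially perturbed examples.
import torch
import torch.nn as nn

def adversarial_training_step(model, optimizer, x, y, eps=0.05):
    # Craft perturbed inputs with a gradient-sign step that raises the loss.
    x_adv = x.clone().detach().requires_grad_(True)
    nn.functional.cross_entropy(model(x_adv), y).backward()
    x_adv = (x_adv + eps * x_adv.grad.sign()).detach()
    # Update the model on clean and perturbed inputs together.
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(torch.cat([x, x_adv])), torch.cat([y, y]))
    loss.backward()
    optimizer.step()
    return loss.item()
```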

Fair models should go into fair products. 

Machine learning engineers are not the only people who must consider fairness. Their work ends up in consumer products, reports or software that clients use. The product must satisfy the reasonable expectations of customers. To this end, marketing should be clear and honest. The product should not harm those who have contributed data. 

Companies launching new products should ask whether they are adding value. For example, an app that makes it possible to sell the parking space someone is vacating proves problematic. It brings a public good into the commercial domain but does not create extra parking spaces. 

“While we don’t like to admit it, the truth is that some machine learning models probably shouldn’t be developed and some products most certainly shouldn’t be designed and launched on the basis of those models.”

Companies should think hard about how their product could be misused. For example, a breathalyzer sold to the general public might encourage drinking to the limit or enable drinking games. Companies also should not roll out updates too frequently: frequent updates give the impression of delivering cutting-edge technology but impede the beneficial cycle of customer feedback and redesign.

Even if a product works well, it can have fairness problems if it works better for some than for others. For example, heart rate monitors that send light through the skin work better for white people, just as voice recognition systems in smartphones work better for men. Many software products use “dark patterns” to sneakily influence their users – for example, to buy something. One researcher found 1,818 different dark patterns on 11,000 websites. They include mentioning additional costs only at the last moment; suggesting there is limited time or inventory; and raising difficulties for those attempting to cancel a sale.

The market will not provide fairness without the correct laws. 

Market competition is not going to force companies to deliver fairness in their products. The law must step in. People have values they hold dear, but still buy products that violate them because they may not understand a product’s fairness problems. Ideas about unfairness and discrimination will change over time, as will laws. Laws will specify a minimum level of fairness; companies may exceed this with good intentions or to prevent regulation.

Two major laws concerning data use are the EU’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). The GDPR gives citizens the right to data portability, the right to erasure and the right to correct errors in personal data. These rights apply to all personal data held by a company established in the EU; in addition, companies outside the EU that hold data of EU citizens must abide by the same rules.

“The GDPR provided an impetus for many organizations to examine their organizational culture with respect to privacy, transparency and accountability.”

The CCPA is similar to the GDPR, but applies only to data – from the previous 12 months – of residents of California. Possible fines are much lower under the California law. 

The GDPR forbids leaving important decisions that affect EU citizens solely to algorithms. In the United States, a law providing similar guidelines for the use of algorithms did not pass. US judges, however, are starting to regard data breaches as direct harm to consumers and to allow consumers to sue for damages.

In some places, specific laws regulate aspects of fairness in the digital realm. California, for example, sets rules for chatbots. If a company deploys one, it must disclose to a user that he or she is not communicating with a human.

As the machine learning revolution marches on, both technology and the law regarding fairness will evolve. 

About the Author

Software engineer and lawyer Aileen Nielsen combines work at a deep learning start-up with a fellowship in law and technology at ETH Zürich.
