Inherent Trade-Offs in the Fair Determination of Risk Scores
Abstract
Recent discussion in the public sphere about algorithmic classification has involved tension between competing notions of what it means for a probabilistic classification to be fair to different groups. We formalize three fairness conditions that lie at the heart of these debates, and we prove that except in highly constrained special cases, there is no method that can satisfy these three conditions simultaneously. Moreover, even satisfying all three conditions approximately requires that the data lie in an approximate version of one of the constrained special cases identified by our theorem. These results suggest some of the ways in which key notions of fairness are incompatible with each other, and hence provide a framework for thinking about the trade-offs between them.
1 Introduction
There are many settings in which a sequence of people comes before a decision-maker, who must make a judgment about each based on some observable set of features. Across a range of applications, these judgments are being carried out by an increasingly wide spectrum of approaches ranging from human expertise to algorithmic and statistical frameworks, as well as various combinations of these approaches.
Along with these developments, a growing line of work has asked how we should reason about issues of bias and discrimination in settings where these algorithmic and statistical techniques, trained on large datasets of past instances, play a significant role in the outcome. Let us consider three examples where such issues arise, both to illustrate the range of relevant contexts, and to surface some of the challenges.
A set of example domains.
First, at various points in the criminal justice system, including decisions about bail, sentencing, or parole, an officer of the court may use quantitative risk tools to assess a defendant’s probability of recidivism — future arrest — based on their past history and other attributes. Several recent analyses have asked whether such tools are mitigating or exacerbating the sources of bias in the criminal justice system; in one widely-publicized report, Angwin et al. analyzed a commonly used statistical method for assigning risk scores in the criminal justice system — the COMPAS risk tool — and argued that it was biased against African-American defendants [2, 23]. One of their main contentions was that the tool’s errors were asymmetric: African-American defendants were more likely to be incorrectly labeled as higher-risk than they actually were, while white defendants were more likely to be incorrectly labeled as lower-risk than they actually were. Subsequent analyses raised methodological objections to this report, and also observed that despite the COMPAS risk tool’s errors, its estimates of the probability of recidivism are equally well calibrated to the true outcomes for both African-American and white defendants [1, 10, 13, 17].
Second, in a very different domain, researchers have begun to analyze the ways in which different genders and racial groups experience advertising and commercial content on the Internet differently [9, 26]. We could ask, for example: if a male user and female user are equally interested in a particular product, does it follow that they’re equally likely to be shown an ad for it? Sometimes this concern may have broader implications, for example if women in aggregate are shown ads for lower-paying jobs. Other times, it may represent a clash with a user’s leisure interests: if a female user interacting with an advertising platform is interested in an activity that tends to have a male-dominated viewership, like professional football, is the platform as likely to show her an ad for football as it is to show such an ad to an interested male user?
A third domain, again quite different from the previous two, is medical testing and diagnosis. Doctors making decisions about a patient’s treatment may rely on tests providing probability estimates for different diseases and conditions. Here too we can ask whether such decision-making is being applied uniformly across different groups of patients [16, 27], and in particular how medical tests may play a differential role for conditions that vary widely in frequency between these groups.
Providing guarantees for decision procedures.
One can raise analogous questions in many other domains of fundamental importance, including decisions about hiring, lending, or school admissions [24], but we will focus on the three examples above for the purposes of this discussion. In these three example domains, a few structural commonalities stand out. First, the algorithmic estimates are often being used as “input” to a larger framework that makes the overall decision — a risk score provided to a human expert in the legal and medical instances, and the output of a machine-learning algorithm provided to a larger advertising platform in the case of Internet ads. Second, the underlying task is generally about classifying whether people possess some relevant property: recidivism, a medical condition, or interest in a product. We will refer to people as being positive instances if they truly possess the property, and negative instances if they do not. Finally, the algorithmic estimates being provided for these questions are generally not pure yes-no decisions, but instead probability estimates about whether people constitute positive or negative instances.
Let us suppose that we are concerned about how our decision procedure might operate differentially between two groups of interest (such as AfricanAmerican and white defendants, or male and female users of an advertising system). What sorts of guarantees should we ask for as protection against potential bias?
A first basic goal in this literature is that the probability estimates provided by the algorithm should be well-calibrated: if the algorithm identifies a set of people as having a probability z of constituting positive instances, then approximately a z fraction of this set should indeed be positive instances [8, 14]. Moreover, this condition should hold when applied separately in each group as well [13]. For example, if we are thinking in terms of potential differences between outcomes for men and women, this means requiring that a z fraction of men and a z fraction of women assigned a probability z should possess the property in question.
A second goal focuses on the people who constitute positive instances (even if the algorithm can only imperfectly recognize them): the average score received by people constituting positive instances should be the same in each group. We could think of this as balance for the positive class, since a violation of it would mean that people constituting positive instances in one group receive consistently lower probability estimates than people constituting positive instances in another group. In our initial criminal justice example, for instance, one of the concerns raised was that white defendants who went on to commit future crimes were assigned risk scores corresponding to lower probability estimates in aggregate; this is a violation of the condition here. There is a completely analogous property with respect to negative instances, which we could call balance for the negative class. These balance conditions can be viewed as generalizations of the notions that both groups should have equal false negative and false positive rates.
It is important to note that balance for the positive and negative classes, as defined here, is distinct in crucial ways from the requirement that the average probability estimate globally over all members of the two groups be equal. This latter global requirement is a version of statistical parity [12, 4, 21, 22]. In some cases statistical parity is a central goal (and in some it is legally mandated), but the examples considered so far suggest that classification and risk assessment are much broader activities where statistical parity is often neither feasible nor desirable. Balance for the positive and negative classes, however, is a goal that can be discussed independently of statistical parity, since these two balance conditions simply ask that once we condition on the “correct” answer for a person, the chance of making a mistake on them should not depend on which group they belong to.
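As a concrete sketch of how these guarantees can be checked empirically, suppose we are handed predicted scores, true labels, and group memberships for a handful of people. The toy arrays below are our own illustrative stand-ins, not data from any of the studies cited above.

```python
import numpy as np

# Toy predicted scores, true labels, and group memberships; these arrays
# are illustrative stand-ins, not data from any study discussed here.
scores = np.array([0.2, 0.2, 0.8, 0.8, 0.2, 0.8])
labels = np.array([0, 0, 1, 1, 0, 1])
groups = np.array([1, 1, 1, 2, 2, 2])

for g in (1, 2):
    mask = groups == g
    # Calibration within group g: among people of group g given score s,
    # roughly an s fraction should be positive instances.
    for s in np.unique(scores[mask]):
        sel = mask & (scores == s)
        print("group", g, "score", s, "positive fraction", labels[sel].mean())
    # Balance: average score over this group's positive and negative classes.
    print("group", g,
          "positive-class avg", scores[mask & (labels == 1)].mean(),
          "negative-class avg", scores[mask & (labels == 0)].mean())
```

On this (deliberately easy) toy data the per-group positive fractions match the scores and the class averages agree across groups; on real data one would compare these quantities up to a tolerance.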
The present work: Trade-offs among the guarantees.
Despite their different formulations, the calibration condition and the balance conditions for the positive and negative classes intuitively all seem to be asking for variants of the same general goal — that our probability estimates should have the same effectiveness regardless of group membership. One might therefore hope that it would be feasible to achieve all of them simultaneously.
Our main result, however, is that these conditions are in general incompatible with each other; they can only be simultaneously satisfied in certain highly constrained cases. Moreover, this incompatibility applies to approximate versions of the conditions as well.
In the remainder of this section we formulate this main result precisely, as a theorem building on a model that makes the discussion thus far more concrete.
1.1 Formulating the Goal
Let’s start with some basic definitions. As above, we have a collection of people each of whom constitutes either a positive instance or a negative instance of the classification problem. We’ll say that the positive class consists of the people who constitute positive instances, and the negative class consists of the people who constitute negative instances. For example, for criminal defendants, the positive class could consist of those defendants who will be arrested again within some fixed time window, and the negative class could consist of those who will not. The positive and negative classes thus represent the “correct” answer to the classification problem; our decision procedure does not know them, but is trying to estimate them.
Feature vectors.
Each person has an associated feature vector σ, representing the data that we know about them. Let p_σ denote the fraction of people with feature vector σ who belong to the positive class. Conceptually, we will picture that while there is variation within the set of people who have feature vector σ, this variation is invisible to whatever decision procedure we apply; all people with feature vector σ are indistinguishable to the procedure. Our model will assume that the value p_σ for each σ is known to the procedure. [Footnote 1: Clearly the case in which the value of p_σ is unknown is an important version of the problem as well; however, since our main results establish strong limitations on what is achievable, these limitations are only stronger because they apply even to the case of known p_σ.]
Groups.
Each person also belongs to one of two groups, labeled 1 or 2, and we would like our decisions to be unbiased with respect to the members of these two groups. [Footnote 2: We focus on the case of two groups for simplicity of exposition, but it is straightforward to extend all of our definitions to the case of more than two groups.] In our examples, the two groups could correspond to different races or genders, or other cases where we want to look for the possibility of bias between them. The two groups have different distributions over feature vectors: a person of group t has a probability a_{tσ} of exhibiting the feature vector σ. However, people of each group have the same probability p_σ of belonging to the positive class provided their feature vector is σ. In this respect, σ contains all the relevant information available to us about the person’s future behavior; once we know σ, we do not get any additional information from knowing their group as well. [Footnote 3: As we will discuss in more detail below, the assumption that the group provides no additional information beyond σ does not restrict the generality of the model, since we can always consider instances in which people of different groups never have the same feature vector σ, and hence σ implicitly conveys perfect information about a person’s group.]
Risk Assignments.
We say that an instance of our problem is specified by the parameters above: a feature vector and a group for each person, with a value p_σ for each feature vector σ, and distributions {a_{tσ}} giving the frequency of the feature vectors in each group.
Informally, risk assessments are ways of dividing people up into sets based on their feature vectors (potentially using randomization), and then assigning each set a probability estimate that the people in this set belong to the positive class. Thus, we define a risk assignment to consist of a set of “bins” (the sets), where each bin b is labeled with a score v_b that we intend to use as the probability for everyone assigned to bin b. We then create a rule for assigning people to bins based on their feature vector σ; we allow the rule to divide people with a fixed feature vector across multiple bins (reflecting the possible use of randomization). Thus, the rule is specified by values X_{σb}: a fraction X_{σb} of all people with feature vector σ are assigned to bin b. Note that the rule does not have access to the group of the person being considered, only their feature vector σ. (As we will see, this does not mean that the rule is incapable of exhibiting bias between the two groups.) In summary, a risk assignment is specified by a set of bins, a score v_b for each bin b, and values X_{σb} that define a mapping from people with feature vectors to bins.
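To make this concrete, here is a minimal sketch (with illustrative numbers of our own, not from the paper) of how a risk assignment can be represented: a score vector for the bins, and a matrix of assignment fractions whose rows sum to 1.

```python
import numpy as np

# A toy instance with three feature vectors; all numbers are illustrative.
# p[s] is the fraction of people with feature vector s who belong to the
# positive class (assumed known to the procedure).
p = np.array([0.2, 0.5, 0.8])

# A risk assignment: two bins with scores v, and a matrix X in which
# X[s, b] is the fraction of people with feature vector s sent to bin b.
v = np.array([0.3, 0.8])
X = np.array([
    [1.0, 0.0],   # everyone with feature vector 0 goes to bin 0
    [0.5, 0.5],   # people with feature vector 1 are split across both bins
    [0.0, 1.0],   # everyone with feature vector 2 goes to bin 1
])

# Every person must be assigned somewhere, so each row of X sums to 1.
assert np.allclose(X.sum(axis=1), 1.0)
```

Note that the assignment rule sees only the feature vector, not the group, mirroring the definition above.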
Fairness Properties for Risk Assignments.
Within the model, we now express the three conditions discussed at the outset, each reflecting a potentially different notion of what it means for the risk assignment to be “fair.”

(A) Calibration within groups requires that for each group t, and each bin b with associated score v_b, the expected number of people from group t in b who belong to the positive class should be a v_b fraction of the expected number of people from group t assigned to b.

(B) Balance for the negative class requires that the average score assigned to people of group 1 who belong to the negative class should be the same as the average score assigned to people of group 2 who belong to the negative class. In other words, the assignment of scores shouldn’t be systematically more inaccurate for negative instances in one group than in the other.

(C) Balance for the positive class symmetrically requires that the average score assigned to people of group 1 who belong to the positive class should be the same as the average score assigned to people of group 2 who belong to the positive class.
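The three conditions can be checked mechanically from the quantities in the model. The sketch below is an illustrative implementation under our own naming conventions (n[t, s] for group counts per feature vector, p for positive-class fractions, v for bin scores, X for assignment fractions); it is not code from the paper.

```python
import numpy as np

def check_conditions(n, p, v, X, tol=1e-9):
    """Check fairness conditions (A), (B), (C) for a risk assignment.

    n[t, s]: number of group-(t+1) people with feature vector s
    p[s]:    fraction of people with feature vector s in the positive class
    v[b]:    score of bin b
    X[s, b]: fraction of people with feature vector s assigned to bin b
    Returns a dict with one boolean per condition.
    """
    # Expected people and expected positives from each group in each bin.
    people = n @ X                      # shape: (groups, bins)
    positives = (n * p) @ X             # p broadcasts over feature vectors
    # (A) Calibration within groups: positives == v[b] * people, per bin.
    calibrated = np.allclose(positives, v * people, atol=tol)
    # Average score of the positive / negative class in each group.
    pos_avg = (positives @ v) / positives.sum(axis=1)
    negatives = people - positives
    neg_avg = (negatives @ v) / negatives.sum(axis=1)
    # (B) balance for the negative class, (C) balance for the positive class.
    return {"A": calibrated,
            "B": abs(neg_avg[0] - neg_avg[1]) < tol,
            "C": abs(pos_avg[0] - pos_avg[1]) < tol}
```

With equal base rates and a single bin scored at the common base rate, for example, all three checks pass; with unequal base rates, no single-bin assignment can pass the calibration check.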
Why Do These Conditions Correspond to Notions of Fairness?
All of these are natural conditions to impose on a risk assignment; and as indicated by the discussion above, all of them have been proposed as versions of fairness. The first one essentially asks that the scores mean what they claim to mean, even when considered separately in each group. In particular, suppose a set of scores lacks the first property for some bin b, and these scores are given to a decision-maker; then if people of two different groups both belong to bin b, the decision-maker has a clear incentive to treat them differently, since the lack of calibration within groups on bin b means that these people have different aggregate probabilities of belonging to the positive class. Another way of stating the property of calibration within groups is to say that, conditioned on the bin to which an individual is assigned, the likelihood that the individual is a member of the positive class is independent of the group to which the individual belongs. This means we are justified in treating people with the same score comparably with respect to the outcome, rather than treating people with the same score differently based on the group they belong to.
The second and third ask that if two individuals in different groups exhibit comparable future behavior (negative or positive), they should be treated comparably by the procedure. In other words, a violation of, say, the second condition would correspond to the members of the negative class in one group receiving consistently higher scores than the members of the negative class in the other group, despite the fact that the members of the negative class in the higherscoring group have done nothing to warrant these higher scores.
We can also interpret some of the prior work around our earlier examples through the lens of these conditions. For example, in the analysis of the COMPAS risk tool for criminal defendants, the critique by Angwin et al. focused on the risk tool’s violation of conditions (B) and (C); the counterarguments established that it satisfies condition (A). While it is clearly crucial for a risk tool to satisfy (A), it may still be important to know that it violates (B) and (C). Similarly, to think in terms of the example of Internet advertising, with male and female users as the two groups, condition (A) as before requires that our estimates of ad-click probability mean the same thing in aggregate for men and women. Conditions (B) and (C) are distinct; condition (C), for example, says that a female user who genuinely wants to see a given ad should be assigned the same probability as a male user who wants to see the ad.
1.2 Determining What is Achievable: A Characterization Theorem
When can conditions (A), (B), and (C) be simultaneously achieved? We begin with two simple cases where it’s possible.

Perfect prediction. Suppose that for each feature vector σ, we have either p_σ = 0 or p_σ = 1. This means that we can achieve perfect prediction, since we know each person’s class label (positive or negative) for certain. In this case, we can assign all feature vectors with p_σ = 0 to a bin with score 0, and all feature vectors with p_σ = 1 to a bin with score 1. It is easy to check that all three of the conditions (A), (B), and (C) are satisfied by this risk assignment.

Equal base rates. Suppose, alternately, that the two groups have the same fraction of members in the positive class; that is, the average value of p_σ is the same for the members of group 1 and group 2. (We can refer to this as the base rate of the group with respect to the classification problem.) In this case, we can create a single bin with score equal to this average value of p_σ, and we can assign everyone to this bin. While this is not a particularly informative risk assignment, it is again easy to check that it satisfies fairness conditions (A), (B), and (C).
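A quick numerical sanity check of the equal-base-rates case, with made-up counts of our own: a single bin scored at the common base rate satisfies calibration, and since everyone receives the same score, both balance conditions hold trivially.

```python
import numpy as np

# Equal-base-rates case with made-up counts. n[t, s] is the number of
# group-(t+1) people with feature vector s; p[s] is the positive-class
# fraction for feature vector s.
p = np.array([0.1, 0.5])
n = np.array([[50.0, 25.0],
              [20.0, 10.0]])

base_rates = (n * p).sum(axis=1) / n.sum(axis=1)
assert np.allclose(base_rates[0], base_rates[1])   # both are 7/30 here

# One bin, scored at the common base rate, with everyone assigned to it.
score = base_rates[0]
# (A): expected positives in the bin equal score times people in the bin.
assert np.allclose((n * p).sum(axis=1), score * n.sum(axis=1))
# (B), (C): everyone receives the same score, so the average score of the
# negative class (and of the positive class) equals `score` in both groups.
```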
Our first main result establishes that these are in fact the only two cases in which a risk assignment can achieve all three fairness guarantees simultaneously.
Theorem 1.1
Consider an instance of the problem in which there is a risk assignment satisfying fairness conditions (A), (B), and (C). Then the instance must either allow for perfect prediction (with p_σ equal to 0 or 1 for all σ) or have equal base rates.
Thus, in every instance that is more complex than the two cases noted above, there will be some natural fairness condition that is violated by any risk assignment. Moreover, note that this result applies regardless of how the risk assignment is computed; since our framework considers risk assignments to be arbitrary functions from feature vectors to bins labeled with probability estimates, it applies independently of the method — algorithmic or otherwise — that is used to construct the risk assignment.
The conclusions of the first theorem can be relaxed in a continuous fashion when the fairness conditions are only approximate. In particular, for any ε > 0 we can define ε-approximate versions of each of conditions (A), (B), and (C) (specified precisely in the next section), each of which requires that the corresponding equalities between groups hold only to within an error of ε. For any δ > 0, we can also define a δ-approximate version of the equal base rates condition (requiring that the base rates of the two groups be within an additive δ of each other) and a δ-approximate version of the perfect prediction condition (requiring that in each group, the average of the expected scores assigned to members of the positive class is at least 1 − δ; by the calibration condition, this can be shown to imply a complementary bound on the average of the expected scores assigned to members of the negative class).
In these terms, our approximate version of Theorem 1.1 is the following.
Theorem 1.2
There is a continuous function f, with f(ε) going to 0 as ε goes to 0, so that the following holds. For all ε > 0, and any instance of the problem with a risk assignment satisfying the ε-approximate versions of fairness conditions (A), (B), and (C), the instance must satisfy either the f(ε)-approximate version of perfect prediction or the f(ε)-approximate version of equal base rates.
Thus, anything that approximately satisfies the fairness constraints must approximately look like one of the two simple cases identified above.
Finally, in connection to Theorem 1.1, we note that when the two groups have equal base rates, then one can ask for the most accurate risk assignment that satisfies all three fairness conditions (A), (B), and (C) simultaneously. Since the risk assignment that gives the same score to everyone satisfies the three conditions, we know that at least one such risk assignment exists; hence, it is natural to seek to optimize over the set of all such assignments. We consider this algorithmic question in the final technical section of the paper.
To reflect a bit further on our main theorems and what they suggest, we note that our intention in the present work is not to make a recommendation on how conflicts between different definitions of fairness should be handled. Nor is our intention to analyze which definitions of fairness are violated in particular applications or datasets. Rather, our point is to establish certain unavoidable trade-offs between the definitions, regardless of the specific context and regardless of the method used to compute risk scores. Since each of the definitions reflects (and has been proposed as) a natural notion of what it should mean for a risk score to be fair, these trade-offs suggest a striking implication: that outside of narrowly delineated cases, any assignment of risk scores can in principle be subject to natural criticisms on the grounds of bias. This is equally true whether the risk score is determined by an algorithm or by a system of human decision-makers.
Special Cases of the Model.
Our main results, which place strong restrictions on when the three fairness conditions can be simultaneously satisfied, have more power when the underlying model of the input is more general, since it means that the restrictions implied by the theorems apply in greater generality. However, it is also useful to note certain special cases of our model, obtained by limiting the flexibility of certain parameters in intuitive ways. The point is that our results apply a fortiori to these more limited special cases.
First, we have already observed one natural special case of our model: cases in which, for each feature vector σ, only members of one group (but not the other) can exhibit σ. This means that σ contains perfect information about group membership, and so it corresponds to instances in which risk assignments would have the potential to use knowledge of an individual’s group membership. Note that we can convert any instance of our problem into a new instance that belongs to this special case as follows. For each feature vector σ, we create two new feature vectors σ′ and σ″; then, for each member of group 1 who had feature vector σ, we assign them σ′, and for each member of group 2 who had feature vector σ, we assign them σ″. The resulting instance has the property that each feature vector is associated with members of only one group, but it preserves the essential aspects of the original instance in other respects.
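The conversion just described is mechanical; the sketch below (with our own function name and array conventions) splits each feature vector into a group-1 copy and a group-2 copy while leaving the positive-class fractions untouched.

```python
import numpy as np

def split_by_group(a, p):
    """Split each feature vector into two group-specific copies.

    a[t, s]: probability that a person of group t+1 exhibits feature vector s
    p[s]:    positive-class fraction for feature vector s
    Feature vector s becomes a "primed" copy (group 1 only) and a
    "double-primed" copy (group 2 only); both copies inherit p[s], so the
    positive-class fraction stays independent of group given the vector.
    """
    m = len(p)
    a_new = np.zeros((2, 2 * m))
    a_new[0, :m] = a[0]      # group 1 exhibits only the first m copies
    a_new[1, m:] = a[1]      # group 2 exhibits only the last m copies
    return a_new, np.concatenate([p, p])
```

After the transformation, no feature vector is shared across groups, yet each group's distribution over its own copies matches the original instance.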
Second, we allow risk assignments in our model to split people with a given feature vector over several bins. Our results therefore also apply to the natural special case of the model with integral risk assignments, in which all people with a given feature vector must go to the same bin.
Third, our model is a generalization of binary classification, which corresponds to the special case of only 2 bins. Note that although binary classification does not explicitly assign scores, we can consider the probability that an individual belongs to the positive class, given that they were assigned to a specific bin, to be the score for that bin. Thus, our results hold in the traditional binary classification setting as well.
Data-Generating Processes.
Finally, there is the question of where the data in an instance of our problem comes from. Our results do not assume any particular process for generating the positive/negative class labels, feature vectors, and group memberships; we simply assume that we are given such a collection of values (regardless of where they came from), and then our results address the existence or nonexistence of certain risk assignments for these values.
This increases the generality of our results, since it means that they apply to any process that produces data of the form described by our model. To give an example of a natural generative model that would produce instances with the structure that we need, one could assume that each individual starts with a “hidden” class label (positive or negative), and a feature vector is then probabilistically generated for this individual from a distribution that can depend on their class label and their group membership. (If the feature vectors produced for the two groups are disjoint from one another, then the requirement that the value of p_σ is independent of group membership given σ necessarily holds.) Since a process with this structure produces instances from our model, our results apply to data that arises from such a generative process.
It is also interesting to note that the basic setup of our model, with the population divided across a set of feature vectors for which race provides no additional information, is in fact a very close match to the information one gets from the output of a well-calibrated risk tool. In this sense, one setting for our model would be the problem of applying post-processing to the output of such a risk tool to ensure additional fairness guarantees. Indeed, since much of the recent controversy about fair risk scores has involved risk tools that are well-calibrated but lack the other fairness conditions we consider, such an interpretation of the model could be a useful way to think about how one might work with these tools in the context of a broader system.
1.3 Further Related Work
Mounting concern over discrimination in machine learning has led to a large body of new work seeking to better understand and prevent it. Barocas and Selbst survey a range of ways in which data-analysis algorithms can lead to discriminatory outcomes [3], and review articles by Romei and Ruggieri [25] and Zliobaite [30] survey data-analytic and algorithmic methods for measuring discrimination.
Kamiran and Calders [21] and Hajian and Domingo-Ferrer [18] seek to modify datasets to remove any information that might permit discrimination. Similarly, Zemel et al. look to learn fair intermediate representations of data while preserving information needed for classification [29]. Joseph et al. consider how fairness issues can arise during the process of learning, modeling this using a multi-armed bandit framework [20].
One common notion of fairness is “statistical parity” – equal fractions of each group should be treated as belonging to the positive class [4, 21, 22]. Recent papers have also considered approximate relaxations of statistical parity, motivated by the formulation of disparate impact in the U.S. legal code [12, 28]. Work in these directions has developed learning algorithms that penalize violations of statistical parity [4, 22]. As noted above, we consider definitions other than statistical parity that take into account the class membership (positive or negative) of the people being classified.
Dwork et al. propose a framework based on a task-specific externally defined similarity metric between individuals, seeking to achieve fairness through the goal that “similar people [be] treated similarly” [11]. They strive towards individual fairness, which is a stronger notion of fairness than the definitions we use; however, our approach shares some of the underlying motivation (though not the specifics) in that our balance conditions for the positive and negative classes also reflect the notion that similar people should be treated similarly.
Much of the applied work on risk scores, as noted above, focuses on calibration as a central goal [8, 10, 13]. In particular, responding to the criticism of their risk scores as displaying asymmetric errors for different groups, Dietrich et al. note that empirically, both in their domain and in similar settings, it is typically difficult to achieve symmetry in the error rates across groups when base rates differ significantly. Our formulation of the balance conditions for the positive and negative classes, and our result showing the incompatibility of these conditions with calibration, provide a theoretical basis for such observations.
In recent work concurrent with ours, Hardt et al. consider the natural analogues of our conditions (B) and (C), balance for the negative and positive classes, in the case of classifiers that output binary “yes/no” predictions rather than real-valued scores as in our case [19]. Since they do not require an analogue of calibration, it is possible to satisfy the two balance constraints simultaneously, and they provide methods for optimizing performance measures of the prediction rule subject to satisfying these two constraints. Also concurrent with our work and that of Hardt et al., Chouldechova [5] and Corbett-Davies et al. [6] (and see also [7]) consider binary prediction subject to these same analogues of the balance conditions for the negative and positive classes, together with a form of calibration adapted to binary prediction (requiring that for all people given a positive label, the same fraction of people in each group should truly belong to the positive class). Among other results, they show that no classification rule satisfying the required constraints is possible. Finally, a recent paper of Friedler et al. [15] defines two axiomatic properties of feature generation and shows that no mechanism can be fair under these two properties.
2 The Characterization Theorems
Starting with the notation and definitions from the previous section, we now give a proof of Theorem 1.1.
Informal overview.
Let us begin with a brief overview of the proof, before going into a more detailed version of it. For this discussion, let N_t denote the number of people in group t, and let n_t be the number of people in group t who belong to the positive class.
Roughly speaking, the proof proceeds in two steps. First, consider a single bin b. By the calibration condition, the expected total score given to the group-t people in bin b is equal to the expected number of group-t people in bin b who belong to the positive class. Summing over all bins, we find that the total score given to all people in group t (that is, the sum of the scores received by everyone in group t) is equal to the total number of people in the positive class in group t, which is n_t.
Now, let x be the average score given to a member of the negative class, and let y be the average score given to a member of the positive class. By the balance conditions for the negative and positive classes, these values of x and y are the same for both groups.
Given the values of x and y, the total number of people in the positive class n_t, and the total score given to people in group t — which, as argued above, is also n_t — we can write the total score as

n_t y + (N_t - n_t) x = n_t.
This defines a line for each group t in the two variables x and y, and hence we obtain a system of two linear equations (one for each group) in the unknowns x and y.
If all three conditions — calibration, and balance for the two classes — are to be satisfied, then the pair (x, y) must be a solution to this system of two equations. If the base rates are equal, then n_1/N_1 = n_2/N_2, and hence the two lines are the same; in this case, the system of equations is satisfied by any choice of x and y on this common line. If the base rates are not equal, then the two lines are distinct, and they intersect only at the point (x, y) = (0, 1), which implies perfect prediction — an average score of 0 for members of the negative class and 1 for members of the positive class. Thus, the three conditions can be simultaneously satisfied if and only if we have equal base rates or perfect prediction.
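This two-equation system is easy to examine numerically. The sketch below, with illustrative counts of our own and writing x for the average score of the negative class and y for that of the positive class, solves the system for unequal base rates and recovers the perfect-prediction point.

```python
import numpy as np

# For each group t, calibration plus the balance conditions force
#     n_t * y + (N_t - n_t) * x = n_t,
# where x and y are the (group-independent) average scores of the negative
# and positive classes. Illustrative counts with unequal base rates:
N = np.array([100.0, 100.0])   # group sizes
n = np.array([40.0, 60.0])     # positive-class sizes (base rates 0.4, 0.6)

A = np.column_stack([N - n, n])   # coefficient matrix for (x, y)
x, y = np.linalg.solve(A, n)
# The two distinct lines meet only at the perfect-prediction point
# (x, y) = (0, 1), up to floating-point round-off.
```

Replacing the counts with any pair of equal base rates makes the coefficient matrix singular, reflecting that the two equations collapse to a single line.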
This concludes the overview of the proof; in the remainder of the section we describe the argument at a more detailed level.
Definitions and notation.
Recall from our notation in the previous section that an $a_{t\sigma}$ fraction of the people in group $t$ have feature vector $\sigma$; we thus write $n_{t\sigma} = a_{t\sigma} N_t$ for the number of people in group $t$ with feature vector $\sigma$. Many of the components of the risk assignment and its evaluation can be written in terms of operations on a set of underlying matrices and vectors, which we begin by specifying.

First, let $m$ denote the number of feature vectors in the instance, and let $P$ be a vector indexed by the possible feature vectors, with the coordinate in position $\sigma$ equal to $p_\sigma$. For group $t$, let $n_t$ also be a vector indexed by the possible feature vectors, with the coordinate in position $\sigma$ equal to $n_{t\sigma}$. Finally, it will be useful to have a representation of $P$ as a diagonal matrix; thus, let $P_d$ be the $m \times m$ diagonal matrix with $(P_d)_{\sigma\sigma} = p_\sigma$.

We now specify a risk assignment as follows. The risk assignment involves a set of $B$ bins with associated scores; let $v$ be a vector indexed by the bins, with the coordinate in position $b$ equal to the score $v_b$ of bin $b$. Let $V$ be a diagonal matrix version of $v$: it is a $B \times B$ matrix with $V_{bb} = v_b$. Finally, let $X$ be the $m \times B$ matrix of values $x_{\sigma b}$, specifying the fraction of people with feature vector $\sigma$ who get mapped to bin $b$ under the assignment procedure.
There is an important point to note about the $x_{\sigma b}$ values. If all of them are equal to 0 or 1, this corresponds to a procedure in which all people with the same feature vector get assigned to the same bin. When some of the $x_{\sigma b}$ values are strictly between 0 and 1, the people with vector $\sigma$ are being divided among multiple bins. In this case, there is an implicit randomization taking place with respect to the positive and negative classes, and with respect to the two groups, which we can think of as follows. Since the procedure cannot distinguish among people with vector $\sigma$, in the case that it distributes these people across multiple bins, the subsets of people with vector $\sigma$ who belong to the positive and negative classes, and to the two groups, are divided up randomly across these bins in proportions corresponding to the values $x_{\sigma b}$. In particular, if there are $n_{t\sigma}$ group-$t$ people with vector $\sigma$, the expected number of these people who belong to the positive class and are assigned to bin $b$ is $p_\sigma\, x_{\sigma b}\, n_{t\sigma}$.
Let us now proceed with the proof of Theorem 1.1, starting with the assumption that our risk assignment satisfies conditions (A), (B), and (C).
Calibration within groups.
We begin by working out some useful expressions in terms of the matrices and vectors defined above. We observe that $n_t^T P_d$ is a vector in $\mathbb{R}^m$ whose coordinate corresponding to feature vector $\sigma$ equals the number of people in group $t$ who have feature vector $\sigma$ and belong to the positive class, namely $p_\sigma n_{t\sigma}$. Similarly, $n_t^T X$ is a vector in $\mathbb{R}^B$ whose coordinate corresponding to bin $b$ equals the expected number of people in group $t$ assigned to bin $b$.
By further multiplying these vectors on the right, we get additional useful quantities. Here are two in particular:

$n_t^T X V$ is a vector in $\mathbb{R}^B$ whose coordinate corresponding to bin $b$ equals the expected sum of the scores assigned to all group-$t$ people in bin $b$. That is, using the subscript $b$ to denote the coordinate corresponding to bin $b$, we can write $(n_t^T X V)_b = v_b\,(n_t^T X)_b$ by the definition of the diagonal matrix $V$.

$n_t^T P_d X$ is a vector in $\mathbb{R}^B$ whose coordinate corresponding to bin $b$ equals the expected number of group-$t$ people in the positive class who are placed in bin $b$.
Now, condition (A), that the risk assignment is calibrated within groups, implies that the two vectors above are equal coordinate-wise, and so we have the following equation for all bins $b$:

$$(n_t^T P_d X)_b \;=\; v_b\,(n_t^T X)_b. \qquad (1)$$
Calibration condition (A) also has an implication for the total score received by all people in group $t$. Suppose we multiply the two sides of (1) on the right by the $B$-dimensional vector $e$ whose coordinates are all 1 — that is, we sum (1) over all bins $b$ — obtaining

$$n_t^T P_d X e \;=\; n_t^T X V e. \qquad (2)$$
The left-hand side is the number of group-$t$ people in the positive class: since each row of $X$ sums to 1, $Xe$ is the all-ones vector in $\mathbb{R}^m$, and so $n_t^T P_d X e = n_t^T P_d e = \sum_\sigma p_\sigma n_{t\sigma}$. The right-hand side, which we can also write as $\sum_b v_b\,(n_t^T X)_b$, is equal to the sum of the expected scores received by all group-$t$ people. These two quantities are thus the same, and we write their common value as $\mu_t$.
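The identity just derived can be checked on a small instance; the counts and probabilities below are invented for illustration, and the assignment sends each feature vector to the bin whose score equals its $p_\sigma$, so it is calibrated by construction.

```python
# A minimal numeric check that the calibration condition (1) forces
# the total expected score in a group to equal the number of
# positive-class members, as in (2). All values are illustrative.
p = [0.2, 0.5, 0.2]          # p_sigma for three feature vectors
n = [10, 20, 30]             # n_{t,sigma}: group-t counts per feature vector
v = [0.2, 0.5]               # bin scores
X = [[1, 0],                 # x_{sigma b}: all mass of each feature vector
     [0, 1],                 # goes to the bin whose score equals its
     [1, 0]]                 # p_sigma, so the assignment is calibrated

B = len(v)
# Expected number of group-t people in each bin: (n^T X)_b
people_in_bin = [sum(n[s] * X[s][b] for s in range(len(n))) for b in range(B)]
# Expected number of positive-class people in each bin: (n^T P_d X)_b
pos_in_bin = [sum(p[s] * n[s] * X[s][b] for s in range(len(n))) for b in range(B)]

# Calibration, Equation (1): v_b * (n^T X)_b == (n^T P_d X)_b for every bin
assert all(abs(v[b] * people_in_bin[b] - pos_in_bin[b]) < 1e-9 for b in range(B))

# Equation (2): total expected score == size of the positive class
total_score = sum(v[b] * people_in_bin[b] for b in range(B))
mu = sum(p[s] * n[s] for s in range(len(n)))
print(total_score, mu)  # 18.0 18.0
```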
Fairness to the positive and negative classes.
We now want to write down vector equations corresponding to the fairness conditions (B) and (C) for the negative and positive classes. First, recall that for the $B$-dimensional vector $n_t^T P_d X$, the coordinate corresponding to bin $b$ equals the expected number of group-$t$ people in the positive class who are placed in bin $b$. Thus, to compute the sum of the expected scores received by all group-$t$ people in the positive class, we simply need to take the inner product with the score vector $v$, yielding $n_t^T P_d X v$. Since $n_t^T P_d e$ is the total number of group-$t$ people in the positive class, the average of the expected scores received by a group-$t$ person in the positive class is the ratio $\frac{n_t^T P_d X v}{n_t^T P_d e}$. Thus, condition (C), that members of the positive class should receive the same average score in each group, can be written

$$\frac{n_1^T P_d X v}{n_1^T P_d e} \;=\; \frac{n_2^T P_d X v}{n_2^T P_d e}. \qquad (3)$$
Applying strictly analogous reasoning to the people in the negative class, replacing $P_d$ by $I - P_d$ (where $I$ is the $m \times m$ identity matrix), we can write condition (B), that members of the negative class should receive the same average score in each group, as

$$\frac{n_1^T (I - P_d) X v}{n_1^T (I - P_d) e} \;=\; \frac{n_2^T (I - P_d) X v}{n_2^T (I - P_d) e}. \qquad (4)$$
The portion of the score received by the positive class.
We think of the ratios on the two sides of (3) as the average of the expected scores received by a member of the positive class in group $t$: the numerator is the sum of the expected scores received by the members of the positive class, and the denominator is the size of the positive class. Let us denote this ratio for group $t$ by $\gamma_t$; we note that this is the quantity $y$ used in the informal overview of the proof at the start of the section. By (2), we can alternately think of the denominator as the sum of the expected scores received by all group-$t$ people, so that condition (C) can equivalently be written

$$\frac{n_1^T P_d X v}{n_1^T X V e} \;=\; \frac{n_2^T P_d X v}{n_2^T X V e}. \qquad (5)$$

Hence, the two sides of (3), and equivalently (5), can be viewed as representing the ratio of the sum of the expected scores in the positive class of group $t$ to the sum of the expected scores in group $t$ as a whole. (3) requires that $\gamma_1 = \gamma_2$; let us denote this common value by $\gamma$.
Now, we observe that $\gamma = 1$ corresponds to a case in which the sum of the expected scores in just the positive class of group $t$ is equal to the sum of the expected scores in all of group $t$. In this case, it must be that all members of the negative class are assigned to bins of score 0. If any members of the positive class were assigned to a bin of score 0, this would violate the calibration condition (A); hence all members of the positive class are assigned to bins of positive score. Moreover, these bins of positive score contain no members of the negative class (since they've all been assigned to bins of score 0), and so again by the calibration condition (A), the members of the positive class are all assigned to bins of score 1. Finally, applying the calibration condition once more, it follows that the members of the negative class all have feature vectors $\sigma$ with $p_\sigma = 0$ and the members of the positive class all have feature vectors $\sigma$ with $p_\sigma = 1$. Hence, when $\gamma = 1$ we have perfect prediction.
Some Comments on the Connection to Statistical Parity.
Earlier we noted that conditions (B) and (C) — the balance conditions for the positive and negative classes — are quite different from the requirement of statistical parity, which asserts that the average of the scores over all members of each group be the same.
When the two groups have equal base rates, the risk assignment that gives the same score to everyone in the population achieves statistical parity along with conditions (A), (B), and (C). But when the two groups do not have equal base rates, it is immediate to show that statistical parity is inconsistent both with the calibration condition (A) and with the conjunction of the two balance conditions (B) and (C). To see the inconsistency of statistical parity with the calibration condition, we take Equation (1) from the proof above, sum the coordinates of the vectors on both sides, and divide by $N_t$, the number of people in group $t$. Statistical parity requires that the right-hand sides of the resulting equation be the same for $t = 1, 2$, while the assumption that the two groups have unequal base rates implies that the left-hand sides of the equation must be different for $t = 1, 2$. To see the inconsistency of statistical parity with the two balance conditions (B) and (C), we simply observe that if the average score assigned to the positive class and the average score assigned to the negative class are each the same in the two groups, then the average of the scores over all members of the two groups cannot be the same, provided the two groups do not contain the same proportion of positive-class and negative-class members and the positive-class average differs from the negative-class average.
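A small hypothetical instance makes the first inconsistency concrete: under any group-wise calibrated assignment, each group's average score equals its own base rate, so unequal base rates rule out statistical parity. All numbers below are invented.

```python
# Illustrative counts: two groups over the same two feature vectors,
# with a calibrated assignment sending each vector to the bin whose
# score equals its p_sigma. Each group's average score then equals
# its own base rate, so statistical parity fails.
p = [0.2, 0.8]                  # p_sigma for two feature vectors
n1 = [30, 10]                   # group 1: base rate (0.2*30 + 0.8*10)/40 = 0.35
n2 = [10, 30]                   # group 2: base rate (0.2*10 + 0.8*30)/40 = 0.65
v = [0.2, 0.8]                  # bin scores
X = [[1, 0], [0, 1]]            # calibrated: each vector to the bin scoring p_sigma

def average_score(n):
    total = sum(n[s] * X[s][b] * v[b] for s in range(2) for b in range(2))
    return total / sum(n)

print(round(average_score(n1), 2), round(average_score(n2), 2))  # 0.35 0.65
```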
3 The Approximate Theorem
In this section we prove Theorem 1.2. First, we must give a precise specification of the approximate fairness conditions:
(A’) Calibration holds approximately within each group: for all groups $t$ and all bins $b$,

$$(1-\varepsilon)\, v_b\,(n_t^T X)_b \;\le\; (n_t^T P_d X)_b \;\le\; (1+\varepsilon)\, v_b\,(n_t^T X)_b.$$

(B’) Balance for the negative class holds approximately:

$$\frac{n_1^T (I - P_d) X v}{n_1^T (I - P_d) e} \;\le\; (1+\varepsilon)\, \frac{n_2^T (I - P_d) X v}{n_2^T (I - P_d) e}.$$

(C’) Balance for the positive class holds approximately:

$$\frac{n_1^T P_d X v}{n_1^T P_d e} \;\le\; (1+\varepsilon)\, \frac{n_2^T P_d X v}{n_2^T P_d e}.$$

For (B’) and (C’), we also require that these hold when $n_1$ and $n_2$ are interchanged.
We also specify the approximate versions of perfect prediction and equal base rates in terms of $f(\varepsilon)$, which is a function that goes to 0 as $\varepsilon$ goes to 0. Let $r_t = \mu_t / N_t$ denote the base rate of group $t$, and, as before, let $\gamma_t$ denote the fraction of the total expected score in group $t$ received by the positive class.

Approximate perfect prediction. $\gamma_1 \ge 1 - f(\varepsilon)$ and $\gamma_2 \ge 1 - f(\varepsilon)$.

Approximately equal base rates. $|r_1 - r_2| \le f(\varepsilon)$.
A brief overview of the proof of Theorem 1.2 is as follows. It proceeds by first establishing an approximate form of Equation (1) above, which implies that the total expected score assigned in each group is approximately equal to the total size of the positive class. This in turn makes it possible to formulate approximate forms of Equations (3) and (4). When the base rates are close together, the approximation is too loose to derive bounds on the predictive power; but this is okay since in this case we have approximately equal base rates. Otherwise, when the base rates differ significantly, we show that most of the expected score must be assigned to the positive class, giving us approximately perfect prediction.
The remainder of this section provides the full details of the proof.
Total scores and the number of people in the positive class.
First, we will show that the total score for each group is approximately $\mu_t$, the number of people in the positive class. Define $s_t = n_t^T X V e$, the sum of the expected scores received by all people in group $t$. Using (A’), we have

$$s_t \;=\; \sum_b v_b\,(n_t^T X)_b \;\le\; \frac{1}{1-\varepsilon} \sum_b (n_t^T P_d X)_b \;=\; \frac{\mu_t}{1-\varepsilon}.$$

Similarly, we can lower bound $s_t$ as

$$s_t \;=\; \sum_b v_b\,(n_t^T X)_b \;\ge\; \frac{1}{1+\varepsilon} \sum_b (n_t^T P_d X)_b \;=\; \frac{\mu_t}{1+\varepsilon}.$$

Combining these, we have

$$\frac{\mu_t}{1+\varepsilon} \;\le\; s_t \;\le\; \frac{\mu_t}{1-\varepsilon}. \qquad (7)$$
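As a numeric sanity check of this bound (with invented numbers, and under the multiplicative form of approximate calibration used here): if every bin's score is within a $(1 \pm \varepsilon)$ factor of the bin's true positive fraction, the total expected score lands between $\mu_t/(1+\varepsilon)$ and $\mu_t/(1-\varepsilon)$.

```python
# Illustrative check of the bound (7) for a single group.
eps = 0.1
people_in_bin = [40, 20]       # (n^T X)_b for two bins
pos_in_bin = [8, 10]           # (n^T P_d X)_b: true positive counts per bin
# Approximately calibrated scores: the true positive fractions are
# 0.2 and 0.5; each score is within a (1 +/- eps) factor of them.
v = [0.2 * 1.08, 0.5 * 0.95]

s = sum(v[b] * people_in_bin[b] for b in range(2))   # total expected score
mu = sum(pos_in_bin)                                  # positive-class size
assert mu / (1 + eps) <= s <= mu / (1 - eps)
print(round(s, 2), mu)  # 18.14 18
```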
The portion of the score received by the positive class.
As in the proof of Theorem 1.1, let $\gamma_t$ denote the fraction of the total expected score $s_t$ in group $t$ that is received by members of the positive class, so that the positive class receives a total expected score of $\gamma_t\, s_t$ and the negative class receives the remaining $(1-\gamma_t)\, s_t$. Since the negative class of group $t$ contains $N_t - \mu_t$ people, the average of the expected scores received by a member of the negative class of group $t$ is

$$\frac{(1-\gamma_t)\, s_t}{N_t - \mu_t}. \qquad (8)$$
The relationship between the base rates.
We can apply this to (B’) to relate $\gamma_1$ and $\gamma_2$, using the observation that the score not received by people of the positive class must fall instead to people of the negative class. Examining the left inequality of (B’), we have

$$\frac{n_t^T (I - P_d) X v}{n_t^T (I - P_d) e} \;=\; \frac{(1-\gamma_t)\, s_t}{N_t - \mu_t}.$$

Thus, the left inequality of (B’) becomes

$$\frac{(1-\gamma_1)\, s_1}{N_1 - \mu_1} \;\le\; (1+\varepsilon)\,\frac{(1-\gamma_2)\, s_2}{N_2 - \mu_2}. \qquad (9)$$

By (7), $s_1 \ge \frac{\mu_1}{1+\varepsilon}$ and $s_2 \le \frac{\mu_2}{1-\varepsilon}$, so this becomes

$$\frac{(1-\gamma_1)\, \mu_1}{(1+\varepsilon)\,(N_1 - \mu_1)} \;\le\; \frac{(1+\varepsilon)\,(1-\gamma_2)\, \mu_2}{(1-\varepsilon)\,(N_2 - \mu_2)}. \qquad (10)$$
If the base rates differ.
Let $r_1$ and $r_2$ be the respective base rates, i.e. $r_1 = \mu_1 / N_1$ and $r_2 = \mu_2 / N_2$. Assume that $r_1 \ge r_2$ (otherwise we can switch $n_1$ and $n_2$ in the above analysis), and assume towards contradiction that the base rates differ by at least $\delta$, meaning $r_1 - r_2 \ge \delta$. Using (10) together with the analogous consequences of (C’) for the positive class, and writing $x_t$ and $y_t$ for the average expected scores of the negative and positive classes in group $t$ (so that $(1-r_t)\,x_t + r_t\,y_t = s_t / N_t$), a short calculation using (7) gives

$$(r_1 - r_2)\,(y_1 - x_1) \;\ge\; (r_1 - r_2) - \frac{3\varepsilon}{1-\varepsilon}.$$

Recall that $r_1 - r_2 \ge \delta$, and that $0 \le x_1$ and $y_1 \le 1$, so

$$x_1 \;\le\; \frac{3\varepsilon}{\delta\,(1-\varepsilon)} \qquad \text{and} \qquad y_1 \;\ge\; 1 - \frac{3\varepsilon}{\delta\,(1-\varepsilon)},$$

and the balance conditions (B’) and (C’) carry these bounds over to $x_2$ and $y_2$ up to a factor of $(1+\varepsilon)$. Let $f(\varepsilon) = \frac{3\varepsilon}{\delta\,(1-\varepsilon)} + 2\varepsilon$; since $\gamma_t \ge (1-\varepsilon)\, y_t$ by (7), this yields $\gamma_t \ge 1 - f(\varepsilon)$ for each group. Note that we assumed that $r_1$ and $r_2$ differ by an additive $\delta$. Therefore, if the fairness conditions are met and the base rates are not within an additive $\delta$, then $\gamma_1 \ge 1 - f(\varepsilon)$ and $\gamma_2 \ge 1 - f(\varepsilon)$. This completes the proof of Theorem 1.2.
4 Reducing Loss with Equal Base Rates
In a risk assignment, we would like as much of the score as possible to be assigned to members of the positive class. With this in mind, if an individual receives a score of $v$, we define their individual loss to be $v$ if they belong to the negative class, and $1 - v$ if they belong to the positive class. The loss of the risk assignment in group $t$ is then the sum of the expected individual losses to each member of group $t$. In terms of the matrix-vector products used in the proof of Theorem 1.1, one can show that the loss for group $t$ may be written as

$$L_t \;=\; n_t^T (I - P_d)\, X v \;+\; n_t^T P_d\, X\,(e - v),$$
and the total loss is just the weighted sum of the losses for each group.
Now, let us say that a fair assignment is one that satisfies our three conditions (A), (B), and (C). As noted above, when the base rates in the two groups are equal, the set of fair assignments is nonempty, since the calibrated risk assignment that places everyone in a single bin is fair. We can therefore ask, in the case of equal base rates, whether there exists a fair assignment whose loss is strictly less than that of the trivial one-bin assignment. It is not hard to show that this is possible if and only if there is a fair assignment using more than one bin; we will call an assignment that uses more than one bin a nontrivial assignment.
Note that the assignment that minimizes loss is simply the one that assigns each feature vector $\sigma$ to a separate bin with a score of $p_\sigma$, meaning $X$ is the identity matrix. While this assignment, which we refer to as the identity assignment, is well-calibrated, it may violate fairness conditions (B) and (C). It is not hard to show that the loss for any other assignment is strictly greater than the loss for the identity assignment. As a result, unless the identity assignment happens to be fair, every fair assignment must have larger loss than that of the identity assignment, forcing a trade-off between performance and fairness.
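A toy computation (with invented numbers) comparing the loss of the trivial one-bin assignment with that of the assignment giving each feature vector its own bin illustrates the gap:

```python
# Loss comparison for a single group, using the loss defined above:
# a person with score v contributes v if they are in the negative
# class and 1 - v if they are in the positive class.
p = [0.2, 0.8]        # p_sigma for two feature vectors
n = [50, 50]          # people per feature vector (base rate 0.5)

def loss(assignment):
    """assignment: list of (score, people_per_sigma) bins."""
    total = 0.0
    for v, people in assignment:
        for s, count in enumerate(people):
            expected_pos = p[s] * count
            expected_neg = count - expected_pos
            total += expected_neg * v + expected_pos * (1 - v)
    return total

one_bin = [(0.5, [50, 50])]                 # everyone scored at the base rate
identity = [(0.2, [50, 0]), (0.8, [0, 50])] # each sigma alone, scored p_sigma
print(round(loss(one_bin), 6), round(loss(identity), 6))  # 50.0 32.0
```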
4.1 Characterization of Well-Calibrated Solutions
To better understand the space of feasible solutions, suppose we drop the fairness conditions (B) and (C) for now and study risk assignments that are simply well-calibrated, satisfying (A). As in the proof of Theorem 1.1, we write $\gamma_t$ for the average of the expected scores assigned to members of the positive class in group $t$, and we define the fairness difference to be $\gamma_1 - \gamma_2$. If this difference is nonnegative, we say the risk assignment weakly favors group 1; if it is nonpositive, it weakly favors group 2. Since a well-calibrated risk assignment with equal base rates is fair if and only if $\gamma_1 = \gamma_2$, it is fair if and only if the fairness difference is 0.
We wish to characterize when nontrivial fair risk assignments are possible. First, we observe that, without the fairness requirements, the set of possible fairness differences under well-calibrated assignments is an interval.
Lemma 4.1
If group 1 and group 2 have equal base rates, then for any two nontrivial well-calibrated risk assignments with fairness differences $c$ and $c'$, and for any value $c''$ between $c$ and $c'$, there exists a nontrivial well-calibrated risk assignment with fairness difference $c''$.
Proof: The basic idea is that we can effectively take convex combinations of wellcalibrated assignments to produce any wellcalibrated assignment “in between” them. We carry this out as follows.
Let $X$ and $X'$ be the allocation matrices for assignments with fairness differences $c$ and $c'$ respectively, where $c' \le c'' \le c$. Choose $\lambda \in [0, 1]$ such that $\lambda c + (1-\lambda)\, c' = c''$, meaning $\lambda = \frac{c'' - c'}{c - c'}$. Then the combined assignment $\hat{X} = [\,\lambda X \;\; (1-\lambda) X'\,]$, which uses the disjoint union of the bins of the two assignments (each bin keeping its original score), is a nontrivial well-calibrated assignment with fairness difference $c''$.
First, we observe that $\hat{X}$ is a valid assignment because each of its rows sums to 1 (meaning everyone with every feature vector $\sigma$ gets assigned to a bin), since each row of $\lambda X$ sums to $\lambda$ and each row of $(1-\lambda) X'$ sums to $1 - \lambda$. Moreover, it is nontrivial because every nonempty bin created by $X$ and $X'$ is a nonempty bin under $\hat{X}$.
Let $v$ and $v'$ be the respective vectors of bin scores for assignments $X$ and $X'$, and define $\hat{v} = (v, v')$ to be their concatenation, so that each bin of $\hat{X}$ keeps its original score.

Finally, let $s_t = n_t^T X v$ denote the total expected score assigned to group $t$ by $X$. Define $s'_t$ and $\hat{s}_t$ analogously. Note that $\hat{s}_t = \lambda\, s_t + (1-\lambda)\, s'_t$.
We observe that $\hat{X}$ is calibrated because, for each bin $b$ of $X$ (and symmetrically for each bin of $X'$), scaling by $\lambda$ leaves the bin's positive fraction within each group unchanged:

$$\frac{(n_t^T P_d \hat{X})_b}{(n_t^T \hat{X})_b} \;=\; \frac{\lambda\,(n_t^T P_d X)_b}{\lambda\,(n_t^T X)_b} \;=\; v_b.$$
Finally, we show that the fairness difference of $\hat{X}$ is $c''$. Let $\gamma_1$ and $\gamma_2$ be the average expected scores received by members of the positive class in each group under $X$, and define $\gamma'_t$ and $\hat{\gamma}_t$ similarly for $X'$ and $\hat{X}$. Since the positive-class score mass contributed by the bins of $X$ scales by $\lambda$ and that contributed by the bins of $X'$ scales by $1-\lambda$, we have $\hat{\gamma}_t = \lambda\, \gamma_t + (1-\lambda)\, \gamma'_t$, and hence $\hat{\gamma}_1 - \hat{\gamma}_2 = \lambda\,(\gamma_1 - \gamma_2) + (1-\lambda)\,(\gamma'_1 - \gamma'_2) = \lambda c + (1-\lambda)\, c' = c''$.
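The construction in this proof can be exercised on a small invented instance: interpolating between two calibrated assignments preserves calibration and interpolates the fairness difference linearly.

```python
# A toy instance of the convex-combination step in Lemma 4.1: scale
# one calibrated assignment's bin allocations by lam, the other's by
# 1 - lam, and take the disjoint union of their bins, keeping scores.
p = [0.0, 1.0, 0.5]                       # p_sigma for three feature vectors
groups = {1: [10, 10, 20], 2: [5, 5, 30]} # equal base rates (both 0.5)

# Assignment A: each feature vector in its own bin, scored p_sigma.
XA, vA = [[1, 0, 0], [0, 1, 0], [0, 0, 1]], [0.0, 1.0, 0.5]
# Assignment B: {s0, s1} in one bin, {s2} in another; both score 0.5.
XB, vB = [[1, 0], [1, 0], [0, 1]], [0.5, 0.5]

def fairness_difference(X, v):
    def avg_pos_score(n):
        mass = sum(p[s] * n[s] * X[s][b] * v[b]
                   for s in range(3) for b in range(len(v)))
        mu = sum(p[s] * n[s] for s in range(3))
        return mass / mu
    return avg_pos_score(groups[1]) - avg_pos_score(groups[2])

lam = 0.4
Xc = [[lam * a for a in XA[s]] + [(1 - lam) * b for b in XB[s]]
      for s in range(3)]
vc = vA + vB

# The combination stays calibrated: each bin's positive fraction,
# within each group, still equals the bin's score.
for n in groups.values():
    for b in range(len(vc)):
        people = sum(n[s] * Xc[s][b] for s in range(3))
        pos = sum(p[s] * n[s] * Xc[s][b] for s in range(3))
        assert abs(pos - vc[b] * people) < 1e-9

cA, cB = fairness_difference(XA, vA), fairness_difference(XB, vB)
cC = fairness_difference(Xc, vc)
print(cA, cB, round(cC, 6))  # 0.125 0.0 0.05
```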
Corollary 4.2
There exists a nontrivial fair assignment if and only if there exist nontrivial well-calibrated assignments $X$ and $X'$ such that $X$ weakly favors group 1 and $X'$ weakly favors group 2.
Proof: If there is a nontrivial fair assignment, then it weakly favors both group 1 and group 2, proving one direction.
To prove the other direction, observe that the fairness differences $c$ and $c'$ of $X$ and $X'$ are nonnegative and nonpositive respectively. Since the set of fairness differences achievable by nontrivial well-calibrated assignments is an interval by Lemma 4.1, there exists a nontrivial well-calibrated assignment with fairness difference 0, meaning there exists a nontrivial fair assignment.
It is an open question whether there is a polynomial-time algorithm to find a fair assignment of minimum loss, or even to determine whether a nontrivial fair solution exists.
4.2 NP-Completeness of Non-Trivial Integral Fair Risk Assignments
As discussed in the introduction, risk assignments in our model are allowed to split people with a given feature vector over several bins; however, it is also of interest to consider the special case of integral risk assignments, in which all people with a given feature vector must go to the same bin. For the case of equal base rates, we can show that determining whether there is a nontrivial integral fair assignment is NP-complete. The proof uses a reduction from the Subset Sum problem and is given in the Appendix.
The basic idea of the reduction is as follows. We have an instance of Subset Sum with numbers $w_1, \ldots, w_k$ and a target number $W$; the question is whether there is a subset of the $w_i$'s that sums to $W$. As before, $\gamma_t$ denotes the average of the expected scores received by members of the positive class in group $t$. We first ensure that there is exactly one nontrivial way to allocate the people of group 1, allowing us to control $\gamma_1$. The fairness conditions then require that $\gamma_2 = \gamma_1$, which we can use to encode the target value $W$ in the instance of Subset Sum. For every input number $w_i$ in the Subset Sum instance, we create a pair of feature vectors $\sigma_i$ and $\sigma'_i$, with values $p_{\sigma_i}$ and $p_{\sigma'_i}$ close to each other and far from all other values, such that grouping $\sigma_i$ and $\sigma'_i$ together into a bin corresponds to choosing $w_i$ for the subset, while not grouping them corresponds to not taking $w_i$. This ensures that group 2 can be assigned with the correct value of $\gamma_2$ if and only if there is a solution to the Subset Sum instance.
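For intuition on the Subset Sum side of the reduction, a brute-force decision procedure (exponential time, so suitable only for tiny illustrative instances) is:

```python
from itertools import combinations

def subset_sum(xs, target):
    """Decide whether some subset of xs sums to target.
    Brute force over all 2^len(xs) subsets; the empty subset (sum 0)
    is included, matching the usual convention."""
    return any(sum(c) == target
               for r in range(len(xs) + 1)
               for c in combinations(xs, r))

print(subset_sum([3, 9, 8, 4], 12), subset_sum([3, 9, 8], 5))  # True False
```

The reduction's hardness direction rests on exactly this decision problem being NP-complete; the code above is only a specification of its semantics, not part of the reduction itself.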
5 Conclusion
In this work we have formalized three fundamental conditions for risk assignments to individuals, each of which has been proposed as a basic measure of what it means for the risk assignment to be fair. Our main results show that except in highly constrained special cases, it is not possible to satisfy these three constraints simultaneously; and moreover, a version of this fact holds in an approximate sense as well.
Since these results hold regardless of the method used to compute the risk assignment, they can be phrased in fairly clean terms in a number of domains where the trade-offs among these conditions do not appear to be well understood. To take one simple example, suppose we want to determine the risk that a person is a carrier for a disease $X$, and suppose that a higher fraction of women than men are carriers. Then our results imply that in any test designed to estimate the probability that someone is a carrier of $X$, at least one of the following undesirable properties must hold: (a) the test's probability estimates are systematically skewed upward or downward for at least one gender; or (b) the test assigns a higher average risk estimate to healthy people (non-carriers) in one gender than the other; or (c) the test assigns a higher average risk estimate to carriers of the disease in one gender than the other. The point is that this trade-off among (a), (b), and (c) is not a fact about medicine; it is simply a fact about risk estimates when the base rates differ between two groups.
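For concreteness, here is the disease-test example with purely hypothetical numbers: a test calibrated for both genders, with different carrier rates, necessarily violates (b) and (c).

```python
# Invented numbers for the disease-test example: a calibrated test
# (each person scored at the true carrier probability of their
# feature vector) with different carrier rates across genders must
# give different average scores to non-carriers, and to carriers.
p = [0.1, 0.6]                 # true carrier probability per feature vector
women = [50, 50]               # counts per feature vector (carrier rate 0.35)
men = [80, 20]                 # counts per feature vector (carrier rate 0.20)

def class_averages(n):
    """Average calibrated score among non-carriers and among carriers."""
    neg_mass = sum((1 - p[s]) * n[s] * p[s] for s in range(2))
    pos_mass = sum(p[s] * n[s] * p[s] for s in range(2))
    neg_count = sum((1 - p[s]) * n[s] for s in range(2))
    pos_count = sum(p[s] * n[s] for s in range(2))
    return neg_mass / neg_count, pos_mass / pos_count

print([round(a, 3) for a in class_averages(women)],
      [round(a, 3) for a in class_averages(men)])
```

Here the test is calibrated by construction, so (a) holds exactly; the printed averages show that non-carriers and carriers each receive higher average estimates in the higher-base-rate group, i.e. (b) and (c) fail.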
Finally, we note that our results suggest a number of interesting directions for further work. First, when the base rates between the two underlying groups are equal, our results do not resolve the computational tractability of finding the most accurate risk assignment, subject to our three fairness conditions, when the people with a given feature vector can be split across multiple bins. (Our NP-completeness result applies only to the case in which everyone with a given feature vector must be assigned to the same bin.) Second, there may be a number of settings in which the cost (social or otherwise) of false positives may differ greatly from the cost of false negatives. In such cases, we could imagine searching for risk assignments that satisfy the calibration condition together with only one of the two balance conditions, corresponding to the class for whom errors are more costly. Determining when two of our three conditions can be simultaneously satisfied in this way is an interesting open question. More broadly, determining how the trade-offs discussed here can be incorporated into broader families of proposed fairness conditions suggests interesting avenues for future research.
References
 [1] ProPublica analysis. https://docs.google.com/document/d/1pKtyl8XmJH7Z09lxkb70n6fa2Fiitd7ydbxgCT_wCXs/edit?pref=2&pli=1.
 [2] Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner. Machine bias: There's software used across the country to predict future criminals. And it's biased against blacks. ProPublica, May 23, 2016. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
 [3] Solon Barocas and Andrew D Selbst. Big data’s disparate impact. California Law Review, 104, 2016.
 [4] Toon Calders and Sicco Verwer. Three naive Bayes approaches for discrimination-free classification. Data Mining and Knowledge Discovery, 21(2):277–292, 2010.
 [5] Alexandra Chouldechova. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. arXiv preprint arXiv:1610.07524, October 2016.
 [6] Sam CorbettDavies, Emma Pierson, Avi Feller, and Sharad Goel. A computer program used for bail and sentencing decisions was labeled biased against blacks. it’s actually not that clear. Washington Post, October 7, 2016.
 [7] Sam CorbettDavies, Emma Pierson, Avi Feller, Sharad Goel, and Aziz Huq. Fair algorithms and the equal treatment principle. Working paper, to appear.
 [8] Cynthia S. Crowson, Elizabeth J. Atkinson, and Terry M. Therneau. Assessing calibration of prognostic risk scores. Statistical Methods in Medical Research, 25(4):1692–1706, 2016.
 [9] Amit Datta, Michael Carl Tschantz, and Anupam Datta. Automated experiments on ad privacy settings. Proceedings on Privacy Enhancing Technologies, 2015(1):92–112, 2015.
 [10] William Dieterich, Christina Mendoza, and Tim Brennan. COMPAS risk scales: Demonstrating accuracy equity and predictive parity. Technical report, Northpointe, July 2016. http://www.northpointeinc.com/northpointe-analysis.
 [11] Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard S. Zemel. Fairness through awareness. In Innovations in Theoretical Computer Science 2012, Cambridge, MA, USA, January 8–10, 2012, pages 214–226, 2012.
 [12] Michael Feldman, Sorelle A. Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian. Certifying and removing disparate impact. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’15, pages 259–268, New York, NY, USA, 2015. ACM.
 [13] Anthony Flores, Christopher Lowenkamp, and Kristin Bechtel. False positives, false negatives, and false analyses: A rejoinder to "Machine bias: There's software used across the country to predict future criminals. And it's biased against blacks.". Technical report, Crime & Justice Institute, September 2016. http://www.crj.org/cji/entry/false-positives-false-negatives-and-false-analyses-a-rejoinder.
 [14] Dean P. Foster and Rakesh V. Vohra. Asymptotic calibration. Biometrika, 85(2):379–390, 1998.
 [15] Sorelle A. Friedler, Carlos Scheidegger, and Suresh Venkatasubramanian. On the (im)possibility of fairness. arXiv preprint arXiv:1609.07236, September 2016.
 [16] Howard N. Garb. Race bias, social class bias, and gender bias in clinical judgment. Clinical Psychology: Science and Practice, 4(2):99–120, 1997.
 [17] Abe Gong. Ethics for powerful algorithms (1 of 4). Medium, July 12, 2016. https://medium.com/@AbeGong/ethics-for-powerful-algorithms-1-of-3-a060054efd84#.dhsd2ut3i.
 [18] Sara Hajian and Josep Domingo-Ferrer. A methodology for direct and indirect discrimination prevention in data mining. IEEE Transactions on Knowledge and Data Engineering, 25(7):1445–1459, 2013.
 [19] Moritz Hardt, Eric Price, and Nathan Srebro. Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems, 2016.
 [20] Matthew Joseph, Michael Kearns, Jamie Morgenstern, and Aaron Roth. Fairness in learning: Classic and contextual bandits. In Advances in Neural Information Processing Systems, 2016.
 [21] Faisal Kamiran and Toon Calders. Classifying without discriminating. 2009 2nd International Conference on Computer, Control and Communication, IC4 2009, 2009.
 [22] Toshihiro Kamishima, Shotaro Akaho, and Jun Sakuma. Fairness-aware learning through regularization approach. In Proceedings of the IEEE International Conference on Data Mining (ICDM), pages 643–650, 2011.
 [23] Jeff Larson, Surya Mattu, Lauren Kirchner, and Julia Angwin. How we analyzed the COMPAS recidivism algorithm. ProPublica, May 23, 2016. https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm.
 [24] Executive Office of the President. Big data: A report on algorithmic systems, opportunity, and civil rights. Technical report, May 2016.
 [25] Andrea Romei and Salvatore Ruggieri. A multidisciplinary survey on discrimination analysis. The Knowledge Engineering Review, 29(05):582–638, 2014.
 [26] Latanya Sweeney. Discrimination in online ad delivery. Communications of the ACM, 56(5):44–54, 2013.
 [27] David R. Williams and Selina A. Mohammed. Discrimination and racial disparities in health: Evidence and needed research. Journal of Behavioral Medicine, 32(1), 2009.
 [28] Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, and Krishna Gummadi. Learning fair classifiers. arXiv preprint arXiv:1507.05259, 2015.
 [29] Richard S Zemel, Yu Wu, Kevin Swersky, Toniann Pitassi, and Cynthia Dwork. Learning Fair Representations. Proceedings of the 30th International Conference on Machine Learning, 28:325–333, 2013.
 [30] Indre Zliobaite. A survey on measuring indirect discrimination in machine learning. arXiv preprint arXiv:1511.00148, 2015.
Appendix: NP-Completeness of Non-Trivial Integral Fair Risk Assignments
We can reduce Subset Sum to the integral assignment problem, parameterized by the vectors $n_1$, $n_2$, and $P$, as follows.
Suppose we have an instance of the Subset Sum problem specified by numbers $w_1, \ldots, w_k$ and a target $W$; the goal is to determine whether a subset of the $w_i$ add up to $W$. We create an instance of the integral assignment problem in which the feature vectors used by the two groups are disjoint: $n_{1\sigma}$ is positive on the feature vectors reserved for group 1 and 0 otherwise, and $n_{2\sigma}$ is positive on the feature vectors reserved for group 2 and 0 otherwise, with the values $p_\sigma$ on group 2's feature vectors chosen to encode the numbers $w_i$ and the target $W$.
With this definition, the Subset Sum instance has a solution if and only if the resulting integral assignment instance has a nontrivial fair solution.
Before we prove this, we need the following lemma.
Lemma 5.1
For any ,
Proof:
Now, we can prove that the integral assignment problem is NP-hard.
Proof: First, we observe that for any nontrivial solution to the integral assignment instance, there must be two bins that each receive a nonempty subset of the people of group 1; in other words, the people of group 1 must be split up. If not, then all the people of group 1 would be in the same bin, meaning that bin must be labeled with the base rate. In order to maintain fairness, the same would have to be done for all the people of group 2, resulting in the trivial solution. Moreover, these two bins must be labeled with the respective fractions of people of group 1 in those bins who belong to the positive class.
This means that $\gamma_1$ is fixed to the value determined by this unique nontrivial split of group 1. We know that a well-calibrated assignment is fair only if $\gamma_1 = \gamma_2$, so we know the value that $\gamma_2$ must take.
Next, we observe that because all of the positive ’s are , so is just the average of , which is by symmetry.
Let be the partition of corresponding to the assignment, meaning that for a given , there is a bin containing all people with such that . The label on that bin is
Furthermore, bin contains positive fraction. Using this, we can come up with an expression for .
Setting this equal to , we have
Subtracting both sides from and using Lemma 5.1, we have
(11) 
Thus, the corresponding assignment is a fair nontrivial assignment if and only if (11) holds.
Next, we show that there exists a partition satisfying (11) if and only if there exists some subset $S \subseteq \{1, \ldots, k\}$ such that $\sum_{i \in S} w_i = W$.