Economists like to draw triangles. In trade, you can’t have high tariffs, no retaliation, and unchanged prices. In international macro, you can’t have a fixed exchange rate, free capital movement, and an independent monetary policy. In hiring under unequal starting conditions, there is a similar triangle that most debates about fairness in hiring glide past.
When firms turn to algorithms to allocate scarce jobs, they are pulled toward three attractive goals: strong efficiency (pick the candidates most likely to perform well), strong representation (make outcomes roughly mirror group shares), and strong formal neutrality (apply the same rules mechanically to everyone).
The problem is simple but uncomfortable: they cannot get all three at once. They can pick any two, but the third will move in the wrong direction. That is the “fairness trilemma,” and once you see it, a lot of the confusion around hiring algorithms and equity-and-inclusion initiatives starts to look less like a mystery and more like standard price theory. You can find the formal statement and proof in my working paper, “The Fairness Trilemma: An Impossibility Theorem for Algorithmic Governance.”
The old promise
For a while, the story many firms told about hiring was simple. Bias lived in people’s heads. Inefficiency lived in gut judgment. The fix was obvious: standardize, automate, measure. Replace discretion with data, and hiring would become both fairer and more effective.
That story powered a wave of investment in DEI programs and algorithmic hiring tools. Vendors promised something unusually attractive in both public policy and corporate governance: moral improvement without trade-offs. Better outcomes for disadvantaged groups, no loss of performance, and fewer uncomfortable conversations about discretion or power.
Algorithmic hiring systems were sold as a way out of the bind. Scrape résumés and applications, learn what predicts performance, enforce “fairness” mathematically, and let the model do the balancing.
But algorithms do not remove discretion. They relocate it—to model design, to data choices, to the definition of “fairness” itself. And they tend to relocate it to places that are harder to see and harder to contest.
A parable in three corners
The now-famous story of Amazon’s experimental hiring algorithm is a useful parable. Trained on historical résumés and hiring decisions, the system learned to score applicants highly for technical roles when their profiles resembled those of past hires, most of whom were men. In practice, it downgraded résumés that looked “female-coded,” reflecting a male-dominated tech workforce.
In a narrow technical sense, the model was not malfunctioning. It optimized predictive performance on the data it was given. It applied the same scoring rule to all applicants. It was efficient and formally neutral. What it could not do was generate representative outcomes from non-representative data.
At that point, the firm faced three options that map cleanly onto the trilemma. It could keep the model and accept unequal outcomes (efficiency + neutrality, weaker representation). It could add fairness constraints, such as group-specific score adjustments, to push outcomes toward parity while still selecting the best within each group (efficiency + representation, weaker formal neutrality). Or it could reintroduce human judgment and overrides to correct the pattern, which also weakens formal neutrality, this time through discretion rather than explicit rules. Amazon ultimately walked away from the system.
A similar arc played out with HireVue’s AI video interviews. The company advertised automated analysis of facial expressions, tone, and word choice as a way to standardize and de-bias early-stage screening. Critics pointed out that these features correlate with disability status, neurodivergence, and demographic background in ways that are hard to justify as job-related. Under mounting pressure, HireVue dropped facial analysis altogether.
In both cases, what failed was not the idea of screening itself. What failed was the belief that measurement could be neutral in a world of unequal starting conditions, and that you could get efficiency, representation, and neutrality “for free” from the right model.
A toy model
A simple model makes the logic clear. Imagine a firm that needs to fill a fixed number of positions from an applicant pool divided into two groups, A and B. Applicants in both groups are scored by a predictive model that estimates their probability of success. Because of unequal starting conditions—schooling quality, prior experience, background—group A has a higher average predicted success rate than group B. The firm considers a single threshold rule: hire everyone with a predicted success score above some predetermined level.
Under unequal base rates, one rule cannot do all three things at once: pick the highest-expected-performance candidates, have hires from groups A and B roughly match their shares in the applicant pool (or population), and apply the same threshold to everyone. If the firm insists on strong efficiency and strong neutrality, it sets one common threshold. Hires will be disproportionately drawn from group A, the group with higher predicted scores. Representation diverges from group shares.
If it insists on strong efficiency and strong representation, it has to relax neutrality with group-specific thresholds or weights so that more group-B applicants are hired while still trying to pick the best among them. But then applicants in A and B who have the same score are treated differently.
If it insists on strong representation and strong neutrality—same rule for everyone, similar hire rates by group—it will not be picking the highest-scoring candidates in aggregate. It leaves some higher-scoring applicants un-hired and takes lower-scoring ones, sacrificing efficiency unless and until the underlying inequalities disappear.
This is the fairness trilemma in its simplest form. You can choose any two corners of the triangle, but the third will move against you. The impossibility is not primarily about machine learning; it is about allocating scarce slots under unequal conditions.
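To see the arithmetic concretely, here is a minimal simulation sketch in Python. Everything in it is an illustrative assumption, not data or code from any real hiring system: the score distributions, the 70/30 pool split, the quota used to implement group-specific thresholds, and the lottery standing in for a rule that is both neutral and representative.

```python
import random

random.seed(0)

# Illustrative applicant pool: predicted-success scores for two groups.
# Group A's scores are shifted up to stand in for unequal starting conditions.
pool = [("A", random.gauss(0.60, 0.15)) for _ in range(700)] + \
       [("B", random.gauss(0.50, 0.15)) for _ in range(300)]

SLOTS = 100  # scarce positions to fill

def summarize(hired, label):
    # Report group B's share of hires and the mean predicted score of hires.
    share_b = sum(g == "B" for g, _ in hired) / len(hired)
    mean = sum(s for _, s in hired) / len(hired)
    print(f"{label:18s} B share: {share_b:.2f}   mean score: {mean:.3f}")

# Corner 1 -- efficiency + neutrality: one common cutoff (take the top SLOTS).
ranked = sorted(pool, key=lambda a: a[1], reverse=True)
summarize(ranked[:SLOTS], "common threshold")

# Corner 2 -- efficiency + representation: group-specific cutoffs that
# reserve hires in proportion to pool shares, taking the best in each group.
quota_b = round(SLOTS * 0.30)
top_a = sorted((a for a in pool if a[0] == "A"), key=lambda a: a[1], reverse=True)
top_b = sorted((a for a in pool if a[0] == "B"), key=lambda a: a[1], reverse=True)
summarize(top_a[:SLOTS - quota_b] + top_b[:quota_b], "group thresholds")

# Corner 3 -- representation + neutrality: one rule for everyone that also
# equalizes hire rates. A uniform lottery is the simplest such rule.
summarize(random.sample(pool, SLOTS), "neutral lottery")
```

Run it and the three corners appear directly: the common threshold under-hires group B, the group-specific thresholds match the pool’s B share but apply different cutoffs to identically scored applicants, and the lottery matches shares while giving up most of the predicted performance.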
Scarcity doesn’t vanish; it moves
Economists have seen this movie before. Consider rent control. When price ceilings are imposed below market-clearing levels, scarcity does not disappear. It moves. It shows up as queues, non-price screening, side payments, and deteriorating quality. Landlords who cannot ration with rent will ration with waiting lists, personal networks, and discretion. Empirical work such as the Diamond–McQuade–Qian study of San Francisco rent control illustrates this pattern.
Hiring systems behave in much the same way. Constrain one allocation mechanism, and scarcity finds another channel. When performance metrics cannot do the rationing because of fairness constraints, organizations ration with committees, exceptions, holistic review, and opaque overrides. Each move preserves two corners of the trilemma by relaxing the third. Policy constraints redirect scarcity; they do not make it go away.
What firms should do
Once firms accept that efficiency, representation, and formal neutrality cannot all be maximized at once, the question changes. Instead of asking “How do we eliminate bias without trade-offs?” firms have to ask “Which margin are we willing to relax, and where should discretion live?”
A more honest approach to equity and inclusion in hiring algorithms would do at least three things. Be explicit about which corner of the triangle is being relaxed, and design governance around that choice. Put discretion where it can be monitored—structured committees, documented overrides, review processes—rather than burying value judgments inside model design and opaque fairness metrics. And stop selling algorithms as magic bullets. Models cannot engineer away the underlying trade-offs created by unequal starting conditions; at their best, they clarify where the constraints bind and what choices cost.
The goal is not perfection. It is legitimacy: openly deciding where the trilemma binds in a particular context, and taking responsibility for the consequences.