Executive Summary
Incentives are among the most powerful tools available to institutions, markets, and governments. Properly aligned, they can encourage diligence, innovation, and cooperation. Poorly designed, they reliably produce distortion, moral hazard, gaming, corruption, and collapse of trust. This white paper examines the perversity of incentives—the recurring pattern by which incentive systems undermine the very goals they are meant to achieve—and explains why incentive design fails so often across sectors. It argues that incentive failure is not accidental but structural, arising from deep mismatches between human behavior, institutional measurement, time horizons, and moral responsibility. The paper concludes with design principles for building incentives that serve formation, resilience, and truth rather than superficial compliance or short-term metrics.
I. What Incentives Are—and What They Are Not
Incentives are signals embedded in systems of reward and punishment that guide behavior by attaching consequences to actions. They are not neutral tools.
Three critical clarifications are often neglected:
Incentives shape behavior, not values. They alter what people do, not what they believe is right.
Incentives reward what is measured, not what is intended. Measurement substitutes for purpose under pressure.
Incentives do not merely motivate—they select. Over time, they favor certain personality types, strategies, and moral tolerances.
When these realities are ignored, institutions unintentionally cultivate the very behaviors they publicly condemn.
II. The Core Mechanism of Perverse Incentives
A perverse incentive exists when a system rewards behavior that undermines the stated or moral purpose of the system itself.
This occurs through a predictable chain:
1. A complex human good is reduced to a proxy metric.
2. Rewards are attached to the proxy.
3. Actors optimize for the proxy.
4. The underlying good deteriorates.
5. Leaders respond by tightening metrics.
6. Distortion accelerates.
This cycle is not an aberration—it is the default trajectory of poorly grounded incentive regimes.
III. Why Incentive Design So Often Goes Wrong
1. Metric Substitution (Goodhart’s Law)
When a measure becomes a target, it ceases to be a good measure.
Institutions routinely confuse indicators with outcomes, assuming that if a number improves, the underlying reality must also improve. In practice, the opposite frequently occurs.
Examples:
Educational systems optimizing test scores at the expense of comprehension
Healthcare systems optimizing throughput rather than patient health
Academic publishing optimizing citations over truth
The metric survives; the mission does not.
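The dynamic can be illustrated with a deliberately simple toy model. In the sketch below, the effort split, payoff functions, and numbers are illustrative assumptions rather than empirical claims: the true good is taken to depend on both measured and unmeasured work, while the reward attaches only to the measured proxy, so optimization drains effort from the unmeasured half.

```python
# Toy model of Goodhart's Law (illustrative assumptions only): the true good
# depends on both measured and unmeasured work, but rewards attach only to
# the measured proxy, so optimizing the proxy hollows out the unmeasured half.

def true_good(measured, unmeasured):
    # Assumption: the real outcome requires both kinds of work.
    return measured * unmeasured

def proxy_score(measured, unmeasured):
    # The metric "sees" only the measured effort.
    return measured

def optimize_for(objective, total_effort=10):
    # Exhaustive search over integer splits of a fixed effort budget.
    return max(range(total_effort + 1),
               key=lambda m: objective(m, total_effort - m))

balanced = optimize_for(true_good)    # splits effort evenly -> 5
gamed = optimize_for(proxy_score)     # pours everything into the metric -> 10

print("True good when optimizing the true good:", true_good(balanced, 10 - balanced))  # 25
print("Proxy score when optimizing the proxy:  ", proxy_score(gamed, 10 - gamed))      # 10
print("True good when optimizing the proxy:    ", true_good(gamed, 10 - gamed))        # 0
```

The proxy score reaches its maximum precisely when the true good collapses to zero, which is the pattern the examples above describe.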
2. Time Horizon Mismatch
Most incentives reward short-term performance, while most institutional goods are long-term:
Character
Trust
Competence
Sustainability
Short-term incentives train people to:
Borrow against the future
Externalize risk
Defer consequences onto successors
This dynamic is central to corporate scandals such as Enron, where executives were lavishly rewarded for short-term paper gains while embedding catastrophic long-term liabilities.
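The arithmetic of the mismatch is straightforward. The sketch below uses hypothetical cash flows and discount rates to show how the same decision can destroy value for the institution while appearing profitable to a decision-maker whose effective horizon is short.

```python
# Illustrative arithmetic (hypothetical numbers): a project pays a gain now
# and imposes a larger cost five years later. Whether it looks "worth doing"
# depends on the decision-maker's effective discount rate, which for a
# manager expecting to have moved on is far higher than the institution's.

def present_value(cashflows, rate):
    # cashflows[t] is the payoff received t years from now.
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Year 0: +10 reported gain; years 1-4: nothing; year 5: -30 liability comes due.
project = [10, 0, 0, 0, 0, -30]

institution_view = present_value(project, rate=0.05)  # ~ -13.5: destroys value
manager_view = present_value(project, rate=0.60)      # ~ +7.1: looks like a win

print(f"NPV at the institution's 5% rate:   {institution_view:.1f}")
print(f"NPV at the manager's short horizon: {manager_view:.1f}")
```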
3. Role Compression and Gaming
When incentives are attached narrowly to a role, individuals learn to game the system rather than serve the mission.
Common behaviors include:
Doing only what is rewarded, not what is necessary
Avoiding unmeasured responsibilities
Shifting costs onto adjacent roles
Exploiting loopholes faster than reforms can close them
Over time, competence gives way to compliance.
4. Moral Hazard and Distance from Consequences
The more insulated an actor is from the harm their actions cause, the more likely incentives will produce destructive behavior.
This is especially acute in:
Financial systems
Bureaucracies
Algorithmic decision-making
Large nonprofits and NGOs
When reward flows upward but damage flows outward, irresponsibility becomes rational.
The sales-account scandal at Wells Fargo illustrates this clearly: employees optimized for account quotas while customers absorbed the harm and leadership captured the bonuses.
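The underlying logic can be made explicit with a stylized expected-value calculation; the payoffs and shares below are hypothetical. Once an actor captures most of the upside while bearing little of the downside, a gamble that is negative-sum for the system as a whole becomes individually rational.

```python
# Stylized moral-hazard arithmetic (hypothetical payoffs): an actor who keeps
# most of the upside but is insulated from most of the downside will rationally
# take a gamble that is negative-sum for everyone combined.

def private_expected_value(p_success, gain, loss, upside_share, downside_share):
    # Expected payoff to the decision-maker, given how much of the gain they
    # capture and how much of the loss they actually bear.
    return p_success * gain * upside_share - (1 - p_success) * loss * downside_share

# A gamble: 50% chance of a gain of 10, 50% chance of a loss of 30.
total_ev = 0.5 * 10 - 0.5 * 30  # -10: bad for the system as a whole

# But the insulated actor keeps 80% of gains and bears only 5% of losses.
insulated_ev = private_expected_value(0.5, 10, 30, upside_share=0.8, downside_share=0.05)

print("Expected value for the system as a whole:", total_ev)      # -10.0
print("Expected value for the insulated actor:  ", insulated_ev)  # +3.25
```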
5. Crowd-Out of Intrinsic Motivation
Strong external incentives often destroy internal motivation.
Where people once acted out of:
Professional pride
Moral duty
Craft excellence
Service
They are retrained to ask:
“What do I get if I do this?”
Once intrinsic motivation is crowded out, removing the incentive does not restore virtue—the capacity itself has been eroded.
IV. Sectoral Failures of Incentive Design
A. Education
Stated purpose: Formation, understanding, wisdom
Incentivized behavior: Test performance, credential accumulation, compliance
Results:
Teaching to the test
Credential inflation
Administrative bloat
Declining literacy alongside rising scores
The system rewards the appearance of success, not education itself.
B. Healthcare
Stated purpose: Patient health
Incentivized behavior: Throughput, billing codes, risk avoidance
Results:
Over-testing and under-care
Defensive medicine
Burnout among clinicians
Fragmentation of responsibility
Patients become billing units; health becomes secondary.
C. Corporate Governance
Stated purpose: Long-term value creation
Incentivized behavior: Quarterly performance, stock price movement
Results:
Buybacks over investment
Hollowing out of productive capacity
Executive enrichment detached from institutional health
Corporations are treated as extraction vehicles rather than communities of production.
D. Religious and Nonprofit Institutions
Stated purpose: Formation, service, truth
Incentivized behavior: Attendance numbers, donor totals, program expansion
Results:
Performative activity
Risk aversion
Suppression of inconvenient truths
Clergy burnout and cynicism
When faith or service is quantified, sincerity becomes a liability.
V. Why Reforms Usually Fail
Most incentive reforms fail because they:
Add more metrics instead of restoring judgment
Centralize control rather than distribute responsibility
Punish outcomes instead of correcting formation
Assume compliance equals health
Incentive systems cannot compensate for the absence of moral formation, competent leadership, or institutional trust.
VI. Principles for Non-Perverse Incentive Design
While no incentive system is foolproof, several principles dramatically reduce perversity:
1. Subordinate Incentives to Purpose
Incentives must be servants, not masters.
2. Preserve Zones Without Metrics
Some goods—trust, judgment, formation—must remain partially unmeasured.
3. Align Reward with Responsibility for Consequences
Those who benefit must also bear risk.
4. Favor Long Time Horizons
Reward sustained excellence over episodic performance.
5. Retain Human Judgment
No metric can replace moral discernment.
6. Protect Intrinsic Motivation
Use incentives sparingly where vocation and duty are primary.
VII. Conclusion: Incentives Reveal What Institutions Truly Value
Incentives do not merely shape behavior—they expose institutional beliefs.
When incentives are perverse, they reveal:
What leaders actually prioritize
What risks they are willing to externalize
What forms of excellence they no longer recognize
An institution that cannot function without constant incentives has already lost the culture that once sustained it.
The deepest reform, therefore, is not technical but moral:
to recover the courage to trust judgment, cultivate formation, and accept responsibility for long-term consequences.
