Executive Summary
The attention economy — the system by which human attention is harvested, packaged, and sold to advertisers at scale through social media platforms — has produced a consistently observable and deeply troubling pattern: it rewards emotional instability, poor impulse control, and dramatic behavioral dysregulation with wealth, visibility, and cultural influence. This is not an accident or a bug in the system. It is the system’s rational output, given the incentive structure on which it runs. Platforms that optimize for engagement above all other variables will inevitably surface and amplify content that generates the strongest emotional reactions, and the strongest emotional reactions are reliably produced by human beings who are in crisis, behaving badly, or publicly disintegrating. The influencer who weeps uncontrollably, who picks public fights, who makes reckless disclosures, who commits acts of domestic violence on camera, who collapses and recovers and collapses again — this person generates more engagement, more revenue, and more cultural power than the emotionally healthy person living a quiet, well-ordered life. This is the chaos premium: the measurable financial and reputational reward the attention economy attaches to emotional disorder.
This white paper examines the mechanisms by which the chaos premium operates, the costs it imposes on individuals and on broader culture, and — most critically — the specific, concrete actions that ordinary people can take to make the attention economy less dysfunctional. The diagnosis is systemic, but the remedy is substantially within the reach of individuals who understand what the system is doing to them and decide to stop cooperating with it.
I. The Architecture of the Attention Economy and Why It Rewards Chaos
To understand why the attention economy rewards emotional instability, it is necessary first to understand the economic logic that drives it.
The foundational insight of the attention economy, articulated by economist Herbert Simon in the 1970s and subsequently developed by Michael Goldhaber and others, is that in an information-rich world, the scarce resource is not information but human attention. Platforms that can capture and retain human attention can sell that attention to advertisers. The larger and more sustained the captured audience, the more valuable the advertising inventory. Every design decision made by every major social media platform is therefore, at its root, a decision about how to maximize the time users spend looking at the platform’s content and how to maximize the emotional intensity of that experience — because emotional intensity correlates directly with the behaviors — likes, shares, comments, replays — that the platform’s algorithm reads as evidence of successful engagement.
Human attention follows reward. When you engage with a post or linger on a video, the algorithm interprets this as interest, reinforcing similar content in your feed. This process, repeated billions of times daily across users worldwide, creates a personalized digital ecosystem — one that mirrors your preferences, fears, and desires. Algorithms are designed to understand what keeps users watching, not just what they enjoy. In doing so, they exploit core psychological vulnerabilities such as novelty-seeking, cognitive dissonance, and emotional arousal.
The consequence is a system with a structural preference for extreme emotional content. Emotionally charged content, whether outrage, joy, or fear, spreads significantly faster than neutral information. Algorithms learn to prioritize content that elicits strong emotional responses, regardless of its factual accuracy. By rewarding outrage and sensationalism, social platforms shape public discourse around emotional extremes.
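This feedback loop can be made concrete with a deliberately simplified simulation. Everything in it (the topics, the arousal scores, the engagement probabilities, the learning rate) is invented for illustration; no real platform's parameters are implied. The point is structural: a ranker that treats engagement as interest, serving a user whose attention is captured most reliably by high-arousal material, converges on the most emotionally extreme content in its catalog.

```python
import random

# Toy model: each candidate post has a topic and an "emotional arousal"
# level in [0, 1]. All values are invented for illustration.
CATALOG = [
    {"topic": "gardening", "arousal": 0.2},
    {"topic": "local news", "arousal": 0.4},
    {"topic": "public feud", "arousal": 0.8},
    {"topic": "on-camera breakdown", "arousal": 0.95},
]

def run_feed(steps=200, learning_rate=0.1, seed=0):
    rng = random.Random(seed)
    # The ranker's belief about what the user "wants", per topic.
    affinity = {post["topic"]: 1.0 for post in CATALOG}

    for _ in range(steps):
        # Show the post the ranker currently believes is most interesting.
        shown = max(CATALOG, key=lambda p: affinity[p["topic"]])
        # Stylized user: probability of engaging rises with arousal,
        # regardless of whether the user reflectively endorses the content.
        engaged = rng.random() < shown["arousal"]
        if engaged:
            # Engagement is read as interest, so affinity is reinforced.
            affinity[shown["topic"]] += learning_rate
        else:
            affinity[shown["topic"]] -= learning_rate / 2
    return affinity

final = run_feed()
print(sorted(final.items(), key=lambda kv: -kv[1]))
```

The loop never asks whether the user approves of what they watched; engagement alone moves the affinity estimate, which is exactly the structural preference described above.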
The neurobiology of this process makes it self-reinforcing. Frequent engagement with social media platforms alters dopamine pathways, a critical component in reward processing, fostering dependency analogous to substance addiction. Changes in brain activity within the prefrontal cortex and amygdala suggest increased emotional sensitivity and compromised decision-making abilities. The platform’s algorithm conditions users to want more of the kind of content that generates the strongest neurochemical response — which is, reliably, content featuring dramatic emotional events, conflict, humiliation, crisis, and spectacle.
Over time, algorithms can move users from benign content to high-arousal material and eventually to graphic or extreme videos. It is the repeated, algorithmic recommendation of similar clips that drives escalation and gradual desensitization. This drift is not evenly distributed: the Facebook Papers revealed that the company’s own research identified vulnerable communities as being disproportionately exposed to borderline content that caused severe harm to users.
The political content equivalent of this dynamic has been documented experimentally. Twitter’s engagement-based ranking algorithm amplifies emotionally charged, out-group hostile content that users say makes them feel worse about their political out-group. Furthermore, users do not prefer the political tweets selected by the algorithm, suggesting that the engagement-based algorithm underperforms in satisfying users’ stated preferences. This is a crucial finding: the algorithm is not giving people what they want in any meaningful sense of want. It is exploiting automatic attentional responses to emotional stimuli in ways that diverge from users’ own reflective preferences.
The specific history of Facebook’s attempts to calibrate its own emotional weighting is instructive. Facebook adapted its algorithm to encourage “meaningful social interactions” by boosting posts by friends and family, boosting highly commented posts, and weighting the emotional-reaction buttons much more than likes. This became problematic, as the most heavily commented posts also made people the angriest. Strongly weighting angry reactions may have favored toxic and low-quality content. Responding to complaints, Facebook gradually reduced the angry emoji weight from five times the weight of likes in 2018 to weight zero in 2020. This sequence reveals that the platform’s own engineers discovered the chaos premium empirically, deployed it commercially, found it destructive, and attempted to walk it back — a cycle that illustrates the inherent tension between the platform’s commercial incentives and its users’ actual wellbeing.
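The effect of that weighting decision can be illustrated with a toy ranking function. The angry-reaction weights (5 in 2018, 0 by 2020) are the ones described above; the comment weight, the example posts, and the reaction counts are invented for the sketch.

```python
def score(post, angry_weight):
    """Engagement score under a given weighting of the angry reaction."""
    return (post["likes"] * 1              # likes weighted at 1 (as above)
            + post["comments"] * 15        # comment weight: assumed, for illustration
            + post["angry"] * angry_weight)

# Invented example posts, not real data.
posts = [
    {"name": "family photo", "likes": 900, "comments": 10, "angry": 2},
    {"name": "outrage bait", "likes": 150, "comments": 40, "angry": 300},
]

for angry_weight in (5, 0):  # 2018 weighting vs. 2020 weighting
    ranked = sorted(posts, key=lambda p: score(p, angry_weight), reverse=True)
    print(angry_weight, [p["name"] for p in ranked])
```

Under the 2018 weighting, the outrage post outranks the far more liked family photo; zeroing the angry weight reverses the ordering. The walk-back described above is, in effect, this one-parameter change.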
II. The Influencer as the System’s Product
The influencer is not merely a participant in the attention economy. The influencer is the attention economy’s most refined product: a human being who has been optimized, through sustained market feedback, to produce content that the algorithm rewards. Understanding the influencer therefore requires understanding what the algorithm selects for — and what it selects for is demonstrably not emotional health, sound judgment, or behavioral stability.
With the rise of digital media, the concept of opinion leadership has undergone a radical transformation, with forms of influence based on expertise and authority in traditional media environments giving way to a regime of influence based on visibility and emotional performance on social media platforms. The shift from expertise-based to visibility-based influence is significant and underappreciated. The old media environment rewarded people who knew things. The new media environment rewards people who feel things loudly and publicly.
Influencer status is the result of a process of self-exploitation linked to emotional work. Influencers display their private actions to the public, transfer commercial agendas into their private realm, and ultimately exploit themselves. Digital influencers work under the condition that they must self-exploit to succeed. The structural imperative to self-exploit creates a market in which the boundary between the influencer’s private emotional life and their commercial product collapses entirely. Their relationships, their family conflicts, their mental health crises, their substance use, their domestic disputes — all of these become content, and the more extreme and unmanaged these events are, the more content they generate and the more engagement that content produces.
The influencer who is emotionally stable, who resolves conflicts privately, who makes and keeps commitments quietly, and who manages their own behavior with something resembling adult discipline is structurally disadvantaged in this market. They produce less content. Their content generates less emotional reaction. Their audience grows more slowly and engages less reliably. The chaos premium is not incidental; it is the mechanism by which the market selects its winners.
According to parasocial interaction theory, social media influencers foster a sense of intimacy with their followers by mimicking real-life relationships, creating emotional connections akin to “secondary attachment objects.” This parasocial bond is the mechanism by which the influencer converts their emotional content into audience loyalty, and that loyalty into advertising revenue. The follower who is bonded to an influencer through simulated intimacy will follow the influencer’s emotional arc with the investment of a person following a close friend — which means that every crisis, every breakdown, every public conflict generates not just attention but sustained attention of the most emotionally engaged kind.
The financial scale of this system is enormous. By 2027, the broader creator economy of which influencer marketing is the core is projected to reach an estimated $480 billion. According to a recent survey, 60% of consumers trust online recommendations from social media influencers, with these endorsements influencing nearly half of all purchasing decisions. This is a vast transfer of commercial credibility from institutions with accountability mechanisms — regulated journalism, credentialed expertise, peer-reviewed science — to individuals selected entirely by their capacity to capture and retain emotional attention.
III. What This Costs: The Distributed Harm of the Chaos Premium
The harm produced by the attention economy’s reward structure for emotional instability is not contained within the influencer class. It distributes outward through several channels into the lives of ordinary people who are consuming this content, being shaped by it, and making decisions — about their own behavior, their own relationships, their own sense of what normal looks like — on the basis of a systematically distorted picture of human life.
A. The Normalization of Dysregulation
When the most visible, most followed, most culturally influential people in a media environment are selected precisely because they are emotionally dysregulated, impulsive, and dramatic, the culture that emerges from that environment begins to treat dysregulation as normative. Since platforms thrive on engagement and views, the most dramatic, exaggerated, or emotionally charged content tends to rise to the top. As a result, creators feel pressure to produce content that is increasingly sensational. Over time, constant exposure to this kind of content can distort our perception of normalcy. The truth is, most of life is made up of small, ordinary moments, not the extremes we see online.
The specific and insidious harm here is not that people consciously adopt the influencer’s behavior as a template. It is that repeated exposure gradually shifts the perceived baseline of what ordinary emotional life looks like, what relationships normally involve, and what constitutes a proportionate response to ordinary stress. The influencer’s breakdown becomes the reference point against which ordinary people measure their own experiences, and the ordinary person’s relatively managed emotional life begins to feel, by comparison, dull, suppressed, or inauthentic.
B. The Erosion of Empathy and the Desensitization Effect
Repeated exposure to violent or distressing content not only blunts emotional reactivity but also shifts how people interpret others’ behavior. Classic research by Bandura demonstrated that children imitate observed aggressive behavior, and subsequent work has shown that such exposure shapes attitudes, expectations, and behavioral scripts for evaluating conflict. Longitudinal studies indicate that exposure to media violence predicts later aggression (not the reverse) and is associated with decreased empathy. Studies on moral contagion show that moral-emotional language spreads more quickly online, meaning people are repeatedly exposed to collective reactions that model hostility rather than reflection. The algorithm often pairs distressing content with comment sections filled with ridicule, blame, and certainty that model how viewers are expected to interpret the situation, creating a social reinforcement loop that rewards harsh conclusions.
This is the empathy erosion mechanism: the repeated experience of watching emotionally dysregulated behavior in a context that frames it as entertainment, and that surrounds it with audience commentary modeling contempt, ridicule, and hostility, trains the viewer’s own emotional responses away from care and toward detachment. The person who has watched ten influencer breakdowns and observed ten comment sections treating those breakdowns as comedy is not the same person they were before that exposure. Their capacity for genuine empathic response to human distress has been systematically worn down.
C. The Neurological Cost to Consumers
As described in Section I, frequent engagement with social media platforms alters dopamine pathways in ways that foster dependency analogous to substance addiction, and produces changes in prefrontal cortex and amygdala activity consistent with increased emotional sensitivity and compromised decision-making. Beyond these changes, impairments in cognitive functions such as self-monitoring, memory retention, organizational skills, and time management are commonly seen in cases of internet and smartphone addiction.
The neurological impact is not metaphorical. The attention economy, by design, is restructuring the brains of its users in ways that make them more susceptible to its own products: more emotionally reactive, less able to sustain deliberate attention, less capable of the slow, patient cognitive work of evaluating information, forming nuanced judgments, and resisting impulse. The user who has been conditioned by years of algorithmic content selection is a user whose cognitive and emotional architecture has been modified to be a better consumer of the platform’s product — which is to say, a worse thinker, a more reactive emotional actor, and a more susceptible target for the next piece of high-arousal content.
Algorithmic feeds may include sensational or misleading posts, contributing to stress, confusion, and emotional distress, especially during crises. Many people choose to delete or temporarily deactivate social accounts to break these cycles, and the evidence suggests the break helps: a meta-analysis across multiple studies found that a two-week digital detox significantly reduced depressive symptoms; a randomized experiment before the 2020 U.S. election found that five weeks off Facebook or Instagram improved well-being and reduced anxiety and depression; and limiting social media to 30 minutes daily for two weeks led students to report significantly lower anxiety, depression, loneliness, and fear of missing out.
D. The Distortion of Children’s Development
The population most at risk from the attention economy’s incentive structure is children and adolescents, whose cognitive development is still in process and whose sense of social reality is being formed precisely during the period of maximum exposure to algorithmically curated content. Clinical psychologists with extensive experience working with children, teens, and families report that these algorithms can distort young people’s developing sense of identity and belonging by creating echo chambers or idealized versions of reality. Exposure to highly curated or emotionally charged content can contribute to increased anxiety, depression, body image concerns, and social comparison.
The generation of children whose primary frame of reference for human behavior, emotional expression, relationship dynamics, and what kind of person is worth being has been formed substantially by influencer content is a generation that has been given a systematically distorted developmental input. The influencer who is rewarded for impulsive behavior, who achieves wealth and celebrity through emotional dysfunction, and who models a life in which there are no durable consequences for dysregulation is not a neutral presence in a child’s development. They are an active intervention.
E. The Corruption of Commercial Trust
As noted in Section II, roughly 60% of consumers report trusting online recommendations from social media influencers, with these endorsements influencing nearly half of all purchasing decisions. When this trust is vested in individuals selected by the attention economy for their emotional dysregulation rather than for their expertise, integrity, or accountability, it creates a commercial environment in which the most powerful product recommendation force in modern markets is controlled by people with no demonstrated competence in the product category they are endorsing and no institutional accountability for the advice they give. The influencer who recommends a financial product, a dietary supplement, a medical treatment, or a relationship practice is speaking to an audience that trusts them as a friend while operating as a commercial actor with financial incentives that are often undisclosed or inadequately disclosed.
IV. The Problem of Accountability and the Absence of Consequence
One of the most significant structural features of the chaos premium is that it systematically insulates its beneficiaries from the consequences that would, in any ordinary social environment, constrain dysregulated behavior. Ordinary social environments impose costs on people who behave badly: damaged relationships, professional consequences, social exclusion, reputational harm within the communities that matter to them. These costs are the mechanism by which social norms are enforced and through which people learn — however slowly and imperfectly — that their behavior has consequences for others and for themselves.
The attention economy inverts this mechanism for its top performers. The influencer whose domestic violence is documented on video and broadcast to millions does not lose their audience — they gain attention. The influencer whose relationship collapses publicly does not experience social consequence — they experience a content cycle. The influencer whose impulsive disclosures harm the people around them does not face reputational damage in the communities that matter most to them — because the community that matters most to them is an audience of millions of strangers who are emotionally invested in the drama of their lives and rewarded, by the algorithm, for continuing to watch.
This insulation from consequence is the mechanism by which the attention economy selects for ever-more-extreme behavior. If ordinary behavior generates ordinary attention and extreme behavior generates extreme attention, and if the influencer’s financial and social welfare depends on the magnitude of their audience, then the rational strategy — arrived at consciously or unconsciously — is to behave in ways that are increasingly extreme. The chaos premium is not a stable equilibrium; it is an escalating dynamic in which the required threshold of emotional disruption rises continuously as the audience becomes accustomed to each prior level.
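The escalation dynamic can be stated as a simple habituation model (a stylized assumption, not measured data): audience response depends on how far a piece of content's emotional intensity exceeds an adaptive baseline, and the baseline drifts toward whatever the audience last saw. Under that assumption, a creator who needs a constant level of response must escalate without bound.

```python
def escalation_path(steps=10, adaptation=0.5):
    """Intensity required each cycle to hold audience response constant."""
    baseline = 0.0          # the intensity level the audience is accustomed to
    target_response = 1.0   # the response the creator needs every cycle
    intensities = []
    for _ in range(steps):
        # Assumed response model: response = intensity - baseline, so
        # holding response constant requires intensity = baseline + target.
        intensity = baseline + target_response
        intensities.append(intensity)
        # The audience habituates toward the new level.
        baseline += adaptation * (intensity - baseline)
    return intensities

path = escalation_path()
print([round(x, 2) for x in path])  # strictly increasing, with no ceiling
```

The required intensity rises every cycle and never stabilizes: habituation converts a constant demand for attention into a demand for continuous escalation.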
V. What Ordinary People Can Do: A Practical Framework for Individual Action
The systemic analysis above might suggest that individual action is futile — that the attention economy is too large, too well-resourced, and too deeply embedded in the infrastructure of daily digital life to be meaningfully affected by the choices of individual users. This conclusion is wrong, for a specific and important reason: the attention economy runs on individual attention. Its entire value proposition depends on capturing the attention of individual people. The individual who withdraws their attention is not a rounding error; they are a subtraction from the only resource the system cannot manufacture.
The following practical framework is organized around five categories of individual action, each of which is available to ordinary people without special resources, technical expertise, or institutional access.
A. Radical Curation: The Deliberate Construction of Your Information Environment
The most powerful single action an individual can take is to exercise deliberate, intentional control over what they consume. This is substantially different from vague intentions to “use social media less.” It requires a specific and honest audit of one’s current consumption, followed by active decisions about what to remove, what to add, and on what basis those decisions are made.
The audit question is direct: of the accounts you follow, which of them are you following because the content is genuinely useful, genuinely true, or genuinely good for you — and which of them are you following because the content generates a strong emotional reaction, whether positive or negative? The account that consistently makes you feel anxious, angry, envious, or morbidly fascinated is precisely the account that the algorithm has identified as emotionally valuable to you and is serving you more of. The fact that it generates a strong reaction does not mean the reaction is good for you.
Radical curation means making a principled decision about what qualities will determine who receives your attention. Some productive criteria: Does this person demonstrate genuine expertise or hard-won knowledge in the areas they discuss? Do they take clear accountability for errors? Does their content leave you better informed and more capable than before you consumed it? Do they model the qualities — emotional stability, intellectual honesty, genuine care for others — that you actually want to see more of in the world? Would you be comfortable if someone you respected deeply could see your full consumption history and knew how much time you spend with this content?
B. The Engagement Audit: Stopping the Reward Signal
Every like, comment, share, and extended viewing session is a vote. It is a concrete signal to the platform’s algorithm that the content you just consumed should be shown to more people and shown to you again. The person who watches an influencer’s breakdown in full, who comments — even critically — on a public conflict, who shares a video to express their disgust at what it depicts, is participating in the amplification of that content and contributing to the financial reward of its producer. The emotional valence of the engagement is largely irrelevant to the algorithm; what registers is that engagement occurred.
The practical implication is counterintuitive: the most effective form of criticism of emotionally dysfunctional content is silence. Not the angry comment. Not the appalled share. Not the lengthy discussion thread. Silence — the withdrawal of engagement — is the only signal the algorithm processes as disinterest. This is a genuine discipline, because the impulse to comment, to correct, to express horror or disapproval, is itself a product of the emotional arousal the content was designed to generate. Recognizing that impulse as the content’s intended effect, and choosing not to act on it, is the practical act of refusing the chaos premium.
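A minimal sketch makes the point that engagement counting is valence-blind. The event types and weights below are invented for illustration; what matters is that the scoring function never reads the text of a comment, so an outraged reply and an adoring one register identically, while silence registers as nothing at all.

```python
# Invented weights for illustrative purposes only.
ENGAGEMENT_WEIGHTS = {"like": 1, "comment": 4, "share": 8, "full_watch": 2}

def engagement_score(events):
    """Sum weighted engagement events; the payload (e.g. comment text) is ignored."""
    return sum(ENGAGEMENT_WEIGHTS[kind] for kind, _payload in events)

supportive = [("comment", "so brave, love you!"), ("share", None), ("full_watch", None)]
outraged = [("comment", "this is appalling, unfollow"), ("share", None), ("full_watch", None)]
silent = []  # the withdrawal of engagement

print(engagement_score(supportive), engagement_score(outraged), engagement_score(silent))
```

The appalled share and the admiring share are the same signal; only the empty list reads as disinterest.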
C. The Parasocial Boundary: Distinguishing Entertainment from Relationship
As discussed in Section II, social media influencers foster a sense of intimacy with their followers by mimicking real-life relationships, creating emotional connections akin to secondary attachment objects. The practical response to this dynamic is not to refuse all parasocial engagement categorically — which is neither realistic nor necessary — but to maintain a clear and honest internal distinction between the simulated intimacy of the parasocial bond and the actual intimacy of genuine relationship.
The parasocial bond works on the audience’s relational instincts: the same systems that generate care, loyalty, and concern for real friends and family members are activated by the carefully constructed performance of intimacy that the influencer offers. The viewer who feels genuinely worried about an influencer’s wellbeing, who feels personally implicated in their conflicts, or who structures their daily routine around consuming their content is experiencing a hijacking of relational instincts by a commercial product.
Maintaining the parasocial boundary means regularly and honestly asking: if I stopped following this person tomorrow, what would I actually lose? If the honest answer is “something that functions like a friendship,” the bond has become parasocial in a sense that warrants serious examination. Real friendships survive the absence of content. They are characterized by mutual obligation, genuine accountability, and the capacity of the other person to actually show up when needed. The influencer cannot do any of these things for their audience.
D. Redirecting Financial Support: The Deliberate Economics of Attention
The attention economy converts attention into money through advertising. This means that the influencer’s income is not generated by their audience directly — it is generated by advertisers who are paying for access to that audience. Ordinary people have leverage over this system through two channels: their purchasing behavior and their advocacy with advertisers.
The purchasing channel is simple in principle and requires discipline in practice: do not buy products endorsed by influencers whose content you have identified as emotionally dysfunctional, exploitative, or harmful. The influencer’s commercial power depends on the demonstration that their audience converts to purchases. An audience that withholds purchasing behavior devalues the influencer’s commercial proposition to advertisers.
The advocacy channel requires more active engagement but is demonstrably effective. Advertisers are acutely sensitive to public association with controversy — as the rapid sponsor departures following the Taylor Frankie Paul domestic violence revelations demonstrated — and they respond to coordinated audience pressure. The ordinary person who contacts a brand directly to explain that their sponsorship of a specific influencer is influencing purchasing decisions is participating in the most direct form of economic pressure available to individuals within the attention economy.
E. Feeding the Alternative: Deliberate Support of Genuine Quality
Withdrawing attention from dysfunctional content is only half of the available action. The other half is deliberately redirecting attention toward content that demonstrates the qualities the chaos premium actively suppresses: emotional maturity, genuine expertise, intellectual honesty, careful thinking, accountability to truth, and the quiet competence of people who know things and communicate them clearly.
This is not merely an ethical preference; it is a market intervention. The algorithm amplifies content that receives engagement. Content from emotionally stable, knowledgeable, honest creators that receives sustained engagement from a committed audience will be amplified toward that creator’s potential audience. The deliberate construction of an audience for good content is a productive use of the same mechanism that the chaos premium exploits. Engaging purposefully — posting updates or reaching out — instead of mindless browsing, and curating your feed to prioritize positivity, are practical steps that carry measurable mental health benefits.
The practical form of this action includes: leaving substantive comments on content that models good thinking or genuine expertise; sharing content that is genuinely useful or genuinely true with people in your network; subscribing or financially supporting creators through direct payment mechanisms (Patreon, Substack, direct purchase) that bypass the advertising model entirely and remove the creator from the incentive structure that rewards chaos; and explicitly recommending good creators to people whose judgment you trust.
VI. The Institutional Dimension: What Individual Action Cannot Accomplish Alone
Honest analysis requires acknowledgment of what individual action cannot accomplish. The structural incentives of the attention economy are embedded in the design of platforms controlled by enormously powerful corporations with strong financial interests in maintaining those incentives. Individual withdrawal of attention from harmful content does not, by itself, change the platform’s algorithm, its data practices, its monetization model, or its accountability to the users it harms.
The regulatory dimension of this problem is real and unresolved. Algorithmic accountability — the requirement that platforms disclose how their recommendation systems work, what signals they optimize for, and what the documented effects of those systems are on user wellbeing — is a policy question that individual user behavior cannot answer. Children’s protection from algorithmically curated high-arousal content requires regulatory intervention that individual parents cannot substitute for, however vigilant they may be.
Individual action matters most when it is accompanied by a clear understanding of the system’s limits and by support for the kinds of institutional responses — transparency requirements, algorithmic accountability legislation, advertising standards enforcement, age-appropriate design regulations — that can address the system’s structure rather than only its outputs.
VII. A Note on the Moral Dimension
It is tempting, when analyzing the chaos premium and the influencers it produces, to frame the problem primarily as a moral failure of those individuals. This framing is understandable but analytically incomplete. The influencer whose emotional dysregulation makes them famous and wealthy is, in most cases, not a person who has made a clear-eyed decision to harm the people around them for financial gain. They are a person whose genuine emotional difficulties have been identified by a system optimized to find and amplify exactly those difficulties, placed in an environment that rewards escalation rather than recovery, and surrounded by financial incentives that make emotional stability commercially costly. The system selects them, exploits them, and then — when the exploitation reaches its inevitable crisis — discards them.
Digital influencers work under the condition that they must self-exploit to succeed, a condition that research has traced through seven distinct work practices. Whether this self-exploitation is an unavoidable feature of the transformation into ever more powerful person-brands, as work becomes more individualized and subjectified, remains an open question. Either way, the influencer is both the chaos premium’s beneficiary and its victim — rewarded in the short term for a form of self-exploitation that is, in the medium and long term, destructive to their own integrity, relationships, and psychological wellbeing.
This does not relieve them of moral responsibility for the specific choices they make or the specific harms they cause. But it places the moral weight of the system’s outcomes squarely where it belongs: on the design of the platforms, the decisions of the advertisers, and — with full acknowledgment of the difficulty — on the attention choices of the audience that makes the whole system financially viable.
VIII. Conclusions
The attention economy’s chaos premium is not a marginal or accidental feature of contemporary media. It is the system’s dominant selection mechanism, operating continuously and at scale to identify, amplify, reward, and financially sustain human beings whose emotional instability, impulsive behavior, and poor self-regulation generate the strongest audience engagement. This mechanism imposes real costs on the influencers it exploits, on the audiences it conditions, on the children it reaches during their most formative developmental periods, and on the broader cultural environment in which it progressively normalizes dysregulation, erodes empathy, and displaces expertise-based authority with visibility-based celebrity.
The ordinary person who understands this system has available to them a meaningful set of concrete responses: radical curation of their information diet, discipline in withholding engagement from dysfunctional content, maintenance of honest parasocial boundaries, deliberate redirection of purchasing behavior, and active support for the creators and content that model the qualities the chaos premium cannot reward. None of these actions is individually decisive. Together, and at sufficient scale, they constitute a genuine market intervention — a withdrawal of the resource without which the entire system cannot function.
The broader cultural task is the deliberate reconstruction of an information environment in which the qualities that were once required to achieve influence — knowledge, integrity, accountability, the slow cultivation of genuine expertise, and the patient demonstration of character under ordinary rather than spectacular conditions — are again recognized, rewarded, and sought out. This reconstruction will not happen by itself. It will happen through the accumulated individual decisions of people who have understood what the current environment is costing them and decided that the cost is too high.
This white paper was prepared using data and analysis from Wiley Online Library’s Psychology & Marketing, PMC’s neuroscience research on social media addiction, the Knight First Amendment Institute, Psychiatric Times, the Associated Clinic of Psychology, Grit Daily, Frontiers in Psychology, and De Gruyter’s work on digital emotional labor.
