The Scapegoat Machine
Part 4 of 5: How We Choose Powerlessness Over Accountability
In Part 1, I showed you who wrote the viral AI essay and what he’s selling. In Part 2, I showed you what AI actually can and can’t do. In Part 3, I showed you how collective panic responses, from school closures to bank runs to layoffs, consistently cause more damage than the threats they claim to address.
Now I want to talk about the thing that makes this pattern possible. Not the threats themselves. Not even the panic responses. But the mechanism that prevents us from learning anything after the damage is done.
I call it the scapegoat machine. And it works like this: a real problem exists. Someone has to make a decision about it. That decision produces consequences. And then, instead of tracing who decided what, when, with what information, and with what incentive, we blame an abstract force. Technology. Capitalism. The algorithm. AI.
The abstract force can’t be held accountable. Which is the point.
The Pattern Nobody Wants to See
Let me say something that I think most people already know but find uncomfortable to say out loud: we agree on more than we think.
Most people, regardless of where they fall politically, agree that kids falling behind in school is a problem. Most people agree that we don’t have enough nurses and caregivers. Most people agree that the cost of housing and childcare is making it impossible for young families to get started. Most people agree that a generation of children spent too long out of classrooms and are paying for it now.
The disagreement isn’t about whether these problems exist. It’s about what caused them and what to do about them. And that’s where the scapegoat machine activates.
Because arguing about causes and solutions is more rewarding. Financially, socially, psychologically. More rewarding than actually solving anything. The fight generates donations. The fight generates engagement. The fight generates media coverage. The fight signals tribal loyalty. The fight gets you elected, or re-elected, or retweeted.
Solving the problem requires compromise. Compromise gets you primaried. Compromise gets you called a traitor by your own side. Compromise doesn’t trend.
So the individually rational choice for every actor in the system (politician, media outlet, advocacy group, platform) is to keep fighting. And the collectively rational outcome, the one where someone actually does something, never arrives. This is the same incentive structure I described in Part 1 with Shumer: individual rationality producing collective dysfunction. It just operates at a larger scale.
⚠️ Rational Dysfunction: When the incentive structure rewards fighting over solving, every rational actor will fight rather than solve. This isn’t a moral failure. It’s a design failure. The system is working exactly as its incentives predict. If you want different outcomes, you need to change the incentives, not lecture the actors.
How the Scapegoat Works
Here’s the mechanism. A decision gets made. That decision has consequences. Then an abstract force is identified as the cause instead of the decision itself.
“AI took your job” is so much more comfortable than “your CEO made a preemptive decision based on a panic essay written by someone with 20% confidence, and nobody in the decision chain is accountable for the outcome.”
“Smartphones ruined kids” is so much more comfortable than tracing the actual timeline of who decided to keep schools closed, when the data changed, what the internal memos said, and which institutional incentives made reopening the higher-risk career move for the adults making the decision.
“The economy is terrible” is so much more comfortable than asking why, as Kyla Scanlon documented in her work on the “vibecession,” consumer sentiment remains miserable while aggregate economic indicators show growth, and whether the gap between feeling and data might have something to do with which stories get amplified and which get ignored.
“Capitalism failed us” is so much more comfortable than asking which specific policy decisions, made by which specific people, with which specific incentives, produced the specific outcomes we’re living with.
In every case, the scapegoat does the same thing: it replaces a traceable decision chain (who, what, when, why) with a vague, unaccountable force. And vague, unaccountable forces can’t be held responsible. They can only be feared, argued about, and used to justify the next round of panic.
This is not a left-wing or right-wing observation. The scapegoat machine is bipartisan. The right blames “woke ideology” or “government overreach.” The left blames “capitalism” or “corporate greed.” Both framings serve the same function: they replace specific, traceable accountability with a villain so large and abstract that no one is ever actually responsible, and nothing ever actually changes.
✓ The Accountability Test: When someone tells you that an abstract force caused a bad outcome, ask: “Who made the specific decision? When did they make it? What information did they have at the time? What did they stand to gain or lose?” If the answer is vague, the abstraction is doing the work of shielding someone from accountability. This isn’t cynicism. It’s the minimum standard for understanding how anything actually happens.
The Quiet Crises
While we fight about abstractions, real problems compound. Not because they’re invisible. Because there’s no financial incentive to make them loud.
Nobody gets 80 million views for an essay about the nursing shortage. But the numbers are staggering. One-third of the current U.S. nursing workforce (approximately one million registered nurses) is over 50 and approaching retirement. Over 500,000 additional nurses will be needed by 2030, according to HRSA projections. Nursing schools turned away over 91,000 qualified applicants in 2021 alone because there aren’t enough faculty to train them. The pipeline is broken at every stage.
There are 63 million caregivers in the United States, paid and unpaid, with 24 states declaring a critical caregiver emergency. Unpaid family caregivers, 53 million Americans, provide over $870 billion a year in care. A home health aide earns an average of $16.82 per hour, barely more than a fast food worker, for a job that requires more training and involves significantly more physical and emotional labor. The 65+ population has grown 73% since 2011, and the number keeps rising.
The U.S. fertility rate has dropped to 1.62, well below the 2.1 replacement threshold. South Korea has reached 0.72. By 2060, South Korea is projected to have 0.9 persons over 65 for every one working-age citizen. These aren’t projections about what might happen. These are people who already exist, or don’t exist, moving through the system.
And the children who lost nearly half a grade level during COVID closures? They’re now in the workforce, or approaching it, with documented skill deficits that economists project will cost $17 trillion in human capital globally by 2050. That’s not a future problem. That’s a current workforce entering the economy right now with measurably reduced capabilities.
None of these problems went viral. None of them generated 80 million views. None of them made anyone famous for sounding the alarm. They don’t have the narrative structure of a threat you can panic about. They’re slow, structural, and boring. Which is why they’re so dangerous.
What AI Can’t Fix
Here’s where this connects to the AI narrative, and why the scapegoat machine matters for how we deploy this technology.
AI might help with some of these problems. AI-assisted diagnostics might reduce the burden on healthcare workers. AI tutoring might help close learning gaps. AI scheduling and documentation might make caregiving more efficient at the margins. These are real possibilities worth pursuing.
But AI cannot produce nurses. It cannot produce caregivers willing to do physically and emotionally demanding work for barely above minimum wage. It cannot produce children to replace the ones a generation decided not to have. It cannot rebuild the institutional trust that was eroded when people watched decision-makers protect their own interests during a crisis and call it precaution. It cannot fix a dependency ratio that is a function of human beings who were either born or weren’t over the past thirty years.
And this is the critical point: if we allow AI to become the next scapegoat, whether as savior (“AI will solve the care crisis”) or as villain (“AI is causing the job crisis”), we accomplish the same thing the scapegoat machine always accomplishes. We avoid looking at the actual decision chain. We avoid asking who decided what, when, and why. We avoid accountability. And the problems compound.
The question isn’t whether AI is good or bad. The question is: who is deciding how it’s deployed, with what incentives, and who bears the cost when those decisions go wrong? If we don’t answer that question deliberately, other actors, the ones with the strongest incentives, will answer it for us. And we will get exactly the outcome the incentive structure predicts: one that serves narrow interests while the costs are distributed to the people with the least power to object.
We’ve seen this movie before. We watched it with school closures. We watched it with the bank run. We watched it with the layoffs. Every time, specific people made specific decisions that served their specific interests, and then an abstract force absorbed the blame.
✓ The Deployment Question: When someone tells you “AI will transform healthcare” or “AI will eliminate jobs,” ask: Who is making the deployment decision? What do they stand to gain? Who bears the risk if it goes wrong? Is there a feedback mechanism that lets the people affected push back? If those questions don’t have clear answers, you’re not looking at a technology story. You’re looking at a power story with a technology costume.
The Choice That Isn’t a Choice
I said at the beginning of this series that I wouldn’t tell you what to think. I’m going to hold to that. But I do want to name something.
Not choosing is choosing.
When we argue about whether AI will take 50% of jobs instead of asking who’s making the deployment decisions, we’re choosing. When we debate whether smartphones or social media or screen time ruined children instead of tracing the actual decision chain that kept schools closed, we’re choosing. When we share panic essays instead of reading the primary sources, we’re choosing. When we fight about solutions so aggressively that the problem compounds while we argue, we’re choosing.
We’re choosing to focus on the loud alarm, “AI is coming, AI is coming,” while the quiet crises, the ones that will actually determine whether the next thirty years are livable, continue to compound in the background.
A 75-year-old former nurse providing 24/7 care to a family member because there aren’t enough caregivers in the system doesn’t care about your AI discourse. A fourth-grader reading at a 1992 level doesn’t care whether GPT can write code. A young couple who ran the numbers and realized they can’t afford a child in an economy where housing costs have outpaced wages for two decades isn’t comforted by the news that AI might eventually make everything more productive.
These are not abstractions. These are specific people living with the consequences of specific decisions that specific actors made for specific reasons. And the scapegoat machine exists to make sure we never connect those dots.
I’m not asking you to agree on solutions. I’m asking you to agree on the diagnosis: the fight is more profitable than the fix, for almost everyone in a position to do something about it. And until we change that, no technology, not AI, not anything else, will save us from the consequences of our own incentive structures.
Next in this series: Part 5: The Mirror and the Map. This series started with 80 million people sharing an essay without reading it. It ends with a question: what would it take for us to choose differently? Not better technology. Not better policy. Better thinking. And whether that’s still possible.
This piece is Part 4 of “The Velocity of Hype,” a five-part series on the AI discourse crisis and the critical thinking emergency it reveals. It draws on research compiled for my forthcoming books The World is Always Never Ending and This Is Not The Whole Story.