The Mirror and the Map
Part 5 of 5: What Would It Take for Us to Choose Differently?
In Part 1, I showed you an essay that 70 million people shared without questioning, written by a man who privately admitted to 20% confidence in his own predictions. In Part 2, I showed you what AI actually can and can’t do, which turns out to be both more impressive and more limited than the panic narrative allows. In Part 3, I showed you how collective panic responses, from school closures to bank runs to copycat layoffs, consistently cause more damage than the threats they claim to address. In Part 4, I showed you the scapegoat machine: how we replace traceable decision chains with abstract villains so that no one is ever accountable and nothing ever changes.
Now I want to talk about you.
Not “people.” Not “society.” Not “we” in the comfortable collective sense that lets everyone assume I mean someone else. You. The person reading this. What you share. What you believe. What you check. What you let slide because checking feels like too much work or because questioning feels socially expensive.
Because the series started with 70 million people and it ends with one. The only discourse you actually control is your own.
What We Learned
Let me lay out what I think the evidence across these five pieces actually shows, stripped down to its essentials.
The AI hype narrative is being driven by people with financial interests in its acceleration, amplified by media systems that reward fear over accuracy, and consumed by an audience that shares before reading and reacts before thinking. The technology itself is real, useful, and improving. It is also limited in specific, documented, and currently unpredictable ways that the hype narrative cannot afford to mention. The honest assessment of AI’s trajectory involves far more uncertainty than anyone selling certainty will admit.
Our collective responses to perceived threats have a documented track record of causing more damage than the threats themselves. This pattern repeats because the incentive structures that drive it remain unchanged: panic is profitable, caution is boring, and the costs of disproportionate responses fall on the people least equipped to bear them.
The scapegoat machine exists to prevent us from learning this lesson. Every time a specific person makes a specific decision for a specific reason and that decision produces harm, an abstract force absorbs the blame. AI. Capitalism. Technology. Social media. The algorithm. The abstraction can’t be held accountable. The pattern continues.
And while we argue about the loud alarm, the quiet crises compound. The nursing shortage. The care crisis. The learning loss. The dependency ratio. The fertility decline. Problems that are documented, measurable, and worsening, but that don’t generate engagement, don’t go viral, and don’t make anyone famous for sounding the alarm.
None of this is complicated. All of it is uncomfortable.
The Critical Thinking Emergency
I want to name the thing I actually think is the civilizational risk, and it’s not AI.
It’s the erosion of our collective capacity to evaluate what we’re told.
I don’t mean that people are stupid. I mean that the information environment we’ve built is actively hostile to careful thinking. It rewards speed over accuracy. It rewards emotional intensity over nuance. It rewards tribal loyalty over independent evaluation. It rewards sharing over reading, reacting over reflecting, and certainty over the honest admission that we don’t know.
In that environment, a 26-year-old AI startup CEO can write an essay with the help of AI, frame 20% confidence as absolute certainty, get amplified by every major media outlet without meaningful scrutiny of his financial interests or track record, reach 70 million people, and shift the public conversation about a technology that will affect billions of lives. Not because the system is broken. Because the system is working exactly as designed. The system is optimized for engagement, and fear engages.
This is the meta-problem. Every other problem I've described in this series (the panic responses, the scapegoat machine, the quiet crises, the deployment decisions that will determine how AI actually affects your life) depends on this underlying capacity. If we can evaluate what we're told, we can interrupt the panic, trace the decision chain, resist the scapegoat, and demand accountability. If we can't, we're at the mercy of whoever tells the most compelling story, regardless of whether it's true.
The technology isn’t the test. Our thinking is the test. And the evidence suggests we’re failing it.
⚠️ The Information Environment Is Not Neutral: The platforms where you consume information are designed to maximize engagement, not accuracy. The media outlets that amplify stories are incentivized by attention, not truth. The people who tell you things have interests that may or may not align with yours. None of this is secret. All of it is systematically ignored in the moment of consumption. Knowing it abstractly and applying it in real time are very different skills, and the second one requires practice.
The Tools
I’ve been teaching critical thinking tools throughout this series, embedded in the callout boxes. Let me collect them here, because these are the things I actually want you to walk away with. Not opinions about AI. Tools for thinking about anything.
The Source Check. When someone cites a source, read the source. It takes five minutes. The gap between what was said and what was claimed is almost always revealing. Shumer cited the GPT-5.3 System Card to argue AI is building itself. The System Card says the opposite. This pattern is everywhere, not just in AI discourse.
The Incentive Map. When someone tells you something, ask what they stand to gain from you believing it. Not to dismiss them automatically, but to calibrate your trust. An AI startup CEO telling you AI will change everything is like a car dealer telling you this is the best time to buy. They might be right. But you should know they’re selling.
The Specificity Test. When someone blames an abstract force for a bad outcome, ask for the specific decision chain. Who decided what? When? With what information? With what incentive? If the answer stays vague, the abstraction is shielding someone from accountability.
The Generalization Check. When someone shows you that AI can do one thing brilliantly and argues it will therefore do everything, check whether that one thing has special properties. Coding has automated verification and tight feedback loops. Most knowledge work doesn’t. Gains in one domain don’t automatically transfer to others.
The Certainty Alarm. When someone expresses absolute confidence about the future of a complex system, that confidence is itself a red flag. The honest experts are the uncertain ones. Certainty in the face of genuine complexity is either ignorance or marketing. Sometimes both.
The Tribal Cost. Before you share, argue, or react, ask: am I doing this because I’ve evaluated the evidence, or because it signals something about my identity? This is the hardest question on the list because the honest answer is usually uncomfortable.
None of these tools are sophisticated. None of them require expertise in AI or economics or media theory. They’re the questions a thoughtful person asks before accepting any extraordinary claim from any source. The fact that almost nobody asks them is the problem. The fact that asking them feels radical is the symptom.
✓ Critical Thinking Is Not Cynicism. I’m not arguing that you should trust nothing and no one. I’m arguing that trust should be earned through a track record of accuracy, transparency about incentives, and willingness to say “I don’t know” when the honest answer is uncertain. That’s a higher bar than most public discourse currently meets. Raising the bar is not the same as burning the building down.
The Story We Tell Ourselves
Here’s something I’ve been thinking about throughout this entire series, and it connects to why I write about historical panics.
Every era has a version of this moment. A new technology or a new threat arrives and society splits into two camps: the people who say it will destroy everything, and the people who say it will save everything. The printing press. The telegraph. Radio. Television. The internet. Social media. And now AI.
In every case, the people who predicted total destruction were wrong. And the people who predicted total salvation were wrong. The truth was always in the middle: the technology changed things, sometimes dramatically; the change created winners and losers; and who won and who lost was determined not by the technology itself but by the decisions specific humans made about how to deploy it.
The printing press didn’t destroy civilization. But the people who controlled printing presses had enormous power over what was considered true. The internet didn’t liberate humanity. But the people who built the platforms that organize the internet have enormous power over what you see, what you share, and what you believe.
AI won’t destroy your life. But the people who decide how AI is deployed, in your workplace, in your healthcare, in your children’s education, in the systems that determine whether you get a loan or a job or a diagnosis, those people will make decisions that affect your life profoundly. And the quality of those decisions depends entirely on whether anyone is paying attention, asking questions, and demanding accountability.
Or whether we’ve all been distracted by the loud alarm.
The Mirror
I started this series by telling you who I am and what I’m selling. Let me end the same way.
I’m an economist by training who writes about how well-intentioned systems create perverse incentives. I have a book coming out about historical panics called The World is Always Never Ending and another about media literacy called This Is Not The Whole Story. I use AI every day. I think it’s one of the most powerful tools I’ve ever had access to. I also think the discourse around it is a perfect case study in everything I write about: how narratives driven by financial incentives, amplified by engagement-optimized media, and consumed by an audience that has been systematically trained to react rather than think, can produce collective outcomes that serve almost nobody’s actual interests.
I have not told you what to think about AI. I’ve tried to give you the tools to think about it yourself. And about the next thing. And the thing after that. Because the specific topic doesn’t matter nearly as much as the underlying capacity. AI will evolve. The next panic will arrive. The next compelling narrative from someone with something to sell will go viral. The question is whether you’ll have the tools to evaluate it or whether you’ll share it with the message “you need to read this” without having read it yourself.
I don’t know what AI will look like in five years. Neither does Matt Shumer, and he told a reporter as much. Neither does anyone. The honest answer is uncertain, and uncertainty is the one thing the information environment we’ve built cannot tolerate.
But I do know this: the future won’t be shaped by AI. It will be shaped by the decisions humans make about AI. And the quality of those decisions depends on the quality of our thinking. Not our technology. Our thinking.
That’s the thing that’s actually at stake.
The Map
If you’ve read all five parts of this series, you now have something that 70 million people who shared that essay don’t: a framework for evaluating what you’re told.
You know to check who’s talking and what they’re selling. You know to read the source, not just the claim about the source. You know that the honest picture is always more uncertain and more interesting than the panic version. You know that collective responses to threats often cause more damage than the threats themselves. You know that the scapegoat machine exists to prevent accountability. You know that the quiet crises compounding in the background are the ones that will actually determine whether the future is livable.
And you know that not choosing is choosing.
I’m not going to tell you what to do with that. It’s yours. Use it for AI. Use it for the next election. Use it for the next viral essay, the next media panic, the next time someone with something to sell tells you something extraordinary with absolute confidence.
The only thing I’ll ask is this: the next time someone sends you something and says “you need to read this,” read it. Actually read it. And then ask who wrote it, what they’re selling, what they’ve claimed before, and whether the sources say what they claim the sources say.
That’s it. That’s the whole thing. It’s not a revolution. It’s not a movement. It’s just one person, reading carefully, thinking honestly, and refusing to let someone else’s financial incentives become your worldview.
It’s the least radical and most important thing you can do.
This is the final installment of “The Velocity of Hype,” a five-part series on the AI discourse crisis and the critical thinking emergency it reveals.
If this series gave you tools you found useful, the best thing you can do is use them. Not share the series (though you can). Use the tools. The next time an extraordinary claim crosses your feed, take five minutes. Check who’s talking. Read the source. Ask who benefits. Sit with uncertainty instead of collapsing into a side.
That’s not a small thing. In an information environment designed to make you react, choosing to think is an act of resistance.
This series draws on research compiled for my forthcoming books The World is Always Never Ending and This Is Not The Whole Story. If you want to go deeper into historical panics, incentive structures, and the patterns that repeat across centuries, those books are where this work continues.
Thank you for reading all five parts. That already puts you ahead of 70 million people.