
A response to a haunting thought experiment — and a search for what remains.
I want to start with a disclaimer, because the piece I'm about to reference deserves one.
CitriniResearch published a macro memo — written as if from June 2028 — that I cannot stop thinking about. It's framed as a scenario, not a prediction: a fictional future memo looking back at how the global economy unraveled after AI began replacing white-collar workers at scale. It describes unemployment at 10.2%, the S&P 500 down 38% from its 2026 peak, a private credit crisis triggered by the implosion of SaaS companies, and a feedback loop with no natural stopping point. The authors are careful to say: this is not a prediction. It's a stress test of optimism. A left-tail scenario worth modeling.
But here's what stayed with me after reading it — not the macroeconomics, not the asset class analysis, not even the eerily plausible collapse of mortgage underwriting assumptions.
It was this line, buried near the end:
"Throughout all of modern economic history, human intelligence has been a scarce resource... Every institution in our economy — from labor markets to mortgage markets to the tax code — was designed around this assumption."
And then: "We are now experiencing the unwinding of that premium."
That sentence doesn't just describe an economic crisis. It describes an existential one.
The Real Question Nobody Is Asking
Nearly all serious AI discourse right now is about either capabilities (what can it do?) or economics (who profits?). The CitriniResearch piece is the most rigorous economic bear case I've read — it traces the transmission mechanisms with surgical precision, from SaaS contract renegotiations to private credit defaults to the $13 trillion mortgage market.
But it sidesteps — perhaps intentionally — the deeper question:
If human intelligence is no longer scarce, what exactly are humans for?
This isn't a philosophical indulgence. It's the most practical question of our time. Because "meaning" isn't just a spiritual concept — it's the hidden load-bearing structure of human motivation, mental health, civic participation, family formation, and economic behavior.
When people lose not just their jobs but their sense of contribution, the economic models become the least of our problems.
What We've Confused for Identity
We've spent roughly 200 years constructing a civilization where what you do defines who you are. Your profession is your introduction. Your output is your worth. Your productivity is your moral standing.
Ask someone what they do and watch how quickly they respond. The answer comes before they've even thought about it, because it lives not in their conscious mind but in the layer of identity beneath it.
This was always a fragile arrangement. We just never had a reason to notice.
The Industrial Revolution displaced muscle. Society adapted — awkwardly, painfully, over generations — by relocating human meaning from physical labor to intellectual labor. The factory worker became the knowledge worker. The craftsman became the consultant. The physical body was automated away, but the mind remained sovereign.
What the CitriniResearch scenario describes is the next displacement. And this time, there is no adjacent category to retreat to. You cannot out-think a system that thinks faster, cheaper, and without ever needing sleep or health insurance.
So where does that leave us?
The Things That Don't Compress
Here's my actual belief, not as optimism-signaling but as a structural observation:
There is a class of human experiences and capabilities that do not benefit from optimization.
They are not valuable despite being inefficient. They are valuable because they are inefficient. Because they require presence, imperfection, time, and irreducible humanness.
Let me be specific.
Connection. The CitriniResearch piece notes, almost as an aside, that "we overestimated the value of 'relationships'" — finding that much of what we called relationships were really just frictions with a friendly face. I think this observation is both correct and tragically incomplete. It's true for transactional relationships. But it misses the category of relationships that are entirely non-transactional — the ones that exist for no other reason than shared presence and mutual recognition. A parent reading a bedtime story slowly, badly, with all the wrong voices, is doing something that an AI narration service cannot replicate. Not because the AI reads worse, but because the point was never the reading.
Making things by hand. In a world where any text, image, piece of music, or software can be generated on demand, the act of making something yourself becomes more meaningful, not less. The hand-thrown pot, the home-baked bread, the garden grown from seed — these retain their value precisely because they are slow and personal. The maker knows this even if the market doesn't.
Presence in difficult moments. Grief, illness, birth, dying — these are the coordinates of a human life, and they have never been about efficiency. No one at the bedside of a dying parent is thinking about optimizing the interaction. The value of human presence in extremis is not informational. It cannot be outsourced to a system that processes language, however beautifully.
Teaching as relationship. The transmission of knowledge is not really about the knowledge. A great teacher doesn't just explain things well. They model how to be in the world — how to be curious, how to tolerate uncertainty, how to fail and continue. This is an embodied, relational act. It happens through hundreds of micro-interactions, most of which are invisible to any curriculum. You can build excellent AI tutors. You cannot build a mentor.
Witnessing and being witnessed. This might be the most underrated human need. We don't just want to experience things. We want someone else to know that we experienced them. This is why we still call our mothers when something good happens. Why we still gather for funerals. Why we keep journals. The desire to be known — really known, by another consciousness that could choose not to pay attention but chooses to — is something that AI can simulate but never fulfill, because the value comes specifically from knowing that the other party is not obligated by code.
The Shift We Need to Make (And Why It's Hard)
Everything I just described is economically marginal.
That's the real crisis. Not that these things will disappear, but that our civilization has no good mechanism for honoring them.
We pay therapists for presence. We pay teachers for content delivery. We pay nurses for procedures. But the actual valuable thing in each of those roles — the witnessing, the modeling, the care — we don't know how to price, so we tend not to.
If AI collapses the economic value of the informational component of these jobs, two things could happen.
In the pessimistic scenario: we lose the jobs, assume the valuable part comes bundled in the job title, and slowly realize it doesn't. We build better and better AI interfaces and feel increasingly hollow.
In the optimistic scenario: we finally have to reckon with what we actually valued all along. We start building social and economic structures that support the human relational layer explicitly — not as a side effect of information work, but as the work itself.
This is harder than it sounds. It requires changing what we measure, what we fund, and — most importantly — what we respect.
What I'm Choosing to Do With This
The CitriniResearch memo ends with: "The canary is still alive."
It's February 2026. We are, according to their scenario, at the last moment before the feedback loops activate. We have, in theory, time.
I'm not an economist and I'm not writing a policy paper. But I am a person who works in technology and thinks about these things daily. Here is what I'm actually doing with this:
Investing deliberately in inefficiency. Not as a luddite, but as someone who recognizes that the slow things are now the rare things. I'm learning to cook food I could order faster. I'm writing letters I could send as messages. I'm having long conversations I could replace with a summary. These are not acts of nostalgia. They are acts of insistence — insisting on the kind of experience that cannot be replicated.
Redefining productivity in my own head. I spent years measuring my days by output. I am slowly, imperfectly, relearning to measure them by depth of attention and quality of presence. This is uncomfortable in a culture that rewards throughput. Do it anyway.
Paying attention to what I uniquely bring. Not skills — anyone can build skills. But the specific texture of how I think, what I notice, the particular way I care about certain problems. These are not optimizable. They are mine. The more AI can do, the more precious the irreducibly personal becomes.
Telling people what they mean to me. This sounds small. It is not small. In a world optimizing for efficiency, the explicit act of saying "you matter to me, specifically, as a person" is increasingly countercultural. Do it more.
The Bigger Frame
The CitriniResearch scenario imagines a crisis of economic displacement. I believe it is more accurately described as a crisis of meaning infrastructure — the systems by which humans understand themselves to matter.
Economic disruption is the visible symptom. The underlying condition is that we built a civilization that told people their worth was proportional to their cognitive output, and now we've built a machine that does cognitive output better and cheaper.
We have two choices. We can patch the economic system — redistribution, UBI, AI taxes, whatever the politics will allow — and hope that meaning reconstitutes itself. Or we can do the harder thing: consciously rebuild our meaning infrastructure from first principles, asking what human life is actually for once survival and cognitive labor are taken care of.
I don't think this is a question AI can answer for us. Not because AI isn't smart enough. But because the answer has to come from the asking — from the distinctly human act of sitting with uncertainty, looking at other humans, and deciding together what matters.
That might be the most important work left.
This piece was inspired by and draws on a macro scenario memo by CitriniResearch (published February 2026), which imagines an AI-driven economic crisis from the vantage point of June 2028. I strongly recommend reading the original — it is more rigorous, and more unsettling, than my response does justice to. The views here are my own.
If this piece made you think of someone, send it to them. The act of sending is the point.