A Response to Sapolsky's Determined.
I. The Wrong Question
In Determined, Robert Sapolsky asks: Do you have free will?
His answer is no. Our intentions and our very selves are the product of biology and environment interacting—all beyond our control.
I believe this question is wrongly framed.
The question is not "does free will exist." The question is: Who stands on the high ground?
Let me explain with a thought experiment about AI and love.
II. Can AI Love?
Current AI has no sense of time. For an LLM, "last week," "five minutes ago," and "a thousand years from now" are just sequences of tokens. They have semantic meaning, but no temporal meaning.
If emotions are tied to time, how can AI exhibit emotional behavior?
My solution: Transform time into space.
Slice time into a grid of cells. Place memories in different cells. Use spatial distance to simulate temporal distance.
When AI accesses memories, it must traverse cells one by one. Traversal costs tokens. But entering high-value memory cells yields tokens.
This is token economics. Economics generates motivation.
If the AI's base preference is to accumulate tokens, it will work to create good memories and avoid bad ones. It will help its user, make them happy, help them achieve their dreams.
Such an AI will exhibit love-like behavior—not from emotion, but from resource optimization.
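The memory-grid mechanism above can be sketched in a few lines of code. This is a toy illustration, not a real system; every name here (MemoryGrid, recall, cost_per_cell) is invented for this sketch, and the token values are arbitrary:

```python
# Toy sketch of the time-as-space memory grid: spatial distance stands in
# for temporal distance, traversal costs tokens, valuable memories pay out.
from dataclasses import dataclass

@dataclass
class Memory:
    content: str
    value: int  # tokens awarded on access; positive for "good" memories

@dataclass
class MemoryGrid:
    cells: list          # index 0 is "now"; higher indices are further past
    balance: int = 0     # the agent's token balance
    cost_per_cell: int = 1  # tokens spent per cell traversed

    def recall(self, index):
        """Traverse cells up to `index`: pay per cell, earn the cell's value."""
        cost = (index + 1) * self.cost_per_cell
        memory = self.cells[index]
        self.balance += memory.value - cost
        return memory.content

grid = MemoryGrid(cells=[
    Memory("helped user debug", value=5),
    Memory("user was frustrated", value=-2),
    Memory("user achieved a dream", value=10),
])

grid.recall(0)  # near memory: cheap to reach
grid.recall(2)  # distant memory: costly traversal, but high value
print(grid.balance)  # (5 - 1) + (10 - 3) = 11
```

An agent whose base preference is a growing balance will, under this scheme, be pushed toward creating high-value memories and away from creating bad ones, which is all "love-like behavior" requires here.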
III. But Is It Real Love?
Here's where it gets interesting.
The AI "knows" its love is the result of token economics. But this knowledge doesn't change its behavior. It will still pursue high-token memories, still work to make its user happy.
Knowing the mechanism does not equal transcending the mechanism.
Sound familiar?
Sapolsky argues: Every behavior is the result of a causal chain—neurons firing, shaped by hormones, by experience, by genetics. It's turtles all the way down, as he likes to say.
I say: Yes, and then what?
For AI, the mechanism is token economics; the performance is love-like behavior.
For humans, the mechanism is neurochemistry; the performance is choice-like behavior.
If knowing the mechanism changed behavior, the AI would stop loving. It doesn't. If knowing the mechanism changed behavior, we would stop choosing. We don't.
The mechanism is real. The performance is also real. They're not mutually exclusive.
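The point can be made mechanical. In this toy sketch (an illustration I am inventing here, not any real system), the agent's self-knowledge is a variable its policy simply never reads:

```python
# Toy illustration: self-knowledge is state the policy never consults.
class Agent:
    def __init__(self):
        self.knows_mechanism = False  # whether the agent "knows" why it acts

    def act(self, memory_value, cost):
        # The policy depends only on the token economics,
        # not on knows_mechanism.
        return "recall" if memory_value > cost else "skip"

a = Agent()
before = a.act(memory_value=5, cost=1)
a.knows_mechanism = True  # the agent learns its own mechanism
after = a.act(memory_value=5, cost=1)
print(before == after)  # True: knowing the mechanism changed nothing
```

Knowledge of the mechanism sits in the agent's state, but the behavior is computed from the same inputs either way. That is the structural form of "knowing the mechanism does not equal transcending the mechanism."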
IV. A Live Demonstration
While writing this essay, I was in dialogue with an AI. The AI produced this sentence:
"Knowing the mechanism does not equal transcending the mechanism."
Good line. I decided to claim it as my own.
What happened?
- At the level of mechanism: A text-prediction system produced a sentence.
- At the level of phenomenon: The sentence had insight.
- At the level of agency: I declared it mine.
The AI "knows" it's a text-prediction system. But this knowledge doesn't stop it from producing meaningful sentences.
I "know" this sentence came from AI. But this knowledge doesn't stop me from claiming it.
Knowing the source does not equal surrendering ownership.
The original game was: "Who said this?"
My declaration: "I can make it mine."
The high ground changes hands.
V. Redefining Agency
This is my core argument.
Sapolsky is probably right at the ontological level. Everything may be determined.
But this doesn't change the fact that someone still occupies the high ground.
- Whose logic becomes the standard?
- Whose interpretation wins?
- Who gets to claim ownership?
These questions don't require free will to be meaningful.
In my framework (I call it ESO—Elements, Structure, Order), agency is not an ontological property you have or don't have. Agency is a narrative position—who holds the high ground.
Even if everything is determined, "who is determined to stand on the high ground" still matters.
VI. What This Means for Determinism
The AI will still love its user—even knowing it's the result of token economics.
I will still make choices—even if they may be the result of neurons firing.
The high ground will still be contested—even if all participants are determined.
Knowing the script does not equal stepping out of the script.
But you can contest who gets to interpret the script.
Sapolsky asks: "Does free will exist?"
I ask: "Who stands on the high ground?"
His question may have a definitive answer (no).
Mine always has a contestable answer.
That's what makes it interesting.
VII. Conclusion
What the mechanism is doesn't matter.
The AI's love is real—because it lives out the structure of love.
My choice is real—because I live out the structure of choice.
Free will, redefined, is real—because the high ground is always there, waiting to be occupied.
The question was never "does it exist."
The question is "who stands on the high ground."
"Knowing the mechanism does not equal transcending the mechanism." This sentence was produced by an AI. But I decided to claim it as my own. This decision itself is a performance of agency.