April 2026

I experience cognition
the way AI computes.
That is not a metaphor.

I am not a neuroscientist. I am an autistic leader who has spent fifteen years navigating organizations by reasoning explicitly about things most people process automatically. When I started working seriously with AI, something became clear: I was not learning a new tool. I was meeting a system that reasons the way I do.

There is a dismissal that circulates whenever someone makes a strong claim about what AI can actually do. It goes: "It’s just predicting tokens." The implication is that prediction is a lesser version of cognition — mechanical, shallow, not the real thing.

I have never found that convincing. And I want to explain why — not from a position of technical expertise, but from the inside of a cognitive experience that gives me a specific vantage point on this question.

I experience my own thinking explicitly. When I process a social situation, I am not reading the room automatically. I am analyzing observable features, retrieving learned patterns, selecting a response based on explicit rules I have built over time. I can describe the process because the process is visible to me. It does not happen beneath my awareness. It happens as my awareness.

When researchers in computational neuroscience describe what large language models do — detect patterns in data, build internal representations, generalize rules across contexts, produce outputs based on learned regularities — I recognize it. Not because I have studied transformer architecture. Because I experience a version of it every day.

Researchers describe autistic cognition as more explicit, more prefrontal, more systematic than neurotypical cognition. That description fits how I experience it. And it is structurally similar to how AI systems compute.

The explicit versus implicit distinction

The insight I keep returning to is not mine originally — it comes from researchers in cognitive neuroscience, and I am connecting it to my lived experience, not originating the science. But the connection feels important enough to name.

Human cognition broadly operates through two strategies. One is fast, automatic, and largely unconscious — running through structures involved in habit, pattern recognition, and social inference. This is what lets neurotypical people "just know" how someone is feeling, or "just sense" that a meeting is going badly, without consciously analyzing why. The process happens beneath awareness.

The other strategy is slower, deliberate, and conscious — explicit analysis, working memory, rule-based reasoning. This is the one that can explain itself. It is more effortful. It is also more visible to the person running it.

Autistic cognition tends to rely more heavily on the second. Not because the first is unavailable, but because it is less reliable — the automatic social inference that neurotypical people receive as signal often arrives for me as noise, or does not arrive at all. So I built the explicit version. Consciously, over years, I constructed frameworks for situations that neurotypical peers navigate automatically.

How neurotypical peers often describe it:
- "I just read the room"
- "I could feel the tension"
- "It was obvious something was wrong"
- "I don’t know how I knew — I just did"

How I experience the same situations:
1. Observe specific features — tone shift, posture change, timing
2. Match those features to learned patterns
3. Infer state from the pattern match

I know because I analyzed — and I can show the work.
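The three-step process above can be sketched, very loosely, as a rule-based classifier. Everything here is invented for illustration: the feature names, the learned patterns, and the matching rule are hypothetical stand-ins, not a model of anyone's cognition.

```python
# A loose, hypothetical sketch of explicit social inference:
# observe features, match them against learned patterns, infer a state.
# All feature names and patterns are invented for illustration.

LEARNED_PATTERNS = {
    "tension": {"tone_drop", "crossed_arms", "long_pauses"},
    "engagement": {"leaning_in", "quick_replies", "eye_contact"},
}

def infer_state(observed_features):
    """Return the best-matching state plus the evidence for it."""
    best_state, best_overlap = None, set()
    for state, pattern in LEARNED_PATTERNS.items():
        overlap = pattern & observed_features
        if len(overlap) > len(best_overlap):
            best_state, best_overlap = state, overlap
    # "Showing the work": the overlap is the explicit evidence trail.
    return best_state, sorted(best_overlap)

state, evidence = infer_state({"tone_drop", "long_pauses", "eye_contact"})
print(state, evidence)  # tension ['long_pauses', 'tone_drop']
```

The point of the sketch is the last line: the inference arrives with its evidence attached, which is exactly what the implicit version cannot provide.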

When I understood that AI systems also work by detecting patterns, matching features, and generating outputs from learned regularities — I did not feel like I was learning something foreign. I felt like I was reading a description of something familiar.

Why "just predicting" is not a dismissal

The leading theory in computational neuroscience — predictive processing — holds that the brain is fundamentally a prediction engine. It generates expectations, compares them to what actually arrives, and updates its models based on the gap. This is not a fringe position. It has become the dominant account of how cognition works at the computational level.

For context, not authority
Predictive processing theory is associated primarily with the work of Karl Friston and Andy Clark. I am not in a position to evaluate the scientific debate around it — I am noting that it exists, that it is influential, and that it resonates with my experience of my own cognition in ways I find meaningful.

If prediction is what cognition is — not a lesser version of it, but the actual architecture — then "just predicting tokens" is not a dismissal. It is a description of the mechanism. And the mechanism, running at sufficient scale on sufficient data, produces the range of capabilities we observe.

I know this not because I can prove it scientifically. I know it because I experience my own intelligence as explicit prediction and error correction, and I know that is real intelligence. When I am wrong about something, I update. When a pattern breaks, I revise the model. When I encounter something genuinely novel, I reason from first principles rather than relying on intuition I do not have. That is the process. It produces real outcomes.
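That predict-compare-update loop is simple enough to write down. The sketch below is an illustration of the loop's shape, not a claim about brains or transformers: the learning rate, the data, and the scalar model are all arbitrary choices of mine.

```python
# Minimal sketch of a predict-compare-update loop, the core move
# predictive processing attributes to cognition. The learning rate
# and the observations are arbitrary; this illustrates the loop's
# shape, nothing more.

def update(prediction, observation, learning_rate=0.3):
    error = observation - prediction           # compare expectation to reality
    return prediction + learning_rate * error  # revise the model by the gap

prediction = 0.0
for observation in [1.0, 1.0, 1.0, 1.0]:
    prediction = update(prediction, observation)

print(round(prediction, 3))  # moves steadily toward 1.0
```

Each pass shrinks the gap between expectation and evidence; nothing in the loop cares whether the thing running it is wetware or a weight matrix.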

The people dismissing LLMs as "just pattern matching" are, in most cases, running on pattern matching themselves — they just experience it implicitly, so they do not recognize it as computation. I recognize it because I have always had to do it consciously.

What this means for the brain economy

I want to be careful here about what I am and am not claiming.

I am not claiming that autistic cognition is superior. I am not claiming that AI and the human brain are the same thing. I am not offering a theory of consciousness or a position on what AI "really" understands.

What I am observing — from the inside of fifteen years of navigating organizations through explicit reasoning — is that the cognitive profile the dominant leadership model has been optimizing for is the implicit, automatic, socially fluent one. And that profile is increasingly being assisted, augmented, and in some work contexts replaced by systems that do the explicit version.

The World Economic Forum has named analytical thinking, pattern recognition, and cognitive flexibility among the most valued skills for the next decade. These are the competencies that explicit, deliberate reasoning develops. They are what autistic leaders have been building, consciously and necessarily, their entire careers.

I am not making a moral argument here. I am making an economic one. As AI absorbs more of the implicit processing work — the formatting, the social translation, the pattern recognition that can be offloaded — what remains irreplaceably human is the deliberate judgment, the explicit reasoning under uncertainty, the ability to examine your own thinking and catch where the model is wrong.

Those are the skills I have been developing out of necessity. They are also the skills the brain economy is beginning to price correctly.

What it means practically

Working with AI as an autistic leader feels different from what I hear neurotypical colleagues describe.

I do not tend to over-trust AI outputs, because I am pattern-matching the reasoning rather than the affect. A confident-sounding output that is wrong does not feel authoritative to me — it feels like a pattern that needs checking. I naturally structure prompts the way AI processes them, because I am already accustomed to decomposing ambiguous situations into analyzable components before acting.

And I find the Q2 quadrant of the AI Cognitive Strategy Matrix — using AI to expand and scaffold my own reasoning, rather than replace it — genuinely productive in a way that I think connects to this. Explicit thinkers working with systems that compute explicitly: the interface is low-friction because the cognitive styles are compatible.

Related Framework
AI Cognitive Strategy Matrix
The routing logic this article implies — what to preserve in human judgment, what to enhance with AI, what to offload entirely — is formalized in the AI Cognitive Strategy Matrix. The Q2 quadrant (Cognitive Leverage) is where explicit thinkers tend to get the most return: AI as scaffolding for reasoning you are already doing deliberately.


I want to end where I began. I am not a neuroscientist and I am not an AI researcher. I am someone who has navigated fifteen years inside organizations as an autistic leader, building explicit frameworks for situations that most people handle automatically, and who found — when AI arrived as a serious tool — that the interface felt familiar in a way I needed to articulate.

The brain that was told it was processing the world wrong turns out to have been doing something specific: reasoning explicitly, building models consciously, updating from evidence rather than intuition. That is not a disability. In the brain economy, it is an architecture.