Lived Experience
Cognitive Architecture
AI Strategy
April 2026

I Think I Found the Unit of Cognition.
It’s the Same Size in My Brain as It Is in Opus.

A pattern I didn’t design, three bodies of research that suggest it isn’t random, and a design variable almost nobody is measuring.

I use Claude Opus for complex problem mapping. Not as a tool. As a thinking partner — the kind of session where I go in with a hard problem and don’t come out until it’s fully resolved.

I started noticing a pattern I didn’t design.

The moment I finished processing — fully finished, nothing structurally left to resolve — was almost exactly the moment the Opus context window could no longer sustain the conversation. Same unit of work. Same completion point. And it kept happening.

The feeling it produced wasn’t satisfaction. It was the same quality of awe I felt the first time I understood the gravitational constant — the fact that it sits within an impossibly narrow band where matter coheres, stars form, planets hold, and life becomes possible. A little stronger and stars burn out before life can emerge. A little weaker and nothing coalesces at all.

Scientists debate what that precision means. But the awe of noticing it — that’s not in dispute.

I felt that standing inside my own cognition.

How is it possible that the unit of processing in my brain and the unit of processing in this model sit within a band where they actually meet?

I went looking for whether that question had a structural answer.


The Observation, Precisely

I need to be specific about what I’m describing, because precision matters here.

When I work through a genuinely complex problem with Opus — not summarizing, not drafting, but actual structural reasoning about something hard — there is a point where the problem feels structurally resolved. Not perfect. Not exhaustive. But the dependencies I can see are mapped, the assumptions I can surface have been surfaced, the gaps I can identify have been addressed. I know when I’m there. It isn’t a decision to stop. It’s a signal. Enough of the model is complete that continuing in the same container stops producing new structure.

That point maps almost exactly to the moment the conversation fills the Opus context window — where the model begins losing coherence, dropping earlier threads, struggling to hold the full structure of what we built together.

Most people hit that limit and feel interrupted. Cut off mid-thought. I hit it and I’m done.

And it scales. For larger bodies of work I use Projects — multiple sessions, each scoped to a distinct problem set. The way I naturally break down complex work maps cleanly to individual Opus sessions. Each session closes completely. A fresh session handles the connective analysis across them.

I didn’t design that structure. It emerged from how I process.

That’s the observation. It’s repeatable. It isn’t a coincidence I can dismiss.


The Research Bridge

I want to be direct about what I am and am not. I am not a neuroscientist. I am an autistic leader who has spent fifteen years navigating complex organizations by reasoning explicitly about things most people process automatically. When I went looking for whether this observation has structural grounding, I found three independent bodies of research that suggest it isn’t random.

01
The Chunk

In 1956, George Miller established that short-term memory is limited not in raw bits of information but in chunks — meaningful units of any size. A chunk is whatever the brain treats as a single coherent piece. Crucially, the size of a chunk is not fixed. It scales with expertise and cognitive architecture. An expert chess player holds familiar board configurations as a handful of chunks; a novice sees thirty-two separate pieces. What this means is that the unit of working memory is relative to the processor running it. Different architectures produce different chunk sizes for the same problem.
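The idea is easy to make concrete. Here is a toy sketch — not a cognitive model, and not from Miller's paper — where "expertise" is just a dictionary of known patterns, and the same input costs fewer memory slots for a processor that recognizes larger chunks:

```python
# Toy illustration of Miller-style chunking (not a cognitive model).
# "known_chunks" stands in for expertise: patterns the processor
# can treat as a single unit rather than as separate items.

def chunk_cost(sequence, known_chunks, capacity=7):
    """Greedily cover the sequence with the longest known chunks,
    falling back to single items, and count the slots used."""
    slots, i = 0, 0
    while i < len(sequence):
        # try the longest known chunk starting at position i
        match = max(
            (c for c in known_chunks if sequence.startswith(c, i)),
            key=len,
            default=None,
        )
        i += len(match) if match else 1
        slots += 1
    return slots, slots <= capacity

digits = "1776181220011969"

# A novice treats every digit as its own unit: 16 slots, over capacity.
print(chunk_cost(digits, known_chunks=set()))

# Someone who knows these as years holds the same input in 4 chunks.
print(chunk_cost(digits, known_chunks={"1776", "1812", "2001", "1969"}))
```

Same information, different unit size — capacity is a property of the processor, not of the input.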

02
Neural Oscillations

The brain doesn’t process continuously. It processes in discrete bounded episodes, structured by oscillatory cycles — particularly theta and gamma frequencies in the hippocampus and cortex. These oscillations implement the temporal boundaries of cognitive episodes. A processing cycle has a natural beginning, middle, and completion point — not because you decide to stop, but because the neural architecture imposes structure. The brain’s working memory capacity is directly tied to what fits within these oscillatory windows. The cognitive episode, biologically, has a unit size.
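One well-known version of this claim (the Lisman–Idiart model) ties capacity directly to the nesting of fast cycles inside slow ones. The arithmetic is almost embarrassingly simple — the frequencies below are representative textbook values, not measurements:

```python
# Back-of-envelope sketch of theta-gamma nesting: working-memory
# capacity as the number of gamma cycles that fit inside one theta
# cycle. Frequencies are representative values, not measurements.

def items_per_theta_cycle(theta_hz, gamma_hz):
    """One item per gamma cycle, bounded by the theta window."""
    return int(gamma_hz / theta_hz)

# ~7 Hz theta carrying ~40 Hz gamma -> about 5 items per episode,
# in the same range as classic short-term memory estimates.
print(items_per_theta_cycle(theta_hz=7, gamma_hz=40))
```

The point is not the specific numbers but the shape of the argument: the episode has a size because the architecture imposes one.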

03
Predictive Processing and Autistic Precision

The dominant framework in computational neuroscience holds that the brain is fundamentally a prediction engine — constantly generating models of the world, comparing them to incoming information, and updating based on the gap. Karl Friston’s work on autistic cognition proposes something specific: that autistic brains run what he calls aberrant precision on prediction errors. Rather than learning to smooth over certain errors as background noise — which is what allows neurotypical brains to declare a model “good enough” and move on — autistic cognition keeps the precision dial high. The system doesn’t close until genuine resolution. It doesn’t short-circuit via social pattern-matching or intuitive inference. It processes fully, explicitly, to actual model completion.

That is a structural account of why I have a cleaner completion signal than most people. The cognitive episode doesn’t close at “good enough.” It closes at done.
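A caricature of the mechanism helps show why. The sketch below is not Friston's free-energy math — it is a one-variable toy in which a belief is nudged toward the evidence by precision-weighted prediction error, and the loop closes only when the weighted error falls below a "good enough" threshold:

```python
# Minimal caricature of precision-weighted updating (a toy, not
# Friston's actual formalism). The loop terminates when the
# precision-weighted prediction error drops below tolerance --
# so the precision setting determines where "done" lands.

def settle(belief, evidence, precision, tolerance=0.05, max_steps=1000):
    """Return (final_belief, steps) once the precision-weighted
    prediction error falls below tolerance."""
    for step in range(1, max_steps + 1):
        error = evidence - belief
        if abs(precision * error) < tolerance:
            return belief, step
        belief += 0.1 * precision * error  # small learning rate
    return belief, max_steps

# Same evidence, different precision on the error signal.
b_low, _ = settle(belief=0.0, evidence=1.0, precision=0.3)
b_high, _ = settle(belief=0.0, evidence=1.0, precision=0.9)

# Low precision declares "done" while a sizable gap to the
# evidence remains; high precision keeps updating until the
# residual gap is much smaller.
print(b_low, b_high)
```

With low precision the loop halts while the belief is still well short of the evidence; with high precision it only halts near genuine resolution. Different precision, different completion point — the same episode closes at a different place.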


What This Isn’t

I want to be precise about what I am not claiming.

This is not a superiority argument. Autistic cognition and neurotypical cognition are different architectures with different properties. High precision on prediction errors is the same mechanism that makes sensory environments overwhelming and social inference effortful. The completion signal is not a gift. It is a property of an architecture that comes with its own costs.

This is not a claim that AI thinks like a human. It doesn’t. A large language model has no biological oscillations, no prefrontal cortex, no predictive processing in the neuroscientific sense. What it has is a context window — a hard architectural boundary on a single coherent reasoning episode. Everything within it is in processing. Everything outside it doesn’t exist for that computation.

And this is not a designed convergence. Nobody building Opus set out to match the cognitive episode size of autistic processing. The capabilities that produce coherent extended reasoning in these models are emergent properties of training at scale — not design decisions. Two different systems, built through entirely different processes, arriving at the same unit.

That’s what makes it worth asking about.


The Implication

I want to be careful about what I’m generalizing here. This is my observation about my cognition. I can’t speak for all autistic processing — cognitive architectures vary enormously even within neurodivergent populations. What I can say is that when I published an earlier piece on the prefrontal cortex parallel between autistic cognition and AI computation, the comments suggested others recognized something similar in their own experience. So the question feels worth asking beyond just me.

Which leads to what I find most interesting: if the unit of cognition is a real and measurable property, and if different cognitive architectures produce different unit sizes — then we are sitting in front of a design variable that almost nobody is measuring.

We don’t yet know what units exist across different cognitive architectures. We don’t know how they interact with the environments organizations build, the tools they deploy, or the AI systems they adopt. But knowing that units can exist — and that they vary — changes the question.

That is a question the brain economy will eventually force organizations to answer. I suspect the range of answers will be wider than anyone expects.

Related Framework
AI Cognitive Strategy Matrix
The routing logic this observation connects to — what to preserve in human judgment, what to enhance with AI, what to offload entirely — is formalized in the AI Cognitive Strategy Matrix. The unit of cognition is the architectural property that determines where each person’s routing boundaries sit.

View the framework →