Brain Economy
Leadership Systems
AI Strategy
April 2026

The article named the symptoms.
Here is the architecture.

Fortune, Harvard, Wharton, and Deloitte are converging on a diagnosis. Each finding maps to a body of work already built. This is what they are actually describing — and why the gap between naming it and designing for it is where organizations keep failing.

In Response To
AI's workforce has a human-design gap — and researchers from Harvard, Wharton, and Deloitte say it's approaching doomsday
Fortune · March 29, 2026

A Fortune article published this week cites researchers from Harvard Business School, the Wharton School, and Deloitte converging on a single, uncomfortable finding: companies are spending 93 percent of their AI adoption budgets on technology and 7 percent on figuring out how human beings are supposed to work alongside it.

The researchers are asking the right questions. But the framing in the article stops short of the architecture underneath the problem. Each finding they surface has a more precise name — and a structural answer — in the body of work I have been building at The Autistic Leader™.

This is not a critique of the article. It is a translation of it.

93/7
AI budget on technology vs. human design — Deloitte, 2026
This is not a resource allocation problem. It is a cognitive design problem.

The Symptom: 93/7

The headline statistic is stark — and the researchers are right to surface it. But calling it a "resource allocation" problem misframes the fix. Organizations are not underspending on people because they do not care. They are underspending because they do not have a framework for what "investing in the human side of AI" actually produces.

Technology investments are legible. You can benchmark a use case, show a board a number, or point to throughput metrics. Workforce transformation is harder to quantify because the question is not "what does the tool do" — it is "what should human cognition be doing, and what should it stop doing." Without a framework for that question, the 7 percent stays at 7 percent not because of neglect, but because the work is not yet specified.

Specifying that work — precisely, structurally, at the level of individual cognitive decisions — is what the AI Cognitive Strategy Matrix was built for. The 93/7 split is the symptom. The absence of a cognitive architecture is the cause.

Four Findings. Four Translations.

The Fortune article surfaces four distinct observations from its researchers. Each one is pointing toward something real. Here is the more precise architecture underneath each finding.

Harvard Business School
"Wayfinding vs. pathfinding"
↓ What this is actually describing
Intuition fails structurally under complexity
Linda Hill and Jason Wild distinguish between leaders who set a destination and drive toward it ("pathfinders") and those who navigate fog ("wayfinders"). It is a useful metaphor. But the neuroscience underneath it is more precise than a metaphor allows.

Fast, intuitive social inference — the cognitive mode that drove most 20th-century leadership — works when environments are stable, participants share similar backgrounds, cues are consistent, and consequences are local. None of those conditions describe a modern enterprise.

As organizations scale across distributed teams, unfamiliar technologies, regulatory regimes, and long-tail risk, intuition does not just become less reliable. It becomes actively misleading. Confidence decouples from correctness. Charisma becomes a poor proxy for judgment. The leader who "reads the room" well in a stable context reads it incorrectly in a complex one — and does not know it.

This is not a metaphorical problem. It is a neurocognitive one. The prefrontal route — deliberate, explicit, slower — is the correct routing mechanism for complexity. Most leadership systems are still optimizing for the subcortical route. That is the architectural failure Harvard is circling.
Wharton School / GBK Collective
"The donut hole" — middle managers stuck between C-suite investment and native AI workers
↓ What this is actually describing
Cognitive misrouting — translation tax
Wharton's research identifies a gap at the center of most large organizations: the C-suite is investing in AI, younger workers have grown up using it, and middle managers are the ones being left behind. The researchers frame this as "reluctancy" — passive or active resistance.

I would reframe it as cognitive misrouting.

Middle managers in most organizations are burning their highest-quality cognitive capacity — prefrontal, deliberate, pattern-recognition capacity — on social translation overhead: formatting communication upward, navigating political ambiguity, smoothing conflict, performing fluency they may or may not feel. That is not their job. That is the cost of operating in a system not designed for explicit cognition.

The donut hole is not a skills gap. It is a design gap. When you route a leader's best thinking into social overhead, you have nothing left for actual judgment. The question is not why middle managers are resistant to AI. It is what cognitive capacity they would have for AI adoption if the translation tax were removed.
Deloitte
"Workforces are like antigens — they fight what they cannot see as making their jobs better"
↓ What this is actually describing
Emotional debt and the undesigned cognitive role
Deloitte's Lara Abrash describes workforce resistance to AI as an immune response — people fighting what they cannot see as improving their situation. That is correct, but it understates the mechanism.

When organizations deploy AI without telling people what their cognitive job actually is now, people are not simply confused. They are carrying the weight of an unresolved architecture. They do not know which of their thinking is still valued, which has been replaced, and which should be redirected. That ambiguity does not produce neutral waiting. It produces the protection of everything — which is functionally the same as surrendering everything.

This is the emotional debt mechanism. People absorb the weight of systems that were never designed to account for them. The resistance is not irrational. It is the correct response to an organizational structure that has changed the inputs without changing the job description of the human.

The fix is not change management. It is cognitive role design — specifying, at an architectural level, what human judgment is for now.
Deloitte
"We need EQ in the workforce"
↓ What this is actually describing
Emotional competence vs. emotional intuition — the distinction that changes everything
Abrash cites emotional and social intelligence as one of three human capabilities that will matter most in the AI era. She is right that it matters. But "EQ" is one of the most consistently misused terms in organizational leadership.

What most organizations call "high EQ" is emotional intuition — the automatic resonance with others' emotional states, enabled by fast subcortical processing. It is the ability to feel the room, read what is unspoken, and respond in kind. This is valuable. It is also unreliable when the room contains people whose emotional signaling does not match your prediction model.

Emotional competence is different. It is the ability to understand others through reasoning, pattern recognition, and explicit validation — even when automatic resonance fails. It is slower. It is more effortful. And it is what actually scales under complexity, diversity, and high stakes.

The organizations that conflate the two are not just making a semantic error. They are systematically misidentifying capability — and systematically undervaluing the leaders who demonstrate competence in favor of leaders who demonstrate intuition. The leaders who get misread as "low EQ" in most organizational cultures are frequently the ones demonstrating the higher-order skill.

The Fifth Finding They Did Not Name

The article is thorough. But there is one observation missing from all four researchers' framing — and it is the most important one for understanding why the 93/7 gap persists.

The model of leadership being disrupted right now is not just "decisive pathfinding." It is a specific cognitive architecture: one that optimizes for speed, social fluency, intuitive authority, and implicit inference. That architecture was built by and for a specific kind of brain — one that processes social cues automatically, reads tone without effort, and produces consensus through presence.

The brain economy does not reward that architecture. It rewards accurate pattern recognition under ambiguity, deliberate risk weighting, explicit signal separation, and the ability to see what is actually there rather than what social prediction suggests should be there.

Neurodivergent cognition — and autistic cognition in particular — is not the edge case of this transition. It is the proof of concept. The same cognitive architecture that struggles to perform neurotypical social signaling is the one that catches risk signals early, separates observation from interpretation, weights ambiguity as risk rather than nuance, and maintains judgment integrity under social pressure.

The researchers from Harvard, Wharton, and Deloitte are not describing a technology problem. They are describing the structural consequences of having built every leadership evaluation system around one cognitive profile — and that profile's failure under the conditions that now define enterprise complexity.

This is why the architecture matters. Not as a framework for accommodation. As a framework for redesign.

What the Architecture Looks Like

The answer to the 93/7 problem is not "spend more on people." It is to build a cognitive infrastructure that answers three questions at every level of the organization:

1 — What requires human judgment here?

Not "what do humans do" — but what specifically requires the prefrontal, deliberate, explicit reasoning that AI cannot replicate. Board decisions. Executive narrative. Judgment calls under uncertainty. Principled dissent. These are Cognitive Integrity work. AI functions as a mirror here, not an author. The quality of the output depends on the quality of the reasoning, not the quality of the prompt.

2 — Where should AI multiply human thinking?

Scenario mapping. System diagnosis. Risk architecture. Complex whiteboarding. This is Cognitive Leverage — where AI lowers the translation cost of explicit reasoning and allows pattern-based thinking to produce its highest return. This is not offloading. It is amplification of what the human does best.

3 — What can be delegated entirely?

Email drafting. Summaries. Repetitive reporting. Social formatting. This is Cognitive Offload — and critically, it is where the translation tax lives for many neurodivergent leaders. Offloading it is not laziness. It is the act of protecting deliberate cognition for where it is irreplaceable.

The 7 percent problem is solved when organizations can map every cognitive role in the organization against this architecture. Not as a policy. As a structural redesign of what leadership is for.
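For readers who think in systems, the three-question routing above can be sketched as a simple classifier. This is purely illustrative — the enum names, example task types, and default rule below are my own hypothetical labels, not a published implementation of the AI Cognitive Strategy Matrix:

```python
from enum import Enum

class CognitiveMode(Enum):
    """Hypothetical labels for the three routing answers described above."""
    INTEGRITY = "requires human judgment; AI as mirror, not author"
    LEVERAGE = "AI multiplies explicit human reasoning"
    OFFLOAD = "delegate entirely; protect deliberate cognition"

# Illustrative mapping of task types to modes (example entries only).
ROUTING = {
    "board decision": CognitiveMode.INTEGRITY,
    "principled dissent": CognitiveMode.INTEGRITY,
    "scenario mapping": CognitiveMode.LEVERAGE,
    "risk architecture": CognitiveMode.LEVERAGE,
    "email drafting": CognitiveMode.OFFLOAD,
    "repetitive reporting": CognitiveMode.OFFLOAD,
}

def route(task: str) -> CognitiveMode:
    """Return the cognitive mode for a task type.

    Unmapped tasks default to INTEGRITY: when a task has not been
    explicitly designed into the architecture, keep human judgment
    in the loop rather than silently offloading it.
    """
    return ROUTING.get(task.lower(), CognitiveMode.INTEGRITY)
```

The default matters more than the mapping: an organization that has not yet classified a task should route it to human judgment, not to the tool — which is the structural opposite of how most AI deployments behave today.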

Thinking Architecture
AI Cognitive Strategy Matrix
The full routing logic — with cognitive allocation principles for every quadrant. How to apply it at the individual, team, and organizational level.

Why This Matters Now

The timing of this Fortune article is not coincidental. The convergence of Harvard, Wharton, and Deloitte on the same diagnosis in the same week signals that the conversation is shifting from "should we invest in AI" to "why is our AI investment not producing the outcomes we expected."

The answer is always the same: because the investment was in the technology, not in the cognitive infrastructure that determines whether the technology routes correctly.

Organizations that close the 93/7 gap by simply spending more on "workforce transformation programs" will get the same result — because the issue is not training volume. It is architectural clarity about what human cognition is for now.

The wayfinding metaphor is useful. But wayfinding without a map is just wandering with better language. The map already exists. The architecture has been built. The work now is translation — from the symptoms named in Fortune to the design decisions that actually change how organizations think.

That is the work of the brain economy.