The Context Bottleneck


There’s a joke making the rounds that we’ve reached the age where AI has taken flight — and humans are just the “context bottleneck to productivity.”

I laughed the first time I heard it. Then I sat with it for a minute. Then I stopped laughing.

Because the joke lands. I feel it every day. The AI could keep going. It could scaffold the next feature, write the tests, open the PR. But it’s waiting on me. Waiting for me to review, approve, clarify, redirect. I’m the slowest part of my own workflow.

I wrote about this shift a few weeks ago — how the ratio flipped from 80% writing code to 80% orchestrating. The code gets written in minutes now. The context transfer takes hours. CLAUDE.md files, PRDs, conversation resets, prompt tuning — that’s where my time goes. And that ratio keeps tilting.

So yeah. The bottleneck framing? It’s not wrong.

Where the hope is

Here’s what I keep coming back to: the AI is fast, but it doesn’t know what matters.

It can generate ten approaches to a problem. It can’t feel which one fits. It can scaffold a feature in three minutes. It can’t tell you whether that feature should exist. It can write a migration, but it doesn’t know that the team decided last Thursday to deprecate that table.

The bottleneck isn’t human thinking. It’s human judgment. And judgment is still the whole game. The deciding, the “no, not that” — that’s where the value lives right now.

I want to believe that’s durable. That the human in the loop isn’t a bug in the system — it’s the part that makes the system worth running.

Where the doubt creeps in

But I’d be lying if I said I was confident about that.

Six months from now, the models will hold more context. The configuration tax I pay today — the rules files, the slash commands, the careful conversation resets — might just disappear. The bottleneck I represent gets thinner.

A year from now? Maybe the AI doesn’t need me to structure the PRD. Maybe it reads the codebase, talks to the stakeholders, and figures out what to build on its own. Maybe “judgment” starts looking less like a human skill and more like a training objective.

Five years from now? I don’t know. And that’s the part that sits with me.

I’ve spent over twenty years building a career on writing software. The craft shifted once already — from writing code to orchestrating agents. I adapted. I built systems around it. But what if it shifts again? What if the next version doesn’t need an orchestrator?

Where I actually am

I don’t have a clean answer. I’m not going to pretend I do.

Right now, today, I still matter in this loop. My judgment catches things the AI misses. My context about the team, the users, the history — that stuff is load-bearing. Remove me and the output gets faster but worse.

But I’m watching that gap close. Month by month. And I’m trying to figure out what to invest in that stays valuable even if the gap closes all the way.

Maybe it’s taste. Maybe it’s something I haven’t named yet.

I’ll keep building. I’ll keep writing about it. And I’ll keep sitting with the question I can’t answer: where do I fit in this?
