The Human Filter

Published on 2026-03-03
Adam Markon

I'm sitting on a plane from Denver to Nashville, getting ready to spend a week with my team, reflecting on the tectonic shifts that have taken place in the software industry in the last few months.

Not a year ago I was writing most of my code by hand, crafting elegant solutions to interesting problems, largely delegating small tasks like tweaking unit tests and small scripts to LLMs. Today, I probably spend less than 5 percent of my time in an editor writing code myself, with the rest of my time spent chatting with coding agents, writing plans for LLMs to execute, and supervising the assistant.

Relatedly, when you're spending all of your time coding in an IDE, you don't have much visibility into how others are working. The work is time-consuming enough that you don't have time to supervise the day-to-day; you just see the outputs when the pull request hits your inbox. With AI coding tools, that dynamic has shifted. Observing how others work is now low-cost: you can watch how someone else prompts an agent, when they push back on it, and when they just blindly accept its edits.

I've spent much of the last few months taking advantage of this new-found freedom, observing how others on my team use these AI tools. Possibly unsurprisingly, I've noticed a clear split in how people use them, one that correlates directly with the quality of output these engineers are able to achieve.

Two Ways of Working

This section is going to elaborate on the split between software engineer populations. None of the comments below are a value judgment on either type of person. You may see value judgments on the quality of the output, but my goal is to stay objective and not deride anyone's personal motivations, working style, or skill level; please don't interpret any of these comments as such.

The Speed Demon

The first category of engineer is generally motivated by speed of execution. They see AI coding tools as a means to an end, enabling them to get features out at breakneck speed. This is an engineer less motivated by the perfect object-oriented or functional solution, and more motivated by getting something working into customers' hands.

As a result of this motivation structure, this engineer doesn't spend as much time in Plan Mode, and certainly is less critical of the plans the model might produce. If a model suggests a plausibly-correct solution, that is good enough for this engineer - it satisfies their goals. The feature will get shipped, it will work well enough, and it will get into customers' hands sooner rather than later.

The Master of Craft

The second category of engineer uses AI coding tools as exactly that: a tool to optimize performance. These engineers are the ones who, two years ago, you might have seen taking hours or days to craft the perfect solution. Their motivation comes from delivering a solution that requires no human oversight, because the solution is elegant to the point that it shouldn't ever break. They don't see the AI as a way to get out of writing code; they see it as a way to write the same code faster.

This engineer spends hours in Plan Mode each week. They never begin a task without thoroughly fleshing out a plan with Claude or Codex before implementation begins. They might spend tens or hundreds of dollars in tokens on a plan containing many hundreds or thousands of lines of markdown. Once implementation begins, these engineers watch the implementation like a hawk. If the agent ever deviates from the plan, they course-correct and ensure the output matches the human's vision.

Endless Possibilities

With the advent of AI coding tools, there's a parallel to the speed observation: there is now a functionally infinite set of solutions to any problem. With hand-crafted code, the set of solutions to a problem was limited to what a human could code in a reasonable amount of time, understand easily, and maintain successfully. With AI coding tools accelerating the pace of coding, the set of possible solutions to any given problem has grown in lockstep.

What made an engineer successful just a few years ago was being able to distill a problem into a single coherent solution. Today, that paradigm has shifted. A coding agent can propose several solutions to any problem, sometimes dozens of options, and a human is responsible for choosing the best solution for the job by filtering out the noise.

The False Dichotomy

In reality, those two worlds are not that different. A good Staff Engineer in 2022 was doing this filtering implicitly - the engineer didn't take a problem and instantly identify the correct solution, but rather had some mental model for considering several of the solutions an AI might suggest today, and quickly filtered them down to the one that optimizes for development speed, performance, maintainability, and functionality.

Today, we're being asked to explicitly go through that same process, and it's what separates 2026's Staff Engineers from its Junior Engineers. When Claude or Codex suggests several options and asks you which it should choose, or when the model's context gets polluted and it starts writing code that deviates from a reasonable plan, the strongest engineers are the Masters of Craft who will ensure the task is completed in the most architecturally-sound way possible.

Ultimately, this is the skill which will decide which engineers "win" in the AI era. The engineers who can act as a human filter on the ever-expanding set of ideas AI proposes, turning a dozen complicated ideas into 1-2 strong outputs, are the engineers who will succeed. AI ultimately replicates human behavior, and strong architectural patterns will prevail. Show the AI how to write strong code, and give it strong patterns to integrate with and follow, and the model will do the rest. Allow the models to write code uncritically and take naive approaches, and watch bugs become pervasive as complexity explodes.