Agent-First Repos: From DX to AX
The Premise
Clean code was optimized for humans reading code. Vertical slices were optimized for humans shipping outcomes. Agent-first repos optimize for a third thing: machine-assisted change at scale.
When you stop assuming you're going to hand-type every line into an IDE, a bunch of aesthetic conventions matter less, and a bunch of anti-ambiguity conventions matter more than ever.
Agents don't get tired. They get confused.
So the bar isn't "is this elegant?" It becomes: "Can an agent understand the intent, compute the blast radius, verify the change, and know when to stop?"
What Agents Actually Need
I've been thinking about this through the lens of what makes agent-driven workflows succeed or fail. Not in theory, but in practice: watching agents navigate real codebases, succeed at some changes, and get confused by others.
Here's what I've noticed matters:
Explicit intent over implicit convention. Humans can infer that a file named UserService.ts probably handles user-related business logic. Agents can too, usually. But when the inference fails, it fails silently. Explicit markers (type annotations, doc comments on modules, clear naming that describes behavior rather than structure) reduce the surface area for misinterpretation.
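As a sketch of the difference (the module, names, and invariant below are hypothetical, invented for illustration):

```typescript
/**
 * Deactivates a user account without deleting its data.
 * Invariant: a deactivated user keeps its id and history.
 *
 * The doc comment and explicit types state the intent an agent
 * would otherwise have to infer from the file name alone.
 */
interface User {
  id: string;
  active: boolean;
}

export function deactivateUser(user: User): User {
  return { ...user, active: false };
}
```

Nothing here is clever; that's the point. The behavior is stated where a machine reading the file will find it.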
Predictable structure over clever organization. If every feature follows the same file layout, agents can navigate by pattern. If each feature has its own bespoke structure ("well, the auth module is organized differently because..."), agents have to understand the exception. They're bad at exceptions.
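A predictable layout is itself machine-checkable. A minimal sketch, assuming a per-feature file template (the file names below are an invented template, not a prescription):

```typescript
// Assumed per-feature template: every feature directory contains
// these files, so an agent (or a CI check) can navigate and
// validate by pattern instead of learning each module's exceptions.
const FEATURE_TEMPLATE = ["routes.ts", "service.ts", "store.ts", "service.test.ts"];

// Report which template files a feature directory is missing.
function missingFromTemplate(featureFiles: string[]): string[] {
  return FEATURE_TEMPLATE.filter((f) => !featureFiles.includes(f));
}
```

A check like this turns "our structure is consistent" from a cultural norm into something enforceable.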
Small blast radius per change. This one matters more than people realize. If changing a function signature cascades through 40 files, an agent either needs to understand all 40 files (expensive, slow, error-prone) or it makes the change in isolation and breaks things. Codebases with high cohesion and loose coupling aren't just "good design." They're agent-friendly design.
Fast feedback loops. This is the unsexy constraint hiding in the whole DX-to-AX conversation. If "verify the change" means a 45-minute build across three repos, your agent loop becomes aspirational. Agents iterate fast. Their cycle time is limited by the feedback mechanism, not their typing speed. If your CI takes 20 minutes, every agent iteration takes 20 minutes. Suddenly "AI-accelerated development" looks a lot like "waiting for CI, but automatically."
Clear boundaries and contracts. Agents are excellent at working within well-defined boundaries. Type systems, API contracts, schema definitions: these give agents hard constraints to work with. The more of your system's invariants you encode in machine-checkable form, the more confidently an agent can make changes.
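As a small sketch of what "invariants in machine-checkable form" can mean (the `Order` shape is invented for illustration):

```typescript
// A discriminated union encodes the invariant "only shipped orders
// have a tracking id" in the type system, where it is enforced,
// rather than in a comment, where it is not.
type Order =
  | { status: "pending"; items: string[] }
  | { status: "shipped"; items: string[]; trackingId: string };

function trackingLabel(order: Order): string | null {
  // Accessing trackingId without narrowing on status is a compile
  // error, which gives an agent a hard boundary to work within.
  return order.status === "shipped" ? `Tracking: ${order.trackingId}` : null;
}
```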
What Matters Less
Some things we've historically optimized for matter less in an agent-first world:
Line-level aesthetics. Whether you prefer trailing commas, single vs. double quotes, or specific brace styles matters a lot for human readability and team consistency. Agents don't have aesthetic preferences. They can follow any style, as long as it's consistent and enforced (ideally by a formatter, not by code review comments).
Conciseness for its own sake. Developers often prize concise code because it's faster to read. Agents don't get fatigued by verbose code. An explicit 10-line function can be easier for an agent to understand than a clever 3-line one that relies on implicit behavior.
Organizational patterns optimized for browsing. Humans navigate codebases by browsing: scanning file trees, opening files, jumping between definitions. Agents navigate by searching: finding symbols, tracing references, following types. A codebase organized for efficient searching (consistent naming, strong types, explicit exports) serves agents better than one organized for efficient browsing.
The Iteration Speed Problem
I want to come back to the feedback loop point because it's the one most people underestimate.
The promise of agent-driven development is speed: agents can propose, implement, and validate changes much faster than humans. But the "validate" step is bottlenecked by your infrastructure.
Consider the loop:
1. Agent proposes a change
2. Agent implements the change
3. Agent runs verification (tests, type checking, linting)
4. Agent evaluates results
5. Agent iterates if needed
Steps 1, 2, 4, and 5 are near-instantaneous. Step 3 depends entirely on your build and test infrastructure. If verification takes 30 seconds, agents can iterate 100 times in an hour. If it takes 30 minutes, they iterate twice.
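The arithmetic can be made explicit. A toy model, where the five-second figure for the non-verification steps is an assumption:

```typescript
// Iterations per hour, dominated by verification time when the
// agent's own steps (propose, implement, evaluate, decide) take
// only a few seconds combined.
function iterationsPerHour(verifySeconds: number, agentSeconds = 5): number {
  return Math.floor(3600 / (verifySeconds + agentSeconds));
}
```

Plug in 30-second verification and you land around 100 iterations an hour; plug in 30 minutes and you get one or two.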
This means that investing in fast builds, fast tests, and fast CI isn't just a developer experience improvement anymore. It's a direct multiplier on agent effectiveness. The return on investment for build performance just went up dramatically.
Practically, this means:
- Incremental builds matter more. If an agent changes one file, rebuilding the entire project is wasteful. Incremental compilation, test filtering, and targeted verification dramatically improve agent iteration speed.
- Local verification beats remote CI. Agents work locally. If they have to push to a remote CI system and wait for results, you've added network latency and queue time to every iteration. Local type checking, local tests, local linting: all of these keep the agent loop tight.
- Test isolation is a feature. If changing a utility function triggers 2,000 tests (most of which are unrelated), the agent waits for 2,000 tests. If tests are well-isolated and you can run only the affected subset, the agent waits for 20 tests. Same confidence, 100x faster.
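The affected-subset idea in that last bullet can be sketched with a hand-written dependency map. Real tools derive this map from the module graph; here it is an assumption supplied for illustration:

```typescript
// Map from test file to the source files it exercises.
type DepMap = Record<string, string[]>;

// Select only the tests whose dependencies include a changed file,
// instead of running the whole suite on every change.
function affectedTests(changed: string[], deps: DepMap): string[] {
  return Object.keys(deps).filter((test) =>
    deps[test].some((dep) => changed.includes(dep)),
  );
}
```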
DX to AX Isn't "Replace Developers"
I want to be clear about what DX-to-AX means, because it's easy to misread.
It doesn't mean "design your codebase for AI instead of for humans." It means "optimize the repo for whoever will spend the most time in it."
Right now, that's increasingly agents. Not exclusively, not for everything, but for a growing share of routine implementation work: refactoring, migration, test writing, dependency updates, boilerplate generation.
Humans still make the architectural decisions, define the constraints, evaluate tradeoffs, and handle the genuinely novel problems. But the moment-to-moment work of implementing changes within those decisions? Agents are doing more of that every month.
So designing for agent efficiency isn't replacing human judgment. It's creating the conditions where human judgment gets implemented faster and more reliably.
A Practical Checklist
If you want to evaluate how agent-friendly your codebase is, here's what I'd look at:
- How long does your feedback loop take? Measure the time from "change a file" to "know if it's correct." Under 30 seconds is good. Under 10 is great.
- How predictable is your project structure? Could you describe the file layout as a template? Or does every module have its own organizational philosophy?
- How explicit are your contracts? Are invariants encoded in types, schemas, and tests? Or are they in documentation, comments, and tribal knowledge?
- How isolated are your changes? What's the average blast radius of a single-function change? How many files need to update when you modify an interface?
- How specific are your error messages? When a build fails, does the error point to the exact problem? Or does it require human interpretation?
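For the first checklist item, measuring is straightforward. A minimal Node-based sketch, assuming your verification is a single shell command (the command is yours to substitute):

```typescript
import { execSync } from "node:child_process";

// Time the change-to-verdict loop: run the verification command and
// report seconds elapsed. A failing check still yields a measurement,
// because what matters is time-to-verdict, pass or fail.
function timeFeedbackLoop(command: string): number {
  const start = Date.now();
  try {
    execSync(command, { stdio: "ignore" });
  } catch {
    // Non-zero exit is still a verdict.
  }
  return (Date.now() - start) / 1000;
}
```

Run it against whatever your agents run, e.g. `timeFeedbackLoop("npm test")` (the command is an example, not a recommendation), and compare the number to the 30-second and 10-second thresholds above.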
The Takeaway
Design your codebase like it will be maintained by a careful, literal-minded junior engineer who runs 100x faster than you and sleeps even less.
That's not a hypothetical anymore. That's increasingly what's happening.
The repos that perform best in an agent-driven world won't be the most elegant or the most clever. They'll be the most explicit, the most predictable, and the fastest to verify. Anti-ambiguity beats aesthetics. Fast feedback beats comprehensive coverage.
DX got us far. AX gets us further.