Two Industry Shifts Are Colliding
The Setup
Two shifts are happening in parallel, and their intersection is going to restructure a lot of software.
The first: AI models are getting cheaper and increasingly commoditized for everyday workloads. Frontier gaps will continue to exist, and some tasks will always reward specialization. But for a lot of businesses, models are becoming infrastructure. Commodity infrastructure.
The second: the "user" of software will increasingly be another agent. We've already lived through one major shift, from UX-first to DX-first. Now we're entering a third mode: designing for agents as first-class users.
Neither shift is new on its own. Together, they change what "competitive advantage" means for software companies.
Shift 1: Models as Infrastructure
When models commoditize, the durable advantage moves up the stack.
What wins isn't only better models. It's two things above the model layer:
The harness. The safety and execution layer that makes model output trustworthy. Guardrails, permissions, audit trails, exception handling. The boring-but-important part: what you can trust, what you can verify, and what you can explain after the fact.
Operational intelligence. The structured understanding of how a business actually operates. Assets. Relationships. Constraints. Permissions. Workflows. What's "normal." What's "not normal."
Here's a concrete example: "What changed since last week?" sounds like a question for a smart model. It's mostly a context and provenance problem. You need to know what data existed last week, what data exists now, what's authoritative, and what changed in between. The model summarizes; the hard part is everything upstream of the model.
This pattern repeats across domains. The model is the last mile. The value is in the miles before it: curated context, validated data, domain-specific constraints.
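To make "the miles before the model" concrete, here's a minimal sketch (hypothetical `Fact` schema and field names, not any particular system) of the upstream work: every fact carries provenance, and the diff is computed deterministically before a model ever sees it.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical schema: each fact carries provenance, not just a value.
@dataclass(frozen=True)
class Fact:
    key: str      # what the fact describes, e.g. "server-42.status"
    value: str
    source: str   # which system asserted it
    as_of: date   # when it was authoritative

def diff_snapshots(last_week: dict[str, Fact], now: dict[str, Fact]) -> dict:
    """Everything upstream of the model: what appeared, vanished, or changed."""
    changes = {"added": [], "removed": [], "changed": []}
    for key, fact in now.items():
        old = last_week.get(key)
        if old is None:
            changes["added"].append(fact)
        elif old.value != fact.value:
            changes["changed"].append((old, fact))
    for key, fact in last_week.items():
        if key not in now:
            changes["removed"].append(fact)
    return changes
```

The model's job is reduced to summarizing this validated diff; knowing which snapshot is authoritative and which facts to compare is the actual hard part.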
Companies that have deep operational context (and can surface it reliably to a model layer) will have a structural advantage over companies that just have a good model and a prompt.
Shift 2: Agents as Users
The DX-first era taught us that developer experience is a product surface. API design, documentation quality, SDK ergonomics: these became competitive differentiators.
Agent experience (AX, if you want a shorthand) is the next version of this, and it's different in important ways.
Agents don't click around. They call interfaces. They consume context. They take actions. So "agent-friendly" isn't a UI the way we think about it today. It's a set of capabilities:
Tool calls and stable APIs. Agents need programmatic interfaces that don't break between versions. The same principle that made REST APIs valuable for developers makes stable, well-documented tool interfaces valuable for agents.
Authoritative context. Agents need to know what's true. If your system can't distinguish between "current state" and "stale cache," agents will make decisions on bad data. Context authority is a product requirement now.
Guardrails and permissions. What can this agent do? What can't it? Who authorized this action? The permission model for agents needs to be at least as rigorous as the permission model for humans, probably more so, because agents operate faster and with less judgment about edge cases.
Observability and audit trails. When an agent takes an action, you need to know what it did, why, and what information it used. This isn't optional for regulated industries, and it's good practice everywhere.
Safe execution paths. Clear error handling, rollback capabilities, and well-defined failure modes. An agent that fails silently is worse than an agent that fails loudly.
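A minimal sketch of how these capabilities compose, with hypothetical names (`ToolGateway`, `AuditRecord`) standing in for whatever framework you actually use: a stable programmatic entry point, a deny-by-default permission check, an audit record for every call, and loud failures instead of silent ones.

```python
import time
from dataclasses import dataclass, field

@dataclass
class AuditRecord:
    agent_id: str
    tool: str
    args: dict
    outcome: str  # "ok", "denied", or "error: ..."
    ts: float = field(default_factory=time.time)

class ToolGateway:
    """Stable, permissioned entry point an agent calls instead of a UI."""

    def __init__(self, permissions: dict[str, set]):
        self.permissions = permissions  # agent_id -> allowed tool names
        self.tools = {}
        self.audit = []

    def register(self, name, fn):
        self.tools[name] = fn

    def call(self, agent_id: str, tool: str, args: dict):
        # Guardrail: deny by default, and record the denial.
        if tool not in self.permissions.get(agent_id, set()):
            self.audit.append(AuditRecord(agent_id, tool, args, "denied"))
            raise PermissionError(f"{agent_id} may not call {tool}")
        try:
            result = self.tools[tool](**args)
            self.audit.append(AuditRecord(agent_id, tool, args, "ok"))
            return result
        except Exception as e:
            # Fail loudly and leave a trail; never swallow the error.
            self.audit.append(AuditRecord(agent_id, tool, args, f"error: {e}"))
            raise
```

The point of the sketch isn't the code; it's that permissions, audit, and failure modes live in one choke point the agent can't route around.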
The adoption question for software is shifting from "how easy is it for a developer to use?" to "how easy is it for an agent to call?"
Where the Shifts Collide
Put the two shifts together and you get a restructuring pattern.
A lot of software exists to help humans coordinate between systems. Dashboards, workflow tools, reporting layers, integration middleware. Humans read from system A, make a decision, and act in system B.
Agents can do this loop faster, cheaper, and (given good context) more reliably. So "software that exists to help humans shuttle information between systems" is the category most exposed to restructuring.
What persists, and gets more valuable, are the primitives that make agent-built workflows safe, cheap, and reliable:
- Data quality and provenance. Agents are only as good as the context they receive. Garbage in, confident garbage out.
- Domain models. The structured representation of how a business works. Not generic ontologies; specific, validated, operational truth.
- Permission and audit infrastructure. The governance layer that makes automated action trustworthy.
- Stable interfaces. APIs, tool definitions, and context endpoints that agents can rely on.
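Of these primitives, a freshness contract for context endpoints is the easiest to sketch. Assuming a hypothetical response shape (field names are illustrative), every piece of context declares how long it's authoritative, so an agent can distinguish current state from a stale cache before acting:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical freshness contract: the response declares its own authority window.
def context_response(value, source: str, fetched_at: datetime, ttl: timedelta) -> dict:
    return {
        "value": value,
        "source": source,
        "fetched_at": fetched_at,
        "authoritative_until": fetched_at + ttl,
    }

def is_authoritative(resp: dict, now: datetime) -> bool:
    """An agent should refuse to make automated decisions on aged-out context."""
    return now <= resp["authoritative_until"]
```

This is the "authoritative enough to trust" question made mechanical: the agent doesn't guess at freshness, the interface states it.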
The companies that own these primitives, especially the domain-specific ones, are well positioned. The companies that are primarily coordination layers between other systems should be thinking hard about what agents mean for their value proposition.
What This Means for Builders
If you're building software (or building on top of software), here are the questions I'd be asking:
For your product:
- How much of the value you provide is coordination vs. domain intelligence?
- If an agent could call your system directly, what would it need? Do those interfaces exist?
- Is your data authoritative enough for an agent to trust it for automated decisions?
For your architecture:
- Are your APIs stable and well-documented enough for non-human consumers?
- Do you have a permission model that works for agents, not just human users?
- Can you provide audit trails for every action, including the context that informed it?
For your strategy:
- Where does your durable differentiation live? In the model layer (commoditizing), the harness layer (valuable but buildable), or the domain layer (hardest to replicate)?
- Which coordination tasks in your workflow are most exposed to agent automation?
So What?
The short version: models commoditize, agents become users, and the durable advantage moves to whoever understands the domain deeply enough to make agent-driven workflows safe and reliable.
"How easy is it for an agent to call?" becomes the adoption question. "How well do you understand the domain?" becomes the durable differentiator.
Software that exists to help humans coordinate between systems gets restructured. The primitives that make agent workflows trustworthy get more valuable.
If you're not thinking about what your software looks like to an agent, you're optimizing for the last era.