Industrial AI Needs Better Jigs, Not Smarter Models
A Childhood Memory About Precision
My parents are artists. I grew up around studios: glass blowers, bronze workers, woodshops. Lots of heavy equipment, lots of sharp tools, and (compared to any manufacturing plant) fewer guardrails. My dad worked in wood, mostly one-of-a-kind pieces.
But every year around Christmas, he did something a bit "production-y." He'd take a commission for 10 dollhouses. Incredible work, collectors' pieces, but requiring lots of similar, repeated parts.
Each dollhouse had a spiral staircase with 13 tiny bowtie-shaped treads, about the size of my thumb. Every tread needed two precise holes so tiny dowels could secure it to the spiral riser.
At some point I became "old enough to help," which meant: "Magnus, go drill 260 holes."
I was a smart kid. I was not a precision drill press.
So my dad built a jig.
A jig encodes alignment, constraints, guidance (and often safety) into a physical interface so the operation becomes repeatable. The jig didn't make me a master craftsman. It increased system reliability. It turned a hard-to-repeat operation into a boring one, while my dad focused on the valuable work only he could do.
That's the lens I keep applying to industrial AI.
The Blank Prompt Problem
A lot of AI UX still forces the operator to start from scratch:
- "Here's a model. Ask it things."
- "Here's an agent. Trust it."
- "Here's a workflow canvas. Build your own automation."
That's fine for power users. It doesn't scale to operators with real constraints: time, context-switching, accountability, safety, compliance, and the simple reality that failure modes are expensive. In industrial settings, "expensive" can mean safety incidents, scrap, downtime, or compliance violations. Not just a bad chatbot answer.
The gap isn't model capability. Most frontier models can reason well enough for a huge number of industrial tasks. The gap is the space between the model and the operator: the missing structure that makes AI usable under real-world constraints.
Something's missing. And I don't think it's smarter models.
What Software Already Figured Out
In software development, tools like Claude Code and Cursor are harnesses. The model does the reasoning, but the harness is what makes it usable day-to-day. It pulls in context, applies constraints, and turns output into action.
A good harness does several things a raw model can't:
Context injection. It knows what file you're in, what branch you're on, what tests exist. You don't have to explain your world every time you start a conversation.
Constraint enforcement. It respects linting rules, type systems, build pipelines. The model proposes; the harness validates.
Action paths. It doesn't just suggest code. It writes files, runs tests, creates commits. Output becomes action through a controlled interface.
Failure handling. When something goes wrong, the harness catches it, surfaces the error, and lets you iterate. You don't lose the context of what you were trying to do.
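To make those four behaviors concrete, here's a toy sketch of a harness loop. Every name in it (`propose_patch`, `lint`, `run_harness`) is invented for illustration; this is not Claude Code's or Cursor's actual API, just the shape of the loop.

```python
# Hypothetical harness loop: inject context, let the model propose,
# validate the proposal, act on it, and keep context across failures.

def propose_patch(context):
    # Stand-in for a model call. A real harness would send the injected
    # context to an LLM; here we return a canned "patch" so the sketch runs.
    return {"file": context["file"], "new_text": "x = 1\n"}

def lint(patch):
    # Constraint enforcement: the harness validates what the model proposes.
    # (Toy rule: files must end with a newline.)
    return patch["new_text"].endswith("\n")

def run_harness(context, max_attempts=3):
    """One harness iteration: propose, validate, act, handle failure."""
    for attempt in range(1, max_attempts + 1):
        patch = propose_patch(context)        # model proposes
        if not lint(patch):                   # harness validates
            # Failure handling: record the error and retry without
            # losing the context of what we were trying to do.
            context["last_error"] = "lint failed"
            continue
        # Action path: in a real harness this would write files, run
        # tests, and create a commit through a controlled interface.
        return {"status": "applied", "patch": patch, "attempts": attempt}
    return {"status": "gave_up", "last_error": context.get("last_error")}

result = run_harness({"file": "app.py", "branch": "main"})
```

The point isn't the toy logic; it's that the model call is one small box inside a loop that owns context, validation, and action.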
None of this is the model being "smarter." It's the surrounding system making the model's output reliable, auditable, and safe to act on.
What Industrial AI Jigs Look Like
Industrial AI needs the equivalent: jigs that make agents feel less like magic and more like a fixture. Reliable, boring, and easy to use.
Here's what I think that means in practice:
Pre-loaded context. An operator shouldn't have to describe their asset, their shift, their recent alarm history. The jig should know. Pull from the historian, the CMMS, the MES. Give the model the right context before the operator types a word.
Bounded actions. Don't give an agent open-ended access to a SCADA system. Give it a constrained set of actions appropriate to the operator's role, the current state, and the compliance environment. The jig defines what's possible, not just what's desirable.
Auditable reasoning. Every recommendation should come with a traceable chain: what data was considered, what was excluded, what the confidence level is. Not because operators need to read it every time, but because someone will need to review it after an incident.
Graceful degradation. When the model doesn't know, the jig should say so clearly and fall back to established procedures. "I'm not confident enough to recommend an action; here's the relevant SOP" is vastly better than a hallucinated suggestion with high confidence.
Domain-specific guardrails. A jig for predictive maintenance looks different from a jig for quality inspection, which looks different from a jig for energy optimization. Generic agent frameworks don't encode the safety constraints of specific operational domains.
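A minimal sketch of how a few of these properties compose: bounded, role-appropriate actions; graceful degradation below a confidence floor; and an audit record for every decision. All names, roles, and thresholds here are invented for illustration; nothing below is a real SCADA or CMMS API.

```python
# Hypothetical jig: the structure around the model, not the model itself.

# Bounded actions: what each role may do, defined by the jig up front.
ALLOWED_ACTIONS = {
    "operator": {"acknowledge_alarm", "request_inspection"},
    "engineer": {"acknowledge_alarm", "request_inspection", "adjust_setpoint"},
}

CONFIDENCE_FLOOR = 0.8  # below this, fall back to the established SOP

def run_jig(role, recommendation):
    """Gate a model recommendation through the jig's constraints.

    `recommendation` is the model's output: an action, a confidence,
    and the evidence it considered. Returns an audit record either way.
    """
    audit = {
        "role": role,
        "action": recommendation["action"],
        "confidence": recommendation["confidence"],
        "evidence": recommendation["evidence"],  # auditable reasoning
    }
    if recommendation["confidence"] < CONFIDENCE_FLOOR:
        audit["outcome"] = "fallback_to_sop"  # graceful degradation
        return audit
    if recommendation["action"] not in ALLOWED_ACTIONS.get(role, set()):
        audit["outcome"] = "blocked"  # the jig defines what's possible
        return audit
    audit["outcome"] = "executed"  # in reality: a controlled action path
    return audit
```

Notice that the model never touches the system directly: a low-confidence suggestion becomes an SOP pointer, an out-of-role action is blocked, and every path leaves a record someone can review after an incident.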
The Harness Is the Product
This is the part that I think gets underestimated in the current AI conversation, especially in industrial contexts.
Models will continue to improve. They'll get cheaper and more capable. That's happening fast and it will keep happening.
But the durable value isn't in the model. It's in the harness: the structured understanding of how a specific domain works, what's safe, what's normal, what's not normal, and how to turn model output into trustworthy action.
The companies that win in industrial AI won't necessarily have the best models. They'll have the best jigs: the deepest understanding of operational context, the tightest integration with existing systems, and the most thoughtful constraints on what agents can and can't do.
So What?
If you're building or buying AI for industrial operations, here's the checklist I'd use:
- Does it pre-load operational context? Or does every interaction start from a blank prompt?
- Are actions bounded and role-appropriate? Or does the agent have unconstrained access?
- Is reasoning auditable? Can you trace why a recommendation was made after the fact?
- Does it degrade gracefully? What happens when the model isn't confident?
- Are guardrails domain-specific? Or is it a generic agent framework bolted onto your OT environment?
Industrial AI doesn't need more genius. It needs more jigs.