OpenAI Just Became a Consulting Firm. Here's Why That Matters.

Gabe Hilado
Founder and CEO, Zenpo Software Innovations

The model was never the bottleneck. The implementation was. OpenAI just spent $4 billion confirming it.

On May 11, OpenAI launched a $4 billion deployment company — OpenAI Deployment Company, DeployCo internally — and acquired UK consultancy Tomoro to staff it, folding its 150 engineers into the new unit. The pitch is straightforward: Forward Deployed Engineers, embedded inside customer enterprises, helping them move AI from pilot to production. OpenAI keeps majority ownership. The team operates as a captive implementation arm.

Enterprises reportedly represent 40% of OpenAI's revenue. That number is the entire story.

What this signals about the AI market

For two years, the dominant narrative was that the model is the moat. Whoever has the best frontier model wins. Capital flowed accordingly — toward training runs, toward chips, toward the next benchmark.

The thesis was never wrong, exactly. It was incomplete. A frontier model sitting in an API is worth nothing to a regulated insurer until it's wired into claims adjudication, threat-modeled against prompt injection, hooked into the data warehouse, scoped for PII, and reviewed by someone who can sign off on the audit trail. None of that work is research. All of it is consulting.

Enterprises figured this out faster than the AI labs did. The pilots stalled. The production deployments didn't materialize. The procurement teams asked the obvious question: who actually does this work? And the answer, for most enterprises, was nobody on staff and nobody on the vendor's side either.

OpenAI just answered the question by buying a consultancy.

The Microsoft playbook, with new branding

This isn't a new pattern. It's a textbook one.

Microsoft has run this exact play for thirty years. Own the product, then own the implementation channel that puts the product into production. Microsoft Consulting Services. Premier Support. The partner ecosystem with certifications keyed to specific SKUs. Every layer designed to make sure that when an enterprise commits to the platform, the integration work flows back through Microsoft's channel — or through partners whose entire commercial existence depends on Microsoft's roadmap.

The pattern produces a predictable customer experience. The consultant arrives. The consultant has deep knowledge of one vendor's stack and shallow knowledge of everything else. The recommendations align with the vendor's product roadmap. The same dynamic that produced consultants who recommended InfoPath for every business problem now produces consultants who recommend OpenAI for every AI problem — because that's who pays them, certifies them, and feeds them leads.

DeployCo will not be neutral about whether GPT is the right model for a given workload. It will not benchmark Claude or Gemini on tasks where they outperform. It will not tell a customer to use a small open-weights model when the use case warrants it. Its incentive structure makes those recommendations professionally impossible.

That's not a flaw in the design. It's the design.

Why enterprises will buy it anyway

The counter-argument is real and worth saying out loud.

Most enterprises don't have a competent AI implementation team. They don't have one because the labor market hasn't produced enough of the right people, because the people who do exist are concentrated at a handful of firms, and because the internal political battles required even to hire them are exhausting before the work starts. Procurement teams who have spent the last eighteen months watching pilots fail to convert into production systems will look at "OpenAI's own engineers will embed in your organization and make this work" and conclude — reasonably — that this is faster than building the capability themselves.

They are not wrong about the speed. They will get to production. The systems will work, in the narrow sense that the demos will render and the workflows will execute.

What they will also get is a deeply entangled dependency on a single model vendor's recommendations, made by engineers whose career incentives are aligned with that vendor, embedded inside the organization at a depth that makes the dependency difficult to unwind later. When the next frontier model from a competitor outperforms GPT on the workload that DeployCo built around GPT, the organization will not be in a position to migrate. The architecture will encode the assumption.

This is the part that gets paid for later. Not in the contract. In the migration cost when the model landscape shifts again.

What this means for everyone else

Three things follow from this acquisition, and they're worth thinking about separately.

The first is that the consulting market just got a new player with extremely deep pockets and extremely strong product alignment. Anthropic and Google will not sit still. Expect equivalent moves within twelve months — captive deployment arms, embedded engineering programs, vendor-locked implementation channels of their own. The independent consulting market for AI implementation just became more competitive, more crowded, and more politically complicated.

The second is that the value of vendor-independent implementation advice just went up sharply. When every frontier model vendor has a captive consulting arm pitching its own stack, the role of a neutral advisor becomes more valuable, not less. Not because vendor-aligned consultants are dishonest — most aren't — but because their information is structurally filtered. A consultant who can recommend Claude for one workload, GPT for another, an open-weights model for a third, and "don't use AI for this at all" for a fourth occupies a position that vendor-captive engineers cannot, by design.

The third is that the "model is the moat" thesis is officially over. The new thesis is "the implementation channel is the moat." OpenAI is not betting on having the best model forever — they know they won't. They're betting on having so much production code, so many embedded engineers, and so many committed enterprise architectures that switching costs make the model quality differential irrelevant.

That's a defensible business strategy. It is not an enterprise IT strategy.

The honest question for enterprise buyers

If an enterprise is evaluating DeployCo, the right framing isn't "is OpenAI a good implementation partner?" — it probably is, technically. The right framing is whether the implementation partner you choose for AI should be the same entity that sells you the AI.

In any other category of enterprise software, the answer to that question has been settled for decades. You don't let Oracle's services arm decide whether you should be on Oracle. You don't let SAP's consultants decide whether SAP is the right fit. The same skepticism that procurement teams apply reflexively to ERP vendors selling their own implementation services should apply here.

It probably won't, for the first wave of customers. The pressure to "do something with AI" is strong enough that the vendor-aligned implementation channel will get business it wouldn't get in a normal procurement cycle.

The bill for that will arrive in a different quarter than the savings.

What is a Forward Deployed Engineer?

A Forward Deployed Engineer is an engineer employed by a software vendor but physically embedded inside a customer's organization, working alongside the customer's team to build, integrate, and operationalize the vendor's product. The model originated at Palantir and has since spread across enterprise software — the value is faster time-to-production for the customer and deep architectural lock-in for the vendor.

Does this mean OpenAI is becoming a consulting company?

It already is one, structurally. DeployCo operates as a standalone unit with 150 engineers, embedded inside customer enterprises, charging for implementation work. That is a consulting firm by every functional definition, regardless of what OpenAI calls it on the parent org chart.

Is DeployCo bad for customers?

Not in the short term. It will be effective at moving pilots to production faster than most enterprises could on their own. The cost shows up later — in vendor lock-in, in architecture decisions that encode single-model assumptions, and in the difficulty of migrating when the model landscape shifts.

Should enterprises hire DeployCo?

For tactical, time-pressured deployments where speed-to-production outweighs flexibility, yes. For strategic AI architecture decisions that will shape the organization's technology posture for the next five years, hire an independent partner for enterprise AI implementation — someone whose recommendations aren't filtered through a single model vendor's roadmap.

Will Anthropic and Google follow suit?

Almost certainly, within twelve months. The implementation channel is now visibly the constraint on enterprise AI revenue. Every frontier lab will want a captive arm to capture that margin and lock in deployment patterns favorable to their stack.