Product-Led Consulting

Executive Summary

Most enterprise technology engagements fail not because the team lacked skill, but because the technology decision was made before the problem was understood. A firm that calls itself "a Salesforce shop" or "a React house" has already decided what to build before hearing what the client needs. That is not consulting. That is installation.

Product-led consulting inverts that sequence. It starts with the client's problem, evaluates the full spectrum of available solutions — from native platform configuration to commercial marketplace products to custom code — and selects the lightest viable option that solves the problem without creating unnecessary maintenance burden. The consulting firm's identity, preferred stack, or partner certifications do not enter the decision until the problem is defined and the solution landscape is mapped.

This guide defines what product-led consulting means in practice, introduces a decision framework for technology selection (the Solution Hierarchy), and identifies the failure modes that cause organizations to overbuild, overspend, and lock themselves into platforms they did not need. It is written for IT directors, CTOs, program managers, and anyone who has sat through a vendor pitch and suspected the recommendation had more to do with the firm's revenue model than the client's actual requirements.

The core thesis is simple: the best technology decision is the one that solves the problem with the least complexity the organization has to maintain. Everything in this guide follows from that premise.

Stack-First Consulting vs. Product-Led Consulting

| Dimension | Stack-First Consulting | Product-Led Consulting |
| --- | --- | --- |
| Starting point | Technology the firm sells | Problem the client has |
| Discovery output | Technology recommendation | Problem map with complexity ratings |
| Solution evaluation | Single vendor ecosystem | Full landscape: native, marketplace, custom |
| Custom code role | Default approach | Last resort, scoped narrowly |
| Team composition | Platform specialists | Cross-platform evaluators |
| Licensing model | Premium platform licenses assumed | Lightest licensing that solves the problem |
| Maintenance burden | High — custom code + platform dependencies | Low — native config and vendor-maintained products |
| Handoff readiness | Client depends on contractor for ongoing support | Client's team can maintain most components independently |
| Delivery timeline | Months (architecture-first) | Weeks (deploy-and-iterate) |
| Incentive alignment | Firm profits from complexity | Firm profits from solving the problem |

The Problem: Platform Religion

The enterprise consulting industry has a belief system problem. Firms organize themselves around technology stacks, build their hiring pipelines around certifications, and structure their revenue models around platform partnerships. The result is predictable: when the only tool a firm sells is Salesforce, every client problem looks like a CRM implementation.

This is not a fringe issue. It is the default operating model for most mid-market and large consulting firms. The firm's identity is fused with its technology stack, and that fusion distorts every recommendation the firm makes. A client walks in with a workflow problem. The Salesforce shop recommends Salesforce. The ServiceNow partner recommends ServiceNow. The Microsoft house recommends Power Platform. The recommendation tracks the firm's margin structure, not the client's need.

The data on what happens next is not encouraging. Research from the Standish Group has tracked IT project outcomes for nearly three decades and consistently finds that only about 31% of projects meet their goals on time and on budget.1 Large projects fare dramatically worse — those exceeding $15 million in budget average 45% cost overruns while delivering 56% less value than predicted, according to a McKinsey–Oxford University study of more than 5,400 IT projects.2 BCG's research confirms the pattern: more than two-thirds of large-scale technology programs miss their targets on time, budget, or scope.3

These are not random failures. They follow patterns. And one of the most consistent patterns is technology selection driven by something other than the problem at hand. GAO has repeatedly flagged federal IT programs for selecting technology platforms before defining measurable requirements — a sequence that virtually guarantees scope creep, integration complexity, and cost escalation.4 The same dynamic plays out in commercial enterprises, just with less public documentation.

Platform religion does not only affect large programs. It shows up in $50,000 engagements where a firm deploys a full CRM platform for what should have been a two-page web form. It shows up in architecture decisions where a lead engineer picks microservices for an internal tool with 50 users because that is what they used at their previous employer. It shows up every time a consulting firm's first instinct is to reach for the technology it knows rather than the technology that fits.

The cost is not just budget overrun. It is ongoing maintenance burden, licensing overhead, organizational complexity, and the slow accretion of technical debt that makes the next technology decision even harder than the last one.

What Product-Led Consulting Actually Means

The phrase "product-led" has been thoroughly colonized by the SaaS growth marketing world, where "product-led growth" describes a go-to-market strategy in which the product itself drives acquisition and expansion. Product-led consulting is a different concept entirely. It describes a delivery model in which the consulting firm's recommendations start with existing products and configurations before reaching for custom code.

This is a specific operational discipline, not a philosophy statement. It means the engagement team evaluates the full solution landscape — native platform capabilities, commercial off-the-shelf (COTS) products, marketplace solutions, and custom development — in a defined sequence, and only escalates to higher-complexity options when lower-complexity options genuinely cannot solve the problem.

The distinction matters because it cuts against two opposing failure modes that are equally common in enterprise consulting.

The first failure mode is the one already described: platform religion, where the firm's stack identity drives every recommendation regardless of fit. The second failure mode is its mirror image — firms that refuse to recommend custom code under any circumstances, insisting that every problem can be solved with configuration and commercial products even when the client's requirements genuinely demand something bespoke. Both are forms of dogma. Both produce bad outcomes.

Product-led consulting occupies the pragmatic middle. It does not privilege any single approach. It privileges the approach that fits the problem with the least organizational burden.

In practice, this means the consulting team must have breadth across platforms and solution types. A firm that only employs Salesforce-certified consultants cannot credibly evaluate whether the client's problem is better solved by a Microsoft Power App, a commercial marketplace product, or a simple native configuration in the platform the client already owns. The team composition has to support the evaluation, not constrain it.

It also means the engagement's discovery phase is structured around problem decomposition rather than technology selection. The team's first deliverable is a clear articulation of what the client needs the system to do — stated in terms of business outcomes, not technical specifications. Technology enters the conversation only after the problem is defined.

This applies across ecosystems. Whether the client operates in Microsoft 365, Salesforce, AWS, Google Workspace, ServiceNow, or a hybrid environment, the logic is the same: define the problem, map the solution landscape, select the lightest viable option. The framework does not care which vendor's logo is on the platform. It cares whether the problem gets solved without overbuilding.

The AI tooling landscape makes this discipline more important, not less. Gartner predicted that at least 30% of generative AI projects would be abandoned after proof of concept by the end of 2025, citing escalating costs and unclear business value as primary drivers.5 Organizations rushing to deploy AI-powered solutions without first asking whether a simpler approach solves the problem are repeating the same pattern that has driven enterprise IT failure rates for decades — just with newer technology. Tools like Copilot, ChatGPT, Cursor, and Claude can accelerate evaluation and prototyping across the solution hierarchy, but they do not change the hierarchy itself. The question remains: does this problem need custom code, or does something already exist that handles it?

The Solution Hierarchy: A Decision Framework for Technology Selection

The Solution Hierarchy is a three-layer decision model that governs how product-led consulting teams evaluate and select technology. It is platform-agnostic — it applies whether the ecosystem is Microsoft, Salesforce, AWS, or anything else. The principle is simple: always start at Layer 1 and only escalate when the layer above genuinely cannot solve the problem.

The Solution Hierarchy in one sentence: Exhaust native configuration before evaluating commercial products, and exhaust commercial products before writing custom code.

The framework produces a repeatable evaluation sequence:

  • Layer 1 — Native Configuration: Use what the platform already provides. Zero additional licensing, zero custom code, zero new vendor dependencies.
  • Layer 2 — Marketplace or Commercial Product: Someone already solved this. Buy a tested, vendor-maintained solution before building one from scratch.
  • Layer 3 — Custom Code: Build only what does not exist and cannot be bought. Scope it narrowly. Document it thoroughly. Plan for its maintenance from day one.

Each layer has explicit entry and exit criteria. Escalation from one layer to the next requires documented justification — not a gut feeling, not a preference, not a partner incentive. The documentation protects the client from scope creep and creates the auditable decision trail that regulated environments demand.
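The escalation rule is mechanical enough to sketch in code. The following is an illustrative sketch only: the layer names, the `Requirement` shape, and the `can_solve` callback are hypothetical conveniences, not part of any prescribed tooling. What it captures is the one non-negotiable property: a requirement cannot move past a layer without a written justification attached.

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative sketch of the Solution Hierarchy's escalation discipline.
# Layer names and data shapes are hypothetical, not a prescribed model.
LAYERS = ["native_configuration", "commercial_product", "custom_code"]

@dataclass
class Requirement:
    name: str
    selected_layer: Optional[str] = None
    # One (layer, justification) entry per layer evaluated and found
    # insufficient -- the auditable decision trail.
    escalations: list = field(default_factory=list)

def evaluate(req: Requirement, can_solve) -> Requirement:
    """Walk the layers in order; escalate only with written justification.

    `can_solve(req, layer)` returns (solvable, justification) -- the
    justification explains why the layer is insufficient when it is not.
    """
    for layer in LAYERS:
        solvable, justification = can_solve(req, layer)
        if solvable:
            req.selected_layer = layer
            return req
        # Escalation without a documented reason is not allowed.
        if not justification:
            raise ValueError(f"Escalating past {layer} requires justification")
        req.escalations.append((layer, justification))
    raise ValueError(f"No layer solves {req.name}; revisit the requirement")
```

A requirement that lands at Layer 2 therefore carries exactly one documented escalation explaining why native configuration fell short, which is the record an auditor or program manager later reads.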

Layer 1: Native Configuration

Use what the platform already does.

Every major enterprise platform — Microsoft 365, Salesforce, ServiceNow, Google Workspace, AWS — ships with configuration capabilities that most organizations underuse. Lists, views, workflows, permissions, dashboards, form builders, approval flows — these exist in the platform the client already owns and already pays for.

The entry criteria for Layer 1 are minimal: does the client have a platform, and does that platform offer relevant configuration options? The exit criteria are specific: the native capability cannot meet a clearly documented functional requirement that has been validated with end users. Not "it would be nicer if" — it genuinely cannot do the thing the user needs it to do.

Most consulting firms skip this layer entirely, or treat it as a checkbox before moving to the recommendation they already planned to make. Product-led teams spend meaningful time here because every problem solved at Layer 1 is a problem that generates zero additional licensing cost, zero custom code maintenance, and zero new vendor dependencies.

Layer 2: Marketplace or Commercial Product

Someone already solved this.

If native configuration falls short, the next evaluation is whether a commercial product — available through a marketplace like Microsoft AppSource, Salesforce AppExchange, or an equivalent — already addresses the gap. These are products built by ISVs (independent software vendors) specifically to extend platform capabilities in areas where native features are insufficient.

This is the layer where the buy-versus-build decision becomes concrete. A well-built commercial product has been tested by thousands of users, maintained by a dedicated team, and updated continuously. Custom code that solves the same problem must be tested by the client's team, maintained by the client's team (or their contractor), and updated when the underlying platform changes. The total cost of ownership comparison almost always favors the commercial product unless the client's requirements are genuinely unique.

For example, within the Microsoft 365 ecosystem, organizations frequently need calendar aggregation, visual scheduling, or cross-site data views that exceed what SharePoint's native web parts provide. Commercial marketplace solutions — including Zenpo's SharePoint products on AppSource — exist specifically to fill these gaps without requiring custom SPFx development. The same pattern applies in Salesforce's AppExchange, AWS Marketplace, and other platform ecosystems.

The exit criteria for Layer 2: no available commercial product meets the documented functional requirement, or the requirement is so specific to the client's domain that no general-purpose product could reasonably address it.

Layer 3: Custom Code

Build only what does not exist and cannot be bought.

Custom development is the right answer when the client's requirements are genuinely unique — when no configuration option and no commercial product can solve the problem. This happens. Domain-specific workflows, proprietary data models, novel integrations, and bespoke user experiences are all legitimate reasons to write code.

But custom code should arrive at the table as the last option evaluated, not the first one proposed. Every line of custom code creates a maintenance obligation. It must be updated when the platform changes. It must be secured. It must be documented. It must be supported when the original developer is unavailable. These costs compound over years, and they are almost always underestimated at the point of decision.

The entry criteria for Layer 3 are strict: Layers 1 and 2 have been evaluated and documented as insufficient, the functional requirement is validated with end users (not assumed by the architecture team), and the organization has a realistic plan for maintaining the custom code after the engagement ends.

How the Layers Interact

The Solution Hierarchy is not a waterfall. A single engagement may use all three layers simultaneously. Native configuration handles 60% of the requirements. A commercial product covers another 25%. Custom code addresses the remaining 15% where the client's needs are genuinely unique. The ratio varies, but the principle holds: complexity enters the solution only where simpler options have been exhausted.

This framework also provides a natural governance checkpoint for enterprise technology decisions. At each layer transition, the team documents why the lower-complexity option was insufficient. That documentation protects the client from scope creep, protects the engagement from feature bloat, and creates an auditable decision trail — something federal program managers and commercial CTOs both value.

The Solution Hierarchy in one paragraph

Product-led consulting evaluates every requirement through three layers in sequence: native platform configuration first, commercial marketplace products second, custom code last. Each layer escalation requires documented justification. The framework is platform-agnostic — it applies to Microsoft, Salesforce, AWS, and any other enterprise ecosystem. The goal is not to avoid custom code. The goal is to ensure custom code enters the solution only where simpler options have been genuinely evaluated and found insufficient. The result is lower implementation cost, lower ongoing maintenance burden, and a solution the client's team can realistically own after the engagement ends.

Implementation: Running a Product-Led Engagement

A product-led consulting engagement operates differently from a stack-first engagement at every stage. The differences are structural, not cosmetic.

Discovery: Problem Decomposition, Not Technology Selection

The first phase of a product-led engagement produces a problem map, not a technology recommendation. The team decomposes the client's stated need into discrete functional requirements, validates each requirement with the people who will actually use the system, and prioritizes by business impact.

This sounds obvious. It is rarely done. Most engagement teams arrive with a mental model of the solution already formed — "this is a Power Platform project" or "this needs a custom React app" — and the discovery phase becomes a confirmation exercise rather than a genuine investigation. Product-led discovery resists that pattern by prohibiting technology discussion until the problem map is complete.

The deliverable at the end of discovery is a requirements matrix that maps each functional need to a complexity rating (can this be solved with configuration, a commercial product, or custom code?) without specifying which configuration, which product, or which custom approach. Technology enters in the next phase.

This phase also surfaces a critical distinction that stack-first engagements routinely miss: the difference between stated requirements and actual requirements. Stakeholders who have been conditioned by previous vendor pitches will often describe their needs in technology terms — "we need a Power App" or "we need a custom portal" — rather than business terms. Product-led discovery translates those statements back into the underlying need. "We need a Power App" becomes "staff need to submit intake forms from mobile devices, with conditional logic based on form type, and automatic routing to the correct approver." That restatement opens the solution space. The Power App might be the right answer. A configured SharePoint list with a custom view might also work. A $50-per-month SaaS form tool might solve it entirely. The point is that the technology conversation cannot happen honestly until the business requirement is stated in technology-neutral language.
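One way to picture the discovery deliverable is as plain data: each row states a business need in technology-neutral language plus a complexity rating, with no product or platform named. The entries below are invented for illustration and loosely echo the intake-form example above; they are not a real client's matrix.

```python
# Hypothetical discovery output: a requirements matrix. The "rating"
# records which layer of the Solution Hierarchy *could* solve the need,
# without yet naming a configuration, product, or custom approach.
requirements_matrix = [
    {"need": "staff submit intake forms from mobile devices",
     "rating": "configuration", "priority": 1},
    {"need": "conditional logic based on form type",
     "rating": "commercial product", "priority": 2},
    {"need": "automatic routing to the correct approver",
     "rating": "configuration", "priority": 1},
    {"need": "integration with the legacy EHR API",
     "rating": "custom code", "priority": 3},
]

# A first-pass complexity ratio falls out of the same structure.
custom = sum(1 for r in requirements_matrix if r["rating"] == "custom code")
ratio = custom / len(requirements_matrix)
```

Keeping the matrix this spare is deliberate: the moment a row names a product, the evaluation phase has already been prejudged.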

Evaluation: The Solution Hierarchy Applied

With the problem map in hand, the team evaluates each requirement through the Solution Hierarchy. This is where AI tools — Copilot, ChatGPT, Cursor, and others — accelerate the process. Rapid prototyping, marketplace research, and configuration testing that once took weeks can now be compressed into days. The tools do not change the decision logic, but they dramatically reduce the cost of evaluating options at each layer.

The evaluation produces a solution architecture document that maps each requirement to a specific layer, identifies the specific product or configuration approach for Layers 1 and 2, and scopes the custom development for Layer 3. Every layer escalation includes a documented justification.

Team Composition: Breadth Over Depth

Product-led engagements require people who can work across multiple platforms and solution types. A team of five Salesforce specialists cannot credibly evaluate whether the client's problem is better solved in a different ecosystem. This does not mean every team member must be a generalist — domain expertise matters — but the team as a whole must have sufficient breadth to evaluate options honestly.

This is a structural constraint that most consulting firms resist because their hiring, training, and utilization models are organized around single-platform specialization. A firm that bills consultants based on Salesforce certification hours has a financial incentive to staff Salesforce-certified consultants on every engagement, regardless of whether Salesforce is the right answer. Product-led firms structure their teams around problems, not platforms.

Delivery: Build Small, Validate Early

Product-led delivery follows a simple principle: deploy the smallest viable solution to real users as quickly as possible, observe how they use it, and iterate. This is not a novel insight — it is the core of every agile methodology ever written. What makes it different in a product-led context is that the smallest viable solution often involves zero custom code. A native configuration or a commercial product, deployed to real users in two weeks, generates more useful feedback than a custom architecture document that takes three months to write.

The Standish Group's research consistently supports this approach — small projects achieve roughly 90% success rates, while large projects succeed less than 10% of the time.1 Product-led consulting achieves small-project success rates on large-project scopes by decomposing the engagement into discrete, independently deployable components.

Handoff: The Maintenance Reality Check

The engagement is not complete when the solution is deployed. It is complete when the client's team can maintain and evolve the solution without the consulting firm's involvement. This sounds straightforward, but it is where most engagements — product-led or otherwise — quietly fail.

Product-led engagements have a structural advantage here because the maintenance burden is proportional to the amount of custom code in the solution. Native configurations require no ongoing developer involvement beyond normal platform administration. Commercial products are maintained by their vendors. Only the Layer 3 custom components require the client to have — or contract for — ongoing development capability.

Day 2 operations: what the client owns after you leave

The self-sufficiency test is the clearest measure of whether a product-led engagement succeeded. On the day after the consulting firm leaves, can the client's team answer these questions without making a phone call?

  • What does each component in the solution do, and which layer of the Solution Hierarchy does it belong to?
  • For Layer 1 components, who on the client's team knows how to adjust the configuration when business rules change?
  • For Layer 2 components, who manages the vendor relationship and renewal, and what is the escalation path when the product does not behave as expected?
  • For Layer 3 components, who maintains the code, what platform dependencies does it have, and what will need to change when the underlying platform updates?

If the answer to any of these is "we'd have to call the consulting firm," the handoff is incomplete. The solution may be working, but it is not yet owned by the client.

The long-term ownership question is where product-led consulting diverges most sharply from traditional models. A stack-first engagement that delivers 80% custom code has effectively created a permanent dependency — either on the original consulting firm, on developers with the same platform specialization, or on an internal team that must be staffed and retained specifically to maintain what was built. That dependency is a recurring cost that rarely appears in the original proposal.

A product-led engagement might deliver 70% native configuration, 20% commercial product, and 10% custom code for a workflow problem — or 30% native, 15% commercial, and 55% custom for a complex integration with proprietary business logic. The ratio varies by problem complexity. What stays constant is the sequence: every component earned its complexity level by surviving the evaluation, not by being the default.

The native configuration is maintained by the platform vendor's update cycle — the client's admins apply updates, not developers. The commercial products are maintained by their ISVs — the client pays a subscription, not a maintenance contract. Only the 10% custom layer requires developer attention, and because it was scoped narrowly, that attention is measured in hours per quarter rather than FTEs per year.

This changes the client's internal staffing calculus. A traditional engagement output often requires the organization to hire specialized developers or retain the consulting firm on a managed-services contract. A product-led engagement output can typically be maintained by the client's existing IT operations team, supplemented by occasional specialist support for the custom components. The difference in annual cost of ownership between these two models compounds significantly over three to five years — often exceeding the original engagement cost.

The handoff deliverable should include a clear inventory of every component in the solution, categorized by layer, with specific documentation for each Layer 3 component: what it does, why it was built custom, what platform dependencies it has, and what will need to change when the underlying platform updates. This documentation is not optional. It is the difference between a solution that serves the client for five years and one that becomes unmaintainable the quarter after the engagement ends.

In federal contexts, this documentation also contributes to audit readiness and knowledge continuity — two persistent concerns for program managers who have watched institutional knowledge evaporate when contractor teams rotate off an engagement.6

Common Failure Modes

Product-led consulting is a discipline, and disciplines have failure modes. These are the five most common ways the model breaks down.

Platform Religion

The core dysfunction. A firm's identity is fused with its technology stack, and that fusion overrides objective evaluation. The Salesforce shop recommends Salesforce. The Microsoft partner recommends M365. The AWS house recommends Lambda functions. The recommendation follows the firm's margin structure, not the client's need.

The diagnostic question: did the engagement team evaluate solutions outside their primary platform? If every recommendation maps to a single vendor's ecosystem, the evaluation was not product-led. It was procurement dressed as consulting.

Résumé-Driven Architecture

The lead architect selects technology based on what they know, what they have used before, or what they want to learn — rather than what the problem requires. Microservices for an internal tool with 50 users. Kubernetes for a team that cannot maintain a VM. Event-driven architecture for a workflow that runs once a day.

This failure mode is distinct from platform religion because it operates at the individual level rather than the firm level. The architect may not be loyal to a particular vendor — they may simply be loyal to a particular pattern. The result is the same: complexity that the problem did not demand and the organization cannot sustain.

OOTB Fundamentalism

The mirror image of platform religion. Some firms — or some consultants within firms — develop a dogmatic commitment to out-of-the-box solutions and refuse to recommend custom code even when the client's requirements genuinely demand it. The result is a solution that technically works but forces users into awkward workarounds because the configuration was stretched beyond its design intent.

Product-led consulting is not anti-custom-code. It is anti-premature-custom-code. When Layers 1 and 2 cannot solve the problem, Layer 3 is the right answer, and a firm that refuses to go there is not being pragmatic — it is being rigid.

Vendor-Captive Evaluation

A subtler version of platform religion. The engagement team evaluates multiple options, but only within a single vendor's ecosystem. The evaluation compares Power Apps versus Power Automate versus a custom SPFx web part — all Microsoft. Or it compares Salesforce Flow versus Apex versus a managed package — all Salesforce. The evaluation looks rigorous, but it never asks whether the problem might be better solved outside the vendor's walls.

This failure mode is reinforced by vendor partnership programs that reward firms for driving platform adoption. A Gold-tier Microsoft partner has financial incentives to keep every solution within the Microsoft ecosystem. Those incentives are not inherently wrong, but they must be disclosed and managed, not allowed to silently constrain the evaluation.

Complexity Inflation

Building for scale that will never exist. A healthcare company with 200 employees does not need a microservices architecture. A nonprofit running a volunteer scheduling system does not need an event-driven pipeline. An internal tool used by 50 people does not need the infrastructure patterns that Netflix uses to serve 200 million subscribers.

Complexity inflation is often driven by a combination of résumé-driven architecture and genuine enthusiasm for elegant engineering. The architecture is technically impressive. It is also unmaintainable by the team that inherits it, over-licensed for the usage it actually receives, and far more expensive than the problem justified.

The diagnostic question is blunt: how many users will this system serve, and does the proposed architecture match that number? If the answer is "50 users" and the architecture involves Kubernetes, something has gone wrong.

Real-World Scenario

The following is a composite drawn from multiple consulting engagements. Details have been altered to protect confidentiality.7

A mid-size healthcare organization needed to digitize its patient intake workflow. Paper forms were creating data entry backlogs, errors were propagating into downstream systems, and compliance reporting was consuming staff hours that should have been spent on patient care. The organization issued an RFP.

Three firms responded.

Firm A, a Salesforce partner, proposed a Health Cloud implementation. The platform would handle intake forms, route approvals, generate compliance reports, and integrate with the existing EHR. Estimated cost: $150,000 for initial implementation, plus $48,000 per year in licensing. Timeline: six months. The proposal was thorough, well-structured, and entirely organized around Salesforce capabilities. At no point did it evaluate whether the organization's existing systems could handle any portion of the requirement.

Firm B, specializing in the Microsoft ecosystem, proposed a Power Apps solution with a SharePoint backend. The lead consultant had deep expertise in Power Automate workflows and recommended an architecture built around custom connectors and premium Power Platform licensing. The proposal was solid technically, but it relied on a workflow tool the organization had never used and would need to staff ongoing support for. Estimated cost: $80,000 for implementation, plus $24,000 annually in premium licensing. Timeline: four months.

Firm C applied the Solution Hierarchy. During discovery, they decomposed the intake workflow into six functional requirements. Three of those — form capture, approval routing, and basic reporting — were already solvable using native features of the M365 tenant the organization already paid for. Two more — cross-system data validation and conditional form logic — were handled by a commercial forms product available in a SaaS marketplace at $6,000 per year. One requirement — a custom integration with the legacy EHR's API — genuinely required custom development. That custom work was scoped at 120 hours.

Total cost: $38,000 for the engagement, $6,000 per year in product licensing, and zero new platform dependencies. Timeline: six weeks. The organization deployed the intake workflow to one clinic, observed staff using it for two weeks, made three adjustments based on user feedback, and rolled it out to the remaining locations over the following month.

The difference was not that Firm C was cheaper. The difference was that Firm C evaluated the full solution landscape before making a recommendation. The other firms started with their platform and worked backward to justify it.

Measuring Success

Product-led engagements produce outcomes that are measurably different from stack-first engagements. The metrics that matter most to buyers fall into four categories.

Time to Value

How quickly does a working solution reach real users? Product-led engagements typically deploy usable components within weeks, not months, because the first components deployed are often native configurations or commercial products that require no custom development cycle. Stack-first engagements that begin with custom architecture tend to delay user-facing deployment until the architecture is complete — which can take months.

Complexity Ratio

What percentage of the final solution is custom code versus native configuration or commercial product? A product-led engagement that results in 80% custom code should raise questions — either the problem was genuinely unique (possible) or the evaluation skipped Layers 1 and 2 (more common). Healthy ratios vary by domain, but most enterprise workflow problems can be solved with 30% or less custom code when the full solution landscape is evaluated.
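Applied to the case study above, the ratio is easy to compute. A minimal sketch, counting Firm C's six functional requirements by solution layer (the ~30% threshold is the guideline stated above, not a fixed industry standard):

```python
def complexity_ratio(custom: int, native: int, commercial: int) -> float:
    """Fraction of solution components delivered as custom code."""
    total = custom + native + commercial
    return custom / total

# Firm C's intake workflow: 3 requirements solved natively, 2 by a
# commercial marketplace product, 1 by custom development.
ratio = complexity_ratio(custom=1, native=3, commercial=2)
print(f"{ratio:.0%} custom")  # 17% custom — well under the ~30% guideline
```

Counting by requirement is a coarse proxy; weighting by hours or cost gives a sharper picture, but the directional signal is the same.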

Ongoing Maintenance Burden

What does the client need to support after the engagement ends? Every line of custom code, every premium license, and every custom integration adds to the ongoing cost of ownership. Product-led engagements minimize this burden by design — native configurations require no maintenance beyond platform updates, and commercial products are maintained by their vendors. Custom code, when used, is scoped narrowly enough that the client's team can realistically support it.

Licensing Efficiency

What is the ratio of licensing cost to user count? A $150,000-per-year CRM platform for 50 users is $3,000 per user per year. A commercial marketplace product that solves the same problem at $6,000 per year for the same 50 users is $120 per user per year. This metric exposes platform religion more reliably than any audit — if the per-user licensing cost is dramatically out of proportion to the problem's complexity, the technology selection was driven by something other than the client's need.
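The arithmetic above can be reduced to a one-line check that procurement teams can run on any proposal. A minimal sketch using the figures from this section:

```python
def per_user_cost(annual_licensing: float, users: int) -> float:
    """Annual licensing cost divided by the number of users it serves."""
    return annual_licensing / users

crm = per_user_cost(150_000, 50)          # $3,000 per user per year
marketplace = per_user_cost(6_000, 50)    # $120 per user per year

print(f"${crm:,.0f} vs ${marketplace:,.0f} per user per year")
print(f"{crm / marketplace:.0f}x difference")  # 25x
```

A 25x gap in per-user cost for the same problem and the same user count is exactly the disproportion this metric is designed to surface.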

What to Look for in Proposals

When evaluating consulting proposals, these signals distinguish product-led firms from stack-first firms:

Product-led proposals describe the problem before describing the technology. They include an explicit evaluation of native platform capabilities. They document why commercial products were or were not sufficient. They scope custom development as a percentage of the total solution, with justification for each custom component. They estimate ongoing maintenance burden in addition to implementation cost.

Stack-first proposals lead with the technology. They describe platform features before articulating the client's problem. They recommend a single vendor's ecosystem without documenting alternatives evaluated. They estimate implementation cost without addressing total cost of ownership.

The most reliable signal is the proposal's structure. If the technology recommendation appears before the problem statement — or if the problem statement reads like a justification for a predetermined technology choice — the engagement is not product-led regardless of what the firm calls its methodology.

A second signal is how the firm handles the "what happens after" question. Product-led firms can articulate the ongoing maintenance burden in specific terms: which components require developer attention, which components are vendor-maintained, and what the annual cost of ownership looks like in year two and year three. Stack-first firms tend to scope implementation cost in detail and treat ongoing cost as an afterthought — or worse, as a separate maintenance contract that generates additional revenue for the firm. That incentive structure is worth understanding. A firm that profits from ongoing maintenance of custom code it recommended has a financial interest in recommending more custom code than the problem requires.8

In federal contexts, FITARA compliance and CPARS ratings increasingly reward lower-complexity, faster-delivery approaches.9 Agencies that have achieved the highest FITARA grades — 13 agencies received an A on the most recent scorecard — tend to be the ones that have embraced commercial off-the-shelf solutions and reduced their dependency on custom-built systems.10 Product-led consulting engagements align naturally with this trajectory because they minimize custom code, reduce platform dependencies, and deliver working systems faster.

Summary and Key Takeaways

Product-led consulting is a delivery model that selects technology by client need, not firm identity. It operates through the Solution Hierarchy — a three-layer decision framework that evaluates native configuration first, commercial marketplace products second, and custom code last. The framework is platform-agnostic and applies across Microsoft, Salesforce, AWS, and any other enterprise ecosystem.

The core dysfunction it addresses is platform religion: consulting firms that fuse their identity with a technology stack and allow that fusion to distort every client recommendation. The opposite failure — refusing custom code when it is genuinely needed — is equally problematic. Product-led consulting occupies the pragmatic middle.

The key takeaways:

  • Technology selection should follow problem definition, never precede it. Every engagement should begin with problem decomposition, not technology evaluation.
  • The Solution Hierarchy — native configuration, then commercial product, then custom code — provides a repeatable decision framework that minimizes complexity and ongoing maintenance burden.
  • Complexity inflation, résumé-driven architecture, and vendor-captive evaluation are the most common ways product-led engagements fail. Each can be detected by asking whether the proposed architecture matches the actual scale and specificity of the problem.
  • The best technology decision is the one that solves the client's problem with the least complexity the organization has to maintain. Everything follows from that premise.

Footnotes

  1. OpenCommons, "CHAOS Report on IT Project Outcomes" — Standish Group data from 1994–2020 showing project success rates ranging from 16% to 31% depending on era and measurement criteria. Small projects consistently achieve ~90% success rates; large projects succeed <10% of the time.

  2. McKinsey & Company, "Delivering large-scale IT projects on time, on budget, and on value" — McKinsey–Oxford study of 5,400+ IT projects finding 45% average cost overrun and 56% less value delivered for projects exceeding $15M. Seventeen percent of large IT projects become "black swans" with overruns exceeding 200%.

  3. BCG, "Most Large-Scale Tech Programs Fail—Here's How to Succeed" — BCG research confirming more than two-thirds of large-scale technology programs miss targets on time, budget, or scope. Over 60% of respondents attributed failure to the absence of an overarching master plan.

  4. GAO, "IT Portfolio Management: OMB and Agencies Are Not Fully Addressing Selected Statutory Requirements" (GAO-25-107041) — GAO finding that OMB is not fully addressing FITARA requirements for IT portfolio management oversight, including portfolio reviews and high-risk investment reviews.

  5. Gartner, "Predicts 30% of Generative AI Projects Will Be Abandoned After Proof of Concept By End of 2025" — Gartner prediction citing poor data quality, inadequate risk controls, escalating costs, and unclear business value as primary drivers of GenAI project abandonment.

  6. McKinsey & Company, "Unlocking the potential of public-sector IT projects" — McKinsey data showing public-sector IT projects experience cost overruns nearly three times higher than private-sector equivalents, with more than 80% overrunning their schedules.

  7. Composite scenario based on representative engagement patterns. Specific organizations, costs, and timelines are illustrative. The scenario reflects common dynamics observed across healthcare, nonprofit, and commercial enterprise engagements.

  8. BCG, "De-Risking Large Programs: Quality Assurance in Tech" — BCG finding that one-third of large technology projects go significantly over budget or are cancelled, with potential loss from major project delays ranging from 100–170% of investment cost.

  9. Federal News Network, "Historic FITARA scorecard shows record 13 agencies earned A's" — Reporting on the 18th FITARA scorecard showing record high grades, with GAO estimating $31.4 billion in cumulative cost savings since the scorecard began.

  10. CIO.com, "Does vendor influence turn into CIO bias?" — Analysis of how vendor influence distorts technology selection at the enterprise level, with practical recommendations for distinguishing problem definition from solution discovery.