
Evidence-Based Enterprise Governance

Why Most Governance Programs Fail Before the Platform Goes Live

Enterprise governance has a credibility problem. Organizations spend months writing governance documentation before a single user touches the platform, then act surprised when the documentation has no measurable effect on how the platform actually gets used.

The failure rate is not subtle. Gartner projects that 80% of data and analytics governance initiatives will fail by 2027, driven primarily by a disconnect between governance programs and tangible business outcomes.1 The Standish Group's research paints an even bleaker picture across IT generally: 66% of technology projects end in partial or total failure, often with governance structures that existed on paper but never translated into operational discipline.2

This guide makes a specific argument: governance that precedes operational reality is governance built on borrowed assumptions. It will always lag behind the organization it claims to govern, because it was never designed around the organization in the first place. It was designed around a vendor's best-practice checklist, a consulting firm's framework, or a compliance requirement that maps to an audit — not to what real users actually do on the platform every day.

The alternative is evidence-based governance. Observe how the platform is used in production. Identify the behaviors that cause operational pain. Write rules that address those specific behaviors. Enforce those rules through the platform itself, not through a document that depends on voluntary compliance.

This guide is written for IT directors, CTOs, and program managers evaluating how to stand up or reset governance for enterprise platforms — particularly within Microsoft 365 and Azure environments, with applicability to any enterprise stack. It covers the failure modes of premature and performative governance, introduces a framework for building governance from observed evidence, and provides concrete implementation guidance for M365 and Azure environments.


The Problem: Governance as a Pre-Deployment Exercise

The default playbook for enterprise governance runs something like this. Leadership approves a new platform investment. A governance committee is formed. That committee spends three to six months producing a governance document — sometimes hundreds of pages — covering naming conventions, permissions hierarchies, content types, data retention policies, and acceptable use standards. The document is reviewed, approved, and filed. The platform launches. Within weeks, the governance document is irrelevant.

This sequence is so common across enterprises and government agencies that it barely registers as a dysfunction. It is simply how governance gets done. The problem is that governance designed in advance of deployment is governance designed around predictions, and predictions about how thousands of users will interact with a complex platform are almost always wrong.

IDC's 2024 CIO Poll Survey found that "developing better IT governance and enterprise architecture" ranked as the fourth-highest priority for CIOs — yet the same survey revealed that lack of alignment between IT and the business remains the third-largest challenge organizations face.3 The disconnect is structural: governance programs are built by IT teams in isolation, based on vendor documentation and industry frameworks, then handed to a user population that had no input into the rules and no mechanism to comply with them even if they wanted to.

In federal environments, the problem compounds. The FITARA Scorecard — the biannual congressional report card on federal IT management — has been tracking agency compliance since 2015. By September 2024, 13 of 24 agencies had earned an A grade, the highest count in the scorecard's history.4 That sounds like progress. But the GAO simultaneously reported that 463 of its 1,881 IT-related recommendations to agencies remained unimplemented as of January 2025, with potential savings in the hundreds of millions.5 Governance grades can go up while governance outcomes stagnate. The scorecard measures the existence of governance structures. It does not measure whether those structures change behavior.

The commercial sector has its own version of this gap. BCG estimated that 70% of digital transformation efforts fall short of meeting targets.6 In every post-mortem, governance appears somewhere on the list of contributing factors — but governance is usually described as something the organization had, not something that failed. The governance document existed. The policy was written. The committee met quarterly. None of that prevented the outcome.

The root cause is not that organizations lack governance. It is that they build governance from the wrong source material. Vendor best practices describe how a platform could be used. Analyst frameworks describe how a platform should be used. Neither describes how a platform will be used by a specific organization's population of users with their specific habits, constraints, and institutional behaviors. Only operational data can answer that question, and operational data does not exist before deployment.

The pattern repeats across industries and platform categories. A healthcare organization deploys an EHR integration layer and governs it based on the vendor's configuration guide — then discovers that clinicians route around the governed workflows because the prescribed data entry sequence does not match their clinical workflow. A financial services firm implements Salesforce with a governance framework modeled after an analyst report — then finds that field teams create duplicate records at twice the rate the governance model anticipated, because the deduplication rules do not account for how sales reps actually enter prospect data in the field. A nonprofit migrates to a cloud collaboration platform and governs it based on a consultant's template — then watches departmental teams replicate their legacy file share structures inside the new platform, rendering the governance framework irrelevant within weeks. The technology changes. The governance failure mode stays the same: rules that describe a platform the organization does not actually inhabit.


What Enterprise Governance Actually Means

Strip the buzzword layer and governance reduces to three operational questions. What behaviors on this platform are acceptable? What behaviors are not? And what happens when someone engages in unacceptable behavior?

Every governance framework in existence — COBIT, ITIL, NIST RMF, ISO 27001 — is an attempt to answer those three questions at varying levels of abstraction. The abstraction is useful for compliance and audit purposes. It is less useful for day-to-day platform management, because the answers to those three questions are specific to each organization, each platform, and each user population.

The confusion starts with the word itself. "Governance" gets used to describe at least four distinct activities, and organizations routinely conflate them.

Policy governance is the creation of written rules about platform use. Naming conventions, data classification standards, retention schedules. This is what most governance committees produce: documents.

Permissions governance is the management of who can do what on the platform. Role-based access control, conditional access policies, group membership. This is what most IT teams implement: technical controls around identity and access.

Compliance governance is the alignment of platform operations with regulatory or audit requirements. HIPAA, FedRAMP, SOX, GDPR. This is what legal and compliance teams care about: demonstrable adherence to external mandates.

Operational governance is the ongoing observation and enforcement of platform health. Usage analytics, storage consumption, performance baselines, incident response. This is what keeps the platform running — and it is the category that most governance programs neglect entirely.

The first three categories get attention because they have clear ownership. Legal owns compliance. IT security owns permissions. A governance committee owns policy. Operational governance falls through the cracks because it requires continuous effort, platform-specific tooling, and a willingness to adjust rules based on what the data shows. It is the least glamorous category and the most consequential one.

When an organization reports that "governance is in place," it almost always means policy governance — documents exist. It may also mean permissions governance — access controls are configured. It rarely means operational governance — someone watching what happens on the platform and enforcing standards in real time.

The distinction matters because policy without operational enforcement is documentation, not governance. A 200-page SharePoint governance plan that specifies naming conventions, metadata requirements, and site collection quotas means nothing if no mechanism exists to detect violations and enforce corrections. The plan might satisfy an auditor. It will not prevent a user from creating a folder structure twelve levels deep, building a list with 30,000 items and no indexed columns, or provisioning a team site that duplicates content already hosted elsewhere.

In M365 and Azure environments specifically, the tooling for operational governance has matured significantly. Microsoft Purview provides data classification, sensitivity labeling, and compliance scoring across the M365 ecosystem.7 Azure Policy enforces organizational standards at the resource level — preventing noncompliant deployments before they happen, not after. SharePoint admin center analytics surface usage patterns, storage trends, and site activity metrics. The tools exist. The gap is not technology. The gap is that organizations implement governance before they have the operational data to inform what those tools should enforce.

The equivalent capabilities exist across other enterprise stacks. AWS Service Control Policies enforce guardrails at the organizational unit level. Google Cloud Organization Policy Service constrains resource configurations across projects. ServiceNow governance modules track compliance against defined controls. The platform matters less than the principle: governance enforcement belongs in the platform, not in a document management system.


The Observe-Codify-Enforce Framework

Figure: The Observe-Codify-Enforce Framework — evidence-based enterprise governance cycle showing three phases: Observe (60-90 days of usage telemetry), Codify (rules from documented operational problems), and Enforce (platform-level automated controls), with a continuous feedback loop. Credit: zenpo.ai, Zenpo Software.

Evidence-based governance follows a three-phase cycle. Each phase depends on the outputs of the previous one, and the cycle repeats as the organization and its platform usage evolve.

Phase 1: Observe. Deploy the platform with telemetry enabled. Before writing a single governance rule, collect 60 to 90 days of usage data. The goal is to establish a baseline of actual behavior: how users create and organize content, which features they adopt, which ones they ignore, where storage consumption concentrates, which administrative actions generate the most support tickets, and which usage patterns correlate with platform instability or data sprawl.

This phase requires discipline, because the instinct to intervene will be strong. Leadership will want guardrails immediately. The observation period is not a governance vacuum — basic permissions controls and compliance requirements (encryption, conditional access, data residency) should be in place from day one. Those are security and compliance controls, not governance rules. The distinction is important: security controls protect the platform from external threats and unauthorized access. Governance rules shape how authorized users behave on the platform. The former should be non-negotiable from launch. The latter should be evidence-based.

Phase 2: Codify. After the observation period, analyze the data to identify the usage patterns that cause operational problems. Not theoretical problems. Actual, measurable problems: storage exhaustion, performance degradation, support ticket volume, data duplication, permission sprawl. Every governance rule written in this phase maps directly to a documented behavior that caused a documented consequence.

The codification step is where evidence-based governance diverges most sharply from traditional approaches. A traditional governance plan might include a rule like "site collections shall not exceed 100 GB of storage." An evidence-based governance rule reads: "Between days 30 and 75, six site collections exceeded 50 GB due to video file uploads that duplicated content already hosted in Stream. Storage policy will enforce a 25 GB soft cap with automated alerting and redirect video hosting to the designated media platform." The rule is specific, grounded in observed behavior, and paired with an enforcement mechanism.

Codification should produce a concise governance register — not a 200-page document. Each entry includes the observed behavior, the operational impact, the rule, and the enforcement method. Ten to fifteen well-targeted rules will address the majority of governance issues in a typical enterprise deployment. Comprehensive governance documents with hundreds of rules are a signal that the organization is governing hypothetical behaviors, not observed ones.

Phase 3: Enforce. Implement every rule through a platform-level control. If a rule cannot be enforced through the platform, it is either the wrong rule or the platform lacks the necessary capability — in which case, the gap should be flagged and tracked, not papered over with a policy that depends on user compliance. Voluntary compliance is not enforcement. It is aspiration.

Enforcement mechanisms include automated policies (Azure Policy, Microsoft Purview sensitivity labels, conditional access rules), workflow-triggered alerts (Power Automate notifications when thresholds are breached), administrative controls (SharePoint site creation restrictions, storage quotas, group expiration policies), and automated remediation (scripts that archive inactive sites, revoke stale permissions, or quarantine noncompliant resources).
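
In practice, the workflow-triggered alert category often needs nothing more than a scheduled script or flow. A minimal sketch of one such alert, assuming an app-only Microsoft Graph token with the Mail.Send permission and a list of threshold breaches produced upstream by whatever monitoring query feeds it; the mailbox, site URL, and breach figures are placeholders, and the same logic could just as easily live in a Power Automate flow:

```python
"""Sketch of a threshold-breach alert sent through Microsoft Graph.
Assumes an app-only token with Mail.Send; mailbox and breach data are placeholders."""
import requests

TOKEN = "<app-only access token>"  # assumption: acquired via the MSAL client-credentials flow
GOVERNANCE_MAILBOX = "governance@contoso.example"  # hypothetical shared mailbox

# Breaches detected by an upstream query, e.g. sites over the 25 GB soft cap in the register.
breaches = [("https://contoso.example/sites/media-archive", 41.7)]  # (site URL, GB used)

body = "\n".join(f"{url}: {gb} GB used (soft cap 25 GB)" for url, gb in breaches)
resp = requests.post(
    f"https://graph.microsoft.com/v1.0/users/{GOVERNANCE_MAILBOX}/sendMail",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "message": {
            "subject": f"Governance alert: {len(breaches)} storage threshold breach(es)",
            "body": {"contentType": "Text", "content": body},
            "toRecipients": [{"emailAddress": {"address": GOVERNANCE_MAILBOX}}],
        }
    },
)
resp.raise_for_status()  # sendMail returns 202 Accepted on success
```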

The cycle then repeats. After enforcement is in place, the observation phase resumes — now measuring whether the rules are working, whether new problematic behaviors have emerged, and whether any rules have become obsolete. Governance drift — rules that made sense in year one but no longer reflect how the platform is used — is one of the most common failure modes in mature environments. The Observe-Codify-Enforce cycle is designed to prevent it by treating governance as an ongoing operational function, not a project with an end date.


Implementation in M365 and Azure Environments

The Observe-Codify-Enforce framework maps directly onto specific tooling in the Microsoft ecosystem. This section covers the practical execution for each phase.

Observation tooling

The Microsoft 365 admin center provides usage reports across Exchange, SharePoint, OneDrive, Teams, and Yammer. These reports surface active user counts, storage consumption trends, file activity by type, and device distribution. For deeper analysis, the Microsoft Graph API exposes granular usage data that can be piped into Power BI for custom dashboards and trend analysis.
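
A minimal sketch of pulling that usage data programmatically, assuming an Entra ID app registration granted the Reports.Read.All application permission (tenant, client, and secret values are placeholders):

```python
"""Sketch: export 90 days of SharePoint site usage from the Microsoft Graph reports API.
Assumes an Entra ID app registration with the Reports.Read.All application permission."""
import msal
import requests

TENANT_ID = "<tenant-id>"        # placeholders for your own app registration
CLIENT_ID = "<client-id>"
CLIENT_SECRET = "<client-secret>"

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

# The reports endpoints return CSV via a redirect, which requests follows automatically.
resp = requests.get(
    "https://graph.microsoft.com/v1.0/reports/getSharePointSiteUsageDetail(period='D90')",
    headers={"Authorization": f"Bearer {token['access_token']}"},
)
resp.raise_for_status()

with open("sharepoint_site_usage_d90.csv", "w", encoding="utf-8") as f:
    f.write(resp.text)
print(f"Saved {max(len(resp.text.splitlines()) - 1, 0)} site records")
```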

Microsoft Purview's Data Security Posture Management dashboard — particularly the AI-focused views introduced alongside Copilot adoption — surfaces data classification gaps, sensitivity label adoption rates, and potential oversharing patterns.7 Even for organizations not yet deploying Copilot, the classification insights are valuable governance inputs: they show where sensitive data lives, how it moves, and whether existing labels are being applied consistently.

Azure Monitor and Log Analytics provide infrastructure-level telemetry for Azure-hosted resources. Resource utilization, cost trends, deployment patterns, and policy compliance status all feed into the observation phase for cloud governance.

SharePoint-specific analytics deserve particular attention. The SharePoint admin center surfaces site-level storage consumption, page views, unique visitors, and file activity. For organizations with large SharePoint footprints, the combination of admin center analytics and Power BI-connected audit logs produces a governance-ready dataset within 60 days of deployment.
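
Once the report is exported, turning it into governance inputs takes a few lines of analysis. A sketch, assuming the CSV produced by the previous example and the column names in the current report schema, which are worth verifying against your own tenant's export:

```python
"""Sketch: derive governance inputs from the exported site usage report.
Column names follow the current report schema; verify against your tenant's export."""
from datetime import datetime, timedelta
import pandas as pd

df = pd.read_csv("sharepoint_site_usage_d90.csv")
df["Storage Used (GB)"] = df["Storage Used (Byte)"] / 1024**3
df["Last Activity Date"] = pd.to_datetime(df["Last Activity Date"], errors="coerce")

# Where does storage concentrate? The ten largest site collections by consumption.
top_storage = df.nlargest(10, "Storage Used (GB)")[["Site URL", "Storage Used (GB)"]]
print(top_storage.to_string(index=False))

# Which sites look abandoned? No recorded activity in the last 60 days.
cutoff = datetime.utcnow() - timedelta(days=60)
inactive = df[df["Last Activity Date"].isna() | (df["Last Activity Date"] < cutoff)]
print(f"{len(inactive)} of {len(df)} sites show no activity in the last 60 days")
```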

Equivalent observation capabilities in other ecosystems: AWS CloudTrail and AWS Config provide audit and compliance data. Google Cloud's operations suite (formerly Stackdriver) covers logging and monitoring. ServiceNow's CMDB and Discovery modules map asset relationships and usage patterns. The principle is platform-agnostic: instrument first, govern second.

Codification in practice

The governance register should live where the team that enforces governance actually works — not in a document library that gets reviewed annually. For M365-centric organizations, a SharePoint list or a Dataverse table provides structured storage with versioning, approvals, and Power Automate integration. Each register entry captures the observed behavior, the data source that identified it, the operational impact, the rule, the enforcement mechanism, and the review date.
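
One way to express a register entry in structured form, as a sketch rather than a mandated schema; the same fields map directly onto SharePoint list columns or Dataverse table attributes:

```python
"""Sketch of a governance register entry; fields mirror the columns described above."""
from dataclasses import dataclass
from datetime import date

@dataclass
class GovernanceRule:
    rule_id: str
    observed_behavior: str      # what the telemetry showed
    data_source: str            # where the evidence came from
    operational_impact: str     # the documented consequence
    rule: str                   # the codified standard
    enforcement_mechanism: str  # the platform control that enforces it
    automated: bool             # True when enforcement needs no human action
    review_date: date           # when the rule is next re-evaluated

# Hypothetical entry based on the storage soft-cap example from the codification phase.
storage_cap = GovernanceRule(
    rule_id="GOV-003",
    observed_behavior="Six site collections exceeded 50 GB (days 30-75) via duplicate video uploads",
    data_source="SharePoint admin center storage reports",
    operational_impact="Storage exhaustion risk; content duplicated from Stream",
    rule="25 GB soft cap per site collection; video hosting redirected to the designated media platform",
    enforcement_mechanism="Storage quota with automated alerting",
    automated=True,
    review_date=date(2026, 3, 31),
)
```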

A practical example: the observation phase reveals that 40% of new Teams channels created in a 90-day window are abandoned within 30 days, with associated SharePoint sites consuming storage and creating content sprawl. The codified rule: Teams channels inactive for 60 days receive an automated ownership confirmation. Channels unconfirmed after 14 days are archived. The enforcement mechanism: a group expiration policy in Entra ID combined with a Power Automate flow that notifies owners and escalates to IT if no action is taken.
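
The expiration-policy half of that enforcement can be configured through Microsoft Graph as well as the Entra admin center. A sketch, assuming an app registration with the Directory.ReadWrite.All application permission; the lifetime value and notification mailbox are illustrative, and the 60-day inactivity confirmation itself would come from the Power Automate flow rather than this policy object:

```python
"""Sketch: create a tenant-wide Microsoft 365 group expiration policy via Microsoft Graph.
Assumes an app-only token with Directory.ReadWrite.All; values are illustrative."""
import requests

TOKEN = "<app-only access token>"  # acquired via MSAL as in the earlier sketch

policy = {
    "groupLifetimeInDays": 180,      # groups expire unless renewed or still active
    "managedGroupTypes": "All",      # apply to every Microsoft 365 group
    "alternateNotificationEmails": "governance@contoso.example",  # hypothetical mailbox for ownerless groups
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/groupLifecyclePolicies",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=policy,
)
resp.raise_for_status()
print("Expiration policy created:", resp.json()["id"])
```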

Enforcement architecture

Azure Policy is the primary enforcement layer for cloud resource governance. Policies can prevent noncompliant resource deployments (deny effect), audit existing resources for compliance gaps (audit effect), or automatically remediate drift (deployIfNotExists effect). For organizations operating across Azure and M365, Azure Policy handles infrastructure governance while Microsoft Purview handles data and content governance.

A practical enforcement example at the Azure level: the observation phase reveals that development teams are provisioning virtual machines in regions that violate the organization's data residency requirements. Rather than issuing a policy memo reminding teams of the approved regions, an Azure Policy with a deny effect blocks VM provisioning outside the allowed region list. The governance rule is self-enforcing. No one needs to read a document. No one needs to remember the policy. The platform prevents the violation before it occurs. This is the difference between governance that works and governance that hopes.
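
The policy rule behind that example is short. A sketch of the structure Azure Policy expects, written here as a Python dict for readability; in practice it would be authored as JSON and assigned through the portal, az policy definition create, or infrastructure-as-code, and the region list is a placeholder for the organization's approved set:

```python
"""Sketch of an Azure Policy rule that denies VM deployments outside approved regions.
The region list is a placeholder; author as JSON and assign via portal, CLI, or IaC."""
import json

allowed_regions = ["eastus2", "usgovvirginia"]  # hypothetical approved data residency regions

policy_rule = {
    "if": {
        "allOf": [
            {"field": "type", "equals": "Microsoft.Compute/virtualMachines"},
            {"field": "location", "notIn": allowed_regions},
        ]
    },
    # Deny rejects the deployment at submission time, so the violation never
    # reaches the environment and there is nothing to remediate afterward.
    "then": {"effect": "deny"},
}

print(json.dumps(policy_rule, indent=2))
```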

Conditional access policies in Entra ID enforce identity and device governance. These are security controls first, but they also serve governance purposes: restricting access to managed devices, enforcing multi-factor authentication for administrative actions, and blocking legacy authentication protocols that bypass modern governance controls.

For SharePoint and Teams governance specifically, the combination of sensitivity labels (applied through Purview), site creation restrictions (managed through the SharePoint admin center), and group lifecycle policies (configured in Entra ID) covers the majority of governance enforcement requirements without custom development. Where gaps exist — platform capabilities that do not yet support a specific enforcement need — Power Automate and Azure Logic Apps provide a workflow-based enforcement layer that bridges the gap until native platform controls catch up.

Team composition for governance operations

Governance operations is a function, not a project. It requires ongoing staffing. The minimum viable governance operations team for a mid-size enterprise M365 deployment consists of a governance lead (owns the register, reviews data, proposes rule changes), a platform administrator (implements enforcement mechanisms, monitors alerts), and a compliance liaison (ensures governance rules align with regulatory requirements and audit cycles). In federal environments, this team works alongside the agency ISSO and the CISO's office to ensure that governance controls complement — rather than conflict with — security requirements under FISMA and NIST 800-53.

Larger organizations may justify a dedicated governance analyst role focused on usage data interpretation and trend identification. The observation phase generates substantial data; someone needs to translate it into actionable governance insights, not just dashboards.


Common Failure Modes

Evidence-based governance is not immune to dysfunction. These are the specific ways it goes wrong, listed in order of how frequently they appear in practice.

Governing before observing. The most common failure mode, and the one this entire guide argues against. An organization deploys a platform, immediately writes governance rules based on vendor documentation or a consultant's template, and then wonders why compliance is low. The rules do not reflect what users actually do. They reflect what someone predicted users would do. The prediction is wrong. The rules are ignored. The governance program is declared unsuccessful, and the organization either abandons governance or doubles down on documentation — neither of which solves the problem.

Treating governance as a document. A governance plan that lives in a PDF or a Word document is not governance. It is a record of intentions. Governance requires enforcement, and enforcement requires tooling. If the question "how does this rule get enforced?" does not have a specific, technical answer for every rule in the register, the register is a wish list.

Confusing permissions with governance. Permissions management — who can access what — is necessary but not sufficient. An organization can have perfectly configured role-based access and still suffer from content sprawl, naming chaos, abandoned resources, and uncontrolled provisioning. Permissions answer the question "who is allowed in?" Governance answers the question "what happens once they're inside?"

Governing everything at once. A governance register with 150 rules is a governance register that enforces zero rules effectively. Evidence-based governance targets the specific behaviors causing the most operational damage. In practice, five to fifteen rules cover the vast majority of governance issues in a typical enterprise deployment. Comprehensiveness is a trap. Specificity is a strategy.

Assigning governance to a committee with no enforcement authority. Governance committees that meet quarterly to review policies but have no ability to implement or enforce changes are overhead, not governance. Effective governance requires a team with both the analytical capability to identify problems and the administrative access to deploy solutions. If the governance function cannot modify a policy, create an automation, or restrict a platform behavior without filing a change request and waiting three weeks for approval, governance is performative.

Never revisiting the rules. Governance drift is inevitable. User behavior changes as the platform matures. Features get added or deprecated. Organizational priorities shift. A governance rule that was critical in month three may be irrelevant in month eighteen, and a behavior that was harmless early on may become problematic at scale. The Observe-Codify-Enforce cycle accounts for this through continuous observation, but only if the organization actually completes the cycle more than once. Annual governance reviews are too infrequent. Quarterly review cycles, informed by ongoing telemetry, are the minimum cadence for governance operations in active enterprise environments.


Real-World Scenario: A Federal Bureau's SharePoint Governance Reset

The late 2000s were a formative period for SharePoint adoption in federal agencies. The platform was new to most organizations, the on-premises farm model required significant infrastructure investment, and the user population was largely unfamiliar with collaborative web-based document management. At the U.S. Department of State — specifically within the bureau now known as Diplomatic Technology (formerly the Bureau of Information Resource Management) — SharePoint was deployed as an enterprise collaboration platform across a global footprint supporting over 100,000 users at 275 posts in 191 countries.8

Leadership recognized the need for governance early. The directive came down: establish SharePoint governance. The intent was sound — prevent uncontrolled sprawl, enforce storage discipline, and avoid the folder-within-folder-within-folder hierarchies that had already plagued shared drives for a decade.

A comprehensive governance document was produced. Naming conventions, content type standards, site provisioning workflows, storage allocation guidelines, metadata requirements. On paper, the SharePoint environment was governed.

In practice, enforcement did not follow documentation. No automated mechanism prevented users from creating deeply nested folder structures. No alerting system flagged lists approaching item count thresholds. No periodic review identified abandoned site collections consuming storage. The governance document described the rules. The platform did not enforce them. End users — many of whom had never used SharePoint before — defaulted to the habits they brought from shared network drives. Monday morning farm instability became a recurring operational pattern, driven in part by usage behaviors that the governance document explicitly prohibited but nothing in the platform actually prevented.

The lesson was not that governance was premature. The lesson was that governance without enforcement is documentation. The intent was correct. The execution skipped the critical step: wiring the rules into the platform so that compliance was automatic rather than voluntary.

What made this scenario instructive — and what makes it generalizable beyond federal SharePoint environments — is that the governance failure was invisible to leadership for months. The governance document existed. Briefings referenced it. Compliance reviews confirmed its presence. The gap between "governance exists" and "governance works" was not surfaced until operational symptoms became impossible to ignore: performance degradation during peak usage windows, storage consumption curves that defied the allocation model, and support ticket volume that grew linearly with adoption instead of flattening as users matured on the platform.

A governance reset in this type of environment follows a predictable arc. Instrument actual usage to identify the five to ten patterns responsible for 80% of operational pain. In the SharePoint context, this typically surfaces the same recurring behaviors: unindexed list views that trigger throttling, nested folder structures that complicate search and navigation, oversized file uploads to document libraries instead of dedicated media services, abandoned site collections consuming storage with no active owner, and custom permission structures that deviate from inherited models and generate administrative overhead at scale.

Write rules specifically targeting those patterns. Enforce each rule through platform-level controls — storage quotas with automated alerting, list view threshold configurations with indexed columns, site lifecycle policies that require ownership confirmation, provisioning approval workflows that prevent uncontrolled site sprawl. The reset takes weeks, not months, because the scope is narrow and the evidence is specific. The organization does not need 200 pages of governance. It needs fifteen enforced rules that address the behaviors actually causing problems.

The broader takeaway applies equally to a healthcare system deploying ServiceNow, a financial services firm rolling out Salesforce, or a mid-market SaaS company scaling its AWS footprint. Governance documentation produced in advance of deployment will always reflect what the vendor manual says should happen. Governance enforcement built from observed usage reflects what actually happens. The gap between those two is where governance programs go to become shelf decoration.9


Measuring Governance Effectiveness

Governance metrics should answer one question: are the enforced rules reducing the operational problems they were designed to prevent? Everything else is vanity reporting.

Policy violation rate over time. The primary metric. Track the number of governance rule violations per period (weekly or monthly) as a trend line. A declining trend means enforcement is working and user behavior is adapting. A flat or increasing trend means the enforcement mechanism is insufficient, the rule is poorly targeted, or the underlying behavior has a structural cause that governance alone cannot address.

Enforcement coverage ratio. The percentage of governance rules in the register that have automated enforcement mechanisms versus rules that depend on manual compliance or periodic review. A mature governance program targets 80% or higher automated enforcement. Rules below this threshold are candidates for either automation investment or removal from the register.
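
The two metrics above reduce to simple calculations once violations are logged and the register carries an automated-enforcement flag. A sketch, assuming hypothetical CSV exports from the alerting layer and the governance register:

```python
"""Sketch of the first two metrics, assuming hypothetical CSV exports:
a violation log (rule_id, timestamp) and a register with an 'automated' column."""
import pandas as pd

# Policy violation rate over time: violations per week, inspected as a trend.
violations = pd.read_csv("violations.csv", parse_dates=["timestamp"])
weekly = violations.set_index("timestamp").resample("W").size()
print(weekly.tail(8))  # a declining tail suggests enforcement is changing behavior

# Enforcement coverage ratio: share of register rules with automated enforcement.
register = pd.read_csv("governance_register.csv")
coverage = register["automated"].mean()
print(f"Enforcement coverage: {coverage:.0%} (target: 80% or higher)")
```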

Mean time to detect a governance breach. How long between a violation occurring and the governance team becoming aware of it. Real-time detection through automated alerting is the target. If governance breaches are discovered during quarterly reviews, the detection latency is too high for the rules to be effective.

Platform stability metrics correlated with governed behaviors. If a governance rule was created because a specific behavior caused performance degradation or service disruption, track the platform stability metric alongside the governance violation metric. The point of governance is not compliance for its own sake — it is operational health. If governed behaviors are declining but platform stability is not improving, either the correlation was wrong or there are additional unaddressed factors.

User adoption metrics that confirm governance is not blocking productive use. Governance that reduces violations by making the platform harder to use is not success — it is a different kind of failure. Track active user counts, feature adoption rates, and support ticket volume alongside governance metrics. If governance enforcement correlates with declining adoption, the rules are too restrictive or the enforcement mechanisms are creating friction that drives users to shadow IT alternatives.

What to look for in vendor proposals that claim governance expertise. When evaluating contractors or consultants for governance work, the differentiator is not whether they have a governance framework. Everyone has a framework. The differentiator is whether their approach starts with observation or starts with documentation. Ask specifically: what is the observation period before rules are written? How do you determine which behaviors to govern? What percentage of your governance rules have automated enforcement? How often is the governance register reviewed and updated? If the answer to the first question is "we implement our governance framework during the first sprint," the approach is pre-deployment governance by another name.10


Summary and Key Takeaways

Enterprise governance fails most often not because organizations lack rules, but because they write rules before they have evidence to inform them. The result is governance documentation that satisfies auditors but changes nothing about how the platform actually gets used.

Evidence-based governance follows the Observe-Codify-Enforce cycle. Deploy the platform with telemetry enabled. Observe real user behavior for 60 to 90 days. Codify rules that address specific, documented operational problems — not hypothetical ones. Enforce every rule through platform-level controls, not documents that depend on voluntary compliance.

The practical markers of effective governance: a concise governance register with fifteen or fewer enforced rules, automated enforcement coverage above 80%, declining policy violation trends over time, and platform stability improvements that correlate with governed behaviors.

Governance is an operational function, not a project. It does not have a completion date. Organizations that treat governance as a deliverable — a document to produce and file — will repeat the cycle of comprehensive documentation and zero behavioral change. Organizations that treat governance as continuous operations — observe, codify, enforce, repeat — build platforms that get healthier over time instead of more chaotic.

The rules should come from the platform, not from a template. The enforcement should live in the platform, not in a binder. And the governance team should be watching the data, not attending quarterly meetings to review a document that was already obsolete when it was written.


Footnotes

  1. Gartner, "Gartner Predicts 80% of D&A Governance Initiatives Will Fail by 2027" (February 2024). Gartner attributes the failure to governance programs that do not enable prioritized business outcomes.

  2. OpenCommons, "CHAOS Report on IT Project Outcomes". The Standish Group's CHAOS research has tracked IT project outcomes since 1994. The 2020 report found that 66% of technology projects end in partial or total failure, with only 13% of large federal IT procurements (>$6M) succeeding.

  3. CIO.com, "2025 is the year to unlock the power of pervasive IT governance". IDC's April 2024 CIO Poll Survey found that IT governance ranked as CIOs' fourth-highest priority, with business-IT alignment remaining the third-largest organizational challenge.

  4. Federal News Network, "Historic FITARA scorecard shows record 13 agencies earned A's". The 18th FITARA Scorecard (September 2024) reported the highest number of A grades in the scorecard's history, with agencies collectively achieving $31.4 billion in cost savings and avoidance since 2005.

  5. FedScoop, "DOGE's IT modernization work could start with GAO report" (January 2025). The GAO reported that 463 of 1,881 IT-related recommendations remained unimplemented, with 32 of 69 priority recommendations also unaddressed.

  6. Faeth Executive Coaching, "IT Project Failure Rates: Facts and Reasons" (March 2022). Cites BCG's 2020 estimate that 70% of digital transformation efforts fall short of targets, alongside McKinsey's finding that 17% of large IT projects threaten the company's existence.

  7. Microsoft, "Learn about Microsoft Purview". Microsoft Purview consolidates data security, governance, and compliance into a unified platform spanning M365, Azure, and multi-cloud environments.

  8. U.S. Department of State, Bureau of Diplomatic Technology. Formerly the Bureau of Information Resource Management (IRM). DT supports over 100,000 customers at 275 posts across 191 countries. The scenario described is informed by direct platform experience during the bureau's early SharePoint adoption period; specific operational details are composited for illustration purposes.

  9. This scenario is a composite informed by direct experience during the bureau's early SharePoint adoption and by patterns observed across multiple federal and enterprise governance engagements. Specific operational timelines and figures are illustrative.

  10. GAO, "Information Technology: Federal Agencies Are Making Progress in Implementing GAO Recommendations" (GAO-24-106693). For fiscal year 2024, 26 agencies planned to spend approximately $95 billion on IT, with $74 billion allocated to operating and maintaining existing systems — underscoring the scale at which governance operations must function.