Your best data science team just spent six months building a model that predicts customer churn with 90% accuracy. It’s sitting on a server, unused. Why? Because it’s been stuck in a risk-review queue for months, waiting for a committee that doesn’t understand stochastic models to sign off. This isn’t a hypothetical — it’s the daily reality in most large companies.
In AI, the models move at internet speed. Enterprises don’t.
Every few weeks, a new model family drops, open-source toolchains mutate and entire MLOps practices get rewritten. But in most companies, anything touching production AI has to pass through risk reviews, audit trails, change-management boards and model-risk sign-off. The result is a widening velocity gap: The research community accelerates; the enterprise stalls.
This gap isn’t a headline problem like “AI will take your job.” It’s quieter and more expensive: missed productivity, shadow AI sprawl, duplicated spend and compliance drag that turns promising pilots into perpetual proofs-of-concept.
The numbers say the quiet part out loud
Two trends collide. First, the pace of innovation: Industry is now the dominant force, producing the vast majority of notable AI models, according to Stanford’s 2024 AI Index Report. The core inputs for this innovation are compounding at a historic rate, with training compute requirements doubling every few years. That pace all but guarantees rapid model churn and tool fragmentation.
Second, enterprise adoption is accelerating. According to IBM’s Global AI Adoption Index, 42% of enterprise-scale companies have actively deployed AI, with many more actively exploring it. Yet the same surveys show governance roles are only now being formalized, leaving many companies to retrofit control after deployment.
Layer on new regulation. The EU AI Act’s staged obligations are locked in — unacceptable-risk bans are already active and General Purpose AI (GPAI) transparency duties hit in mid-2025, with high-risk rules following. Brussels has made clear there’s no pause coming. If your governance isn’t ready, your roadmap will be.
The real blocker isn’t modeling, it’s audit
In most enterprises, the slowest step isn’t fine-tuning a model; it’s proving, with evidence, that the model complies with your policies and controls.
Three frictions dominate:
Audit debt: Policies were written for static software, not stochastic models. You can ship a microservice with unit tests; you can’t “unit test” fairness drift without data access, lineage and ongoing monitoring. When controls don’t map, reviews balloon.
MRM overload: Model risk management (MRM), a discipline perfected in banking, is spreading beyond finance — often translated literally, not functionally. Explainability and data-governance checks make sense; forcing every retrieval-augmented chatbot through credit-risk style documentation does not.
Shadow AI sprawl: Teams adopt vertical AI inside SaaS tools without central oversight. It feels fast — until the third audit asks who owns the prompts, where embeddings live and how to revoke data. Sprawl is speed’s illusion; integration and governance are the long-term velocity.
Frameworks exist, but they’re not operational by default
The NIST AI Risk Management Framework is a solid north star: govern, map, measure, manage. It’s voluntary, adaptable and aligned with international standards. But it’s a blueprint, not a building. Companies still need concrete control catalogs, evidence templates and tooling that turn principles into repeatable reviews.
Similarly, the EU AI Act sets deadlines and duties. It doesn’t install your model registry, wire your dataset lineage or resolve the age-old question of who signs off when accuracy and bias trade off. That work is on you, and the deadlines are close.
What winning enterprises are doing differently
The leaders I see closing the velocity gap aren’t chasing every model; they’re making the path to production routine. Five moves show up again and again:
Ship a control plane, not a memo: Codify governance as code. Create a small library or service that enforces non-negotiables: Dataset lineage required, evaluation suite attached, risk tier chosen, PII scan passed, human-in-the-loop defined (if required). If a project can’t satisfy the checks, it can’t deploy (see the sketch after this list).
Pre-approve patterns: Approve reference architectures — “GPAI with retrieval augmented generation (RAG) on approved vector store,” “high-risk tabular model with feature store X and bias audit Y,” “vendor LLM via API with no data retention.” Pre-approval shifts review from bespoke debates to pattern conformance. (Your auditors will thank you.)
Stage your governance by risk, not by team: Tie review depth to use-case criticality (safety, finance, regulated outcomes). A marketing copy assistant shouldn’t endure the same gauntlet as a loan adjudicator. Risk-proportionate review is both defensible and fast.
Create an “evidence once, reuse everywhere” backbone: Centralize model cards, eval results, data sheets, prompt templates and vendor attestations. Every subsequent audit should start at 60% done because you’ve already proven the common pieces.
Make audit a product: Give legal, risk and compliance a real roadmap. Instrument dashboards that show: Models in production by risk tier, upcoming re-evals, incidents and data-retention attestations. If audit can self-serve, engineering can ship.
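To make the control-plane idea concrete, here is a minimal sketch of governance as code in Python. The class name, fields, risk tiers and specific checks are illustrative assumptions, not a standard or an existing library; a real control plane would pull this data from your registry and CI system rather than from hard-coded arguments. The point it shows is simple: a deployment request either satisfies the catalog of non-negotiables or it does not ship.

```python
# Minimal "governance as code" sketch. Every field and check here is
# illustrative -- adapt the non-negotiables to your own control catalog.
from dataclasses import dataclass, field


@dataclass
class DeploymentRequest:
    model_name: str
    risk_tier: str                                  # e.g. "low", "medium", "high"
    dataset_lineage: list[str] = field(default_factory=list)
    eval_suite_attached: bool = False
    pii_scan_passed: bool = False
    human_in_loop_defined: bool = False


def governance_violations(req: DeploymentRequest) -> list[str]:
    """Return unmet non-negotiables; an empty list means cleared to deploy."""
    violations = []
    if req.risk_tier not in {"low", "medium", "high"}:
        violations.append("risk tier not chosen from the approved set")
    if not req.dataset_lineage:
        violations.append("dataset lineage missing")
    if not req.eval_suite_attached:
        violations.append("evaluation suite not attached")
    if not req.pii_scan_passed:
        violations.append("PII scan not passed")
    # In this sketch, human oversight is only mandatory for the high-risk tier.
    if req.risk_tier == "high" and not req.human_in_loop_defined:
        violations.append("human-in-the-loop owner not defined for high-risk use case")
    return violations


if __name__ == "__main__":
    # Example request: intentionally missing the human-in-the-loop owner,
    # so the gate blocks it.
    request = DeploymentRequest(
        model_name="churn-predictor",
        risk_tier="high",
        dataset_lineage=["s3://curated/churn/v3"],
        eval_suite_attached=True,
        pii_scan_passed=True,
    )
    problems = governance_violations(request)
    if problems:
        raise SystemExit("Deployment blocked:\n- " + "\n- ".join(problems))
    print("All controls satisfied; deployment may proceed.")
```

Wired into the deployment pipeline, a check like this turns the memo into an enforced gate: reviewers argue about the control catalog once, and every subsequent project conforms by construction.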
A pragmatic cadence for the next 12 months
If you’re serious about catching up, pick a 12-month governance sprint:
Quarter 1: Stand up a minimal AI registry (models, datasets, prompts, evaluations). Draft risk-tiering and control mapping aligned to NIST AI RMF functions; publish two pre-approved patterns.
Quarter 2: Turn controls into pipelines (CI checks for evals, data scans, model cards). Convert two fast-moving teams from shadow AI to platform AI by making the paved road easier than the side road (a minimal CI gate sketch follows this plan).
Quarter 3: Pilot a GxP-style review (a rigorous documentation standard from life sciences) for one high-risk use case; automate evidence capture. Start your EU AI Act gap analysis if you touch Europe; assign owners and deadlines.
Quarter 4: Expand your pattern catalog (RAG, batch inference, streaming prediction). Roll out dashboards for risk/compliance. Bake governance SLAs into your OKRs.
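As a sketch of the Quarter 2 step, the hypothetical CI gate below reads a registry record (here a local JSON file; a real registry would expose an API) and fails the build when required evidence is missing or an evaluation has gone stale. The file path, record keys and 90-day re-evaluation window are assumptions for illustration only.

```python
# Hypothetical CI gate: fail the build if a model's registry record is
# missing evidence or its last evaluation is older than the allowed window.
import json
import sys
from datetime import date, timedelta

REQUIRED_EVIDENCE = ("model_card", "eval_results", "data_sheet", "risk_tier")
MAX_EVAL_AGE_DAYS = 90  # illustrative re-evaluation window


def check_registry_record(path: str) -> list[str]:
    with open(path) as f:
        record = json.load(f)
    failures = [f"missing evidence: {key}" for key in REQUIRED_EVIDENCE if key not in record]
    last_eval = record.get("last_eval_date")  # expected as an ISO date string
    if last_eval:
        age = date.today() - date.fromisoformat(last_eval)
        if age > timedelta(days=MAX_EVAL_AGE_DAYS):
            failures.append(f"evaluation is {age.days} days old; re-eval required")
    return failures


if __name__ == "__main__":
    # Usage: python governance_gate.py registry/churn-predictor.json
    record_path = sys.argv[1] if len(sys.argv) > 1 else "registry/churn-predictor.json"
    failures = check_registry_record(record_path)
    if failures:
        print("Governance gate failed:\n- " + "\n- ".join(failures))
        sys.exit(1)
    print("Governance gate passed.")
```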
By this point, you haven’t slowed down innovation — you’ve standardized it. The research community can keep moving at light speed; you can keep shipping at enterprise speed — without the audit queue becoming your critical path.
The competitive edge isn’t the next model — it’s the next mile
It’s tempting to chase each week’s leaderboard. But the durable advantage is the mile between a paper and production: The platform, the patterns, the proofs. That’s what your competitors can’t copy from GitHub, and it’s the only way to keep velocity without trading compliance for chaos.
In other words: Make governance the grease, not the grit.
Jayachander Reddy Kandakatla is senior machine learning operations (MLOps) engineer at Ford Motor Credit Company.