How the Top 12% of Enterprises Manage AI Effectively
Apr 24, 2026
Key Takeaways
▶ 97% of enterprises are exploring agentic AI; only 12% have centralised management
▶ AI sprawl across PE portfolios creates invisible cost, unmeasurable impact, and security exposure
▶ Point solutions adopted independently cannot compound - they restart from zero each time
▶ The 12% with centralised AI are not more sophisticated - they made a structural decision

97% of enterprises exploring agentic AI (OutSystems)
12% have centralised AI management (OutSystems)
78% cannot pass AI governance audit in 90 days (Grant Thornton)
Here is the number that should alarm every operating partner in private equity: 97% of enterprises are now exploring agentic AI, meaning autonomous agents that can take actions, make decisions, and operate across business processes without human intervention at every step. The technology is real. The adoption is happening. And only 12% of those enterprises have any form of centralised management over what they have deployed.
That is from OutSystems, published April 2026. Ninety-seven percent exploring. Twelve percent with visibility. The other 85% are exploring without centralised oversight: agentic AI is running somewhere in their organisations, and nobody at the top knows exactly where, what it does, or what data it touches.
Now multiply that across a PE portfolio of ten, fifteen, twenty companies. Each one exploring independently. Each one buying different tools. Each one running pilots that nobody at the fund level can see, measure, or govern. That is not a plan. That is AI sprawl. And it is the root cause of the proof gap.
The anatomy of AI sprawl in PE portfolios
AI sprawl in a portfolio company does not look dramatic. It looks like progress.
The commercial team buys an AI tool for lead scoring. The finance team adopts a different one for forecasting. The operations team experiments with an agent for supply chain alerts. The HR team uses a chatbot for onboarding. The CEO reports to the board that the company is “investing in AI.” The board reports to the fund that AI adoption is underway.
Nobody asks whether these tools talk to each other. Nobody checks whether the lead scoring model and the forecasting model are using consistent data. Nobody audits what information the supply chain agent can access or what actions it can take autonomously. Nobody measures the aggregate cost or the aggregate return.
94% of enterprises are concerned that AI sprawl is increasing complexity and security risk. They are right to be concerned. But concern without centralised management is just anxiety. (Source: OutSystems, April 2026)
In a PE context, sprawl creates three specific problems that directly impact returns.
Invisible cost accumulation
Each tool has a subscription. Each pilot has consulting costs. Each implementation has integration overhead. Across a portfolio, the aggregate AI spend can reach millions without ever appearing as a single line item. It is distributed across ten IT budgets and twenty team budgets, and nobody is measuring total cost against total impact.

Unmeasurable impact
If you cannot see what AI tools are running, you cannot measure what they produce. The 6% of GPs reporting high AI impact, per Bain, are not using better tools. They know what tools they have, what those tools are doing, and whether the results justify the investment. The other 94% are guessing.

Compounding security exposure
Every AI tool that touches company data is a potential attack surface. Every autonomous agent that can take actions in production systems is a potential governance failure. Across a portfolio of companies, each running multiple ungoverned AI tools, the aggregate risk is significant and entirely invisible to the fund.
Shadow AI: the problem you cannot see
The enterprise technology world has a term for this: shadow AI. It is the successor to shadow IT, the phenomenon where employees adopted cloud tools and SaaS products without IT approval. Shadow AI is the same dynamic, but with higher stakes.
Shadow IT meant someone used Dropbox without asking. Shadow AI means an autonomous agent is making pricing recommendations, drafting customer communications, or flagging supply chain decisions based on training data, reasoning patterns, and access permissions that nobody at the C-suite level has reviewed.
In a PE portfolio company, shadow AI is especially dangerous because of the operating model. Portfolio companies are under pressure to show results. Operating partners set targets for efficiency, margin improvement, and speed. Management teams respond by adopting AI tools that promise quick wins. The intent is good. The governance is absent.
73% of PE firms now run digital due diligence on most deals, according to BCG. They assess the technology landscape of a target before acquiring it. But very few are running the same diligence on the technology landscape of companies they already own. The AI inside the portfolio is terra incognita.
Why point solutions create the proof gap
The proof gap is not caused by insufficient AI spending. It is caused by the wrong deployment model. Point solutions, adopted independently across portfolio companies, create activity without proof because they are structurally incapable of producing the kind of results that move a P&L.
A standalone lead scoring tool might improve conversion rates by 3% in one business unit. That is real. But it is not material to EBITDA at the portfolio level. It is a feature, not a capability.
A standalone forecasting agent might reduce forecast error by 10%. Useful. But disconnected from the commercial intelligence, the supply chain data, and the customer behaviour patterns that would make that forecast actionable.
The problem with point solutions is not that they do not work. Many of them work well. The problem is that they do not compound. Each one operates in isolation. Each one learns from its own narrow data. Each one optimises locally while the global operational picture remains fragmented.
This is why 95% of funds report AI initiatives meeting or exceeding business case while only 6% report high impact. The individual initiatives are succeeding on their own terms. But the aggregate impact is negligible because the initiatives are not connected, not measured at the portfolio level, and not governed under a coherent framework.
The centralised model: what 12% got right
The 12% with centralised AI management are not more sophisticated technologically. They made a structural decision: AI is a fund-level capability, not a portfolio-company-level afterthought.
Apollo (APPS)
Apollo did not tell each portfolio company to go find AI tools. Apollo built a centralised data and AI platform that deploys standardised capabilities across the portfolio: cross-portfolio intelligence, consistent data models, unified governance. The platform sees everything because it was designed to see everything.
The result is not just better technology management. It is fundamentally different economics. When AI is deployed as a platform, the second deployment is cheaper than the first. The third is cheaper than the second. The intelligence from one portfolio company informs the deployment at the next. The capability compounds.
When AI is deployed as point solutions, each deployment starts from scratch. There is no institutional learning. No cross-portfolio intelligence. No compounding. Every portfolio company reinvents the wheel and the fund pays for it every time.
This is the structural difference between the 12% and the 88%. Not technology. Architecture. Not tools. Infrastructure.
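The compounding claim can be made concrete with a toy model. The figures below are illustrative assumptions, not numbers from this article: suppose each platform deployment costs a fixed fraction of the previous one because data models, integrations, and playbooks are reused, while each point-solution deployment pays full cost every time.

```python
# Toy model of deployment economics: platform deployments get cheaper as
# learning is reused; point solutions restart from full cost each time.
# All figures are illustrative assumptions, not data from this article.

def total_cost(first_cost: float, n_deployments: int, reuse_factor: float) -> float:
    """Sum deployment costs where each deployment costs `reuse_factor`
    times the previous one (reuse_factor=1.0 models point solutions)."""
    cost, total = first_cost, 0.0
    for _ in range(n_deployments):
        total += cost
        cost *= reuse_factor
    return total

# Ten portfolio companies, $500k for the first deployment (assumed).
point_solutions = total_cost(500_000, 10, 1.0)  # no learning carries over
platform = total_cost(500_000, 10, 0.8)         # each rollout 20% cheaper

print(f"point solutions: ${point_solutions:,.0f}")
print(f"platform:        ${platform:,.0f}")
```

Under these assumed numbers the platform model spends less than half as much for the same ten deployments, and the gap widens with every additional company. The exact reuse factor is unknowable in advance; the structural point is that point solutions hold it at 1.0 by design.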
The measurement crisis
There is a downstream consequence of sprawl that is even more damaging than cost or security: the inability to measure.
When AI tools are deployed centrally, measurement is built in. You know what was deployed, when it went live, what data it uses, and what outcomes it produces. You can calculate ROI at the tool level, the company level, and the portfolio level.
When AI tools are deployed as scattered point solutions across a portfolio, measurement is nearly impossible. Each tool reports its own metrics in its own format. There is no common framework. There is no way to aggregate “the AI tool in Company A improved lead conversion by 3%” with “the AI tool in Company B reduced forecast error by 10%” into a coherent portfolio-level AI ROI number.
This is why 39% of GPs do not expect material AI financial impact this year, per Bain. It is not that AI is not having an impact. It is that nobody can prove it. The data is scattered across dozens of tools in dozens of companies, none of it connected, none of it auditable at the fund level.
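One way to see what a common measurement framework buys is a minimal sketch: each tool keeps reporting its own native metric, but the fund records a single normalised field, estimated annual EBITDA impact, alongside cost. Tool names and all figures below are invented for illustration.

```python
# Sketch of a fund-level measurement framework: heterogeneous tool
# metrics are kept, but every deployment also carries one normalised
# field (estimated annual EBITDA impact) so results can be aggregated.
# Companies, tools, and figures are hypothetical examples.
from dataclasses import dataclass

@dataclass
class AIDeployment:
    company: str
    tool: str
    native_metric: str         # what the tool itself reports
    est_ebitda_impact: float   # normalised estimate, in dollars/year
    annual_cost: float

deployments = [
    AIDeployment("Company A", "lead scoring", "+3% conversion", 240_000, 60_000),
    AIDeployment("Company B", "forecasting", "-10% forecast error", 150_000, 90_000),
    AIDeployment("Company C", "supply chain agent", "fewer stockouts", 0.0, 45_000),  # unmeasured
]

total_impact = sum(d.est_ebitda_impact for d in deployments)
total_cost = sum(d.annual_cost for d in deployments)
print(f"portfolio AI ROI: {total_impact / total_cost:.2f}x on ${total_cost:,.0f} spend")
```

The point is not the arithmetic, which is trivial once the field exists. It is that without the shared field, "+3% conversion" and "-10% forecast error" cannot be added, and the unmeasured deployment at Company C silently drags the portfolio number down.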
78% of PE firms cannot pass an AI governance audit within 90 days. That number is not just a compliance problem. It is a measurement problem. If you cannot audit your AI, you cannot measure your AI. And if you cannot measure your AI, you cannot prove its impact to LPs, to prospective buyers, or to your own investment committee. (Source: Grant Thornton, April 2026)
The fund-level capability gap
The mid-market is especially exposed. Mega-funds like Apollo, Blackstone, and Cerberus can afford dedicated AI infrastructure teams. They can build centralised platforms. They can hire AI Operating Partners, a role Korn Ferry reports is emerging as a dedicated position at large firms.
Mid-market funds with $1-10 billion in AUM and five to fifteen portfolio companies cannot build a 50-person AI platform team. But they face the same sprawl problem, the same governance exposure, and the same LP pressure to demonstrate AI-driven value creation.
The solution is not to build what Apollo built. The solution is to deploy the same principle: centralised AI capability at the fund level, standardised deployment across the portfolio, unified measurement and governance.
The difference is execution. Mid-market funds need to achieve this through a platform and a partner, not through an internal team of fifty. The architecture has to be right. Centralised visibility. Consistent deployment model. Fund-level measurement. Company-level execution.
From sprawl to proof
The path from AI sprawl to AI proof runs through three decisions.
Audit what exists
Before deploying anything new, map every AI tool running in every portfolio company. What does it do? What data does it access? Who approved it? What does it cost? What impact can be measured? Most funds that run this exercise for the first time are surprised by what they find.

Centralise the capability
AI is a fund-level investment, not a portfolio-company-level expense. The deployment model, the governance framework, and the measurement system should be consistent across the portfolio. This does not mean every company runs the same tools. It means every company's AI is visible, governed, and measured at the fund level.

Deploy against EBITDA, not against technology
Start with the P&L problem: revenue at risk, margin leakage, an operational bottleneck. Deploy a production system against that problem. Measure the result in business terms. Then replicate. The second deployment is faster. The third is faster still.
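The audit step above amounts to building a simple registry. A minimal sketch, with hypothetical entries, of what the fund-level record needs to answer per tool: what it does, what data it touches, who approved it, and what it costs.

```python
# Minimal sketch of the "audit what exists" step: a fund-level registry
# of AI tools across portfolio companies, with a rollup that surfaces
# aggregate spend and tools with no approval on record (shadow AI).
# All entries are hypothetical examples, not data from this article.
tools = [
    {"company": "Company A", "tool": "lead scoring", "data": ["CRM"],
     "approved_by": "CRO", "annual_cost": 60_000},
    {"company": "Company B", "tool": "onboarding chatbot", "data": ["HR records"],
     "approved_by": None, "annual_cost": 18_000},  # no approval on record
]

ungoverned = [t for t in tools if t["approved_by"] is None]
total_spend = sum(t["annual_cost"] for t in tools)

print(f"tools inventoried: {len(tools)}, ungoverned: {len(ungoverned)}")
print(f"aggregate annual AI spend: ${total_spend:,.0f}")
```

A spreadsheet does the same job; the format matters less than the discipline of one registry, maintained at the fund level, covering every company.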
The 97% exploring agentic AI are not wrong to explore. The technology is genuinely capable. The error is exploring without governing, deploying without measuring, and spending without proving.
How many AI tools are running across your portfolio right now? If the answer is “I don’t know,” that is the problem this post is about.
Sources & References
OutSystems: Agentic AI Survey, April 2026 - 97% exploring agentic AI, only 12% have centralised management, 94% concerned about sprawl increasing complexity and security risk
Bain & Company: Global PE Report 2026 - only 6% of GPs report high AI impact, 39% do not expect material AI financial impact in 2026
BCG: PE's Future: AI-First Value Creation 2026 - 73% of firms running digital DD on most deals
FTI Consulting: PE AI Governance Survey, March 2026 - 95% of funds report AI initiatives meeting or exceeding business case
Grant Thornton: AI Governance Audit Readiness, April 2026 - 78% of PE firms cannot pass an AI governance audit in 90 days
Korn Ferry: AI Operating Partner Role Report 2026 - AI Operating Partner emerging as a dedicated position at large firms
Frequently Asked Questions
What is agentic AI sprawl in PE portfolios?
Agentic AI sprawl is the uncontrolled proliferation of autonomous AI agents across portfolio companies. 97% of enterprises are exploring agentic AI, but only 12% have centralised management (OutSystems, April 2026). In a PE portfolio of ten to twenty companies, each one adopts tools independently, buys different solutions, and runs pilots that nobody at the fund level can see, measure, or govern. The result is invisible cost accumulation, unmeasurable impact, and compounding security exposure across the portfolio.

Why is shadow AI dangerous for portfolio companies?
Shadow AI is the successor to shadow IT, but with higher stakes. Where shadow IT meant someone used Dropbox without asking, shadow AI means an autonomous agent is making pricing recommendations, drafting customer communications, or flagging supply chain decisions based on training data, reasoning patterns, and access permissions that nobody at the C-suite level has reviewed. In PE portfolio companies under pressure to show results, management teams adopt AI tools promising quick wins. The intent is good, but governance is absent. 94% of enterprises are concerned that AI sprawl increases complexity and security risk.

What percentage of enterprises have centralised AI management?
Only 12% of enterprises have any form of centralised AI management, according to OutSystems (April 2026). The other 85% exploring agentic AI are running it somewhere in their organisations while nobody at the top knows exactly where, what it does, or what data it touches. The 12% with centralised management made a structural decision: AI is a fund-level capability, not a portfolio-company-level afterthought. The difference is not technology sophistication - it is architecture.

How does AI sprawl create the proof gap?
AI sprawl creates the proof gap because point solutions adopted independently are structurally incapable of producing measurable portfolio-level results. 95% of funds report AI initiatives meeting or exceeding business case (FTI Consulting), while only 6% report high impact (Bain). Individual tools work on their own terms, but they do not compound. Each operates in isolation, learns from narrow data, and optimises locally while the global operational picture remains fragmented. Without centralised measurement, nobody can prove aggregate AI ROI to LPs, buyers, or investment committees.

What did Apollo's APPS platform get right?
Apollo did not tell each portfolio company to find AI tools independently. Apollo built a centralised data and AI platform that deploys standardised capabilities across the portfolio: cross-portfolio intelligence, consistent data models, unified governance. The platform sees everything because it was designed to see everything. The result is fundamentally different economics: the second deployment is cheaper than the first, the third cheaper than the second. Intelligence from one portfolio company informs the next. The capability compounds, rather than restarting from zero each time.

How should PE firms move from sprawl to proof?
Three decisions. First, audit what exists: map every AI tool running in every portfolio company before deploying anything new. Most funds are surprised by what they find. Second, centralise the capability: AI is a fund-level investment with consistent deployment, governance, and measurement across the portfolio. Third, deploy against EBITDA, not against technology: start with the P&L problem, deploy a production system against it, measure in business terms, then replicate. The second deployment is faster. The third is faster still. 78% of PE firms cannot pass an AI governance audit in 90 days (Grant Thornton) - auditing what exists is the essential first step.
This is Part 2 of 6 in the AI Proof Gap series on closing the gap between AI spending and AI proof in private equity.
Stay informed with the newsletter for PE operating partners and the portfolio companies they back.
Get operational insights and trends, AI frameworks, resources and real deployment stories.