Alex Bajdechi, VP of Sales, Deal Engine
There's a version of AI adoption that looks impressive from the outside: a pilot, some enthusiastic feedback from two or three power users, a slide in the board deck. Then, six months later, the same two or three people are still using it, and the rest of the firm hasn't changed how they work at all.
Most firms are living this reality right now. According to the AI Pathfinder Private Equity Benchmark Survey from September 2025, nearly half of PE firms (47.8%) are at the piloting stage. Just 11% have reached genuine scale, and only 2.1% describe AI as embedded in their long-term investment strategy.
The gap between piloting and scaling isn't a product problem. It's a data and infrastructure problem. And closing it requires more than enthusiasm.
At Deal Engine, we track the impact our platform drives for the private markets firms using it. This spans four key areas of value: driving net new opportunities, monitoring the CRM for actionable insight, building a market data asset, and putting AI to work in daily workflows.
Here are some of the outcomes we’ve seen from firms using Deal Engine:
One firm with 70 people and $5bn AUM deployed Deal Engine in spring 2025. Within six months, they had added 214 net new companies of interest to their CRM pipeline. One of those companies reached Hot Prospect status within the first three months.
What that number represents matters as much as the number itself. Those 214 companies weren't the output of someone running searches, scrolling through results, and manually checking each one against the firm's criteria. The system continuously scans the market, matches against the investment thesis, and surfaces what's relevant, so the team's time goes on assessing and progressing the most promising companies, not on finding and filtering them in the first place.
The Hot Prospect milestone reflects something similar. Getting a company from initial identification to serious consideration in under 90 days requires confidence in the underlying data and a team that isn't spending its time verifying basic facts. The context was already there. That's what frees people up to actually move on something rather than just track it.
And because the system is continuously running, the firm is building relationships with companies before a process is underway, not finding out about them because someone sent a teaser. That's a different kind of market position.
Most firms have more in their CRM than they realize. The problem is that without something actively watching those companies, the coverage stays static. You're reaching out on a schedule, not because something has actually changed.
The alternative to that is knowing when something meaningful happens. A company you've been tracking for two years appoints a new CEO. A founder you met at a conference speaks at an industry event. A business that was too early last year starts showing headcount growth in exactly the right areas. Without automated monitoring, spotting any of those things means someone has to be searching manually across LinkedIn, news and websites, looking at the right company at the right time, which in practice means most of it gets missed.
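The monitoring logic described above can be sketched in a few lines. This is a hypothetical illustration only - the signal types, field names, and threshold are invented for the example, not Deal Engine's actual schema or implementation:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical signal types worth a proactive outreach (illustrative only).
MEANINGFUL = {"ceo_change", "conference_talk", "headcount_growth"}

@dataclass
class Signal:
    company: str
    kind: str
    observed: date

def actionable(signals, window_days=30, today=date(2025, 9, 1)):
    """Keep only recent signals of the kinds that justify reaching out."""
    cutoff = today - timedelta(days=window_days)
    return [s for s in signals if s.kind in MEANINGFUL and s.observed >= cutoff]

signals = [
    Signal("Acme Ltd", "ceo_change", date(2025, 8, 20)),
    Signal("Beta GmbH", "website_tweak", date(2025, 8, 25)),   # noise: ignored
    Signal("Gamma Inc", "headcount_growth", date(2025, 5, 1)), # stale: ignored
]

print([s.company for s in actionable(signals)])  # → ['Acme Ltd']
```

The point of the sketch is the filter itself: without something running it continuously across every tracked company, the "right company at the right time" problem falls back on whoever happens to be looking.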
For one firm, over 6,300 signals were tracked across the companies monitored in their CRM during a six-month period. Across their Hot Prospects alone, 1,155 events were tracked across 231 companies, with 187 of those companies ultimately dropped from active coverage.
That last number is key. Nearly 200 companies removed from active coverage isn't the system failing, it's the system working. The team was making real decisions about what to pursue and what to deprioritize, based on what was actually happening in those businesses. That's a different kind of discipline to a pipeline that just keeps growing because no one has had time to look at it properly.
Third-party market data is good. Your own data, actively maintained and acted on, is better. The difference is whether the signals are driving decisions or just accumulating.
Before AI workflows can work reliably, the data underneath them has to be in reasonable shape. That's not a given for most firms, and it's not something you can assess by feel.
For one firm, the lifetime company universe sat at around 613,000 companies, with over 2 million data points pushed to their CRM. Headcount data coverage was at 96.3%. Website coverage at 91.1%. Financials at around 50%. Transactions at 22.4%.
Those percentages aren't just technical detail. They determine which AI workflows are viable and which aren't. A pre-meeting brief that draws on CRM notes, headcount trends, and recent market signals is only as reliable as the data it's built on. If financials are at 50% coverage and your workflow depends on them, half your outputs are working with a gap where real information should be. You might not notice until something goes wrong.
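One way to make that concrete is a simple coverage gate: a workflow only runs when every data field it depends on clears a minimum coverage threshold. The figures mirror the example above; the field names and threshold are illustrative assumptions, not Deal Engine's actual schema:

```python
# Coverage by field for a hypothetical firm (fractions of companies covered).
coverage = {
    "headcount": 0.963,
    "website": 0.911,
    "financials": 0.50,
    "transactions": 0.224,
}

def workflow_viable(required_fields, min_coverage=0.90):
    """A workflow that depends on a poorly covered field will silently
    produce outputs with gaps, so gate it on the fields it needs."""
    return all(coverage.get(f, 0.0) >= min_coverage for f in required_fields)

print(workflow_viable(["headcount", "website"]))     # → True
print(workflow_viable(["headcount", "financials"]))  # → False
```

A pre-meeting brief built on headcount and website data passes the gate; one that leans on financials at 50% coverage does not, which is exactly the half-blind workflow the paragraph above describes.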
Firms that understand the shape of their data can build AI applications they actually trust. Firms that don't tend to discover the gaps at the worst possible moment - when a recommendation doesn't make sense, or when a report surfaces information that turns out to be months out of date. The consequence isn't just a bad output. It's a team that stops relying on the system and, as a result, is more likely to miss key opportunities.
Understanding data coverage isn't a technical exercise. It's the foundation that determines whether anything built on top of it is worth acting on.
Before Deal Engine, preparing for a meeting meant checking the CRM, going to one or more data providers, pulling documents from the drive, and assembling notes by hand. For the investment team, that process was consuming hundreds of hours a year, spread across every person, every week - the kind of low-level overhead that never goes away.
In the first month after deploying AI company reports, 32 pre-meeting briefs were generated. The following month, that jumped to 160. By the next it was 245. In one quarter alone, the team generated over 400 briefs. That growth didn't come from a mandate. It came from people seeing what their colleagues were getting and wanting the same thing.
What they were getting was walking into a conversation already prepared. Relationship history, recent company developments, how the business sits against the investment thesis - assembled automatically and waiting, rather than built by hand over the course of an afternoon. The quality of those conversations changes when the preparation isn't a bottleneck. You're not catching up on context in the first ten minutes. You're already past it.
What changed wasn't just the number of briefs generated. It was what the team could do with the time and headspace they got back.
Look across these four areas and the same thing keeps showing up. The firms making real progress with AI aren't necessarily using more sophisticated models or spending more on tooling. They've built better foundations.
214 net new companies surfaced against the thesis, without manual searching. A team that knows when something meaningful changes at a company they've been tracking, and reaches out with a reason rather than a scheduled check-in. A data universe they understand well enough to trust. And an investment team that walks into conversations prepared, with the overhead of manual research behind them rather than ahead of them.
That last point is where the compounding effect becomes most visible. The time that used to go on finding, filtering, checking, and assembling is now going on assessment, relationships, and conviction. Those are the things that actually move opportunities forward. They're also the things that are hardest to scale when the infrastructure underneath them isn't working.
The gap between piloting and scaling isn't about which AI model you're running. It's about whether your data is structured, your workflows are repeatable, and your team can trust what the system surfaces. Closing that gap is an engineering problem, and it's the one most firms are still working through.
Deal Engine tracks adoption and ROI across these four value propositions through Lighthouse, our KPI and analytics framework. If you'd like to see how clients at different stages are using it, or how Deal Engine works in practice, we'd be glad to walk you through it.
If you are exploring how to build the data infrastructure that makes AI useful, our data engine guide sets out the practical steps. And if you want to understand how models like Claude fit into that picture, our recent article goes deeper.