Most firms have been focused on which AI tools to adopt, which data providers to subscribe to, and how quickly they can stand up a proof of concept. Those are reasonable operational questions. But they obscure a more important strategic one.
In a world where frontier AI can reason over data on demand, where does the actual advantage compound?
For a long time, the answer seemed obvious. Firms that owned better data would make better decisions. Access to proprietary signals, structured company universes, and integrated market intelligence was a genuine differentiator. Data was the moat.
That logic is being tested.
Frontier models can now access data on demand through web browsing, API connections, and a growing ecosystem of direct integrations with data providers. The marginal cost of accessing a new data source is falling toward zero. Assembling a broad market picture no longer requires months of integration work and specialist infrastructure. In many cases, a well-configured AI agent can do it in minutes.
So if data access is becoming a commodity, what isn't?
There are two schools of thought, and they are both worth taking seriously.
The first argues that structured, proprietary data remains the foundation. Not data in the generic sense, but your data: the accumulated institutional memory of every deal you have reviewed, every company you have tracked, every signal your team has acted on or chosen to ignore. That cannot be bought from a provider or scraped from the web. It lives in your CRM, your documents, your correspondence, and the heads of your people. A model with access to that context makes fundamentally different decisions than one working from public sources alone. And crucially, the firms that govern and structure that data well now will be the ones best positioned as AI capabilities continue to advance. The data layer is not a commodity if it is yours.
The second argues that data, even proprietary data, is inert without the intelligence to act on it. The real differentiator is orchestration: the ability to sequence the right actions at the right time, surface the right insight to the right person, and encode the institutional knowledge of how a firm actually operates into the workflows that govern daily work. In practice, that means a dealmaker walking into a meeting with a briefing assembled automatically from CRM history, internal documents, and live market data. It means targeted outreach drafted from the firm's own thesis and prior interaction history, rather than written from scratch. It means a signal in a portfolio company's trading data prompting the right conversation before anyone thought to look. Any firm can accumulate data. Very few can turn it into a repeatable, compounding advantage. That requires something more than storage. It requires a system that knows what to do with what it knows.
These are not opposing views. They are describing different layers of the same problem.
What they share is a conviction that the firms that pull ahead over the next five years will not be the ones that adopted AI the fastest. They will be the ones that built the infrastructure to make AI work in the context of how they invest, how they operate, and what they have already learned.
The tooling question matters less than the architecture question.
Over the coming weeks, we will publish two pieces examining each layer in turn: what the data layer provides, where its limits are, and why the orchestration layer may be where the most durable competitive advantage sits.