Claude, context and control: how private markets firms actually scale AI
Josh Berman
Claude is changing how work gets done
Since Anthropic’s February 24 release, we’ve had a growing number of conversations with private equity and corporate finance teams asking a similar question:
If Claude can now do all of this, how does the rest of our tech stack fit in?
It is an understandable reaction. The introduction of “skills”, combined with a rapidly expanding set of integrations, has made Claude feel much closer to a complete working environment.
You can ask for a pre-investment committee memo on a UK software asset, pull in multiple data sources, and get a structured output in seconds. You can analyse a company, build a market map, or summarise a sector without switching tools.
For many teams, it feels like the end state.
But what Claude has fundamentally changed is the interface. It has not replaced the underlying system required to make that work at scale.
What changed with Claude on February 24
The most important shift in the February release is the move from prompting to “skills”.
Previously, getting reliable outputs depended on writing increasingly detailed prompts. That approach works, but it is difficult to standardise across a team and almost impossible to operationalise at scale.
Skills introduce a more structured way of working.
Instead of describing a task each time, you define how it should be done once, and reuse it.
The closest analogy is onboarding a new analyst. You provide guidance, examples, templates and expectations so that work is done consistently across the firm.
Claude’s skills follow the same pattern. They combine:
- instructions that define how a task should be performed
- reference materials, such as previous outputs or style guides
- templates that shape the output
- a layer of code that ensures the result is consistent and usable
This is a meaningful step forward. It allows firms to move from one-off interactions to repeatable workflows.
At the same time, building high-quality skills is still a technical and iterative process. Getting from a good output to a reliable, firm-wide standard takes time.
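As a rough mental model of what a skill bundles together (in Anthropic's actual format, a skill is a folder containing a manifest plus reference files; the Python object below is purely illustrative, and all names in it are hypothetical):

```python
from dataclasses import dataclass, field

# Illustrative sketch only: a skill packages standing guidance so the
# user supplies just the task, not the full instructions each time.
@dataclass
class Skill:
    name: str
    instructions: str                                     # how the task should be performed
    references: list[str] = field(default_factory=list)   # style guides, prior outputs
    template: str = ""                                    # shapes the final output

    def render(self, task: str) -> str:
        # Combine the reusable guidance with the user's one-line request.
        parts = [self.instructions, *self.references, self.template, f"Task: {task}"]
        return "\n\n".join(p for p in parts if p)

memo_skill = Skill(
    name="pre-ic-memo",
    instructions="Write a pre-IC memo: thesis, market, risks, next steps.",
    references=["House style: concise, numbered sections."],
)
prompt = memo_skill.render("UK vertical software asset")
print(prompt)
```

The point of the pattern is the same as onboarding an analyst: the guidance is written once, and every subsequent request inherits it.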
Why the experience feels so powerful
Claude’s strength is how naturally it brings workflows together.
A single prompt can produce a fully formed output that combines multiple sources. For example:
- building a pre-IC memo using internal documents, third-party data and web signals
- mapping a sub-sector and identifying relevant companies
- summarising recent developments across a target pipeline
For an individual user, this is transformative. It removes friction, reduces manual work, and makes complex tasks feel simple.
It also starts to change expectations. If this is possible for one user, it raises the question of how this should work across an entire firm.
That is where the next layer of thinking begins.
How Claude connects to data
Claude’s ability to operate across systems is enabled through the Model Context Protocol (MCP).
In practical terms, this allows the model to query external sources via APIs. It can pull from third-party data providers, access internal documents, and interact with systems such as CRM.
This is what enables multi-source workflows in a single interaction.
However, it is important to understand how those interactions behave underneath.
Each time you ask a question, Claude goes out to those sources, retrieves the data, processes it, and returns an answer. It does this well, but it does it fresh each time.
That means:
- the same company analysis may be rebuilt multiple times
- the same datasets may be queried repeatedly
- the same insights may be generated without being retained
At a small scale, this is barely noticeable. At a firm level, it becomes material.
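The stateless behaviour can be sketched in a few lines. Here `fetch_company` is a hypothetical stand-in for an MCP tool call to a data provider, not a real API:

```python
# Sketch of why stateless retrieval compounds at firm level: every
# question triggers a fresh pull, with nothing retained in between.
call_count = 0

def fetch_company(name: str) -> dict:
    """Hypothetical stand-in for an MCP call to a data provider."""
    global call_count
    call_count += 1                                # a fresh retrieval each time
    return {"name": name, "revenue": "£12m"}       # illustrative payload

# Two analysts ask about the same company in separate chats:
for _ in range(2):
    fetch_company("Acme Software")

print(call_count)  # the same data was pulled twice
```

One user barely notices this; fifty users querying the same providers do.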
Where firms start to think about structure
As teams begin to use Claude more broadly, the questions naturally shift.
- If two people analyse the same company, should that work be done twice?
- If a team has already mapped a sector, should that insight be recreated each time?
- If signals have been identified across a pipeline, where should they live?
These are not limitations of Claude. They are questions about how work is organised at a firm level.
This is where many firms start introducing a context layer alongside tools like Claude.
The role of a context and memory layer
A context layer focuses on what happens beyond the individual interaction.
Its role is to capture, structure and retain the outputs generated by tools like Claude, and make them reusable across the firm.
Instead of insights sitting inside chat conversations, they become part of a shared dataset.
For example:
- a company analysis generated once can be reused and enriched over time
- a market map can be updated, rather than rebuilt
- signals across a CRM pipeline can be tracked continuously
This creates persistence, consistency and shared visibility.
More importantly, it allows knowledge to compound.
Over time, the firm is not just answering questions. It is building a proprietary view of the market.
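A minimal sketch of the upsert-and-enrich pattern a context layer relies on (an in-memory dict stands in for the firm's shared dataset; field names are illustrative):

```python
# Minimal sketch of a context layer: outputs are upserted into a shared
# store and enriched over time, instead of being rebuilt from scratch.
store: dict[str, dict] = {}

def upsert(company: str, fields: dict) -> dict:
    record = store.setdefault(company, {})
    record.update(fields)            # new analysis enriches the existing record
    return record

upsert("Acme Software", {"sector": "vertical SaaS"})
upsert("Acme Software", {"signal": "hiring in sales"})  # second pass adds, not rebuilds
print(store["Acme Software"])
```

The second call does not overwrite or duplicate the first; it extends one shared record, which is what lets knowledge compound.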
What about passive tasks?
Another shift becomes clear as usage matures.
Claude is highly effective for user-driven tasks. But many of the most valuable workflows in private markets are continuous.
For example:
- tracking 200 companies in a CRM and surfacing the five most relevant to prioritise this week
- monitoring hiring, product and funding signals across a target sub-sector
- identifying when a previously out-of-scope company moves into scope
These are not one-off queries. They require ongoing evaluation.
This is where firms start defining tasks that run continuously in the background. Often described as agents, these processes monitor data, update insights and surface actions without requiring constant user input.
For those workflows to work effectively, they need to connect to a structured layer where data can be stored, updated and linked over time.
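The first example above can be sketched as a scheduled scoring pass over a tracked pipeline. The weights and signals here are invented for illustration, not anyone's actual relevance model:

```python
# Hypothetical sketch of a passive task: score 200 tracked companies on
# a schedule and surface the five most relevant to prioritise this week.
pipeline = [
    {"name": f"Company {i}", "hiring": i % 3, "funding": i % 5}
    for i in range(200)
]

def relevance(company: dict) -> int:
    # Illustrative weighting: hiring momentum counts double funding signals.
    return 2 * company["hiring"] + company["funding"]

top_five = sorted(pipeline, key=relevance, reverse=True)[:5]
print([c["name"] for c in top_five])
```

The point is that nobody asks a question here: the evaluation runs continuously against stored, linked data and surfaces the shortlist on its own.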
Managing scale: cost, consistency and control
As usage expands, practical considerations follow.
When multiple users are interacting with multiple data sources, it becomes important to think about efficiency and control.
Without structure:
- similar queries may be run repeatedly across teams
- costs can increase quickly due to repeated API calls
- workflows can become inconsistent
These are not issues with the model itself. They are the result of operating without a shared system.
Introducing a layer that captures outputs, orchestrates workflows and manages usage helps address this. It ensures that work done once can be reused, and that activity across the firm is aligned.
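One simple way such a layer can deduplicate work is to key each request by a hash of the task and its inputs, and serve repeats from a cache. This is a generic sketch of the idea, not how any particular product implements it:

```python
import hashlib

# Sketch of request deduplication: identical (task, sources) pairs
# across the firm resolve to one expensive run plus cached reuse.
cache: dict[str, str] = {}
calls = 0

def run_task(task: str, sources: tuple[str, ...]) -> str:
    global calls
    key = hashlib.sha256(repr((task, sources)).encode()).hexdigest()
    if key in cache:
        return cache[key]            # reuse work already done elsewhere
    calls += 1                       # stand-in for an expensive model + API run
    cache[key] = f"analysis of {task}"
    return cache[key]

run_task("Acme Software", ("crm", "web"))
run_task("Acme Software", ("crm", "web"))  # a second team asks the same question
print(calls)  # only one expensive run
```

The same mechanism that controls cost also enforces consistency: every team asking the same question sees the same answer.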
How Deal Engine fits into this picture
Within this architecture, Claude and Deal Engine play complementary roles.
Claude becomes the interface. It is where users ask questions, generate outputs and interact with data.
Deal Engine sits alongside it as the context and orchestration layer.
It captures the outputs generated by Claude, structures them into a reusable dataset, and connects them to existing systems such as CRM. It also enables continuous workflows, such as sourcing and CRM monitoring, to run in the background.
In practice, this means:
- analysis is not repeated unnecessarily
- insights are shared across the firm
- workflows become consistent and scalable
- knowledge builds over time
PE origination isn't as binary as "yes" or "no"; there's a large third category: "not yet". This is where the importance of passive tasks should not be underestimated.
Build or adopt: different approaches to the same challenge
At this point, firms typically consider how to implement this type of architecture.
One option is to build internally, using platforms such as Snowflake or Databricks alongside custom connectors and workflows. This offers flexibility, but requires ongoing investment in engineering and maintenance.
Another option is to adopt a platform designed for this purpose, with data structures, workflows and orchestration already in place.
Whichever route is taken, the key point remains the same.
The model is only one part of the system.
Bringing it together
Claude represents a significant step forward in how private markets teams interact with data and perform analysis.
It simplifies workflows, improves productivity, and opens up new possibilities for how work can be done.
At the same time, scaling those capabilities across a firm requires a complementary layer that captures, structures and builds on what Claude produces.
Together, these layers form a more complete system.
One that not only helps teams work faster in the moment, but also builds a lasting, compounding view of the market over time, with passive tasks running 24/7 in the background.
That is where the real advantage begins to emerge.
Book a demo to learn how Deal Engine can help you embed a modern data infrastructure built for private equity in 2026 and beyond.
Find out more about how Deal Engine helps dealmakers.