MC: I do believe there is a role for GenAI and machine learning, absolutely, but the timeline for a material impact on private markets is unclear. I don't think it's tomorrow, but maybe within the next five years, and when it comes the impact will likely be quick, with a lot of upside. I don't think anybody can honestly look at what's happening in AI, and at its capabilities, and say it's not going to impact private markets in a material and meaningful way.
CL: BGO was early in deploying machine learning tools to inform our investment decision-making process. That initial success spurred a flurry of investment in talent, tools and technology to further develop our investment capabilities. It also had the unintended benefit of priming us to take advantage of the explosive levels of AI progress over the last few years. We see AI touching virtually every corner of our operations. GenAI allows our engineers to code faster, increasing productivity and reducing the time it takes us to build solutions. It enhances our investment and research teams' ability to analyze vast amounts of unstructured data in a fraction of the time it would normally take.
Paradoxically, even as AI has increased productivity, the advances are coming so quickly that my team spends more time in research mode to ensure we can drive the best outcomes using the latest technology while reducing the risk of technology obsolescence. While I’m biased, I see GenAI having an outsized impact within private markets because so much of our data is unstructured – it lives in PowerPoint, Excel and text documents – precisely where GenAI is strongest. GenAI enables us to mine our existing internal data, which wasn't necessarily searchable, or was at least much more difficult to extract insights from, without a GenAI system.
MA: They have created real opportunity in terms of the problem set we're talking about here. They are, however, just a tool. If you're a large asset owner that may be invested in hundreds or thousands of funds, or even more, people aren't looking to chat with that many documents. What they want is to push that volume of content through a secure, robust, scalable platform that can guarantee a high quality of structured data. Now, inside those platforms, GenAI and LLMs can be incredibly powerful as tools for data science-led solutions. But these tools need a front end; there has to be a user experience wrapped around them.
SD: I think that GenAI and LLMs can help with our providers. They send us all of this information and we check, and, when things don't look right, we circle back. Now the validation and verification checks are automated for us: our system puts discrepancies into an exception report, and that's how we validate this information. But can providers do some of this themselves before they send the data? With these technologies, our providers could figure out that things have changed from the prior report they sent.
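A minimal sketch of the kind of pre-submission check SD describes, assuming a provider's report can be flattened into field-value pairs; the field names and sample values here are illustrative, not a real provider schema.

```python
# Sketch: flag fields that changed versus the prior quarterly report,
# so a provider can review an exception report before sending the data.

def build_exception_report(prior: dict, current: dict) -> list[dict]:
    exceptions = []
    for field in prior.keys() | current.keys():
        before, after = prior.get(field), current.get(field)
        if field not in current:
            exceptions.append({"field": field, "issue": "missing in new report", "prior": before})
        elif field not in prior:
            exceptions.append({"field": field, "issue": "new field", "value": after})
        elif before != after:
            exceptions.append({"field": field, "issue": "changed", "prior": before, "current": after})
    return exceptions

prior_report = {"nav": 102.5, "commitment": 50.0, "currency": "USD"}
new_report = {"nav": 98.1, "commitment": 50.0}  # currency dropped, NAV moved

for exc in build_exception_report(prior_report, new_report):
    print(exc)
```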
FS: I'll add to that. We shouldn't have to iteratively download a portfolio’s quarterly data set until we finally have a complete feed. This repetitive operating rhythm uses up our time and resources. We would prefer the data service provider complete the portfolio before making the feed available to us. Currently, incomplete portfolio data is published before the provider has verified and completed all the look-throughs for the portfolio, so we're subjected to an iterative download process. The process needs to become cleaner and more timely, and these new technologies could be a tool to achieve that.
Z M-S: One use case my clients are leveraging AI for right now goes back to those credit agreements that are 200 to 300 pages long: feeding those unstructured data sets into an AI model to pull out the key data points. These are then fed into a valuation model or into your analysis, recognizing that those agreements may vary each time in layout, structure and format.
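A rough sketch of the extraction pattern Z M-S describes, under loose assumptions: `call_llm` is a hypothetical stand-in for whatever model endpoint a firm actually uses, and the field list is illustrative. In practice the agreement would be chunked and the JSON output validated before anything reaches a valuation model.

```python
import json

# Hypothetical LLM client; substitute your firm's approved model endpoint.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your model API")

# Illustrative fields only; real agreements carry many more.
KEY_FIELDS = ["borrower", "facility_type", "maturity_date",
              "interest_rate", "total_commitment"]

def extract_terms(agreement_text: str) -> dict:
    """Pull key data points out of an unstructured credit agreement."""
    prompt = (
        "Extract the following fields from this credit agreement as JSON, "
        f"using null where a field is absent: {KEY_FIELDS}\n\n"
        f"{agreement_text}"
    )
    return json.loads(call_llm(prompt))

# terms = extract_terms(open("credit_agreement.txt").read())
# The extracted terms then flow into the valuation model or analysis.
```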
SD: And oftentimes it's what's missing from the documents. There are thousands, perhaps hundreds of thousands of pages per investment. So yes, some of it, in fact a lot of it, is boilerplate. But some of it is not. So are there LLMs that could come in and see, not just what stands out, but what is missing?
MA: GenAI can solve a number of things. It can help with data extraction from unstructured content. But can it then normalize that data where, for example, you've got to rename “total capital committed” – which is called 57 different things across my 1,000 funds – into the same nomenclature? Normalization and structuring are additional challenges. Driving insights from that data is an area where GenAI can really play a role.
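A minimal sketch of the normalization step MA raises, assuming a maintained synonym map plus fuzzy matching as a fallback; the alias labels shown are invented examples, not a real fund taxonomy.

```python
from difflib import get_close_matches

# Illustrative synonym map: many fund-specific labels -> one canonical name.
CANONICAL = {
    "total capital committed": "total_commitment",
    "aggregate commitments": "total_commitment",
    "total commitments": "total_commitment",
    "committed capital": "total_commitment",
}

def normalize(label: str) -> str | None:
    key = label.strip().lower()
    if key in CANONICAL:
        return CANONICAL[key]
    # Fuzzy fallback for near-miss spellings across fund documents.
    match = get_close_matches(key, CANONICAL.keys(), n=1, cutoff=0.8)
    return CANONICAL[match[0]] if match else None  # None -> route to a human

print(normalize("Total Capital Committed"))  # total_commitment
print(normalize("Total capital comitted"))   # fuzzy match -> total_commitment
print(normalize("GP carried interest"))      # None, flag for review
```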
SK: LLMs in their current forms are not experts in private credit constructs – they are not “anchored” to specific domain expertise, like the human specialists I talked about earlier, but they have transformative potential and significant opportunity for early adopters. The key is formulating and implementing the technical strategy to create domain-specific AI agents with domain-specific skills, knowledge and “reasoning.”
The first step is building the algorithmic structures that unpack the underlying data constructs and codify the raw text. For example, the two words 'maturity date' together have a very clear meaning in private credit; the two words separately don't mean that at all. Connecting the concept of a maturity date to an underlying facility or terminology (which could be one of many in a single instrument or document) is a formula. Private credit is effectively a series of these semantic formulas, based upon unstructured data elements embedded in text.
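A small sketch of the 'semantic formula' idea SK describes, assuming extraction has already located the term/value pairs in the text; the data model itself is illustrative.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative model: a credit instrument can contain many facilities, and a
# term like "maturity date" only has meaning once it is bound to one of them.

@dataclass
class Facility:
    name: str                        # e.g. "Term Loan B"
    terms: dict = field(default_factory=dict)

@dataclass
class Instrument:
    facilities: list

def bind_term(instrument: Instrument, facility_name: str, term: str, value):
    """The 'formula': connect an extracted term to the facility it belongs to."""
    for fac in instrument.facilities:
        if fac.name == facility_name:
            fac.terms[term] = value
            return
    raise KeyError(f"no facility named {facility_name!r}")

deal = Instrument(facilities=[Facility("Term Loan B"), Facility("Revolver")])
bind_term(deal, "Term Loan B", "maturity_date", date(2031, 6, 30))
bind_term(deal, "Revolver", "maturity_date", date(2029, 6, 30))
print(deal.facilities[0].terms)
```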
Once documents are converted into a series of these data elements, which are then packaged into formulas, the elements and the formulas can be tokenized. This enables multiple layers of value to be created – for example, controlled, permissioned access to specific data elements within vast data sets, or the ability to create new data-driven instruments or algorithmic synthetics of a credit instrument or a component of it – all of which can be monetized to create new value.
MC: One example someone cited to me is using AI in the investment analysis and decision process. So, in your investment committee, when you vote on whether to green-light an investment, you have AI be part of that process. Feed the investment thesis into an AI that has all of the documents on the investment, and ask questions like the ones below (sketched in code after the list):
- "Does this look like a good investment for us?"
- “What risks are we not considering?”
- “What additional questions should we be asking?”
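A hedged sketch of the committee workflow MC describes; `call_llm` is again a hypothetical stand-in for a firm's approved model, and the prompt framing is illustrative rather than a prescribed method.

```python
# Hypothetical model call; replace with your approved LLM endpoint.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your model API")

COMMITTEE_QUESTIONS = [
    "Does this look like a good investment for us?",
    "What risks are we not considering?",
    "What additional questions should we be asking?",
]

def committee_review(thesis: str, deal_documents: list[str]) -> dict:
    """Ask the model each committee question against the thesis and deal docs."""
    context = thesis + "\n\n" + "\n\n".join(deal_documents)
    return {q: call_llm(f"{q}\n\nContext:\n{context}") for q in COMMITTEE_QUESTIONS}

# answers = committee_review(investment_thesis, documents)
# The answers inform, but do not replace, the committee's vote.
```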
SK: An opinion-driven sentiment LLM, right? So, tell me: "Is this the best deal I've got?" We can tell the model the parameters of what “best” means. But your appetite for risk and your ability to make that judgment call come down to a couple of things. One, how your institution judges investments, based on its strategy, risk appetite and so on. And two, how you take into account what's happening at the point in time you're making the decision. If, for example, you've got a basket with one borrower, and that borrower's credit rating has suddenly dropped, you might want to rethink.
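A toy sketch of the point-in-time check SK mentions, under the assumption that ratings sit on an ordered scale; the scale and tolerance are invented for illustration.

```python
# Illustrative ordered rating scale, best to worst.
RATING_SCALE = ["AAA", "AA", "A", "BBB", "BB", "B", "CCC"]

def downgrade_severity(prior: str, current: str) -> int:
    """Notches dropped since the prior review (negative means an upgrade)."""
    return RATING_SCALE.index(current) - RATING_SCALE.index(prior)

def should_rethink(prior: str, current: str, tolerance_notches: int = 1) -> bool:
    """Flag a deal for re-review if the sole borrower dropped past tolerance."""
    return downgrade_severity(prior, current) > tolerance_notches

print(should_rethink("BBB", "BB"))  # False: one notch, within tolerance
print(should_rethink("BBB", "B"))   # True: two notches, rethink the basket
```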
MA: LLMs are new and exciting toys, but how to really leverage and deploy them is the question. And, of course, there are issues around where the data is going. With a publicly available LLM it looks like you can do lots of cool things, but given the nature of private market investing, well, the clue is in the name! And people are absolutely right to have concerns around data security: “Is my data being used to train models that may benefit my competitors?”
SK: You've also got data that has historically lived in silos because of security. The ability to manage data privacy and confidentiality has, up until quite recently, been challenging, because the data lives in a document. Not every piece of data in that document is going to be confidential. But the information could not be detached from the documents – not at speed and scale, and not with an understanding of the data constructs I talked about before – so you created these silos where you just put your arms around big chunks of documents, and big chunks of data, because there was no other way of partitioning between them.
Now you have technological developments where data can be detached from the underlying instruments and actually stored separately. So, if you think about blockchain technologies and tokenized debt instruments, you can take a data element and attach it to a token, and within that token you can have various levels of permissioning. For example, let's say it's my personal data: I can see my date of birth, my full name, my social security number; but perhaps you're only allowed to see my name. That bifurcation of permissioned views has, up until quite recently, not been possible. So now you're starting to emerge from these silos and recognize that, within these instruments, certain components have value to certain people, and you can permission them accordingly.
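A minimal sketch of the field-level permissioning SK describes, using a plain in-memory structure rather than an actual blockchain token; the field names and roles are illustrative.

```python
# Illustrative permissioned "token": each data element carries the set of
# roles allowed to see it, so one instrument can expose different views.

TOKEN = {
    "full_name":       {"value": "Jane Doe",    "allowed": {"owner", "counterparty"}},
    "date_of_birth":   {"value": "1980-01-01",  "allowed": {"owner"}},
    "social_security": {"value": "###-##-####", "allowed": {"owner"}},
}

def view(token: dict, role: str) -> dict:
    """Return only the data elements this role is permissioned to see."""
    return {k: v["value"] for k, v in token.items() if role in v["allowed"]}

print(view(TOKEN, "owner"))         # sees every element
print(view(TOKEN, "counterparty"))  # sees only the name
```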