The Transformative Role of AI in the Credit Industry
UVA Data Points
March 18, 2025 | 00:42:24

Show Notes

UVA School of Data Science graduates pursue many career paths, including government, health care, technology, retail, and... finance. In this episode, we hear from two UVA data science alumni who put their data science degrees to work every day in their roles at Octus, a financial services company that uses data to provide insights to its clients in banking and legal services.

They discuss the integration of AI into various industries, the challenges of information overload, and the role of human expertise. The conversation is led by Charu Rawat, who earned her Master's in Data Science from UVA in 2019, and also features fellow UVA alum Yihnew Eshetu, director of AI engineering at Octus, and Ben Rogers, vice president of advanced analytics at Permira.


Episode Transcript

Monica Manney 0:03 Welcome to UVA Data Points. I'm your host, Monica Manney. UVA School of Data Science graduates pursue many career paths, including government, healthcare, technology, retail and finance. In this episode, we hear from two UVA data science alumni who put their data science degrees to work every day in their roles at Octus, a financial services company that uses data to provide insights to its clients in banking and legal services. They discuss the integration of AI into various industries, the challenges of information overload and the role of human expertise. I now hand it over to Charu Rawat, who earned her master's in data science from UVA in 2019.

Charu Rawat 0:43 We're so glad to join you for the UVA Data Points podcast. I'm Charu Rawat, director of data science at Octus, and a proud University of Virginia alum. Today we're talking about the transformative role of technology in the credit industry. At Octus, we're leaders in credit intelligence, turning complex financial data into actionable insights that empower professionals to make smarter decisions across the credit lifecycle. Octus is part of the Permira family, a global private equity firm that partners with forward-thinking companies to drive meaningful change. Joining me today are two exceptional guests: fellow UVA grad Yihnew Eshetu, who's the director of AI engineering at Octus, and Ben Rogers, vice president of advanced analytics at Permira. Together, we'll explore how these advancements are shaping the credit landscape and creating exciting opportunities for seasoned professionals and the next generation of data scientists. So let's get started. Let's talk about the credit market. Why do we think data science and AI have become so integral to this space? And what's really driving this transformation?

Ben Rogers 1:52 I would say the AI space is really not specific to the credit market. What we're seeing is huge disruption across all industries, really. And if you think about the scale of this, it's up there with the invention of the internet and the introduction of cloud technology. And, you know, every time there's a fundamental platform shift like we're seeing with AI, it's going to create thousands of new companies and really change the way businesses make profits. And I think we're seeing that in the credit space, but also in pretty much every other industry.

Yihnew Eshetu 2:23 Do you believe that data science and AI can maintain the level of speed at which they're being integrated into the credit market, or will the demand for it not match the expectation that's out there?

Charu Rawat 2:35 I think AI, just in general, suffers from an expectation problem, and I think we're seeing that a lot right now with Gen AI, where it's so hyped up. Undoubtedly, there's so much that can, you know, continue to happen with this, and Gen AI can be utilized in so many different ways, but there is a lot of hype surrounding it, and there needs to be a lot of work done at the fundamental, foundational level to set up teams and, you know, set up workflows for us to be able to iterate quickly and develop really good AI products that can keep up with the advancements that are happening in the field of AI. Ben, I'm curious to know, you've been exposed to so many different companies and industries, is this something that you notice across different companies as well?
Ben Rogers 3:17 Yeah, I think for us, the biggest risk for companies is not a lack of access to technology and AI, but really their corporate attitude towards it, and their ability to think forward, think about what AI can do in the future as well, and make the right investment decisions. Today, companies that ignore the power of AI, or think it's not so relevant to them, won't go out of business immediately, but they're likely to struggle in the market long term. So at Permira, as an investor in many businesses, we're really looking for management teams that are excited about AI, willing to test, learn, you know, and fail as well, but fail quickly. And I think that's the real key thing with AI: the answer is not super obvious. You have to test things with such a new technology. And I think those companies that are not afraid to test and fail, but then quickly pivot onto those things that do work, will be set up for success.

Charu Rawat 4:13 I think that's so true. And from, you know, my experience at Octus, it's been the same. We've had data science at Octus for so many years, for over 10 years now, and, you know, even before Gen AI came in, we were relying on traditional data science techniques and model building, and there was a lot of failure there, you know, but that's what it is, an iterative process. You learn, you test, you build things, you fail, you learn from that, and you build better. But I feel like we did that for so long that by the time Gen AI came in and, you know, we had to build things quickly, there was already a strong foundation laid for us to iterate, move faster and be able to catch up.

Yihnew Eshetu 4:49 Charu, your work in data science focuses on turning raw data into actionable insights. How has data science transformed the way that credit professionals process and act on complex information at scale?

Charu Rawat 5:02 I think one of the core challenges in the credit space, and not just here, but in other industries too, can be information overload. One really has to sift through enormous amounts of data to assess risk or, you know, uncover opportunities, and ultimately make informed decisions. Traditionally, much of this process, especially research, can be extremely manual and time consuming. A significant portion of valuable credit information exists in unstructured formats, and natural language processing allows us to extract, standardize, summarize and structure all of this data and information at scale, significantly reducing manual effort. There are companies that are built just for data extraction. And then, by leveraging machine learning models or other traditional data science techniques, we can surface, you know, early warning signals, credit risks and opportunities in real time. So instead of spending hours or days just analyzing documents or data sets, professionals can now focus on the high-value analysis with AI-powered insights right at their fingertips. And then, ultimately, going beyond traditional data science with generative capabilities: because of large language models, you can now analyze trends and generate structured summaries out of really lengthy and complex legal and financial documents. That makes it so easy for people to access information and do it really quickly, and you can also tailor that information to your persona or business. So all of this can be done in probably a tenth of the time that traditional techniques took, even just three years ago. So this transformation, I think, for professionals in the financial industry, can have them act with greater confidence, move faster and focus more on strategic business decisions, rather than getting bogged down by, you know, research, manually sifting through documents, information retrieval and those sorts of things. So the ability to effectively interpret and apply these AI-driven insights is ultimately what's becoming an essential skill in the industry.
Ben Rogers 7:15 That's a really important point. We see AI and automation in the services businesses that we invest into really as a tool for augmentation of human input and expertise, rather than trying to replace that human. So it's really about using AI to take away the, you know, 30 or 40% of the work that's not really value additive, and freeing up that capacity for the humans who are the experts at what they do to provide value. So we're seeing that in a whole bunch of different companies. Law is an interesting space where, historically, a lot of the lawyers' time would be spent comparing documents and sifting through huge volumes of case files. Now the AI is able to do that for them, and they can focus on the complex case matters, spending more time with their clients. And we're seeing that drive both greater stickiness with customers, because that's really what customers want you to be doing, as well as better work satisfaction for the lawyers themselves in that example.

Charu Rawat 8:15 Yeah, for sure. I mean, at Octus, my team's currently working with legal analysts to develop models that can extract hundreds of data points from credit agreements, term sheets, those kinds of legal documents that run up to 400 or 500 pages. This is something that, if you had to do it manually, wouldn't just take one day but would, you know, spill over into many days, and imagine at a peak time where you're dealing with so many documents at once. That's just a lot for someone to do manually. And now, with a lot of these models that we're building, we have the ability to automate, even if not 100%, maybe just 70%, but do it, you know, pretty much within minutes.

Yihnew Eshetu 8:54 Yeah, you bring up a good point. And I think this is something that we covered in our curriculum at UVA, that about 70% of your time is spent on collecting and curating data. And I feel like with Gen AI that has reduced significantly. It can allow you to focus on insight, and not so much on the extraction and collection of data and how clean your data is; you can quickly get from the raw data to something actionable that can be used immediately in whatever way it needs to be. And I think that's something that makes Gen AI so uniquely different from what we're used to being taught at school or even used to seeing in the industry.
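For readers who want to see what the kind of document-extraction workflow described above might look like in practice, here is a minimal, illustrative sketch of LLM-assisted extraction of data points from a long legal document. The chunk sizes, field names, and the call_llm stub are assumptions for illustration only, not Octus's actual pipeline.

```python
import json
from typing import Dict, List

# Fields we might want to pull out of a credit agreement (illustrative only).
TARGET_FIELDS = ["borrower", "facility_size", "maturity_date", "interest_margin"]

def chunk_document(text: str, max_chars: int = 8000, overlap: int = 500) -> List[str]:
    """Split a long document into overlapping chunks that fit a model's context window."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + max_chars])
        start += max_chars - overlap
    return chunks

def call_llm(prompt: str) -> str:
    """Stand-in for a hosted LLM call; a real pipeline would send the prompt to a model API."""
    return json.dumps({name: None for name in TARGET_FIELDS})

def extract_fields(chunk: str) -> Dict[str, str]:
    """Ask the model to return the target fields as JSON for one chunk of the document."""
    prompt = (
        "Extract the following fields from this credit agreement excerpt. "
        f"Return JSON with keys {TARGET_FIELDS}; use null if a field is absent.\n\n{chunk}"
    )
    return json.loads(call_llm(prompt))

def extract_from_document(text: str) -> Dict[str, str]:
    """Run extraction per chunk and keep the first non-null value found for each field."""
    merged: Dict[str, str] = {}
    for chunk in chunk_document(text):
        for name, value in extract_fields(chunk).items():
            if value and name not in merged:
                merged[name] = value
    return merged

if __name__ == "__main__":
    sample = "THIS CREDIT AGREEMENT is entered into by and between ... " * 500
    print(extract_from_document(sample))  # prints {} with the stubbed model
```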
Charu Rawat 9:34 For sure. I think prior to Gen AI, analyzing complex legal and financial documents required extensive manual effort. Even with traditional data science techniques, it took a long time to build those models, like it could be weeks to months. And now, with Gen AI, you can do a lot of that within days, which is really incredible if you think about it. And with frameworks like retrieval-augmented generation, or the RAG framework, right, we can now extract and synthesize critical insights from thousands of documents within seconds. And, you know, that's absolutely incredible. We've built Credit AI at Octus, which is our Gen AI-powered chatbot that clients can use to quickly access and search Octus' proprietary data. And, you know, instead of spending 10 minutes finding that information, sifting through so many articles that we have, you can now find that information within 10 seconds, and be able to converse with the chatbot in a very easy, natural way. That just makes information so easily accessible to people in a way that we haven't been able to do before. From open-source frameworks to proprietary solutions, I'm kind of wondering, how does your team evaluate and choose, you know, the ones that are best suited for business cases? And are there any criteria and benchmarks that actually guide your decision making there?

Yihnew Eshetu 10:55 Yeah, I would say this is probably the most challenging aspect of determining what tool to use. There are so many large language models out there, there are so many different frameworks that you can use, and we always tend to align our solutions with whatever the business and use case is. So, for example, when it comes to large language models, we look at open-source performance benchmarks that are out there, leaderboards that compare model accuracy, speed, scalability, how reliable it is, but we also look at the cost and the benefit of implementing that particular Gen AI solution. So it's not just the cost of operation, but what's the level of complexity to maintain it, and how quickly can we adapt away from that particular technology, or how quickly can the technology itself adapt as things are ever changing? You don't want to nail yourself to a particular solution that doesn't allow you to be robust. There's no rule book for what solution to use, so we tend to use a hybrid of different criteria that we look at to check off what we need and then decide what's best suited for us to implement.
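The Credit AI chatbot Charu describes follows the general retrieval-augmented generation (RAG) pattern: retrieve the most relevant passages, then ask a language model to answer using only those passages and cite them. Below is a minimal sketch of that pattern using TF-IDF retrieval over a toy corpus and a stubbed model call; the document set, prompt wording, and call_llm stub are assumptions for illustration, not the Octus implementation.

```python
from typing import List

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A toy corpus standing in for proprietary research articles.
DOCS = [
    "Company A's revolving credit facility matures in 2027 with a 350bps margin.",
    "Company B filed for Chapter 11 protection after missing a coupon payment.",
    "Company C amended its term loan covenants in Q3, loosening leverage limits.",
]

def retrieve(query: str, docs: List[str], k: int = 2) -> List[str]:
    """Rank documents by TF-IDF cosine similarity to the query and return the top k."""
    matrix = TfidfVectorizer().fit_transform(docs + [query])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    return [docs[i] for i in scores.argsort()[::-1][:k]]

def build_prompt(query: str, passages: List[str]) -> str:
    """Ground the answer in the retrieved passages and ask the model to cite them."""
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer the question using only the numbered passages below, and cite them.\n"
        f"{context}\n\nQuestion: {query}\nAnswer:"
    )

def call_llm(prompt: str) -> str:
    """Stand-in for a hosted LLM call; a real system would send the prompt to a model API."""
    return "(model answer with citations would appear here)"

if __name__ == "__main__":
    question = "What happened to Company B?"
    print(call_llm(build_prompt(question, retrieve(question, DOCS))))
```

In a production system the TF-IDF step would typically be replaced by dense embeddings and a vector index, but the retrieve-then-ground structure is the same.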
Charu Rawat 12:07 Ben, from your vantage point of looking at other companies within Permira's portfolio, and those that you've worked with closely, what has their approach been to this?

Ben Rogers 12:16 Yeah, I think it's a really good question, and there is no perfect approach, I don't think. Where we've kind of landed is to encourage portfolio companies to focus on a business problem they're trying to solve. I think one mistake, you know, some companies out there are probably making is to think that generative AI has somehow changed the business problems they're facing. But actually, if you're sitting in the boardroom, it's the same strategic questions that a company faces now in the Gen AI world as before. So really think about how you apply that technology to solve that problem, and don't get too distracted with, you know, shiny pilots, proofs of concept, et cetera, that solve less valuable problems. And to that extent, given the dynamic change and the fast pace of evolution in the AI foundation model landscape, I think it's best just to pick one for the time being and see how far you can go solving that problem, and not over-concern yourself with, you know, is this exactly the most accurate model versus the other one? Because, hey, guess what? Two weeks later, there'll be another model out there. The horses in the race are constantly changing. And I think, as a kind of purist data scientist, you can often go down rabbit holes, into the nth degree of accuracy across different models, when actually, you know, if it's 99% versus 98%, does that impact the business problem you're trying to solve? I think that's really the question.

Charu Rawat 13:47 That is so true in so many ways. Exercising some level of restraint can be a real value add. Otherwise, it's so easy to get carried away with the hype and everything that's happening there, and that takes you further and further away from actually solving the problem.

Ben Rogers 14:01 One thing I have seen is the fact that generative AI is so out there in our consumer lives, and it's so easy. Pretty much everyone's been on ChatGPT; they know how it works, they can touch and feel it. I've found that has enabled adoption of AI in the enterprise much, much faster and better. If you go back to the days of traditional AI and machine learning, you know, there was always a huge hurdle in getting adoption and understanding of the tools when you tried to deploy them to your business users. Now, more often than not, there's actually excitement, and people want to embrace using AI because they're more familiar with it in their day-to-day lives as consumers. So I think that has, at least at a lot of companies I've been working with, generated a lot of pull from the business, versus having to push the solution and say, hey, you should use this because it's good. Now people actually want to engage, and they're sharing ideas as well, which is really cool.

Charu Rawat 14:58 Yes, exactly. I think it's been really instrumental for me also at Octus to interact with different business leaders who now already have some baseline understanding of AI, even if it's just Gen AI, because that becomes such a great starting point to, you know, ideate and see what works and what doesn't, as opposed to before, where, you know, I had to be the one who had to completely bring people on board with what AI is and what it can do, and all of that.

Yihnew Eshetu 15:26 Even outside of Octus, in the AI space you can see that there are so many people engaging, writing papers, publishing POCs, that even if you don't have the time or the bandwidth to spend, you know, focusing on a new technique or a new model that's out there, you can leverage work that other people are doing to come to a conclusion on whether it's useful for you and in what kind of use cases you can leverage it. You can use the community, the resources that are out there, the papers that are being written, to enhance your decision making.

Charu Rawat 16:02 Yeah, I'm curious, how much of the time that you spend is divided between actually doing a lot of the model building, the engineering, versus keeping up with research and, you know, everything else that's out there?
Yihnew Eshetu 16:14 I would say I try to allocate somewhere around 20% of my time to looking at what the community, the AI community, is talking about. So that's following forums, web articles that get posted, or even, just on social media, following particular influential people that are talking about these technologies and seeing what they have to say. Every day there's a new model coming out, there are new techniques, new frameworks. It's just so hard, so difficult, to keep up and be well versed in everything. So it's more about stepping back and looking at all the different variations that are out there and what other people have to say about them.

Charu Rawat 16:55 It's also just become a new skill that a data scientist or an AI engineer has to possess now. I mean, one always had to be open to, you know, different techniques, and just generally have an open mind, because we work in a field where there are constantly new things coming up. But I think that's just reached another pace with Gen AI, where there's something new almost every week, if not every day. And so, as a data scientist or an AI engineer, you really have to, almost religiously, make sure that you're trying to keep up with what's happening, read up on what's happening, and not necessarily, you know, take everything and just start working on 10 things at once, but at least have some sense of what's happening out there in the world and see if it's relevant to, you know, some of the work that you're doing.

Ben Rogers 17:41 The way we tackle that across the Permira portfolio is we have this AI community. We bring together all of our AI leaders across all of our 60 or so portfolio companies. And we have this idea of an innovation lab out there: all of these CTOs and heads of AI, all doing different experiments. And by bringing them together, sharing the insights, what's working, what's not working, it really accelerates everyone's journey. And I think having that community is hugely valuable in so many different ways, because it can be quite a lonely place as a leader of AI, when everyone's looking at you and the CEO is asking, what are we doing with AI? You know, why are we not doing this? Why aren't we having that impact? So actually being able to share across a community of like-minded individuals helps. I encourage all of our data scientists and AI leaders to think about what community they have access to or can participate in. And we've seen huge success at Permira in cross-fertilizing and also trying to find different perspectives from different sectors. We're a big investor in technology companies, about 35 technology companies worldwide, and they're all at the cutting edge of AI. Zendesk, for example, is a customer service and support ticketing platform that's renowned for being first to market with AI agents autonomously resolving customer support tickets, and we pair that organization with maybe some other companies who are further behind in their AI journey, and it really helps them accelerate, taking best practice from others and seeding it into other organizations.
Charu Rawat 19:20 I mean, there are always so many challenges that, you know, come up with new technology, and I think that's the same with Gen AI. I'm curious to know, particularly around scalability: from your experience, what are some hurdles that you've faced in integrating Gen AI into workflows, and how have you effectively addressed them?

Yihnew Eshetu 19:39 Yeah, that's a very good question. The biggest thing with scalability is the amount of computational resources that you need in order to scale at a particular magnitude. So, for example, here at Octus, we have thousands of gigabytes of data. Being able to have a system that can handle that load of data, not just data processing but generating content, can be quite difficult, so having a robust, scalable workflow behind the scenes that can scale up and down based on the workload is very important, not only on the computational side but also in the time that it takes to complete given tasks. As we talked about earlier, one of the biggest reasons that AI and data science are so important in the credit market is the ability to provide real-time insight. Regardless of whatever workflow or model is out there, we need to be able to generate content and make predictions within a reasonable amount of time, while handling the scale of data that we're processing within a particular time period.

Ben Rogers 20:56 How do you guys think about differentiation in a world in which, with generative AI, a lot of the use cases are built on open-source models, or models that are freely available to every company? It's not like a traditional AI model that was custom built, trained on proprietary data that just your organization has. It's a much more democratized technology. How do you think about AI as differentiating, you know, Octus versus some of your key competitors that have access to the same models?

Charu Rawat 21:31 I think one of the key components of how we build AI products at Octus has always been that our focus is on augmenting decision making rather than replacing it, and that philosophy plays a big role in how we develop models. So, you know, all the data scientists on my team collaborate very closely with financial and legal experts and analysts. We work with these domain experts to deeply understand the nuances of their workflows, and their expertise inherently shapes how we build these models. So yes, the underlying large language model could be the same, but it is so heavily fine-tuned to adjust for the business use case, to address what our clients' expectations are, to accommodate edge cases. And I think, ultimately, that's the value that the data scientists are adding, rather than just picking up a large language model and using it as is. An example is a project we're working on right now, the extraction of data points from legal documents. Legal documents can be incredibly complex; I'm telling you, we've used LLMs on them, and even LLMs fall short. There's so much fine-tuning, and a layer of training, or multiple layers of training, involved on top of that to get it to, you know, the accuracy that's expected for an industry application. And so I think that is one big differentiator in how we leverage these existing pre-trained models. And, you know, apart from that, credit professionals specifically need to trust the AI-driven insights, and we ensure that our models have clear rationales for their outputs. That could be from using techniques like RAG that link the responses to their source documents, or even traditional machine learning models that are actually able to explain or show the influencing factors or variables in whatever they've classified or predicted. That's always been a very important tenet of how we've built AI products here at Octus. And along with all of that, we've always had human-in-the-loop validation, so AI can assist with pattern recognition, data extraction, summarization, all these things, but the final judgment often rests with the experienced analysts who can apply qualitative reasoning. And our models are constantly refined using expert feedback to improve accuracy, while making sure that we're able to incorporate all the latest developments that are happening with these models. So the result is inherently a very collaborative, intelligent system where AI can enhance speed, scale and efficiency, but ultimately it's the human expertise that ensures nuanced judgment and ensures that AI doesn't just generate insights but is also grounded in real-world expertise, essentially.
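The combination Charu describes, source-linked outputs plus human-in-the-loop validation, can be sketched as a simple routing step: every extracted data point carries the excerpt that supports it and a confidence score, and anything below a threshold goes to an analyst rather than straight into the product. This is a hypothetical illustration; the field names, confidence source, and threshold are assumptions, not Octus's actual workflow.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Extraction:
    """One model-extracted data point, linked back to the passage that supports it."""
    field_name: str
    value: str
    source_excerpt: str   # the text span cited as evidence for the value
    confidence: float     # model- or heuristic-assigned confidence in [0, 1]

@dataclass
class ReviewQueues:
    auto_accepted: List[Extraction] = field(default_factory=list)
    needs_analyst_review: List[Extraction] = field(default_factory=list)

def route_extractions(extractions: List[Extraction], threshold: float = 0.9) -> ReviewQueues:
    """Accept high-confidence, source-linked extractions; send the rest to a human analyst."""
    queues = ReviewQueues()
    for e in extractions:
        target = queues.auto_accepted if e.confidence >= threshold else queues.needs_analyst_review
        target.append(e)
    return queues

if __name__ == "__main__":
    sample = [
        Extraction("maturity_date", "2027-06-30", "...shall mature on June 30, 2027...", 0.97),
        Extraction("interest_margin", "SOFR + 3.50%", "...a margin of 3.50% per annum...", 0.62),
    ]
    queues = route_extractions(sample)
    print(len(queues.auto_accepted), "auto-accepted;",
          len(queues.needs_analyst_review), "sent for analyst review")
```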
Ben Rogers 24:34 Yeah, that makes a lot of sense. And one of the things I'm seeing is that the most successful data scientists in a business setting are the ones that are almost kind of process analysts as well, if that makes sense. They're thinking about the workflow in an organization, and yes, they're building AI models, but they're also redesigning the process and the workflow in light of what's possible with AI. So I'm working with some businesses where we've completely turned the process on its head and reimagined it from a blank sheet of paper using AI. And that requires knowledge of AI, knowledge of the business, and also just creativity and imagination. Those are the kinds of data scientists and AI engineers that are really hard to find, but when we find them, we hold on to them really hard.

Charu Rawat 25:20 Yeah, I completely agree. I've always felt that the best data scientists are those that are not just focusing on the core model and the algorithms, but have the ability to wear different hats. You know, you have to be a little bit of, I'm not going to say a domain expert, but you have to have some inherent understanding and the will to want to understand and learn about the domain, because essentially you're working very closely with these experts to transfer all of that knowledge in. So that's one. And then, you know, you also have to have a little bit of a product, holistic perspective, because, like you said, Ben, sometimes it's redesigning the entire workflow or reimagining it. So you can't just be siloed in, you know, your technical algorithm building or development, but really have to take two steps beyond that and look at, okay, what are we trying to do, what's the problem that we want to solve? Not just the model that we want to build.
Ben Rogers 26:11 Exactly. And I think your day to day as a data scientist is also slightly different now in the generative AI world. When I was learning my trade as an AI engineer, way back when, it was, you know, me and my computer, coding for hours on end. Now I'm sitting next to the business user, observing the process they're doing, and we're co-developing prompts together, because it's all about iteration, and you're trying to capture that human expertise on how to do something in the prompts. So it's way more iterative, much quicker cycles of value release, and it's just a very different way of working, I think, as a data scientist in today's world.

Charu Rawat 26:52 For sure. I think it's so interesting, because when we look at the larger conversation out there in the media about Gen AI and how that's automating a lot of different roles and jobs, et cetera, it is also changing the way a data scientist works today. It's also changing the way AI engineers work. There are new roles, right? And I think the data scientist that I was four or five years ago is very different from the data scientist I am today, and the data scientists that work on my team today, just because of the different tools that we're working with, right? And so Gen AI has fundamentally reshaped the role of data scientists and AI engineers. A lot of that shift has been from just building models to integrating AI into real-world decision making, and the traditional workflow, where data scientists could spend weeks and months training and tuning models, has evolved. Today, it's less about model development from scratch and more about, you know, leveraging these pre-trained models, fine-tuning them for specific tasks and embedding them into AI products.

Ben Rogers 27:59 Yeah, I think that AI products comment is really interesting, because I think increasingly the UI is becoming important, the front end you interact with these tools through. It's just a new way of people working, and I've seen very different success with the same model, the same fine-tuning, but presented to the user in two completely different ways. One of them, people love it and start to use it every day; the other one, people try once, they don't like it and they don't use it again. And people, I think, are quite fickle in the business world: if it doesn't work the first time and they don't like it, they're not going to build the habit of using it. So I think that creative eye and thinking about UI is also a key part of a data science team's role and responsibility in the modern Gen AI era.

Charu Rawat 28:52 It is, for sure. I mean, I think always with AI, with data science, you can process and analyze information in different ways, but ultimately you're not fully solving the problem, you're not taking it to the finish line, until you really figure out the best way to present it to your clients. The reason why I think ChatGPT really blew up is because they had this amazing, simple UI where anybody can now interact with AI in a way that we weren't able to before. What, according to you, would be some key skills or, you know, a good mindset that you see as essential for aspiring professionals to succeed in this evolving landscape?

Yihnew Eshetu 29:33 I would say the ability to think outside of the box, especially if you're a new grad. The traditional way of solving a particular problem is not, or might not be, the most optimal way of going about it now. That doesn't mean disregarding what you've learned, but using that along with the new resources that are out there to iterate and come to a solution that's far superior to what came before. A good example of that, and something we've dealt with here at Octus, is entity matching. The way we used to go about that would be a combination of NLP with some keywords, or using other ML models, right, which requires collecting training data and so on and iterating over it. But now, with the combination of Gen AI, with RAG retrieval and so many other solutions, when you're able to create a hybrid system that takes the best of both worlds, you're able to get to a solution that's a lot better than just a simple implementation of an ML model. And that thought process, being able to see the pros and cons of a traditional way of solving a problem and a new way of solving a problem, and being able to integrate them in the right place within a business use case, is, I think, the most important thing that anyone can obtain or learn throughout their journey, throughout their career, I would say.
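The hybrid entity-matching approach Yihnew describes, a cheap traditional first pass followed by an LLM-style verification step, might look roughly like the sketch below. The reference entities and the llm_verify stub are placeholders for illustration, not the actual Octus system.

```python
from difflib import SequenceMatcher
from typing import List, Optional

# A toy reference list standing in for a master entity database.
REFERENCE_ENTITIES = [
    "Acme Industries Inc.",
    "Acme Industrial Holdings LLC",
    "Apex Manufacturing Corp.",
]

def fuzzy_candidates(mention: str, entities: List[str], k: int = 2) -> List[str]:
    """Cheap traditional first pass: rank reference entities by string similarity."""
    ranked = sorted(
        entities,
        key=lambda e: SequenceMatcher(None, mention.lower(), e.lower()).ratio(),
        reverse=True,
    )
    return ranked[:k]

def llm_verify(mention: str, candidate: str) -> bool:
    """Stand-in for an LLM yes/no judgment on whether two names refer to the same entity.
    A real system would prompt a model with both names plus surrounding document context."""
    return candidate.lower().startswith(mention.split()[0].lower())

def match_entity(mention: str, entities: List[str]) -> Optional[str]:
    """Hybrid matcher: fuzzy shortlist, then verification of each shortlisted candidate."""
    for candidate in fuzzy_candidates(mention, entities):
        if llm_verify(mention, candidate):
            return candidate
    return None

if __name__ == "__main__":
    print(match_entity("ACME Industries", REFERENCE_ENTITIES))
```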
Charu Rawat 31:19 If you look at the mindset of a data scientist, I feel like one of the key things that I've always looked for when hiring data scientists for our team has been their innate ability to be curious and want to know what's happening. You know, because you work with so many different business functions, and there are so many new developments happening, right? There has to be an innate curiosity and will to go out there and see what's happening and ask, how can I change what I'm doing today, or improve upon what I'm doing today? With Gen AI, there's so much happening at such an unprecedented pace that it is so important to be able to iterate very quickly, right? And, you know, also understanding what the limitations of AI are. We talked about this before, but you just don't want to do AI for the sake of AI. That's a horrible approach, or a horrible mindset to be in. But knowing when and where AI should be applied, and where human expertise is irreplaceable, is very, very crucial. And then, in terms of the technical skills, the best AI solutions right now aren't just theoretical; they need to be scalable, efficient and, you know, production ready. So to that end, having these engineering and deployment skills is so crucial, right? Especially with a lot of these pre-trained models being available, and a lot of it being plug and play, the ability to do all of that, that's a new skill.

Yihnew Eshetu 32:49 That's for sure. There's a big difference between a POC and having something in production, because you can have a proof of concept, but how can it be used in a production system by internal or external clients? And going back to what Ben was saying, right, how does the look and feel of it work for users? Are they going to accept it? Will they trust it? Is it something that they can easily integrate into whatever workflow or use case they have, right? All those are very important things that make a particular AI system successful, and just a POC isn't sufficient.
Ben Rogers 33:30 Yeah, very much agree with that. A POC is good for getting buy-in and excitement, but the companies that are having the most success with generative AI in the Permira portfolio are the ones that actually went a little bit slower in the beginning to build the foundational architecture and infrastructure, and now they're able to run loads of independent experiments, and as soon as one works, they can scale it almost immediately. And that's really the key. You don't want to be locked in a phase of, you know, lots of proofs of concept that you're unable to scale. So, yeah, couldn't agree more with you.

Charu Rawat 34:11 Ben, from your experience, or, you know, vantage point of working with different companies, are all of them moving towards using Gen AI heavily? Or is there still some philosophy of, no, let's just stick to the traditional methods, or an understanding of where we stick to the traditional methods versus where we use Gen AI?

Ben Rogers 34:32 Yeah, it's a really good question. I was actually going to make this point, because I think as a young, aspiring data scientist it's important to keep this in mind. Yes, every single company in the portfolio is doing some form of generative AI experimentation. Some have embedded it in their core product; others are doing controlled pilots. But I would say at least 50 to 60% of my work, and where the value is accruing, is within traditional AI and machine learning. You've got to remember, generative AI does not solve every type of business problem. Yes, it's good at generating content, summarizing content, creating creative outputs. It's really not so good at classification problems, anomaly detection problems, prediction of binary events: does a customer churn, yes or no? Are they going to be a good prospect for upselling or cross-selling? Those sorts of problems are still ripe in most businesses, and they can best be solved with traditional AI and machine learning algorithms. And as I say, there's still a lot of value that we see there, and we're still a big investor behind those technologies. So it's doing both, not one or the other, and it's really important that businesses don't get too focused on just generative AI and forget about the low-hanging fruit with the more traditional approaches. And we say traditional, but two years ago most companies were not well equipped to benefit from what we now call traditional AI, and that hasn't really changed.

Charu Rawat 36:20 I'm sure there are a lot of companies whose first AI exposure is just Gen AI.

Ben Rogers 36:25 And particularly in industries that are highly regulated, financial services, health care, things like this, it's those traditional AI use cases that are often the best place to start, because they're much easier to control and have trust over, and then from there you expand into the generative AI approaches.

Yihnew Eshetu 36:44 How important a role do you think having a well-constructed tech infrastructure plays in being able to ship good-quality AI solutions?

Charu Rawat 36:54 Extremely important, from my point of view. You can have a team of data scientists build out these models, but the models are just models unless there's a place for them to live and be utilized by people, and that's where tech plays an incredibly important role.
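As a small illustration of Ben's earlier point that problems like churn prediction are still best served by traditional machine learning, a plain classifier on tabular features remains a strong, explainable baseline. The example below uses synthetic data, not any real customer data set, and is a sketch rather than a recommended production setup.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for tabular customer features and a binary churn label.
X, y = make_classification(n_samples=2000, n_features=12, n_informative=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# A plain logistic regression is often a strong, explainable baseline for binary
# problems like churn or upsell propensity, which generative models are not built for.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
probs = model.predict_proba(X_test)[:, 1]
print(f"Holdout ROC AUC: {roc_auc_score(y_test, probs):.3f}")
```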
Ben Rogers 37:08 I think from a commercial perspective, that's how we look at it. We're looking for where a company has proprietary data that no one else has, that competitors don't have, because then you can apply generative AI models to that data and do something with it that no other company can do. And that's when you really unlock huge enterprise value and can grow the company very, very quickly. And I think in your case, at Octus, you've been the leaders in the credit space for north of a decade now, and no other company really has that amount of knowledge and data collection. Your credit intel reports are best-in-class reports that no other business has. So using that to train models, I think, will give a huge competitive edge. And those are the kinds of scenarios we're looking at and encouraging businesses to be investing in.

Charu Rawat 38:05 As we wrap up, I'd love to hear from both of you: as AI and data science continue to evolve rapidly, what excites you most about the future?

Yihnew Eshetu 38:15 I can go first. For me, I think if you were to ask us three years ago what AI would look like, where we're at right now with Gen AI is far ahead of where I thought we would be. And in the next few years, I believe we're going to have more augmented AI agents that can complete more complicated tasks that are outside the limits we have right now with Gen AI, where it's a series of prompts, prompt chaining, to get to a particular solution. We'll have superior models that can reach a level of reasoning and thought process that could be five-fold what we currently have, and that can be leveraged to solve problems we didn't think were possible. And with the combination of the amount of data that we're collecting, what we can achieve is going to be far superior. Like you said, Charu, expectations will always need to be met, but I envision a future where AI can achieve a lot of different things.

Charu Rawat 39:24 Yes, for sure, now more than ever. I think, to your point, there are two parts to this. One is that the underlying LLMs, the large language models that we currently have, are going to continue to get better. There's going to be, you know, the ability for them to ingest longer documents, and also stronger reasoning skills and a lot more flexibility there. And then, on the other hand, we're going to see a lot more to do with how these LLMs and, you know, different components are plugged in and how different systems are created. We're already seeing that with so much hype around AI agents and all these amazing tools that are being built with agentic workflows, and it'll be interesting to see what else comes out of it.

Yihnew Eshetu 40:09 Yeah, and combining that with advancements in the underlying hardware, I think LLMs will be faster, smarter, cheaper and easier for more individuals and more organizations to leverage in so many different ways.
Ben Rogers 40:26 From my perspective, what excites me most is that I think 2025 really is going to be the year of agentic AI, bringing that to the masses, moving away from a chatbot-style interaction to actually having AI go and do tasks for you. I think that's going to be an absolute game changer. And just on a human level, I'm excited to have AI actually take away some of the mundane, boring, repetitive tasks that we all face in our day-to-day lives, regardless of the role we're in, and actually free up that capacity to focus on more high-value, more rewarding, more personally satisfying tasks. And I think it's going to be wide-reaching across all sectors. I know we focus on finance and credit here, but I think healthcare is going to be the real industry that's transformed. At the Davos event last week, there was a lot of talk about how this year is likely to be the first year where we actually go through clinical trials with drugs discovered purely by AI. And that's just the tip of the iceberg in terms of what that could do for humanity. So, yeah, it's a super exciting time to be a data scientist. I wish I was 18, 19, 20 again, starting out my career in AI, but yeah, it's been a lot of fun, and there's a lot more impact to come, I'm sure.

Charu Rawat 41:51 For sure, definitely, very exciting times that we live in. This was such a great conversation. Thank you, Yihnew and Ben, for joining me and sharing your insights. Hopefully the listeners enjoyed this enough to have us back again. And to everyone listening, don't forget to follow Octus and Permira on LinkedIn and X for more updates. See you next time on UVA Data Points.

Monica Manney 42:13 If you're enjoying UVA Data Points, be sure to give us a rating and review wherever you listen to podcasts. We'll be back soon with another conversation about the world of data science.
