Ahead of the Curve podcast

Agentic AI in Quant Risk: Adoption, Open Source and What Comes Next

Overview

In this episode of Ahead of the Curve, we explore how rapid advances in artificial intelligence are reshaping quant modelling, risk management and analytics development across the industry.

Hosted by Xabier Anduaga, Partner, Post Trade Solutions Quant Services team, the discussion brings together Stuart Smith, Head of Risk Services, and Joey O’Brien, Principal Quant Consultant, to examine how AI adoption in risk modelling has evolved since the early days of large language models – and why the last 6–12 months have marked a turning point for the quant community.

The conversation moves beyond hype to focus on real‑world applications, including how agentic AI is being considered in the context of model development, market data analysis, backtesting, regulatory calculations, and dynamic stress testing. The speakers discuss how custom prompts and domain‑specific agents are becoming critical differentiators, and why open‑source risk libraries are uniquely positioned to unlock the full potential of AI‑driven development in the quant space.

Looking ahead, the episode also explores how AI is changing the day‑to‑day role of quants and risk analysts, reducing time spent on manual data reporting and coding tasks and enabling deeper analysis, research and decision‑making. From the future of risk system APIs to the possibility of fully AI‑driven stress scenario generation, the panel shares practical insight into what risk teams should be preparing for next.

Listen to the podcast

Hello everyone, and welcome to Ahead of the Curve, the LSEG Post Trade Solutions podcast series. Today, we'll be talking about AI. It's been, what, three years since the release of the first ChatGPT, and it's been a roller coaster. And I would say in the last 6 to 12 months we've seen massive momentum in the quant and risk communities in terms of adoption of AI, probably motivated by the fact that the new AI agents are showing significant improvements compared to their predecessors. So today we really want to focus on that. We want to talk about real examples of where the quant community is embracing AI, we want to talk about how we are utilizing AI for our services, and do a little bit of forward thinking.

I am Xavi Anduaga, a partner in the LSEG Post Trade Solutions quant services team, and I have the pleasure of having two colleagues with me today. On my right, Stuart Smith, who runs our risk services. Welcome, Stuart.

Thank you.

And Joey O'Brien, who is a principal consultant on our quant services team. So welcome, Joey. And I would like to start with you, Joey. Your day-to-day job is very similar to what a quant does in an investment bank or at a buy-side firm: you basically deal with the development of models, the testing of models, the calibration of models. What are the things you are seeing these new AI agents being better at, and what are you using them for? And can you tell me a little bit more about the kinds of things you see working pretty well, but also areas where they're still not performing the way you would expect?

Thanks, Xavi.
So like you said, it's roughly three years since we've had the first iteration of these models. At that point they were, to be honest, quite dreadful in the quant space. They could not do anything in practice. Over the past year, I think, we have seen that really improve. Things like being able to translate trades and translate market data had already improved hugely up to a year ago, roughly. And a lot of the cutting-edge research in the quant space with these kinds of models has been in the market data space: things like using AI to generate synthetic missing market data series or hypothetical volatility surfaces. That has been quite cutting-edge research, and there have been a number of interesting publications.

But even then, up until a year ago, it was not really touching quant libraries directly. And that was partly because the frontier models available at that point really just weren't able to do it. They really struggled with implementing quant models or thinking about mathematical formulations. Over the past six months, though, we have seen a huge improvement in what the cutting-edge, frontier agentic coding models can actually do. They really can start to handle development at this moment in time, to the point where we are actively thinking about where we can use them to build extensions within our library, and we think there are huge possibilities over the next few months for improving how these things are done using these tools.

And I think, for a contrast: when ChatGPT first came out, we did a lot of backtesting. Backtesting involves going back, looking at historical dates and understanding why they were exceptional days that caused problems with your risk numbers. What we found was you could plug in dates and say, "Give me a description of what happened on this date." We thought, "This could be kind of cool. You could automate this." And it gave back this amazing answer. And we thought, "Wow."

We put in a different date. It gave us back exactly the same amazing answer. Both were completely spurious. It was just hallucinating and coming back with exactly the kind of thing we wanted to hear. But this was two and a half, three years ago. Now we're using similar tools, and we're finding that they add an awful lot of value. Even on that same task, they've transformed completely: much more accurate, and providing much more context around the answers, so that you can easily verify them as well. So yeah, that change is dramatic, and I think how we configure those tools and bring them into what we do, whether on the development side or the analysis side, is going to be a key differentiator in how people analyze these things going forward.

And how much of that do you think is not just the LLMs getting better, but also the work we need to do to write the right prompts? I think that helps a lot.

Of course. That is one of the secret sauces with these models: developing correct prompts and correct agents to handle things. And what we are seeing is that groups actively doing that really are moving ahead quite fast. If anyone has used Copilot or anything like that, they will have seen plan mode or agent mode: specific agents developed to do specific tasks, very good at planning or very good at implementing. In practice, to really get ahead, you want custom agents for your own library. Having a portfolio risk agent who knows the details of your portfolio, of your library, of how you do things, you can really move ahead with that kind of engineering on the prompt side and make a big advance.

Yeah. And I think it's really interesting when you look at how open source is going to interact with this. Open source is going to be a really important differentiator.
If you've got a closed, pre-built library and you put it to your AI agent, the agent has similar limitations to what you can do with that library in the real world. There's only so much it can understand and so much it can rework and extend. Having all of the code base can be not so helpful if you're a standard developer who's just received a huge code base you have to try to understand. But for an AI agent, this can be amazing, because it gives it detail you can't get otherwise, and the ability to extend, which is completely different. So I think as a client, you're going to want very different things from your risk engine in the next few years than what you maybe thought you wanted five years ago.

Yeah. And on that point, I think frequent listeners to this podcast have heard us talk an awful lot about black-box vendor libraries and the transparency that ORE, our own pricing library, offers, which is really unseen across the industry. It's open source by name, open source by nature. The code base is publicly available. If you're wondering how we interpolate a commodity volatility surface, you can see that exact line of code in the code base. With those black-box models, you can't really do that: you raise a ticket, and they answer your questions. And as Stuart says, if you have that limitation yourself, your army of coding agents will have the exact same problem. They won't really know what's happening, and they won't be able to investigate the code. But if the agent does have direct access to that code base, it can be sped up enormously in improving things. And if you're thinking about using agents to develop new things, with a black-box system the agent can't do that development, right? So looking at an open source library like ORE really does open up the possibilities for these agents to make improvements and modifications for your own specific use case.

And I think we've actually seen our first clients doing exactly that. We met with a client last week who had extended the engine themselves. Well, I say themselves: they had an agent do it for them. They said, "I want this function. I want this feature." They extended it, said it worked, and it delivered the results they expected. Probably not quite at the right coding standard just yet, but it shows the way forward, and a radically different way forward to the traditional model as well.

And that's how we would expect, say, an investment bank or any financial firm that develops its own analytics to operate: having an agent that is capable of using the analytics for analysis, analyzing data, analyzing trades, but also able to code extensions to them.

Yeah. Longer term, that's the goal. It's probably still not at the level where it can fully implement a quant model; there's still a bit of a gap there. But if you had asked me six months ago whether there was any chance of that, I would've said no way for a long time. Now I'm much, much more confident that it's coming sooner than we think, and we have to get ahead of that and accept it is happening. The people best placed to do that, with custom prompts, custom agents and open source code bases, will benefit the most from this new era of agentic coding.

And Stuart, how are we using AI as part of our services, not just internally to help develop new features and things like that, but also in terms of what we are offering clients?

Yeah. So I think we spoke on a previous podcast about the things we've done already.
We've been live for about a year now with our chatbot, which does some simple, straightforward tasks, but really useful ones. For instance: I made this trade representation for your engine; it doesn't quite come out like I thought it would; can you tell me what might be wrong? And it's pretty good at saying, "I think you got this wrong. This isn't market standard. Try changing this." So at a basic level we're already doing that. We're already putting those tools in the hands of clients to help them have a smoother journey when they're using the risk services.

Looking forward, we want to take that four or five steps further. How much more can we do in the daily blocking and tackling of risk management? Take that away from users, away from analysts who have to log into the system, and give it back to them: we've already analyzed it, we've already found what we think the problems are, here are some proposed solutions we think are going to work. Get back in so you can go and do the really interesting stuff you want to do around risk analysis, around deep dives, around understanding the numbers. That's something we're already working on heavily in R&D, looking back through the history of past cases to understand how we can solve them more simply in the future. And frankly, we think there's a really good chance we can solve an awful lot of those problems.

I think the first time I sat down with a risk user, we had just implemented a market risk system, and I sat down with him to train him on the system and how he could use it. He described his job, and his job was to come in, look at yesterday's numbers, look at today's numbers, find all the really obvious errors that had flowed through from the various systems, and fix them all up so that he could get a clean set of risk reports out the next day. And that is the reality for a whole bunch of junior analysts who work in this industry. That is predominantly what their day job has been for a long time.

That's a job that, I think you can see, is not going to be there anymore. That's a set of things that could be pretty easily automated: go through, understand those things, and then, if you're able to chat between different desks, push back on the front office without having to do it manually, understand which trades are real. "Oh, that one's spurious. This is a 10-times error. That one got closed out yesterday but didn't make it through the cutoff period." If you can resolve those things, then you're coming in in the morning and you're already clean, you're already doing the actual job of risk management, not just this kind of cleansing that could take up half the day beforehand.

Yeah, that's really interesting. And I think, just for our listeners, we are recording this in March 2026. We know this evolves pretty quickly.

Yeah.

And we will be seeing a lot more in this space very soon.
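The day-over-day cleansing job described above is very mechanical, which is why it is a natural automation target. As a minimal sketch, assuming invented column names and thresholds (nothing here reflects an actual LSEG tool), the core screen might look like this:

```python
# Hypothetical day-over-day risk exception screen: flag trades whose numbers
# moved implausibly versus yesterday, including likely "10x" scaling errors.
# Column names ("trade_id", "var") and thresholds are illustrative only.
import pandas as pd

def screen_exceptions(yesterday: pd.DataFrame, today: pd.DataFrame,
                      move_tol: float = 0.5) -> pd.DataFrame:
    """Join the two daily risk reports on trade_id and flag suspect trades."""
    merged = yesterday.merge(today, on="trade_id", suffixes=("_prev", "_curr"))
    ratio = merged["var_curr"] / merged["var_prev"]
    merged["flag"] = "ok"
    # Large unexplained day-over-day move
    merged.loc[(ratio - 1).abs() > move_tol, "flag"] = "large_move"
    # Classic factor-of-ten error (e.g. notional keyed in the wrong units)
    merged.loc[ratio.round(2).isin([10.0, 0.1]), "flag"] = "10x_error"
    return merged[merged["flag"] != "ok"]

yesterday = pd.DataFrame({"trade_id": [1, 2, 3], "var": [100.0, 200.0, 50.0]})
today     = pd.DataFrame({"trade_id": [1, 2, 3], "var": [105.0, 2000.0, 90.0]})
print(screen_exceptions(yesterday, today))
```

An agent's added value sits on top of a screen like this: explaining each flag, chasing the owning desk, and proposing the correction, rather than just listing the exceptions.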
And Joey, I know there are a couple of examples we could deep dive into, such as standardized regulatory calculations, or even things like dynamic stress testing. How do you see AI fitting into those frameworks?

Of course, there is a number of low-hanging fruit I think AI could pick right now in terms of model development. Thinking up new models is probably still a bit of a gap for it, but taking an already prescribed model, like a standardized calculation from a regulatory framework, that's the kind of thing an AI agent could now do: very quickly read the documentation and start an implementation, particularly when there's a set of tests, so it can make sure it's doing things correctly and mapping the equations correctly. Right now we're waiting on a new regulatory model to arrive, but I'm very excited to see what happens with agents when it does, because I think there's huge scope for improving those implementations and really cutting down the time taken to make a start on those calculations internally.

Yeah. And you think, "Oh, well, these things are easy. They're regulatory, they're standardized." Something like an SA-CCR calculation is surprisingly time-consuming to code, because it's a product-by-product implementation, and there are hundreds and hundreds of products to work through. An AI that can sit, interpret the rules, understand them, and apply the various FAQs that have been developed across the industry is a real game changer in how fast you can bring that to market, and also in the transparency of the decision-making. You say, "Oh, AI isn't very transparent." Well, to be honest, neither are five or six quants in a room discussing it, unless they're producing incredible documentation, which isn't always the case. At least the AI can be set to put out that information: here's why I reached this judgment, here's how I got there, here's what I'm going to do next. So I think, yeah, transformative for this. And again, it's going to change the landscape of what risk engines look like, where the value comes from in different engines, and how quickly competitors can come in and do something different as well.

And on Xavi's second point, about dynamic stress testing: that's an obvious thing these agents will eventually do for banks. Think about how stress testing is done at the moment. We look in a rear-view mirror and say, "What would happen if the '08 crisis happened again? How would your portfolio look?" We look backwards, or we say, "What happens if the Bank of England cuts rates by 100 bps tomorrow?" We try to generate hypothetical scenarios, and then we stress our portfolio. It's done in practice everywhere, but it can be improved greatly. One way it can be improved long-term is dynamic stress testing, and agents will be key for that. You could have an agent that, first thing in the morning, takes all the latest news and thinks through the latest factors. What if there's an oil crisis? What if a new tariff is introduced? It can assign probabilities to them and generate those stress tests on the fly, much more realistic and relevant to your portfolio. I think that is one of the first things agents will do in the quant risk space that really does change the industry.

And as we said at the start, these models have been notoriously prone to occasionally coming up with spurious answers, and that's obviously got better. But in this case, for stress testing, that really doesn't matter. I want you to come back to me with five or six provocative cases and say, "These are the things that could happen. This is what would happen to your book. What are you going to do next?" And we talk to really effective risk management teams.
This is a lot of what they do. It's not just looking at the numbers, fixing them up, and getting them published. It's going: okay, based on that, if that happened, what would we do? How would we exit from that scenario? How would we come out on the winning side of any sort of market event? Do we have a strategy for that? Something external being able to push you and say, "Here are some really viable things that could happen," and translate those into scenarios you can implement, is such a hard part of what you do. That is a huge task that takes really bright, intelligent people to do well, because it's hard work. You need really detailed knowledge of the engine, the scenario, and how those factors interact. That's complex, but actually something AI is probably going to be really good at: data-based, rule-based, detail-based. This could be really transformative. To be able to say, "I want to understand these five big macro scenarios tomorrow. Go write me a stress test, run it for me, bring me a report back." That's game-changing in terms of the capability you can have.

And that fits nicely into the second part of what I wanted to discuss, which is a little bit of forward thinking. You mentioned some of the ways we are embracing AI as part of our services.

Yeah.

We constantly release new services, new features. How do you see that evolving? And if we're a bit bold and think two or three years out, how do you see a risk system interacting with AI, basically?

There are so many things changing that it's an incredibly hard question to answer, but I think there are a few key things you can take away. One is that what a risk team wants from its risk engine is going to change. Five years ago, maybe they wanted an amazing user interface, an easy-to-use slice-and-dice tool, a way to analyze their data and run things. That's probably going to move away, because you're going to have fewer people running this, empowered with more agents. So what becomes more important is the APIs that sit behind the engine, APIs accessible to the various large language models and agents out there, which can then do the interesting things for you. For instance, an API to create a stress test, an API to run a set of calculations: these kinds of APIs can then be used by those large models to run different analytics for you. You can imagine someone sat there simply saying, "Okay, here's my VaR for today. What happens if this moves to this and this moves to this? Can you rerun those calculations? Can you produce me a report that looks like this? And can you give me some sample hedges I might want to put on against those scenarios?" That's potentially a full day or a full week of work today for somebody, and it's something you can see being done by a model, which is going to need a different set of APIs, a different set of capabilities, to make sure it can run them.

At the same time, you've got cloud computing slashing the price of standard compute every week. Standard compute has got incredibly cheap. So the old paradigm, where you had to be quite efficient about the way you ran everything and didn't run things twice because it was too expensive, much of that isn't quite the same anymore. You can just think of blasting these things out into the cloud and having big compute bring the answers back to you asynchronously, managed by the model again.
It's quite a different look and feel to how that risk system could work.

And Joey, on your end, how do you see the quant role evolving over the next two or three years? What do you expect institutions will want from a quant versus what quants are doing today?

It will change massively, to be honest. I think you can feel that even right now. Personally, two or three years ago, 50% of my time was spent writing code of some sort. That's down to less than 10% at most now, because the work is now much more about making plans, providing prompts to agents, and reviewing their code. That has really changed the role. We do not write as much code as we used to, and I think that trend will continue over the coming years. What I think the quant role will really change into, particularly in the development space, is that rather than being the actual developer, the quant becomes closer to a project manager, with 10 or 12 agents working on developments. They make plans, they review the work those agents have done, and they make sure the testing looks correct. Doing that efficiently, and effectively managing those different coding agents, will be really important.

And one thing that's quite nice is that, as we've said, the more mundane day-to-day tasks, doing VLOOKUPs or LEFT JOINs to build a model validation report, are disappearing, and that will save a lot of time, as Stuart has mentioned. Once quants have that free time, that additional capacity, it gives them more opportunity to investigate deeper questions: new models, new frameworks, bigger questions, rather than the mundane exercises. So I think there's scope here for a huge amount of new models, new frameworks, and interesting research in the quant space.

Yeah.
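The "VLOOKUPs or LEFT JOINs" mentioned above are exactly the kind of glue work agents now handle. As a small sketch with entirely invented data and column names, the LEFT JOIN step of a model validation report might be:

```python
# Hypothetical LEFT JOIN step of a model validation report: attach model
# metadata to validation results. All data and column names are invented.
import pandas as pd

results = pd.DataFrame({
    "model_id": ["M1", "M2", "M3"],
    "test_pnl_error_bp": [0.4, 3.1, 0.9],
})
model_register = pd.DataFrame({
    "model_id": ["M1", "M2"],          # M3 is deliberately missing an owner
    "owner": ["rates desk", "credit desk"],
})

# The LEFT JOIN itself: keep every test result, pull in the owner where known
report = results.merge(model_register, on="model_id", how="left")
report["status"] = report["test_pnl_error_bp"].apply(
    lambda e: "pass" if e < 1.0 else "investigate")
print(report)
```

Trivial in isolation, but dozens of such joins, each with its own quirks and missing rows, used to consume analyst days; delegating them is where the reclaimed capacity comes from.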
There are maybe two problems that have been there since I started working in this field and that are, for me, still largely unanswered. One is reverse stress testing: how do I find the scenario that would be catastrophic for my book? We've thrown around different ideas and different models for this, but no one, to the best of my knowledge, has ever come up with a really viable solution. This feels like an area that could be transformed, and we know reverse stress testing would be such a big driver for the industry. The other is wrong-way risk. Again, a really complex topic, and really present in the current way we look at things. How could a model go out and help you find the scenarios where you should look closer, because they are susceptible to wrong-way risk in a particular way? Again, this kind of large-data, large-context problem could be really transformed.

So yeah, really interesting. And one thing these models are historically very good at is finding patterns, right?

Yeah.

They're very good at looking through streams of data...

Mm-hmm.

...and doing these kinds of estimations, finding new patterns we've missed to date.

Yeah.

If you think about the industry right now: the Black-Scholes model came from looking at return data and fitting a mathematical framework to it. We saw the distribution of returns at that point, looked at the pattern, and worked out how to enhance our models based on it. There could be many more hidden distributions and hidden patterns that we have not seen yet. These models could find them very quickly and totally change how some of these foundational models are constructed, and what we've missed to date.

Maybe to come back to how I think this will affect a quant: take our specific example, where we're not just quants of any engine, we're quants of an open source engine. I think we could find our role quite radically transformed. We've already seen the first clients developing on it with agents. They're probably going to want that code to come back into the base system. It's quite possible that what is today a really nice group of people who gradually evolve that code base explodes out into a very large group of people able to read, understand, and develop it, which is amazing. That's what you want open source projects to be. But managing that is, I think, going to become quite a challenge: how do you keep the coherence of the library and take advantage of all the code that's available?

Yes. And on Stuart's point, every time there's a new invention in the coding space, think of neural networks, there's always a core library at the center of it. TensorFlow, in that case, was a huge part. As we move into the agentic coding era of quant finance, there will have to be a centerpiece open source library as the foundation of that. And I do not see an alternative to ORE across the industry right now, so I think we are extremely well positioned to be the foundational quant library for large language models as agentic coding really takes on a big part of the role.

Thanks, and that sounds really exciting. Definitely something we look forward to. Thank you, Joey, and thank you, Stuart. Thanks, everyone, for listening to us today. You can find us on Spotify, YouTube, and also on our website, lseg.com. Thanks again for joining us, and we'll see you soon.
