Ahead of the Curve podcast

Harnessing counterparty credit risk using the Open Source Risk Engine

Overview

Join our host Roland Stamm and his colleague Joey O'Brien, Senior Consultant, as they explore the topic of back-testing future risk factors. The discussion is inspired by a recent client engagement, focusing on the use of the Open Source Risk Engine (ORE) library within a counterparty credit risk framework.

The conversation then shifts to the broader regulatory landscape, touching on recent developments from the Bank of England and the increased scrutiny following high-profile events such as the Archegos default and the collapse of Credit Suisse. They discuss the implications of Basel III and FRTB for standardised models and the cost benefit of maintaining the Internal Models Method (IMM).

Listen to the podcast

Hello and welcome to another episode of Ahead of the Curve, an Acadia podcast series that takes a deep dive into the derivatives industry. My name is Roland Stamm. I'm a partner in Quantitative Services at Acadia. With me today is Joey O'Brien, a Senior Consultant from the Quantitative team. Welcome, Joey.

-Thanks, Roland. Delighted to be here.

-It's great to have you on board. What we're going to talk about today is counterparty credit risk, and especially potential future exposure, or PFE, as it's also known. To take a step back first, we would like to speak a little bit about the Open Source Risk Engine, ORE, which is the tool we are using to calculate these kinds of things. Can you tell me a bit about what ORE actually is?

-ORE is an all-in-one suite in the financial risk space. It does everything from pricing to calculating sensitivities to capital charges, which we'll discuss during the course of this podcast. It's a C++ library that is optimised for performance, which is crucial for the industry, and it has almost 20 years of history behind it. I think you could probably speak a bit more to the founding story of the software.

-ORE, the Open Source Risk Engine, started as a small pricing library in the early 2000s, where we used it to price complex trades that were not possible to price in other systems. It gradually grew into a credit risk library, especially to calculate things like credit value adjustments, CVA, or potential future exposure, which we are going to talk about a bit later. That's its origin. The idea to make it open source grew in the mid-2010s. The market risk suite, on top of that, came a bit later.
-I think where ORE is probably used most on a daily basis is within Acadia's SIMM services, whether that's calculating SIMM or the sensitivities which feed it. SIMM is the Standard Initial Margin Model. What most institutions do on a day-to-day basis, and there are over 200 of them at the moment, is use the graphical user interface; they click and play, pretty much, to calculate both sensitivities and SIMM. The workhorse behind the scenes that does those calculations is ORE. The whole industry is practically using ORE every day, and perhaps they don't know it. As part of that process of Acadia utilising ORE, we've had to move through all the phases with clients over the years, with some brilliant use cases. Those clients generally have all kinds of asset classes and different types of derivatives, and they generally have in-house systems which calculate sensitivities themselves as well. As part of that process, ORE has essentially gone through a validation process over the years, because the numbers had to align with those systems for the confidence to be there. We've definitely gained confidence in ORE's capabilities as a result of that, so much so that we have started to increase the amount of open source functionality that is out there. Every three months we're introducing a release which extends the functionality of ORE for all kinds of risk and pricing analytics. What's probably most exciting in the present and coming years, we hope, is institutions embracing ORE as their in-house risk system, exactly as was the case in the project which resulted in the white paper we're going to talk about later. But maybe you could describe the project in a bit of detail first.
-We have a large investment bank from the US that was interested in replacing first one in-house system with ORE, and then started talking about replacing others as well, to make things more streamlined and to save money on licensing costs, but also on infrastructure, because obviously when you have one system that does multiple tasks, it's a lot cheaper to maintain, even leaving aside the licence fees that you don't have to pay on the ORE side. The project started by replacing the XVA engine, XVA meaning all sorts of value adjustments that you have to make when you are pricing derivatives. From there, it grew into a multi-system project where one of the tasks was to replace the stress testing for market risk, but also, and this is where the white paper comes in, to do a potential future exposure engine replacement, plus the back-testing of the potential future exposure engine. That is actually what we want to talk about today, the back-testing, and maybe you can shed some light on that.

-First of all, maybe in terms of what potential future exposures are, for someone who doesn't know: the idea is that you have a derivative with a counterparty, and at different times in the future, that derivative will have some value. The question is really, what happens if the counterparty defaults? What is the value of that derivative which would be at risk to you in the case of a default? The idea with potential future exposures, or PFE, is that you want to provide an estimate of that. What the client wanted to do was to calculate PFEs across their portfolio, but to do that, they had to have confidence that ORE was accurate and capable of calculating those numbers, because it's a difficult task. You have to have a simulation engine, first of all, so you have to be able to generate future scenarios and determine the value of the portfolio in the future. That's a considerable task in itself.
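In practice, PFE at a given horizon is typically quoted as a high quantile, often the 95th, of the simulated exposure distribution, where exposure is only the positive part of the portfolio value. A minimal sketch of that idea, using made-up numbers rather than any real portfolio or ORE's actual model set:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy setup: 10,000 simulated future values of a position at a single
# horizon, drawn from a normal distribution (illustrative only).
n_paths = 10_000
future_values = rng.normal(loc=0.0, scale=1_000_000.0, size=n_paths)

# Exposure is only the positive part: if the value is negative we owe
# the counterparty, so nothing is lost if they default.
exposure = np.maximum(future_values, 0.0)

# PFE is read off as a high quantile, here the 95th percentile,
# of the exposure distribution.
pfe_95 = np.percentile(exposure, 95)
print(f"95% PFE: {pfe_95:,.0f}")
```

A full engine repeats this at many future horizons and aggregates across the whole netting set, which is what makes the simulation task so considerable.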
If you think about, for example, a simple derivative like a EUR/USD swap, the main driving factor there is of course going to be the EUR/USD spot, but the question is, what is the EUR/USD spot in 30 years? You have to have models which can accurately reflect what might happen in the market. Of course, if we could predict that, we probably wouldn't be sitting here; we'd have other things to do. The only way you can test that is by thinking backwards in time, and that is the idea behind the back-test of this potential future exposure. What we did for the client was take a number of risk factors, just like the EUR/USD spot, and look backwards. We pretend that we are at a point in time in the past and then simulate as of that date. For example, if we take the 1st of March 2013, we look at the market data on that date, we calibrate ORE to that market data, and then we simulate that spot rate, EUR/USD, into the future, say one year. On the 1st of March 2014, we now have a distribution from the model for that risk factor, the EUR/USD spot rate. Thankfully, that date has already happened, so we can see what the real value of the EUR/USD spot rate was and how it compared with ORE's estimation. In terms of doing the back-test, what we wanted to do was repeat that on a number of historical dates and compare how the model performed in a number of scenarios for different risk factors, so FX spots and discount factors coming from yield curves, to give an indication of how ORE would perform.

-What was the result of those investigations?

-In terms of the back-testing process, we looked at over 10 years of individual dates. We took a simulation on every single day over the past 10 years, and we calculated the risk factor values up to two years into the future on each date. That resulted in over 2,000 observations of where the realised risk factor value would lie in the simulated distribution.
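A single back-test step like the one described can be sketched as follows. The lognormal spot model, the calibrated volatility, and the realised value below are all illustrative assumptions, not ORE's actual model or real market data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend we stand on 1 March 2013 with this market data (made-up values).
spot_2013 = 1.30   # EUR/USD spot on the as-of date
vol = 0.10         # assumed calibrated annualised volatility
horizon = 1.0      # simulate one year ahead

# Simulate the one-year-ahead spot under a simple driftless lognormal model.
n_paths = 100_000
z = rng.standard_normal(n_paths)
simulated_spot = spot_2013 * np.exp(
    -0.5 * vol**2 * horizon + vol * np.sqrt(horizon) * z
)

# On 1 March 2014 the realised spot is known; score where it fell in the
# simulated distribution, i.e. its percentile rank.
realised_spot = 1.38  # illustrative realised value
rank = (simulated_spot < realised_spot).mean()
print(f"Realised spot sits at the {rank:.1%} quantile of the simulation")
```

Repeating this over many historical start dates, a well-calibrated model should produce percentile ranks that look roughly uniform on [0, 1]; that uniformity is the statistical basis for judging the back-test.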
In terms of assessing how the back-test went, that's a challenge in itself. You have the realised value, which you want to compare to your simulated ensemble of risk factor paths. In order to understand whether or not the model is performing well, you have to think about it in a detailed manner. For example, one of the subtleties, and there are a number of them, is the fact that there are going to be overlapping windows. If we go back to this 1st of March 2013 date and just think about simulating 10 days into the future, to the 11th of March: we would do that, we'd have a distribution, and we could compare the realised value on the 11th of March to that distribution. The following day we'd do the exact same thing: on the 2nd of March 2013, simulate to the 12th. The entire process is done again, but there would be an overlap. In the first case, we've used the 1st of March, the 2nd of March, the 3rd of March, and so on. On the next date, we're using the 2nd, 3rd, 4th of March onwards again. There's an overlapping window, which has certain effects you need to consider from a statistical perspective when judging how a back-test does. There are a number of different subtleties like that you have to consider. Despite that, we could generate good results. Against an acceptance threshold based on that statistical approach, we passed for every risk factor we looked at, in terms of FX spots and discount factors.

-We should probably mention at this point that the market data you use for that back-testing is extremely important, because, as you say, you're looking back 10 years, even 15 years. You need accurate and clean market data. We should mention here that we have that data because we already have a back-testing service for SIMM itself, which is a different type of back-testing, but the idea is the same. You want to check that the predictions of SIMM, which is basically a value-at-risk number, are accurate over time and realise as you would expect.
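The overlapping-window effect mentioned above is easy to see on synthetic data: consecutive 10-day windows share nine daily moves, so their outcomes are highly autocorrelated and the roughly 2,500 daily observations are far from independent. A small illustration, assuming i.i.d. daily returns rather than real market data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily log-returns over roughly 10 years of business days.
daily = rng.normal(0.0, 0.01, size=2_520)

# 10-day returns computed every day: consecutive windows share 9 days.
window = 10
overlapping = np.array(
    [daily[i:i + window].sum() for i in range(len(daily) - window)]
)

# The lag-1 autocorrelation of the overlapping series is high by
# construction, roughly (window - 1) / window = 0.9, so the back-test
# statistics must correct for the reduced effective sample size.
ac1 = np.corrcoef(overlapping[:-1], overlapping[1:])[0, 1]
print(f"Lag-1 autocorrelation of overlapping 10-day returns: {ac1:.2f}")
```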
What we offer in Acadia is a SIMM back-testing service, and for that service we have built up a 15-year data history. We are using that data in this project to do the same, but for the PFE back-testing. What we should also probably mention is why this is even relevant for others. Obviously, it was relevant to the project, for people to gain confidence that we are doing the right calculations, but it's actually something that is important for the entire industry. We are seeing more and more regulation putting pressure on the way that counterparty credit risk is calculated. If you look at the recent Basel Committee guidelines that were published at the end of April, you will find that PFE is a topic, a very important one, and that back-testing of PFE is also a topic. It is relevant for everybody, not just for our client in that particular project.

-I think counterparty credit risk was a very, very hot topic around 2007, 2008. The market was a bit more stable in maybe the 10 years after that, but over the past three or four years it has started to rear its head again, both through the Credit Suisse and Archegos default events. There is, as you mentioned, a growing regulatory emphasis on counterparty credit risk modelling, to make sure that the industry and the markets are protected in the event of a default by a large player. One thing to point out is that the client project we're talking about is using ORE for counterparty credit risk modelling in a fully fledged setup, whereby they are simulating risk factors and doing PFE evaluations using ORE.
That would fall under the Internal Models Method, which is probably the most involved potential future exposure framework possible, in that you are simulating PFEs and trying to estimate the potential value that your portfolio might have. An awful lot of institutions probably don't have the capability to do that if they're smaller, and there's also risk associated with implementing those models; they're hard to get validated and approved. Are there simpler approaches, do you think, that institutions could take?

-Yes, I think so. Some smaller institutions will definitely go with standardised methods, but there will always be the big banks that will have to do internal modelling in order to keep their capital costs down. I see that both the internal modelling approach and the standardised approaches have their value and will be used, and we will help the industry with both to get to grips with the regulation in that space. Maybe one thing that our listeners might be interested in: what's the name of the paper?

-The white paper is called Back-testing of Future Risk Factors. It's available on acadia.inc. It's short, but it provides good detail about the regulatory guidance on back-testing PFEs and also the results from this POC, which hopefully a number of clients, or even potential clients, will find interesting.

-That was very interesting in terms of what we're doing currently. Can you tell us a bit, maybe, about what the future might hold for Acadia?
-As we've mentioned earlier in the conversation, the industry is very focused on counterparty credit risk and market risk in general at the moment. There's been a huge amount of literature coming out in the past year or two about the Basel III Endgame and the Fundamental Review of the Trading Book, FRTB, and its implementation. What's interesting about FRTB, I think, is that it's fairly standardised; it will be the same for practically everybody. If you think back to the last great standardisation in the financial markets, which was SIMM, Acadia is, we think, the brand name for SIMM, and ideally, over the coming years, we'd want it to be the brand name for these new standardised calculations which will come down the line. Based on what we're hearing in the industry, there are an awful lot of institutions of all sizes which are leaning towards these standardised calculations for market risk capital charges, and ideally, especially in the case of smaller ones who maybe don't want to implement a calculation themselves, they would leverage ORE down the line to do these calculations. Do you think ORE is well placed to do that, and what benefit would there be in using ORE for it?

-First of all, obviously, it is free of charge; that's a big advantage for everybody. I would say it's relatively easy to use and to integrate, from our experience with integration projects. I think another really big point is its transparency, because it is open source: you see what it's doing. There's a truckload of documentation, and there's support. We have done model validation projects left and right, with our clients' model validation teams validating ORE itself. As you said, the industry has validated it through the service that we are offering. You can rely on it. It's robust, it is well documented, and it's transparent. That is something that auditors and regulators love, and I think that's a big, big selling point internally if you want to sell a migration project to your managers.
-I know we're talking about standardised models, but going back to the earlier conversation about internal models, does the transparency within ORE help in the regulatory submission process of actually moving the models to production? Is that a good thing to have?

-Absolutely. Like I said, there has been a huge drive in the industry over the past 10 to 15 years towards transparency, moving away from third-party systems. Of course, many large banks have their own in-house built systems, so they know what they're doing most of the time, but we've also encountered situations where the person who originally built the system has left, so nobody knows what it's doing anymore. People are happy to have a fully transparent system to replace it, so yes, that is a big point that we see.

-It sounds, practically, like if you want to take an internal model approach, you can do PFEs with ORE if you want to go that route, but we may also help on the standardised side as well. It's an all-in-one suite, ideally, by the end of it, and hopefully that increases the use of ORE across the industry.

-I surely hope so, and I think those use cases are real. I hope we will see a move in that direction in the future.

-Exciting times ahead, indeed. We're running out of time. Thank you very much, Joey, for joining today.

-Thank you.

-That brings us to the end of today's episode of Ahead of the Curve. Thank you very much for listening. If you want to learn more, go to acadia.inc. We also have fantastic videos in the ORE Academy if you want to learn more about that, but also listen to our past episodes of Ahead of the Curve. Thank you very much.
