Hedge Fund Huddle podcast

AI at Work: Inside the New Era of Hedge Fund Research and Trading

Episode 2, Season 4

How are today’s hedge funds really using artificial intelligence — beyond the hype? In this episode, we sit down with Andrew Delaney, President of A‑Team Group, and Nishant Gurnani, Partner and Quant Researcher at Versor Investments, for a practical look at how AI is transforming research, alternative data, strategy design, and risk management. From treating AI agents as “junior analysts” to building proprietary model stacks and navigating crowded trades, our guests unpack how technology is reshaping modern investment workflows and the competitive edge in markets today.

AI agents should be thought of as junior researchers, and their goal is to help you get to decisions faster. But the conviction still has to come from you.

Hello everyone, and welcome to another episode of Hedge Fund Huddle with me, your loyal host, Jamie McDonald. And today we are talking about the tiny topic of AI. Now, more specifically, we're talking about the practical uses of AI and how people are using it in terms of investing and trading today. And luckily, I'm joined by two experts to help me out. They are Andrew Delaney, who is President of the A-Team Group, and Nishant Gurnani, who's a Partner at Versor Investments. Gentlemen, welcome to the show.

Thanks for having us, Jamie.

Yeah, really great to be here.

Good. Well, guys, I like to start by just getting a bit of background from our guests about how they started their careers and how they got to where they are today. So, Andrew, why don't we start with you?

Sure thing. Again, thanks for the opportunity to share everything here. A-Team Group is an online publisher, and I'm a journalist by trade. What A-Team does is focus on the business of data and technology in global financial markets. We've got four main areas, one of which is trading tech, and in that segment we cover a lot of the use of AI in trading and investment workflows. For each segment we offer news, analysis, conferences, webinars and so on and so forth. So lots and lots of free content, and I look after most of that. But as I say, I'm a journalist by trade, and I started my career with data very much at the forefront of what I was doing. Straight out of college I got a job, luckily, as a news assistant at the Wall Street Journal.
And as part of that job, the deal was basically: we'll teach you how to be a journalist, but you've got to deal with data in the back of our book. So my job was basically to take what was then known as a Telerate terminal, a little video screen that sat on my desk with a little keypad, and every evening just before midnight, which is when we put the newspaper to bed, I would punch out the government bond prices from Cantor Fitzgerald on a little printer. Then I'd turn around to another screen on my desk and punch those numbers into the galley at the back of the book. It was a fairly menial task and probably not as glamorous as I'd like it to be, but it taught me two really important things about data. The first was the importance of exclusive or difficult-to-find information, analysis and insights. This was the European edition of the Wall Street Journal, and many of the people who bought the newspaper bought it for those bond prices, because you couldn't get them anywhere else in Europe. This was pre-internet, pre just about anything else electronic.
Unless you had a Telerate terminal, you couldn't get those bond prices. So: exclusive data, very important. The second lesson I learned was the importance of integration. Although the Wall Street Journal and Telerate were both owned at the time by Dow Jones, I was the integration layer. There was no integration between the two systems that we were using. So I just used to punch in seven, six, seven, six, seven, six, seven, seven, get those pages, print them out, and tap those bond prices back into the galleys for the paper. That was the level of integration we had at the time. And that's how I got my start in this career.

Moving forward a bit, I ended up in New York for 20 years. Initially I was the launch editor of a publication called Inside Market Data, which became the bible of market data. Then in 2001 we launched A-Team, and A-Team, as I mentioned, does lots around data: market data, reference data, etc. But to bring us full circle to this podcast, last year we acquired the alternative data conference business of Eagle Alpha, and we are now running the Eagle Alpha events. As part of that, the connection between alternative data and AI became a very important part of what we're doing. A lot of alternative data services are unstructured, and we found that as AI became more accepted, AI models could be used to add structure to alternative data services, making them more valuable. So that's become a very important part of what we do. A little bit of a roundabout way of getting here, but that's how I ended up on this podcast.

Oh, thank you, Andrew. And a good little history lesson on how far we've come: in our lifetime, from punching numbers into newspapers. Nishant, kind of the same question to you. Before we talk specifically about Versor, perhaps you can give a little bit of background about your career and how you got there.

Sure.
So I'm Nishant Gurnani. I lead the futures and FX strategies at Versor, which is a large quantitative systematic investor. My background was pretty traditional for purposes of being a quant. I was a geeky kid who loved math and science and computers growing up, and I studied math in college and statistics in graduate school. In college, I was very fortunate to be able to spend two summers at some very, very well known quant shops: I spent a summer at AQR, and then I spent a summer at what was then called SAC Multi-Quant but is now called Cubist.

In terms of my background on data and AI specifically, I took a class a long time ago as an undergrad that sort of changed my life. It was a year-long sequence on AI and machine learning taught by two luminaries in the field, and it was very clear to me that this was going to be the cutting edge of what needed to be done. The progress we've made since then has been pretty, pretty incredible. Finance-specific: one of the jokes that my friends used to crack in college was that normal people would list other careers they might pursue, like a doctor or a lawyer, and I would only list finance careers. So I've always, always been interested in finance. Before Versor, I spent a little bit of time out in San Francisco working for a fintech, again focussed highly on alternative data sets, specifically with regard to underwriting.

Nishant, just on that introduction, did you ever just start trading stocks or indices yourself? Did you ever just think, I could try and do this myself?

I definitely did stocks as a teenager, and definitely not indices. I don't think I was that mature in my development just yet, but definitely stocks, and definitely some other hare-brained ideas of things that I potentially could have traded and done.

Yeah.
Okay, well, let's get into today's topic. We're talking about the practical side of AI rather than anything more theoretical: really, what tools are being used today by hedge fund managers and personal traders to either filter ideas, or monitor ideas, or help them with risk profiles. And Andrew, perhaps we'll start with you. Going back five years, we were really talking, I guess, specifically about just generative AI, large language models helping people to sift and filter for better ideas. But what's happened over the past, let's say, three to five years to get to where we are today in terms of how AI is being used?

Sure thing. Well, obviously we've been following AI for a bit longer than that. We saw things like machine learning and deep learning coming through, I'd say, probably six or seven years ago. And then of course we had the launch of ChatGPT, the birth of generative AI, I would say. Since then we've been following that pretty closely. We've conducted probably six or seven market surveys over the past couple of years looking at how people are using this technology, and we run a number of advisory boards where we take practitioners in our marketplace to lunch and pick their brains over something nice to eat.
And from all those activities, we've really whittled things down to three types of use cases that we are seeing in the marketplace: efficiency, growth and control. In terms of efficiency use cases, these are things like summarising meetings and actions, using AI to code and test code more efficiently, and extracting data from unstructured documents and alternative data sources. In terms of the growth-type use cases, we're seeing people using AI for asset allocation and investment modelling; they're using AI tools to drive client retention and really looking to identify cross-selling opportunities. And then finally, on the control side, we're looking at things like risk modelling, stress testing, scenario planning, some credit and market risk assessment, regulation interpretation and, to a little extent, digitising of contracts. So they're the kinds of things we're seeing. In terms of models being used: obviously all the household names, as they are now, from Copilot to, increasingly, Claude and so on and so forth. But I think the real action in the hedge fund area is around developing your own AI stacks, your own large language models and even more specialised models to add that secret sauce. So that's the development that we've seen over the past two years, I would say.

Oh, great, Andrew. And Nishant, over to you. Perhaps you could start by talking a little bit about Versor, which strikes me as a very AI-driven boutique hedge fund. You can talk about which strategies you employ, and then once you've spoken about Versor, perhaps you could just go into a little detail on how you're using AI specifically today.

Sure.
So Versor is a systematic quantitative investment firm based in New York, and our focus is explicitly on absolute return strategies. We are purposely designed as a systematic, research-driven boutique, and alternative data and AI have been our pillars from very early on. So we started working with alternative data very early on, and one of the things that we're going to get into is that there is no extracting insights from alternative data without AI techniques: you can't construct sentiment scores from text unless you use natural language processing methodology. So this has been a core part of what we're focussed on. We have three main strategies that we run: a systematic equity strategy, an event-driven strategy that focuses on merger arbitrage, and then managed futures strategies, which I'm personally in charge of. And when it comes to our philosophy on alternative data and AI, this cascades firm-wide.

So I want to talk about some of the specific examples that we use on a day-to-day basis, some of which Andrew alluded to. Our philosophy is that, fundamentally, we are systematic investors, and our job is to speed up the velocity with which we are able to make good investment decisions. That framework is generalised, so it is not specific to quants or discretionary investors: the goal is to get good, actionable investment ideas. And so AI is used on a day-to-day basis in helping us do that, first and foremost in speeding up research, asking more detailed questions, and helping us organise our day-to-day management. We use a tool called Motion.ai that dynamically adjusts tasks and projects based on priority. So what we see throughout our investment process is that these efficiency gains, even though they may not be super glamorous on an individual basis,
they compound, so that we're able to do things that we weren't able to do before. In terms of the research specifically, there are really two big ways that we see AI impacting us. One is that the speed at which we're able to evaluate research ideas has increased significantly. So if I have a question like, I want to know the number of dissenting votes in the FOMC meetings going back 20 years, that's a question I could have answered before AI, but the speed at which I can answer it right now, using either off-the-shelf large language models or models that we fine-tuned ourselves, has gone up significantly. The second thing we're able to do is tackle a complexity of research ideas that we were unable to previously.

So as a very concrete example, consider an idea that you have. Currently, there are a lot of high-quality podcasts that upload on a daily basis, where investors come on and give a lot of interesting colour on market sentiment and their views. And so maybe your investment thesis is: I want to get some sort of consensus understanding of what people are saying. Fifteen years ago, this was very difficult, if not impossible. In 2012 we saw the big vision paper, AlexNet, come out. In 2017 I was at a conference where the Attention Is All You Need Transformers paper was actually announced, and in 2018 Google launched the BERT paper. Even then, this idea of summarising and synthesising a vast amount of market sentiment from podcasts would have been impossible. Today, using Claude, using the latest large language models, I could do it in a weekend. Not even a weekend: in a few hours I could make a large, large amount of progress. So I think the question from the investment process point of view has really become not, are you using AI? I think all knowledge work in general now requires using AI meaningfully in order to speed up efficiency.
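The podcast-sentiment synthesis described here can be sketched in a few lines. In a real workflow the per-episode scoring step would be an LLM call; in this self-contained toy, a tiny keyword lexicon stands in for the model, and every name and word list is purely illustrative.

```python
# Toy sketch of synthesising a "consensus" market view from podcast
# transcripts. In practice score_transcript would call a large language
# model; here a crude keyword lexicon stands in for it so the example
# runs on its own. All names and word lists are illustrative.

BULLISH = {"rally", "upside", "buy", "growth"}
BEARISH = {"recession", "selloff", "sell", "risk-off"}

def score_transcript(text: str) -> float:
    """Per-episode sentiment in [-1, 1]: (bullish - bearish) / mentions."""
    words = text.lower().split()
    bull = sum(w in BULLISH for w in words)
    bear = sum(w in BEARISH for w in words)
    total = bull + bear
    return 0.0 if total == 0 else (bull - bear) / total

def consensus(transcripts: list[str]) -> float:
    """Average the per-episode scores into one consensus number."""
    scores = [score_transcript(t) for t in transcripts]
    return sum(scores) / len(scores) if scores else 0.0

episodes = [
    "guests expect a rally and see upside into year end",
    "host worries about recession and a broad selloff",
    "panel says buy the dip, growth still intact",
]
print(round(consensus(episodes), 2))
```

The structure, not the lexicon, is the point: swap the scoring function for a model call and the aggregation step stays the same.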
But the question is really around: how are you using it? Where is it providing the most value, and how is it fitting into your investment process in general?

Nishant, I have two quick follow-up questions on that. Firstly, one thing that struck me as you were talking, and this is going back to when I was running a book myself, is that conviction was kind of everything. When it came to an investment idea, I needed to personally have conviction in an idea to know when to add or when to take positions off. And part of getting that conviction is the friction that you feel putting work into an idea. It was reading the 10-Ks, it was meeting with management, and it could only really come from within. Now, if you're relying on AI, obviously your conviction can only be as high as the conviction you have in the AI platform. So question number one is: if conviction is still a big part of trading, how do you get conviction in AI? And part two of that is, as Andrew was alluding to, there are a lot of platforms out there. You mentioned Claude and Motion.ai and Gemini. Do you rely on third-party AI platforms, or have you started to build your own? And which do you see as more useful to you?

Sure. So with regards to conviction, the way we think about it, and we've been spending a lot of time on building out our agentic capabilities, is that AI agents should be thought of as junior researchers. Their goal is to help you get to decisions faster, but the conviction still has to come from you. So let's give some concrete examples. In the past, if you were a junior researcher joining our signal research team, one of the projects you might be asked to do is this: I, as the lead, have a specific academic article that I've read, maybe in the Journal of Financial Economics.
I think the idea is interesting, and your job would be to read the article, implement the idea, test it using our internal evaluation framework, and make an argument in a research case for why that signal is predictive and should be added to the strategy. What we can do with AI tools is systematise this process. So not only do we have researchers doing this work, but we can have AI agents automatically read papers and suggest ideas, which then go through the research process, where the PM or strategy lead still has to evaluate them on a rigorous basis.

The thing with using AI tools in general is that evals are really important. There are constantly new models coming out: two weeks ago we saw Opus 4.6 from Claude, and OpenAI launched Codex 5.3. We're constantly seeing these new things come along. And so the conviction and confidence in these models is a function of a structured evals process. Leading it back to what you said, Jamie: in your context, you tracked insurance stocks, if I remember correctly, and you would have a historical data set that you've worked with, where you sat and did the work and put in the effort. Every time a new model came out, you would get it to score on that, and depending on the quality of the score, that's the conviction level you have in that particular agent that is using the tool.

So a really fun example for everybody listening to the podcast: go into your own favourite LLM and try this very simple evaluation. Just ask: my car needs to be cleaned, the car wash is 50m away, do I walk or drive? If you try something as simple as this, very simple reasoning, you're going to be shocked by some of the answers that you get.
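The structured-evals process described here can be sketched as a small harness: keep a fixed, human-labelled golden set, score each candidate model release on it, and gate adoption on the score. The `model` is just a callable here, standing in for an LLM call; the questions, stub answers and threshold are all invented for illustration.

```python
# Minimal sketch of a structured evals harness: score candidate "models"
# (any callable mapping question -> answer) against a fixed, human-labelled
# golden set, and only adopt a release if it clears the bar. In practice
# each callable would wrap an LLM API call; everything here is a stand-in.

from typing import Callable

# Human-verified (question, expected answer) pairs built by hand.
GOLDEN_SET = [
    ("Walk or drive 50m to the car wash?", "drive"),  # the car must get there
    ("Dissenting FOMC votes, 2019-09?", "3"),
]

def evaluate(model: Callable[[str], str]) -> float:
    """Fraction of golden-set questions the model answers exactly."""
    hits = sum(model(q).strip().lower() == a for q, a in GOLDEN_SET)
    return hits / len(GOLDEN_SET)

def adopt(model: Callable[[str], str], threshold: float = 0.9) -> bool:
    """Promote a new model release only if it clears the eval threshold."""
    return evaluate(model) >= threshold

old_model = lambda q: "walk" if "car wash" in q else "3"   # stub release
new_model = lambda q: "drive" if "car wash" in q else "3"  # stub release
print(adopt(old_model), adopt(new_model))
```

Re-running the same harness on every new release is what turns "a new model came out" into a measurable conviction level rather than a vibe.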
And if you go back through models as you click through, you'll see how it's getting better. So conviction is still very human-driven. But getting the idea to a point where the human can start working with it and treat it as actionable, that's the velocity I'm talking about. I have a random idea, I read a paper, and I don't necessarily have the time to spend a week looking super deep into it, because maybe my initial conviction on the idea is low. I can have Claude running in the background, and I explicitly do right now: I have Claude running on a couple of different problems that I'm looking at, and it'll give me back enough structured output that as a human I can then say, actually, this is really interesting, this idea has legs, let me go and pursue it further. Or: thanks, Claude, you did the work, and I have convinced myself this idea needs to be thrown away. So that's the conviction side.

On your second question about internal versus external, which Andrew also alluded to, I think you're going to see an evolution of both use cases. If we take a step back: when we talk about AI, we are really talking about large language models specifically, because they are one of the key tools in the investment process. To train a large language model, there are two steps. There is a pre-training step, where you run the model over a large corpus of text across a large amount of compute and it learns some generalised knowledge. And then there's the second step, the tuning step, where you decide, this is my specific use case, and you feed the model examples to help it understand. So the way that I suspect investment firms are going to do this, and the way that we've started thinking about the problem, is that you have to design your systems so that you use whatever model is best. That may be something that's off the shelf. It may be something off the shelf that you then fine-tune for your specific use case.
It may be an open-source model like DeepSeek that you downloaded off Hugging Face and then, with a bunch of internal evals, tuned specifically for, say, FOMC statement analysis or unstructured data parsing, whatever that use case may be. And then the third option, which I think is actually the toughest, and which I don't think we'll see firms pursue, for reasons I'll give, is training very large end-to-end models entirely from scratch themselves. The whole point of LLMs and fine-tuning is that you can take something that another person has trained on a generalised corpus and make it smarter for your use case. And that's the real moat. Because the truth is, most investment firms, including the largest firms, do not have the level of compute that the Googles and the Microsofts and the OpenAIs and Anthropics have in order to do it. And frankly, they are solving different problems. A model like Gemini is supposed to be generalised, to generate text and video transcripts, whereas investment folks are really focussed on the investment problem. So the highest-leverage thing is to take something that exists, adapt it to your specific use case and improve it. And that's where we're going to see proprietary differences: firms that have spent a lot of systematic time on improving their internal models will diverge in the skill with which they're able to deploy them.

Perhaps, Andrew, you can comment a little on what Nishant just said, and on what else you're seeing elsewhere in the market in terms of using your own platforms versus external ones. And then also, Andrew, Nishant said something there, which was: think of these platforms as a bit of a junior analyst. So there'll be people listening wondering how many jobs will be open at the junior level in years to come, and I wonder if you could maybe comment on that a bit too.

Sure thing. So I totally agree that the world has been looking at how to onboard AI, if you like.
We've got this idea that firms are building their tech and their AI stacks, and they're putting in place governance rules, policies and so on and so forth to really put their best foot forward and have the right tool for the right job. I think that is a process that's ongoing; I don't think anyone's got it down just yet. We're seeing a lot of appointments of chief data and AI officers now, people who are incorporating that kind of discipline into their adoption of AI, making sure that people within the organisation know which tools should be used for which processes and which tasks. So I think it will be a mix of internal and external. As Nishant said, I don't think it'll be a massive build from scratch; the nuance, the uniqueness, will come from the mix: the mix of what you've got internally and, of course, the data you've got access to, which I know we're going to talk about in a little bit.

So I think that is the path we're on in terms of using agents to do various tasks. I mean, we are seeing that in real life. We've just completed a survey, a bit wider perhaps, but certainly covering the investment banking and investment management side of the world, and we're seeing large organisations put in place teams of AI agents to perform tasks. We're seeing evaluation of these agents as if they are employees.
They get ranked, they get evaluated, they get trained, they get told off and told to go and perform better if they don't meet certain requirements. And ultimately they get terminated if they don't work. So you're seeing a whole corporate structure for these models, or these agents, I should say, starting to emerge. I think the light at the end of the tunnel, if you like, or the silver lining, for junior staff is that collecting the data that needs to be used to train these models, and indeed to pull into these models, finding unique data sets, is still very much a mix of manual and automated work. We see a lot of human-in-the-loop for this kind of stuff. And it gets back to trust in the data, making sure that people do feel they're getting the right data to train these models on. I think that's an imperative that will continue.

Andrew, just then you mentioned the constant striving for unique data sets. Even when I was running a book ten years ago, I was always so worried about crowded trades, and I can't help feeling that these platforms are going to continue to create these crowded trades. So perhaps, Nishant, you can talk a little about that. I mean, how do you make sure that the prompts you are using for idea generation are not similar to the Citadels and the other big firms out there? And maybe going a stage further, let's take a black swan type of event, like tariffs last year. How does a Versor perform in that kind of environment, and how do you protect yourself?

Sure. Look, that's an entirely fair question, and I think it goes back to investment edge. So, as AI becomes more accessible... it is accessible, right? I don't want to make it seem like that's a future statement. It is accessible right now.
It is pretty easy to get started and use it on a day-to-day basis. The edge doesn't come from having more models or more compute or more data. It really comes from how you are using those tools in a meaningful way. So when we think about how we're using these things, our advantage really comes from our investment process, which we believe is differentiated and thinks about markets in a very specific way. Our usage of alternative data for each one of the strategies is defined in a very specific, unique way that we don't believe other people are replicating. Let me talk specifically about our flagship managed futures strategy, which I work on. One of the things we do there is take the view that you need to look at equity index futures from two perspectives: a top-down macro perspective, but also a bottom-up stock-level perspective. And so we have alternative data that we collect on 24 equity markets globally. That's 10,000 stocks. We aggregate those stocks individually and at the country level, and we construct signals from that, and we believe that's not a common approach. There's a lot of skill and nuance that goes into applying those things and thinking about that problem in general, and there is a research-focussed idea that results from that.

If I give a quick example from the general AI world, one of the things that has come up previously, but maybe not been discussed in as much detail: roll back the history a little and look at the timeline of the development of Transformers. In 2017, this really important paper comes out called Attention Is All You Need. That introduces the attention mechanism, which is the heart of the Transformer. It comes out of Google, and a year later Google actually releases the first transformer model, which is called BERT, as in Bert from Sesame Street. The T is for Transformers. Yet it was the OpenAI GPT series that ended up winning. Why is that?
The reason that happened is, one, their focus was very different. The Google models were very focussed on the Google problem of search and understanding, and so what BERT was really good at was understanding text: it was a really good reader. OpenAI took a very different approach, where they really focussed on the generative piece: thinking about what is the likely next thing that somebody is trying to say, and generating text accordingly. And it turns out that that approach was the one that ended up scaling better and led to the GPT advances. So there were two teams. Google was significantly better resourced and had significantly more researchers. OpenAI in 2018, I used to go to their offices because they were pretty close to where I used to work at Brex; there were about 80-odd people there working on the early GPT versions. And it just turned out that that approach was the right one.

So when we think about the investment process and the commoditization of AI, and this comes down to what Andrew was saying about the uniqueness of alternative data sets, it's also about thoughtfully thinking: this is what I'm doing, and here's how it is going to lead to differentiated alpha, rather than necessarily being concerned that, oh, everybody's putting in the same inputs. Because, frankly speaking, markets are competitive. If you just do the same thing as everybody else, you're not going to make money. And so a lot of the focus is certainly on that.

And when you reference time periods like black swan events, there are two specific things we think about. One is just experience. We have structured our strategies across the board to have risk as a core part of their philosophy. The founding partners have navigated multiple cycles: the dot-com bubble.
The great financial crisis, Brexit. So having a good risk framework that takes the realistic view that liquidity is going to dry up and a lot of stress scenarios can happen, and designing strategies that are going to survive those periods, is important. In my particular case, for the flagship managed futures strategy, there's a focus on something we call convexity, which is the ability to do well in both up and down markets. It's part and parcel of the design of the strategy itself. We've been trading on this philosophy, where we look at cross-sectional differences between equity markets worldwide, so regardless of whether they're all falling or all rising, we should be able to make money. And this has worked well for us over the past eight years the strategy has been live, not just during Covid, but SVB and so on and so forth. So the goal isn't really to be immune to market shocks; that's unrealistic. The goal is to take your specialised investment process and design it so that it is resilient to different market environments. And where AI and alternative data fit in is helping you design that process well, in a robust and unique way, so that you're not competing with others and you're actually able to make money in differing markets.

So, Nishant, sticking with you, that's really interesting. We've spoken a bit about research and investment idea generation being automated versus human-led, and the relationship there. What about execution? And again, this touches on risk. To what extent do you have AI programs in place that will, without a human being involved, change the percentage makeup of a portfolio, i.e. will trade without a human being involved? Because that seems to me like a bigger step. I was thinking earlier, it's a bit like booking an Uber but with a human driving it, versus actually getting into a Waymo, where there's now no human driving it.
Like, are we at that stage yet?

So I think it's a spectrum. Look, Jamie, even without AI, there are a lot of high-frequency trading algos in the market right now that are trading autonomously without any human intervention. That's just the truth of where we are. When it comes to agentic systems in general, I don't think we're entirely there yet. The advantage of the agentic approach is that you've imbued a little bit of intelligence into all the various components. So if we talk about execution in general...

I was being specific about discretionary trading there. Sorry.

Yes. So in discretionary trading, I don't think we are necessarily there, because again, I don't think there is that level of trust in the LLM output. But we are so close that these are the things that could happen. Let's be specific with an example. Say you are watching some stock. You have a large position in Apple for whatever reason, and you've built a bunch of agents that are looking out for black swan events. So they are reading the news feeds, they are looking on Twitter, watching for somebody to say something about Apple that you've designed them to recognise as super negative. You have an alternative data source that is looking at payment volume coming in on the number of iPhones sold. Right now, we're at a place where you might have alerts, and the alert goes off, and then Jamie gets called, and then you do something. With agentic systems, I think we're a step further. We've given them all a little bit of intelligence. So not only will they call and say, hey, there's something wrong with this Apple position; they might have a recommendation that says, actually, you need to cut your position by half.
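The human-in-the-loop setup described here might look like the following in outline: watcher agents score incoming feeds, and the system escalates a sizing recommendation rather than executing a trade. The feeds, scores, threshold and halving rule are all invented for illustration; the deliberate point of the design is that the final decision stays with a person.

```python
# Outline of a human-in-the-loop monitoring agent: watchers score incoming
# signals, and instead of trading, the agent escalates a recommendation for
# a human to act on. Feeds, thresholds and the sizing rule are invented
# purely for illustration.

from dataclasses import dataclass

@dataclass
class Signal:
    source: str   # e.g. "news", "social", "payments"
    score: float  # -1 (very negative) .. +1 (very positive)

def recommend(position: int, signals: list[Signal],
              alert_at: float = -0.5) -> dict:
    """Aggregate watcher scores; suggest a size cut, never execute it."""
    avg = sum(s.score for s in signals) / len(signals)
    if avg <= alert_at:
        # Recommendation only: a human still makes the trade.
        return {"alert": True, "suggested_position": position // 2,
                "reason": f"avg signal {avg:.2f} <= {alert_at}"}
    return {"alert": False, "suggested_position": position}

signals = [Signal("news", -0.8), Signal("social", -0.6),
           Signal("payments", -0.4)]
print(recommend(10_000, signals))
```

Replacing the `print` with an order-routing call is exactly the step the discussion says firms are not yet comfortable taking.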
I don't think we have reached the stage just yet where we are fully comfortable with them doing that execution, because again, there is this human component that is still driving the investment decisions on the discretionary side. That being said, I think we're probably less far away from that than we think. It's a level of comfort. So if you've been following the news, Meta recently bought this open-source agentic platform called Claude bot. And what Claude bot is, is a personal assistant that's in your emails: it's sending emails on your behalf, it's scheduling meetings and doing stuff. If you start using that, there's potentially a deep discomfort with doing so. But once confidence grows, i.e. you systematically evaluate how its recommendations have behaved over time, I think you'll see people converge to a place where they're more comfortable letting it trade on their behalf, just like you've seen people get much more comfortable using Waymos. If you've been in a Waymo, the first time you go it's very scary, but then it quickly gets very boring, because you get so used to the consistency with which it does the thing you want it to do. So I think we're on that journey; I don't think we've necessarily got there on the discretionary side.

Yeah. Andrew, moving across to you, I wanted to ask a little bit about regulation and how that may play a part in all of this. We've obviously talked about AI as a good and helpful agent, but I'm sure it can be misused. If you're long Delta and short American Airlines, you can presumably program something to click on Delta Airlines' website as many times as possible to try and give the impression that everyone's starting to fly Delta, or whatever.
But to what extent are regulators trying to get involved, and what effect do you see that having?

We've had regulators come and speak at our events. They tend to come and try to assure everybody they're not going to be too heavy-handed around this; given our audience, they seem to favour a light touch. I think they're very much at the learning stage at this point. Some of them are running all manner of sandboxes and trials and so on and so forth, where they get people to play with things that they think could help in areas like reporting. But in terms of actually getting more proactive around the use of AI in the investment process, I think we're still waiting to see. There's been a lot of talk of the EU AI Act and so on and so forth, which is more generic in flavour, if you will. I don't think that's trickled down yet into our world. And when I talk to the financial regulators, we haven't seen them apply that as yet to our activities, if you will. So I think the jury's still out, really, as things stand. Not quite there yet.

Okay. And Nishant, just, I guess, tangentially on that: do your people have conversations with regulators? I mean, I'm sure that they're out there talking to people inside the market. And then maybe as a second follow-up, maybe this is more like an investment question about AI, but we're obviously in some kind of bubble, maybe, when it comes to tech and a new piece of transformative technology like AI. What are the signs that a bubble might be bursting? What do we need to look out for? Do you even believe we are in a bubble?

Certainly. So answering your first question about regulators: we, as an SEC- and CFTC-regulated firm, are properly regulated by those authorities. As to whether they're coming and speaking to us about AI regulation specifically, not to the best of my knowledge. Somebody at the firm can correct me if I'm wrong, but I don't believe so.
One of my thoughts on the regulation piece is, I do think there are existing rules in place that govern how algorithms are deployed in financial situations, and I think those work really well. The person who deploys the particular model is the one who is ultimately to blame. During my summer on the SEC quant desk, the Knight Capital algo situation happened, and there it was very clear that the blame did not lie with the algorithm itself. The blame ultimately goes to the people who deployed it. So I think that framework exists even for AI development. Issues with Waymo cars are immediately attributed to Google, and they are the ones that should be held responsible. So those are certainly my thoughts on the regulatory piece.

With regards to bubble or not bubble: to be entirely fair, I am not the right person for that. I am a systematic quant investor, so for me, this is a natural evolution of the process. I really just think of these things as tools that are helping me be a better investor. And so there's a little bit of bias in my thinking, because more data and more compute are my lingua franca on a daily basis. So I certainly don't see this necessarily being super bubbly, but I am not the right person. I am not looking at the CapEx spending. I am not looking at the mismatch between what compute is required and the power that is required to generate that compute. I think there is some mismatch between those things.
I think the amount of compute that we are trying to build is not supported by the amount of power that we are able to generate. I think that's a key bottleneck that has been pointed out.

And a question really for both of you. I guess there'll be quite a few people listening who are perhaps trading their own portfolios, and they want to get better at using AI tools, either to come up with ideas or to help them monitor them. What sort of advice can you give them on where to look to find the right platform? And I guess the second question is, I never really thought about it until just now, but how careful do people need to be with their prompt writing? Should they actually spend some time working out what they write in those prompts? It seems quite important. So maybe a few words on that.

I would speak generically on the prompts side. I mean, I hear of people keeping prompt libraries, and again, part of the governance, if you will, of AI deployment is that this is the way we approach this kind of prompt, this is the way we approach that kind of prompt, to get the best results, or to safeguard against perhaps generating something questionable that won't be defensible ultimately. So I think there's some governance to be done about that, and I think people are starting to do that. And then my other bit really is about data quality. There is this constant search for new data sets; I can see it. And I think to some degree AI is something that encourages that and makes it feasible, but it is about ensuring you've got the right processes in place to make sure that what you get in the end, that secret sauce, the nuance, the unique approach, is really optimised by making sure that your data quality is good ultimately. That would be my two cents, as it were.

Yeah.
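A prompt library of the kind Andrew describes could be as simple as a small store of named, versioned templates; a minimal sketch, where the class name, template names, and content-hash versioning are all illustrative assumptions:

```python
import hashlib

class PromptLibrary:
    """Keep named, versioned prompt templates so the same task is always
    asked the same way -- the governance piece described above."""
    def __init__(self):
        self._prompts: dict[str, dict] = {}

    def register(self, name: str, template: str) -> str:
        # Version by content hash, so any change in wording is visible.
        version = hashlib.sha256(template.encode()).hexdigest()[:8]
        self._prompts[name] = {"template": template, "version": version}
        return version

    def render(self, name: str, **kwargs) -> dict:
        entry = self._prompts[name]
        return {"name": name, "version": entry["version"],
                "prompt": entry["template"].format(**kwargs)}

lib = PromptLibrary()
lib.register("fomc_summary", "Summarise the policy stance in: {statement}")
call = lib.render("fomc_summary", statement="Rates unchanged.")
# `call` carries the rendered prompt plus its version, ready to be logged
# alongside the model's output for later review.
```

Logging the version with every call is what makes yesterday's recommendation reproducible and defensible.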
I think Andrew's 100% right. Having a prompt library is something that we highly recommend from a quant perspective, because we treat these as models and we want as deterministic an output as possible. We do need to store the type of prompts that we write, and over time you learn how certain prompts help the model make certain decisions. The worst thing that can happen as a retail investor trying to apply these things is that one day you put in one prompt and get recommendation A, and the next day you put in another prompt and get recommendation B. So there are simple things that you can do to make this process more robust. One is, for any large prompt that you're writing that's an investment thesis, you should record it in your trading journal, and it should go there as a specific input in your investment process. The second thing that I highly recommend people do, and this is something that we spent a lot of time building internally, is: if you use a particular tool to get some sort of output that is used in the investment process, have another LLM score it. So if I just think of discretionary investors: say you're a retail investor and you're trying to parse an FOMC statement. You can do that pretty easily off the shelf, maybe using ChatGPT. Have Gemini score it for you, and have that consistent framework so that different high-calibre models evaluate each other, because it keeps the output a lot more honest and gives you a better sense. One of the agents that we're building internally that I'm very excited about is what I call the always-on critic agent. We were talking about conviction, and I really am looking for an agent such that, for every idea that gets proposed to it, it tries to point out issues with the idea. And that's very helpful, because it's like having somebody out there who's carefully reviewing it.
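Both patterns, one model scoring another's output and a critic iterating on an idea before a human sees it, could be sketched like this. The model callables here are hypothetical stubs standing in for real LLM API calls (e.g. ChatGPT producing the reading, Gemini scoring it), so the sketch runs without any keys:

```python
def cross_score(output: str, judge) -> float:
    """Have a second, independent model score the first model's output."""
    return float(judge(f"Score 0-1 how well-supported this reading is: {output}"))

def critic_rounds(idea: str, critic, revise, rounds: int = 2) -> str:
    """'Here's an idea, here's where it sucks' for a few rounds,
    before the idea ever reaches a human."""
    for _ in range(rounds):
        flaws = critic(idea)       # the always-on critic pokes holes
        idea = revise(idea, flaws) # the proposer answers the critique
    return idea

# Hypothetical stub models; a real system would wrap LLM API calls.
primary = lambda prompt: "dovish"                    # e.g. ChatGPT parsing an FOMC statement
judge = lambda prompt: "0.8"                         # e.g. Gemini scoring that reading
critic = lambda idea: "assumes stable correlations"
revise = lambda idea, flaws: idea + " [revised]"

reading = primary("Classify this FOMC statement as hawkish/dovish: ...")
confidence = cross_score(reading, judge)
actionable = confidence >= 0.7   # only act when the judge also finds it well-supported

idea = critic_rounds("long momentum basket", critic, revise, rounds=2)
```

The threshold and round count are arbitrary here; the point is the structure, with every recommendation passing a second, independent model before it counts as an input to the process.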
So there's actually a really great open-source tool called RoboRev that does this continuously for code. But you can also do this for your research process. Have another agent sit and score the output of a specific agent, poke holes in it, and use that output as an input loop to improve the structured response. So you can have this back and forth going, so that you get a couple of rounds of: here's an idea, here's where it sucks; here's an idea, here's how it's better, before it even gets to the human to look at and say, interesting, maybe I can action that or not.

That's some really excellent advice. Thank you both for that. In fact, if people are listening, they should go back and listen to those answers again. Some excellent practical advice in there. And so, gentlemen, we're slightly running out of time here, but considering we're talking about practical uses, I wanted to finish on one question for both of you. If it's not too personal, how are you both using AI in your own lives, just to make life outside of work a bit easier? With me and my wife, it typically seems to be about what to do with the children, essentially, and then an argument ensues, which we then ask AI to try and solve. But that aside, what kind of things do you find useful?

Outside of work, you're saying? I mean, I sit there and answer my daughter's homework, really, with Perplexity on my phone. Daddy, what does this mean? I'll be off quietly running a couple of questions on that. So it's become the new Google for me, just on my phone, for sure. And then I have my second project, which I haven't executed on yet: our back garden here in London.
It's a bit messy, so my plan is to take a few pictures and then run those pictures through something or other and get a decent design for the back garden. So I think there are all sorts of things you can do with this.

I like it.

So I have a nerdier answer, because I'm inherently a nerd, and a lot of my AI usage is actually building lots of silly projects that I can use around the house. I have this dashboard of the real-time number of bikes available downstairs by my apartment, because I bike to the boxing gym every morning around 5:30, and it's a really unpleasant ride if I don't have one of those electric bikes to take me through. So I have all these cute little things for automating stuff. And I'm actually driving my wife crazy, because I'm trying to build her a little app with our own schedules and little things like that. So I'm very much a kid in a candy store: any ridiculous idea I can think of, I'm using Claude to spool up an app and start using it.

Well, you're preaching to the converted. I get obsessed about the time it takes to get from one place in New York City to another, and I know there are a lot of different platforms that give you different answers, and you see which one's best. Anyway, gentlemen, I have taken up too much of your time, but you've both been amazing guests, and I want to thank you for today's podcast. If people want to get in touch with Andrew, the A-Team Group have their own website to see what they're up to. And of course, Versor Investments have their own site too, if you want any more answers. But gentlemen, if you want to give a few parting words.

Just a thank you very much, and a great, great conversation. Fascinating topic. I'm sure we'll be back next year to see where we're at.

Yeah, likewise. I've been a long-time listener, so it's been really exciting to finally be on this side of the mic. I think we're in a really, really exciting moment with AI, and anybody who's listening to the podcast, just go out there.
Don't be afraid. Just use the tools and experiment. There's a lot of fun to be had and a lot of efficiency to be gained.

Well, I think a year is going to be too long, given the pace at which this world is changing. So Andrew, Nishant, thank you very much indeed. And thank you all for listening.

Thanks once again for listening, everyone. And please, as usual, give us a follow, like or subscribe wherever you get your podcasts.

The information contained in this podcast does not constitute a recommendation from any LSEG entity to the listener. The views expressed in this podcast are not necessarily those of LSEG, and LSEG is not providing any investment, financial, economic, legal, accounting or tax advice or recommendations in this podcast. Neither LSEG nor any of its affiliates make any representation or warranty as to the accuracy or completeness of the statements or any information contained in this podcast, and any and all liability therefor, whether direct or indirect, is expressly disclaimed. For further information, visit the show notes of this podcast or lseg.com.

Disclaimer

The content and information (“Content”) in the podcast (“Programs”) is provided for informational purposes only and not investment advice. You should not construe any such Content, information or other material as legal, tax, investment, financial, or other professional advice nor does any such information constitute a comprehensive or complete statement of the matters discussed. None of the Content constitutes a solicitation, recommendation, endorsement, or offer by LSEG, its affiliates or any third party service provider to buy or sell any securities or other financial instruments in this or in any other jurisdiction in which such solicitation or offer would be unlawful under the securities laws of such jurisdiction. All Content is information of a general nature, is illustrative only and does not address the circumstances of any particular individual or entity. LSEG and its affiliates are not a fiduciary by virtue of any person’s use of or access to the Programs or Content. You alone assume the sole responsibility of evaluating the merits and risks associated with the use of any information or other Content in the Programs before making any decisions based on such information or other Content. In exchange for accessing and/or participating in the Program and Content, you agree not to hold LSEG, its affiliates or any third party service provider liable for any possible claim for damages arising from any decision you make based on information or other Content made available to you through the Program. LSEG and its affiliates make no representation or warranty as to the accuracy or completeness of the Content. LSEG disclaims all liability for any loss that may arise (whether direct, indirect, consequential, incidental, punitive or otherwise) from any use of the information in the Program. LSEG does not recommend, explicitly or implicitly, or suggest any investment strategy. 
LSEG and its affiliates do not have regard to any individual’s, group of individuals’ or entity’s specific investment objectives, financial situation or circumstances. The views expressed in the Program are not necessarily those of LSEG or its affiliates. LSEG and its affiliates do not express any opinion on the future value of any security, currency or other investment instrument. You should seek expert financial and other advice regarding the appropriateness of the material discussed or recommended in the Program and should note that investment values may fall, you may receive back less than originally invested and past performance is not necessarily reflective of future performance.