Although 63% of financial services firms have at least a level three maturity for “responsible AI” for data and technology according to a recent survey by McKinsey, more progress is needed. To achieve the data governance and agility that’s required to support AI use cases that truly deliver value, organisations need to rethink their data infrastructure. LSEG understands what is required of this data transformation programme because it has recently undertaken a similar journey, to enable the industry to access AI-ready financial data.
- Delivering value through an AI strategy requires data that is high quality and accessible.
- Transforming legacy data infrastructure can be challenging, but LSEG has already done so, making its market data available for today’s AI use cases.
- To implement its own AI strategy, LSEG partnered with Microsoft. Today its market data is available within an ecosystem of solutions designed to support AI use cases from ideation to delivery.
Building the data foundations for AI
At Microsoft AI Tour London, industry leaders discussed the fundamental role that data infrastructure plays in a successful AI strategy. Accurate AI outcomes are underpinned by data that is high quality, accessible, permissioned, and sits within a data governance framework.
This data must sit within a robust ecosystem that connects information embedded across workflows and analytics. For many firms in the financial services industry, achieving this is full of complexity, thanks to legacy data and technology stacks that have grown organically over time. LSEG appreciates the challenge financial firms face in transforming their data for AI, because it has already taken this journey itself, by deploying an ecosystem of Microsoft solutions.
Data progress achieved
According to a recent survey by McKinsey[note1], 63% of financial services firms have at least a level three maturity for “responsible AI” for data and technology, compared with 55% of all industries included in the study. Responsible AI includes a significant focus on data quality, because AI outcomes are only as good as the data that they are grounded in. It is positive news that the industry has made such progress.
However, more work must be done to deliver value from AI strategies. Poor quality data will deliver inaccurate AI outcomes, resulting in increased financial, compliance and operational risks – as well as the need for additional human intervention. To build a culture that embraces AI and achieves its strategic goals, a data governance programme that ensures data accuracy and provides the right semantics is needed. Robust data governance helps to build a culture of trust in the ability of AI to automate processes and provide analytical insights.
Enabling agile data
In addition to data governance, financial firms require agile data to support AI. Most organisations are sitting on massive quantities of their own data, locked in silos. To transform AI into value, firms must be able to translate this data into intelligence that can flow through processes, deliver insights, generate operational responsiveness, and support regulatory confidence.
Unfortunately, most financial firms have not achieved this yet. According to another survey, 83% of senior business leaders said their organisation’s AI adoption would be faster if they had stronger data infrastructure in place.[note2]
The complexity of legacy data stacks and associated technology often means there are data copies and infrastructure inefficiencies, limited interoperability between data sets and across platforms, data security challenges and vendor sprawl.
Democratisation of data
The solution is to build a single, organisation-wide data lake that enables AI adoption beyond specialist teams, drives productivity and use‑case innovation, and supports enterprise‑wide impact. This is because having a single source of truth for data across the enterprise ensures that everyone is working with the highest quality, freshest data with accurate permissions and essential metadata baked in.
“When organisations move away from having segregated data sets sitting in garden sheds and under floorboards, and instead in a single location that everyone can access, the capacity for AI to transform the business grows exponentially,” said Emily Prince, Group Head of Enterprise AI, LSEG, speaking at the Microsoft AI London event. “When we did this at LSEG, we started to have a view on information that was incredibly powerful in terms of the insights that it could yield.”
By creating a data lake, people can deploy data into AI use cases knowing that quality is strong, and they have permission to do so. People can also access a wider range of data sets, potentially uncovering news, reference data, pricing data, and more that will enhance the accuracy of AI outcomes. Adding in more historical data – which captures periods of financial distress – can improve the robustness of stress tests and generate improved scenario analysis.
This unified data infrastructure delivers smarter decisions and scalability. End-to-end transparency and governance reduce risk and maintain data trust. While simplifying management and reducing costs, unified data unlocks the ability for employees across the business to use AI to innovate, improving productivity and generating insight.
The LSEG Journey
LSEG’s AI strategy focused on delivering trusted, licensed, AI-ready content to scale AI in financial services. The unparalleled depth, breadth, and quality of LSEG’s AI-ready content and taxonomies includes proprietary datasets stretching back over decades. LSEG Everywhere includes deployment of the Model Context Protocol (MCP) and recent partnerships with Microsoft, Claude, ChatGPT, Snowflake and Databricks.
“Bringing data to people in a turn-key way, that enables them to ideate and experiment, is extremely powerful,” says Prince. “Now, put that together with Microsoft and MCP, and firms can now work with over 33 petabytes of trusted data that they can really lean into.”
LSEG data sits within the Microsoft ecosystem of data and AI solutions, including Microsoft Foundry (AI), Microsoft Defender (Security), Microsoft Purview (governance), and OneLake. Together, these solutions support data quality and governance. Data rights are embedded, so that end users know what their permissions are. Data discovery becomes more intuitive, enabling AI innovation.
With this infrastructure, AI value can truly be delivered. “We are co-developing with customers, and some of the things that we are able to see and build are so exciting,” says Prince. “Historically, there are so many possibilities that the financial services industry hasn’t modelled, hasn’t captured. Put another way, there is so much opportunity.”
Sources
[1] https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/tech-forward/state-of-ai-trust-in-2026-shifting-to-the-agentic-era
[2] https://www.ey.com/content/dam/ey-unified-site/ey-com/en-us/insights/emerging-technologies/documents/ey-ai-survey-shows-investment-boosts-roi-but-leaders-continue-to-see-risks.pdf
Legal Disclaimer
Republication or redistribution of LSE Group content is prohibited without our prior written consent.
Copyright © 2025 London Stock Exchange Group. All rights reserved.
The content of this publication is provided by London Stock Exchange Group plc, its applicable group undertakings and/or its affiliates or licensors (the “LSE Group” or “We”) exclusively.
Neither We nor our affiliates guarantee the accuracy of or endorse the views or opinions given by any third party content provider, advertiser, sponsor or other user. We may link to, reference, or promote websites, applications and/or services from third parties. You agree that We are not responsible for, and do not control such non-LSE Group websites, applications or services.
The content of this publication is for informational purposes only. All information and data contained in this publication is obtained by LSE Group from sources believed by it to be accurate and reliable. Because of the possibility of human and mechanical error as well as other factors, however, such information and data are provided "as is" without warranty of any kind. You understand and agree that this publication does not, and does not seek to, constitute advice of any nature. You may not rely upon the content of this document under any circumstances and should seek your own independent legal, tax or investment advice or opinion regarding the suitability, value or profitability of any particular security, portfolio or investment strategy. Neither We nor our affiliates shall be liable for any errors, inaccuracies or delays in the publication or any other content, or for any actions taken by you in reliance thereon. You expressly agree that your use of the publication and its content is at your sole risk.
To the fullest extent permitted by applicable law, LSE Group expressly disclaims any representation or warranties, express or implied, including, without limitation, any representations or warranties of performance, merchantability, fitness for a particular purpose, accuracy, completeness, reliability and non-infringement. LSE Group, its subsidiaries, its affiliates and their respective shareholders, directors, officers, employees, agents, advertisers, content providers and licensors (collectively referred to as the “LSE Group Parties”) disclaim all responsibility for any loss, liability or damage of any kind resulting from or related to access, use or the unavailability of the publication (or any part of it); and none of the LSE Group Parties will be liable (jointly or severally) to you for any direct, indirect, consequential, special, incidental, punitive or exemplary damages, howsoever arising, even if any member of the LSE Group Parties are advised in advance of the possibility of such damages or could have foreseen any such damages arising or resulting from the use of, or inability to use, the information contained in the publication. For the avoidance of doubt, the LSE Group Parties shall have no liability for any losses, claims, demands, actions, proceedings, damages, costs or expenses arising out of, or in any way connected with, the information contained in this document.
LSE Group is the owner of various intellectual property rights ("IPR”), including but not limited to, numerous trademarks that are used to identify, advertise, and promote LSE Group products, services and activities. Nothing contained herein should be construed as granting any licence or right to use any of the trademarks or any other LSE Group IPR for any purpose whatsoever without the written permission or applicable licence terms.