Risk Intelligence Insights

Faster payments, higher risks: combatting AI-enabled fraud

Ramesh Menon

Group Director of Product Management, Digital Identity & Fraud Solutions
  • AI-driven fraud is rapidly coming to the forefront as organisations within the payments industry contend with several new threats to identity verification.
  • Deepfakes, synthetic identities, and other AI-enabled tactics are being used to bypass traditional fraud protections.
  • To effectively manage identity and payments risk, a comprehensive, real-time approach spanning the customer lifecycle is required.

AI-powered tools are swiftly finding their way into nearly every business process, promising increased productivity, processing power, and the ability to automate a growing number of repetitive tasks. However, AI-driven fraud is rapidly coming to the forefront as organisations within the payments industry contend with several new threats to identity verification.

Merchants, payment processors, and financial institutions must become aware of these new threats and take appropriate measures to protect their customers and employees. Otherwise, they risk significant losses, reputational damage, and regulatory penalties. Below are two of the latest threats that have emerged since the introduction of widely available AI tools.

Deepfakes: The Latest Trend in Account Takeover

Deepfakes are sophisticated digital manipulations created using AI. They have been successfully used to forge documents, mimic celebrities and politicians, and even create highly believable audio and video of events that never occurred. In the context of identity verification, deepfakes can be used in social engineering attacks to gain access to existing accounts, create new accounts under the forged identity of a real person, and intercept transactions.

While many of these methods were easy to spot as recently as a year ago, the technology has developed so quickly that these fakes can now effectively deceive both human and machine countermeasures. With the rapid spread of AI tools used to develop all manner of deepfakes, these attacks are becoming increasingly widespread. In the last year alone, deepfake fraud in North America increased by 1,740%.[1]

Synthetic Identities: Powered by AI

Synthetic identities have long been among the most difficult challenges in identity verification. However, the combination of stolen personally identifiable information (PII) available on the Dark Web and the virtually unlimited processing power of AI tools has made it easier than ever for fraud operators to create, test, and deploy synthetic identities. AI tools have empowered fraud networks to use synthetic identities en masse, creating fictitious accounts at unprecedented speed.

Once fraudulent accounts are created, they can be used for countless types of fraud, including the purchase of goods and services, facilitating illegal money transfers, establishing lines of credit, and more. Identifying synthetic identities and blocking fraud operators at the point of enrolment is critical to stopping fraud in its tracks. Once a fake identity has been successfully used to create a new account, the damage is much harder and costlier to correct.

Identity Verification in the Age of AI

As businesses increasingly cater to post-pandemic consumer preferences for rapid, digital-first interactions, the speed and potential costs of fraud grow exponentially. The convergence of faster payment adoption, consumer expectations of convenience above all else, and AI-powered tools for fraud have created a new risk environment for any organisation in the financial ecosystem. Understanding these new risks is critical to staying ahead of fraud attempts.

Innovation in financial technology has provided organisations and customers with seamless, faster payments and a far superior user experience. However, fraud evolves hand in hand with innovation. Businesses must continuously balance the competing needs of minimising friction for users and preventing fraud – and the solution requires a comprehensive, real-time approach across the customer lifecycle.

Key to this approach are data-driven tools that can detect AI-generated artefacts and fabricated identities. The three pillars of the approach involve the ability to:

  1. Trust the identity of the client, via data-based verification to supplement document verification and biometrics;
  2. Trust the accounts, via bank account verification and account ownership verification; and,
  3. Trust the interaction, via email, phone, location, and other signals, in conjunction with analytics-based insight for anomaly identification (illustrated in the sketch after this list).
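
To make the third pillar concrete, the sketch below shows one simplified way interaction signals such as email, phone, and location could be combined into a weighted risk score, alongside a basic statistical check for anomalous transaction amounts. It is a minimal illustration in Python: the signal names, weights, and thresholds are all hypothetical, and a production system would rely on far richer signals and models calibrated against labelled fraud outcomes.

from dataclasses import dataclass
from statistics import mean, stdev

# Hypothetical weights and threshold for illustration only; real deployments
# would calibrate these against observed fraud outcomes.
SIGNAL_WEIGHTS = {
    "new_email_domain": 0.3,  # recently registered email domains are riskier
    "phone_is_voip": 0.4,     # disposable VoIP numbers are riskier
    "geo_mismatch": 0.3,      # IP location far from the billing address
}
RISK_THRESHOLD = 0.5

@dataclass
class InteractionSignals:
    email_domain_age_days: int
    phone_is_voip: bool
    geo_mismatch: bool

def interaction_risk_score(signals: InteractionSignals) -> float:
    """Combine per-signal indicators into a single weighted score in [0, 1]."""
    score = 0.0
    if signals.email_domain_age_days < 30:
        score += SIGNAL_WEIGHTS["new_email_domain"]
    if signals.phone_is_voip:
        score += SIGNAL_WEIGHTS["phone_is_voip"]
    if signals.geo_mismatch:
        score += SIGNAL_WEIGHTS["geo_mismatch"]
    return score

def is_amount_anomalous(history: list[float], amount: float, z_cutoff: float = 3.0) -> bool:
    """Flag a transaction amount that deviates sharply from the user's history."""
    if len(history) < 2:
        return False  # not enough history to estimate a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > z_cutoff

# Example: a brand-new email domain plus a VoIP phone pushes the score
# past the threshold, so the interaction is routed to step-up verification.
signals = InteractionSignals(email_domain_age_days=5, phone_is_voip=True, geo_mismatch=False)
score = interaction_risk_score(signals)
flagged = score >= RISK_THRESHOLD or is_amount_anomalous([42.0, 55.0, 61.0], 5_000.0)
print(f"risk score: {score:.2f}, step-up verification required: {flagged}")

The design point worth noting is the separation of concerns: each signal is scored independently and then combined, so new signals can be added, and weights retuned, without reworking the decision logic.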

Focusing on these three pillars can help organisations protect themselves against emerging threats in the most robust and comprehensive way. 


Legal Disclaimer

Republication or redistribution of LSE Group content is prohibited without our prior written consent. 

The content of this publication is for informational purposes only and has no legal effect, does not form part of any contract, does not, and does not seek to constitute advice of any nature and no reliance should be placed upon statements contained herein. Whilst reasonable efforts have been taken to ensure that the contents of this publication are accurate and reliable, LSE Group does not guarantee that this document is free from errors or omissions; therefore, you may not rely upon the content of this document under any circumstances and you should seek your own independent legal, investment, tax and other advice. Neither We nor our affiliates shall be liable for any errors, inaccuracies or delays in the publication or any other content, or for any actions taken by you in reliance thereon.

Copyright © 2024 London Stock Exchange Group. All rights reserved.