Risk Intelligence Insights

2024: the year AI breaks content-based identity verification?

Daniel Flowe

Head of Digital Identity Strategy
  • In a recent piece of investigative journalism, Joseph Cox described using a fake ID created with AI tools to pass KYC checks performed by a prominent document and biometric verification provider
  • His experience indicates that content-based identity verification measures, such as document verification, biometric matching and facial liveness checks, are insufficient when used alone
  • To combat AI-enabled identity fraud, content-based identity verification should be supplemented by data-based identity verification, where user-supplied images of identity documents and biometrics are verified against authoritative sources

Aza Raskin, Co-Founder of the Center for Humane Technology, recently spoke about the impact of generative AI on identity verification and said, “this is the year all content-based verification breaks.” A recent piece of investigative journalism, written by Joseph Cox at 404 Media, suggests that Aza may just be right.

Last year, Cox detailed his attempt to breach his own bank’s security system using a generative AI tool and a 3-second clip of his own voice. Spoiler alert: he was successful and gained full control over his own accounts with a voice recording almost anyone could have accessed [1].

In the latest article, Cox dives deep into an underground website named OnlyFake that generates fake IDs for just $15. He provided the service with a passport photo of himself and an entirely fabricated set of personal information. The system generated a realistic-looking signature to match the fabricated personally identifiable information (PII), and, within a few minutes of paying the required fee, he had front and back images of a hyper-realistic ID card that appeared to be lying on a carpeted floor. The photo on the ID matched his face, but the ID was otherwise completely synthetic.

More worrying still, he was easily able to use his new fake ID card to pass KYC checks performed by a prominent document and biometric verification provider, and then to establish an account at a popular crypto exchange under his new, synthetic identity. FinCEN’s recent Financial Trend Analysis (FTA) on identity-related fraud revealed how attackers frequently use impersonation tactics to move funds illicitly, and showed how often money services businesses, such as crypto exchanges, are targeted by fraudsters.

 

AI algorithms are advancing at an unprecedented pace. They can create incredibly realistic fake images, videos and audio recordings, and can be used to generate believable synthetic identities. Aza Raskin argues that generative AI’s capabilities are increasing not merely exponentially but on a double exponential curve, and he points out, in a podcast on the TED network, that the ratio of AI developers to those working on AI safety is 30:1. This highlights a potentially sizeable gap between those driving AI forward, for better or worse, and those committed to establishing guardrails that promote safe use of AI tools. Financial institutions are particularly at risk: FinCEN’s FTA notes that “financial institutions and other victims appeared to have more difficulty identifying impersonation when they lack an authoritative source to compare identity documentation and evidence.”
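
To make the double exponential claim concrete, here is a minimal sketch in Python. The base of 2 and the five time steps are arbitrary stand-ins chosen purely for illustration, not measurements of any real AI capability metric.

    # Illustrative only: compare exponential growth, 2**t, with
    # double-exponential growth, 2**(2**t). The numbers are arbitrary
    # stand-ins, not measurements of AI capability.
    for t in range(1, 6):
        exponential = 2 ** t                 # 2, 4, 8, 16, 32
        double_exponential = 2 ** (2 ** t)   # 4, 16, 256, 65536, 4294967296
        print(f"t={t}: {exponential} vs {double_exponential}")

After only five steps, the double exponential curve is already roughly eight orders of magnitude ahead of the plain exponential one, which is why defences calibrated against yesterday’s growth rate can fall behind so quickly.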

Joseph’s story underscores a hard truth: while document and biometric-based methodologies of identity verification (IDV) are valuable and important parts of a comprehensive approach to preventing money laundering and terrorist financing, they are inadequate as a standalone means of proofing identities.

What we need is to enhance content-based identity verification with data-based identity verification, in which user-supplied images of identity documents and biometrics are verified against authoritative sources. In this example, no matter how believable the fake ID that OnlyFake generated was, the synthetic PII was not present in any reliable system of record. A simple data-based verification would have stopped this fraudulent account-opening attempt, and many others like it, in its tracks. No one should be able to bypass KYC processes and defeat a top provider of AML technology in minutes for the cost of a movie ticket. Greater reliance on authoritative, data-based identity verification is needed to reduce these risks.
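
As a rough illustration of what such a check adds, consider the following Python sketch. Everything in it is hypothetical: the function name, the in-memory AUTHORITATIVE_RECORDS stand-in (a real deployment would query government registries, credit bureaus or similar systems of record) and the sample PII.

    # Hypothetical sketch: PII extracted from a user-supplied document
    # image is cross-checked against an authoritative system of record.
    # All names and data here are illustrative, not a real API.

    # Stand-in for an authoritative source, e.g. a government registry.
    AUTHORITATIVE_RECORDS = {
        ("JANE", "DOE", "1990-04-12"): {"document_number": "X1234567"},
    }

    def data_based_check(extracted_pii: dict) -> bool:
        """Pass only if the PII on the document exists in, and matches,
        the authoritative source."""
        key = (
            extracted_pii["first_name"].upper(),
            extracted_pii["last_name"].upper(),
            extracted_pii["date_of_birth"],
        )
        record = AUTHORITATIVE_RECORDS.get(key)
        if record is None:
            return False  # synthetic PII: no system of record knows it
        return record["document_number"] == extracted_pii["document_number"]

    # A hyper-realistic fake ID may pass content-based checks, but its
    # fabricated PII fails here because no authoritative record exists.
    synthetic = {
        "first_name": "John",
        "last_name": "Fabricated",
        "date_of_birth": "1985-01-01",
        "document_number": "Z9999999",
    }
    print(data_based_check(synthetic))  # False

The point of the design is that a forger controls every pixel of the document image, but does not control the authoritative source; matching against that source is precisely what the content-based checks in Cox’s experiment were missing.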

[1] Source: Vice Media

 


Legal Disclaimer

Republication or redistribution of LSE Group content is prohibited without our prior written consent. 

The content of this publication is for informational purposes only and has no legal effect, does not form part of any contract, does not, and does not seek to constitute advice of any nature and no reliance should be placed upon statements contained herein. Whilst reasonable efforts have been taken to ensure that the contents of this publication are accurate and reliable, LSE Group does not guarantee that this document is free from errors or omissions; therefore, you may not rely upon the content of this document under any circumstances and you should seek your own independent legal, investment, tax and other advice. Neither We nor our affiliates shall be liable for any errors, inaccuracies or delays in the publication or any other content, or for any actions taken by you in reliance thereon.

Copyright © 2024 London Stock Exchange Group. All rights reserved.