The Identity Fraud Arms Race Is Accelerating. Detection Alone Won't Win It.

Entrust just published one of the most comprehensive identity fraud datasets in the industry. Their 2026 Identity Fraud Report, drawn from over one billion identity verifications across 195 countries, documents a threat landscape that should alarm every CISO, fraud leader, and identity product owner in financial services.
The numbers are stark. Global fraud rates hit 3.1% in 2024. In the Americas, that figure climbed to 4.3%. Account takeover losses in the U.S. alone reached $15.6 billion. And the attack methods driving those numbers aren't just evolving — they're industrializing.
But the most important thing the Entrust report reveals isn't the scale of the problem. It's the structural reason current defenses are falling behind.
Deepfakes and Injection Attacks Are Outpacing Detection
Two years ago, deepfakes were a curiosity in the fraud landscape. Today, they account for one in every five biometric fraud attempts globally.
The escalation is exponential. Entrust found that deepfakes represented 26% of sophisticated biometric fraud attempts against active liveness systems in 2025 — up from just 2% the prior year. That's a 13x increase in twelve months.
Injection attacks — where fabricated video or images are fed directly into verification systems, bypassing the device camera entirely — grew roughly 40% year over year. These attacks don't try to fool a camera. They skip the camera altogether, inserting AI-generated media straight into the data pipeline that verification software trusts.
Meanwhile, the crypto sector has become the primary testing ground for these techniques. Sixty percent of all deepfake fraud now targets crypto platforms, with fraud activity in the sector growing 24% annually since 2020.
The tools fueling this aren't bespoke. Fraud-as-a-Service platforms sell pre-built deepfake generators, emulation scripts, and credential dumps to anyone willing to pay. Organized fraud rings — Entrust has identified 53 unique rings since 2023, four of which target multiple clients across sectors — operate around the clock, with attack volumes peaking between 2 and 4 AM UTC.
This is industrial-scale fraud, powered by the same generative AI tools that are transforming legitimate industries.
Why Current Defenses Are Structurally Limited
The identity verification industry has responded with increasingly sophisticated countermeasures. Active liveness detection, passive device signals, back-end AI analysis, behavioral biometrics — each adds a layer of defense. And they work. Entrust's own Motion Liveness technology achieves below 0.1% fraud rates. That's impressive.
But it's not zero. And the trend lines matter more than the snapshot.
The Entrust report categorizes fraud into three attack surfaces: targeting identity elements (documents, biometrics), targeting prevention systems (injection attacks, device emulation), and targeting people (social engineering). Current defenses primarily address the first two categories by trying to detect attacks after they've been injected into the verification flow.
That's the structural problem. Every software-based countermeasure operates within the same trust boundary as the attack. The verification system trusts the camera stream, the device memory, and the application layer — and so does the attacker. As long as authentication depends on data that passes through software-controlled channels, AI-generated media and session manipulation will scale faster than detection.
The data confirms this trajectory. Digital document forgeries have climbed to 35% of all document fraud, fueled by GenAI tools that produce convincing fakes from open-source models and a few prompts. In payments, 82% of fraud attempts target the authentication process after onboarding — meaning the attacker already has an account and is exploiting the verification layer. In digital banking, 55% of fraud occurs post-onboarding.
Detection is getting better. But the attack surface is getting wider, faster.
The Architectural Alternative: Moving Authentication Below the Software Layer
SLC Digital approaches this problem differently — not by building a better detector, but by eliminating the surface that detectors are trying to protect.
Our multi-patent-protected Java Card applet deploys to a secondary eSIM, creating an MNO-grade hardware root of trust — a dedicated channel between the enterprise and the customer's device that operates independently of the device's operating system, the public internet, and the application layer. When authentication is required, the SIM's secure element generates a cryptographic proof that confirms device possession, subscriber identity, and session liveness — all anchored to carrier provisioning and bound to tamper-resistant hardware.
This changes the authentication question entirely. Instead of "Does this person appear legitimate based on what the camera sees?", the question becomes "Can this device produce a cryptographic attestation that only this specific hardware could generate?"
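The challenge-response pattern behind that question can be sketched in a few lines. This is a minimal illustration, not SLC's implementation: the class and method names are hypothetical, and an HMAC over a shared device key stands in for the asymmetric signing a real secure element would perform with a key that never leaves the silicon.

```python
# Illustrative challenge-response attestation sketch (hypothetical names).
# HMAC with a device-unique key stands in for the secure element's signing
# operation; in real hardware the key is non-exportable.
import hashlib
import hmac
import os
import secrets


class SecureElement:
    """Stand-in for the SIM's secure element: holds a key that never leaves it."""

    def __init__(self):
        self._key = secrets.token_bytes(32)  # provisioned by the carrier

    def shared_reference(self) -> bytes:
        # Real hardware would expose a public key; sharing the secret here is
        # purely so the sketch is self-contained and runnable.
        return self._key

    def attest(self, challenge: bytes) -> bytes:
        # Sign the verifier's fresh challenge inside the hardware boundary.
        return hmac.new(self._key, challenge, hashlib.sha256).digest()


class Verifier:
    """Enterprise side: issues single-use challenges and checks proofs."""

    def __init__(self, device_key: bytes):
        self._device_key = device_key
        self._pending: set[bytes] = set()

    def new_challenge(self) -> bytes:
        nonce = os.urandom(16)  # fresh per session, so proofs cannot be replayed
        self._pending.add(nonce)
        return nonce

    def check(self, challenge: bytes, proof: bytes) -> bool:
        if challenge not in self._pending:
            return False  # unknown, expired, or replayed challenge
        self._pending.discard(challenge)  # single use
        expected = hmac.new(self._device_key, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, proof)


se = SecureElement()
verifier = Verifier(se.shared_reference())

challenge = verifier.new_challenge()
proof = se.attest(challenge)
print(verifier.check(challenge, proof))  # True: genuine device, fresh challenge
print(verifier.check(challenge, proof))  # False: a replayed proof is rejected
```

The key property the sketch demonstrates is that the proof is bound to a fresh, single-use challenge: a captured or fabricated media stream contributes nothing, and even a captured proof fails on replay.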
That shift closes the attack vectors the Entrust report documents:
Injection attacks become irrelevant. There is no camera stream to intercept, no video feed to fabricate, no biometric data pipeline to manipulate. The authentication channel runs through the SIM's secure element, not the device's application layer.
Deepfakes lose their target. The system doesn't rely on biometric appearance. It provides hardware-rooted, cryptographic proof of identity and intent through a channel that no deepfake can reach. A pixel-perfect deepfake video is meaningless when the verification mechanism is a cryptographic challenge-response from tamper-resistant hardware.
Account takeover risk drops materially. Instead of software-verifiable credentials that can be phished, intercepted, or replayed, authentication depends on hardware-anchored possession and cryptographic proof delivered through a dedicated channel that cannot be reached over the public internet, intercepted through the phone number the way SMS codes can be, or replicated by any app. An attacker would need physical control of the specific SIM secure element — and even then, the hardware is designed to destroy its keys before surrendering them.
What Hardware-Rooted Identity Doesn't Solve
No authentication method eliminates all fraud. SLC doesn't claim otherwise.
Social engineering and coercion — where victims use their own genuine credentials under manipulation — remain a challenge regardless of whether identity is verified through software or hardware. If a person is tricked into authorizing a transaction with their own authenticated device, the cryptographic proof will be legitimate because the person and the hardware are genuine.
But here's why closing the technical attack surface still represents a fundamentally better security posture: it forces adversaries toward social engineering as their primary vector. And social engineering has characteristics that work against the attacker at scale.
It can't be commoditized through Fraud-as-a-Service platforms. You can't package a convincing phone call into a downloadable toolkit the way you can package a deepfake generator. It can't be automated with AI in the same way injection attacks can — real-time social manipulation requires human judgment and adaptation. And it can't operate at the industrial volumes the Entrust report documents, where organized rings run automated attacks 24/7 across multiple institutions simultaneously.
By eliminating the technical attack surface, hardware-rooted authentication doesn't just reduce fraud. It changes the economics of fraud, pushing attackers toward methods that are inherently harder, slower, and more expensive to execute.
Where This Is Heading
The trajectory documented in Entrust's report points in one direction. GenAI tools will continue improving. Deepfake quality will keep rising. Fraud-as-a-Service platforms will keep lowering the barrier to entry. And the gap between attack capability and software-based detection will continue to widen.
The identity industry has spent the last decade in a detection arms race — building better models to catch better fakes. That work has value. It buys time. But the Entrust data makes the long-term math clear: when attack generation is cheap, automated, and improving exponentially, detection-only strategies face structural headwinds that no amount of model tuning can overcome.
Hardware-rooted identity verification doesn't enter that arms race. It sidesteps it. By anchoring authentication in cryptographic proof generated inside tamper-resistant silicon, it creates a trust boundary that software-based attacks — no matter how sophisticated — cannot cross.
The fraud landscape will keep evolving. The question for security leaders isn't whether current defenses will be tested. It's whether their authentication architecture is built on a foundation that scales with the threat — or one that has to keep rebuilding faster than the threat can adapt.