Synthetic Identity Fraud: The $6 Billion Problem Hiding in Plain Sight

A synthetic identity doesn't belong to anyone. It's assembled from fragments: a real Social Security number paired with a fabricated name, a generated date of birth, and a deepfake selfie. Each piece passes its individual check. Together, they create a person who doesn't exist but, to every verification system, looks like they do.
The Federal Reserve has called synthetic identity fraud the fastest-growing type of financial crime in the United States. Estimated losses exceed $6 billion annually, and that figure almost certainly understates reality, because the defining characteristic of synthetic identity fraud is that it's designed not to trigger complaints. There's no real victim calling their bank. The "person" who opened the account was never real to begin with.
What Is Synthetic Identity Fraud?
Synthetic identity fraud is a form of identity fabrication where criminals combine real and fictitious personal information to create a new, fake identity. Unlike traditional identity theft, where a criminal impersonates an existing person, synthetic identity fraud manufactures someone entirely new.
The synthetic identity fraud definition matters because it explains why detection is so difficult. In traditional identity theft, the victim eventually notices unauthorized activity and reports it. With synthetic identities, there's no victim to sound the alarm. The fabricated identity builds credit history, passes KYC checks, opens accounts, and operates normally — until the fraudster decides to cash out.
The typical lifecycle: a synthetic identity is created using a real SSN (often belonging to a child, elderly person, or recent immigrant who won't notice credit inquiries), combined with fake biographical details. The identity applies for credit, gets denied, but the application itself creates a credit file. Over months, the identity builds legitimate-looking history through authorized user piggybacking or secured credit products. Eventually, the identity "busts out", maxing out credit lines and disappearing.
In crypto and fintech, the cycle is faster. AI-generated documents and deepfake selfies pass automated onboarding in minutes. A synthetic identity can create a wallet, fund it, and begin laundering, all before any behavioral monitoring system has enough data to flag it.
Why Synthetic Identity Fraud Is Accelerating
Three forces are converging to make this problem dramatically worse.
AI has democratized identity fabrication. Generative AI tools now produce convincing fake identity documents (passports, driver's licenses, utility bills) complete with functional barcodes and security features. Deepfake selfies defeat liveness checks. AI-generated synthetic identities made up 34% of fake exchange account registrations in 2025, up from single digits just two years prior.
Data breaches supply the raw materials. Every breach dumps millions of SSNs, addresses, and dates of birth into dark web marketplaces. These real data fragments are the seeds that synthetic identities grow from. With 1.96 billion credentials exposed in a single 2024 dataset, the supply of raw material is effectively unlimited.
Verification systems weren't designed for this. KYC workflows check whether a document appears genuine and whether the face matches. They don't check whether the person behind the document actually exists as a coherent, real human. Document verification asks "is this passport real?", not "is this person real?" That gap is where synthetic identities live.
Synthetic Identity Fraud Detection: Why Current Approaches Fall Short
The industry has invested heavily in synthetic identity fraud detection. Machine learning models analyze application patterns. Consortium databases cross-reference identity elements across institutions. Behavioral analytics monitor post-onboarding activity for anomalies.
These tools improve detection rates at the margins. But they share a structural limitation: they operate on data that the fraudster controls.
When the identity document is AI-generated, document verification sees a genuine-looking document. When the selfie is a deepfake, liveness detection sees a live face. When the SSN belongs to a real person who hasn't flagged it, database checks return a clean result. Each verification step passes because it's evaluating attacker-supplied inputs against attacker-understood criteria.
Detection, by definition, means the synthetic identity has already entered the system. The account exists. The wallet is open. The fraud window is active. Even the best detection systems create a gap between account creation and fraud identification, and in crypto, where transactions settle in minutes, that gap is all an attacker needs.
Preventing Synthetic Identity Fraud at the Hardware Layer
SLC takes a fundamentally different approach to preventing synthetic identity fraud. Instead of trying to determine whether an identity is real after it's been submitted, we require cryptographic proof of physical existence before any identity claim is evaluated.
Here's the mechanism: every account creation through SLC-protected onboarding requires a cryptographic attestation from a physical SIM's secure element, delivered through a dedicated channel via the mobile network, not the public internet. This attestation proves that a specific, tamper-resistant piece of hardware is present and that the subscriber identity bound to that hardware is authentic.
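The attestation described above follows a standard challenge-response pattern. The sketch below is purely illustrative: the function names, the symmetric-key HMAC scheme, and the in-memory key table are simplifications of what a carrier-provisioned secure element and a mobile-network signaling channel actually provide, and are not SLC's implementation.

```python
import hashlib
import hmac
import secrets

# Illustrative stand-in for keys provisioned into SIM secure elements.
# In reality, these keys never leave tamper-resistant silicon.
CARRIER_PROVISIONED_KEYS = {
    "sim-001": secrets.token_bytes(32),
}

def sim_sign(sim_id: str, challenge: bytes) -> bytes:
    """What the secure element does: answer a fresh challenge with a keyed MAC."""
    key = CARRIER_PROVISIONED_KEYS[sim_id]
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify_attestation(sim_id: str, challenge: bytes, response: bytes) -> bool:
    """Server side: proceed with onboarding only if the response proves
    possession of carrier-provisioned hardware."""
    key = CARRIER_PROVISIONED_KEYS.get(sim_id)
    if key is None:
        return False  # no known hardware behind this identity: rejected at the gate
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)  # constant-time comparison

# Onboarding flow: the server issues a fresh nonce, the device must answer it.
challenge = secrets.token_bytes(16)
genuine = sim_sign("sim-001", challenge)
print(verify_attestation("sim-001", challenge, genuine))       # True
print(verify_attestation("sim-001", challenge, b"\x00" * 32))  # False: forged
```

A fresh nonce per attempt is what prevents replay: a recorded response from an earlier session is useless against a new challenge.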
This changes the synthetic identity equation fundamentally.
Synthetic identities are rejected before they reach verification. A synthetic identity can have a perfect fake passport, a flawless deepfake selfie, and a clean SSN. But it cannot produce a cryptographic response from a physical SIM secure element that it doesn't possess. No verified device, no wallet opened. The identity is rejected at the hardware gate — before document verification, before liveness detection, before any downstream system is engaged.
1:1 hardware binding prevents scale. Each SIM secure element generates unique cryptographic keys that cannot be cloned or transferred. One identity, one SIM, one account. Bulk wallet creation chains collapse because every additional wallet requires an additional physical SIM with unique hardware-rooted credentials. The economics of synthetic identity fraud depend on scale: create hundreds of identities, open hundreds of accounts, and cash out across all of them simultaneously. When each account requires unique, non-cloneable hardware, that model breaks.
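The one-account-per-SIM constraint can be sketched as a registry keyed on a fingerprint of the hardware's public key. This is a toy model (the class, method names, and raw-bytes keys are hypothetical), but it shows why bulk creation collapses: a second account request backed by the same hardware is simply refused.

```python
import hashlib

class HardwareBoundRegistry:
    """Toy registry enforcing one account per SIM secure element."""

    def __init__(self):
        self._accounts = {}  # key fingerprint -> account id

    @staticmethod
    def fingerprint(sim_public_key: bytes) -> str:
        """Stable identifier derived from the SIM's unique key material."""
        return hashlib.sha256(sim_public_key).hexdigest()

    def open_account(self, sim_public_key: bytes, account_id: str) -> bool:
        fp = self.fingerprint(sim_public_key)
        if fp in self._accounts:
            return False  # this hardware is already bound: no second account
        self._accounts[fp] = account_id
        return True

registry = HardwareBoundRegistry()
print(registry.open_account(b"sim-key-A", "wallet-1"))  # True: first binding
print(registry.open_account(b"sim-key-A", "wallet-2"))  # False: same hardware
print(registry.open_account(b"sim-key-B", "wallet-3"))  # True: distinct hardware
```

Because the fingerprint derives from non-cloneable key material, each additional account in a fraud chain requires procuring another physical, carrier-provisioned SIM.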
The dedicated channel eliminates injection attacks. Synthetic identity fraud increasingly relies on injection attacks: feeding fabricated data directly into verification pipelines and bypassing cameras and sensors entirely. SLC's authentication runs through the mobile network, not the application layer. There's no data pipeline to inject into. The cryptographic proof originates in tamper-resistant silicon and travels through carrier infrastructure that no attacker controls.
Synthetic Identity Fraud in Crypto and Fintech
Digital wallets have become the primary target for synthetic identity fraud. According to recent industry data, wallets lead all fraud targeting at 48%, making them the primary entry point for crypto-related scams and account takeovers.
The reason is structural. Crypto onboarding is designed for speed and minimal friction, which is exactly what synthetic identities exploit. Traditional financial institutions might take days to open an account, creating windows for manual review and cross-referencing. Crypto platforms onboard in minutes, and transactions are irreversible.
This combination of fast onboarding, irreversible transactions, and pseudonymous infrastructure makes crypto the ideal environment for synthetic identity fraud at scale. A synthetic identity can create a wallet, receive funds from a compromised account or pig butchering scheme, and move those funds through mixing services before any detection system flags the activity.
Hardware-rooted authentication disrupts this by introducing a physical verification requirement that synthetic identities cannot satisfy, without adding friction for legitimate users. The SIM attestation is instantaneous, faster than document upload and liveness capture. It adds security by subtracting steps.
What Preventing Synthetic Identity Fraud Looks Like in Practice
For institutions evaluating their synthetic identity fraud prevention posture, the strategic question is whether to invest further in detection, catching more fakes after they enter the system, or to invest in prevention at the architectural level.
Detection investments yield diminishing returns. Every improvement in AI-powered document analysis is met by an improvement in AI-powered document generation. Every advance in liveness detection faces a corresponding advance in deepfake technology. The arms race has no finish line.
Prevention through hardware-rooted authentication sidesteps the arms race entirely. The verification doesn't ask "does this document look real?" It asks "can this device produce a cryptographic proof that only genuine, carrier-provisioned hardware could generate?" That question has a binary, mathematically verifiable answer. No confidence score. No risk threshold. No margin for a sophisticated fake to slip through.
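The contrast between the two decision rules can be made concrete. The sketch below (an illustration, not any vendor's code; the HMAC scheme stands in for the hardware proof) shows why one side is an arms race and the other is not: a detection threshold is cleared by a sufficiently good fake, while a cryptographic check fails for any responder that lacks the key, no matter how polished its forgery.

```python
import hashlib
import hmac
import secrets

# --- Detection: probabilistic, threshold-based ---
def detection_passes(fake_quality: float, threshold: float = 0.9) -> bool:
    """A confidence-score check: improve the fake and it eventually clears."""
    return fake_quality >= threshold

# --- Prevention: binary cryptographic proof ---
SIM_KEY = secrets.token_bytes(32)  # stands in for a non-extractable hardware key

def hardware_passes(challenge: bytes, response: bytes) -> bool:
    """Valid only if the responder holds the hardware key. No score, no margin."""
    expected = hmac.new(SIM_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = secrets.token_bytes(16)
print(detection_passes(0.95))  # True: a high-quality fake slips through

forged = hashlib.sha256(challenge).digest()  # best effort without the key
print(hardware_passes(challenge, forged))    # False: no key, no proof
```

Better generative models raise `fake_quality` over time; they do nothing to help an attacker produce a valid keyed response.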
The Bottom Line
Synthetic identity fraud is the fastest-growing financial crime because it exploits a fundamental gap: verification systems check whether identity elements are genuine without checking whether the person is real.
Closing that gap requires moving the trust anchor from documents and biometrics, which AI can fabricate, to hardware and cryptography, which it cannot.
The SIM in every mobile device is already a tamper-resistant, carrier-provisioned secure element deployed at global scale. It's the one identity signal that a synthetic identity cannot fake, because it exists in physical hardware, not in data.
The institutions that anchor their onboarding in hardware-rooted authentication will stop synthetic identity fraud at the source. Everyone else will keep building better detectors for better fakes — indefinitely.
Synthetic identities passing your onboarding checks? See how SIM-based authentication rejects fake identities at the hardware layer →


