For the past few years, banks and verification platforms have been fighting a new and specific kind of ghost: the synthetic identity.
Unlike traditional identity theft, where a criminal takes over a real person’s existing identity, a synthetic identity is a “Frankenstein” creation: a persona built from a blend of real attributes (such as a stolen tax file number) and invented details. To a bank’s onboarding system, these profiles often look just like “thin-file” customers (legitimate newcomers to the financial system), making them incredibly difficult to distinguish from valid applicants.
The US Federal Reserve first pulled this problem into the spotlight in 2019, outlining how synthetic accounts behave normally for months or years, building credit history, before they “bust out”. This leaves lenders with massive losses that are difficult to attribute to a real victim.
But recently, the threat landscape has shifted. Generative AI has turned what was once a manual hustle into a scalable, automated operation.
The AI Accelerant
We are no longer dealing with simple document forgeries. We are dealing with AI-designed realities.
According to Entrust’s 2025 Identity Fraud Report, digital document forgeries have surged by 244 percent year-on-year. Perhaps more alarming, the report recorded deepfake attempts occurring every five minutes in 2024.
The toolkit for fraudsters has become cheap and accessible:
- Industrialised Paperwork: Federal Reserve fraud experts note that Large Language Models (LLMs) can now draft consistent backstories, employment letters, and dispute narratives that perfectly conform to underwriting rules.
- Bypassing KYC: A 2026 analysis from the World Economic Forum details how attackers are using synthetic faces and camera-feed injections to bypass liveness checks, a technique specifically aimed at financial onboarding.
- Real-Time Masking: Voice cloning and real-time face masking are now sophisticated enough to confuse human agents during step-up verification calls.
The Privacy Paradox in Banking
Traditionally, the banking sector’s response to rising fraud has been to collect more data. Because the risk in synthetic IDs hides in the linkages between data points rather than in any single field, institutions perform graph-style analysis across devices, addresses, and application timing.
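To make the idea of linkage analysis concrete, here is a minimal sketch of the simplest form of it: indexing applications by shared attribute values so that any device, address, or phone number appearing on multiple applications surfaces as a link worth reviewing. The field names and data are entirely hypothetical, not a real onboarding schema, and production systems use far richer graph models than this.

```python
from collections import defaultdict

# Hypothetical application records; field names are illustrative only.
applications = [
    {"id": "A1", "device": "dev-42", "address": "1 High St", "phone": "0400-111"},
    {"id": "A2", "device": "dev-42", "address": "9 Low Rd",  "phone": "0400-222"},
    {"id": "A3", "device": "dev-77", "address": "9 Low Rd",  "phone": "0400-222"},
    {"id": "A4", "device": "dev-99", "address": "5 Oak Ave", "phone": "0400-333"},
]

def shared_attribute_links(apps):
    """Index applications by (field, value); any value seen on 2+ apps is a link."""
    index = defaultdict(set)
    for app in apps:
        for field in ("device", "address", "phone"):
            index[(field, app[field])].add(app["id"])
    return {key: ids for key, ids in index.items() if len(ids) > 1}

links = shared_attribute_links(applications)
for (field, value), ids in sorted(links.items()):
    print(f"{field}={value!r} is shared by {sorted(ids)}")
```

Here applications A1–A3 form a connected cluster (A1 and A2 share a device; A2 and A3 share an address and phone), which is exactly the kind of structure that is invisible when each application is scored in isolation.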
However, this creates a conflict with Australia’s privacy landscape. The Office of the Australian Information Commissioner (OAIC) continues to remind organisations that data collection must be necessary and proportionate. The “privacy by design” playbook implies that minimisation and purpose limitation are central obligations.
Furthermore, the policy environment is tightening. The Treasury’s consultation on the Scams Prevention Framework and the RBA’s push for “whole-of-ecosystem” defences signal that banks will face tougher obligations to stop scams before value moves.
The challenge is clear: How do we improve detection to satisfy the RBA and Treasury without violating privacy principles by hoarding personal data?
A Better Way: Verify Once, Bind Forever
The solution lies in changing how we verify, not what we collect.
The Federal Reserve’s white paper advocates for stronger “front door” identity binding. This aligns with a “verify once, then bind” strategy. Instead of asking a customer to upload sensitive documents for every new service or application, we can anchor a verified human identity to a durable authenticator.
Here is how a privacy-first financial workflow looks:
- Separate Identity from Auth: Keep the identifying attributes (who they are) in one secure enclave and the user’s authenticator (the key they use) in another. If one side is compromised, the other does not automatically fall.
- Ask Yes-or-No Questions: Once verified, relying services should only need to receive a yes or a no for matching (such as “Is this person over 18?”) rather than uploading documents each time.
- Risk-Based Step-Ups: Aligning with RBA guidance, controls should be frictionless initially and only step up when risk signals appear, such as a sudden change in device or repeated application attempts.
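The three principles above can be sketched in a few lines of code. This is an illustrative toy, not a real vendor API: the names `VerifiedIdentity`, `assert_over_18`, and `needs_step_up` are hypothetical, and the risk signals (device change, repeated attempts) are the two examples from the list above.

```python
from dataclasses import dataclass

@dataclass
class VerifiedIdentity:
    """Attributes held by the identity provider, separate from the relying
    service. The relying service never receives these fields directly."""
    birth_year: int
    bound_device: str  # the durable authenticator bound at verification time

def assert_over_18(identity: VerifiedIdentity, current_year: int) -> bool:
    """Yes-or-no answer: the relying service learns 'over 18', not the birth date."""
    return current_year - identity.birth_year >= 18

def needs_step_up(identity: VerifiedIdentity, seen_device: str, attempts: int) -> bool:
    """Stay frictionless by default; step up only on risk signals such as a
    device change or repeated application attempts (threshold is illustrative)."""
    return seen_device != identity.bound_device or attempts > 3

user = VerifiedIdentity(birth_year=1990, bound_device="dev-42")
print(assert_over_18(user, 2025))        # True, with no documents exchanged
print(needs_step_up(user, "dev-42", 1))  # False: known device, low risk
print(needs_step_up(user, "dev-99", 1))  # True: new device triggers a step-up
```

The design point is the separation: the identity provider holds the attributes, the relying service holds only boolean answers, and a compromise of either side alone does not expose the other.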
The Path Forward for Financial Leaders
As the Treasury finalises the Scams Prevention Framework, the focus must be on controls that confirm the person behind the transaction.
Financial institutions should treat AI-assisted synthesis as the baseline threat. This means moving away from validating the instrument (the device or account) and focusing on validating the person.
By adopting a model where verified attributes are reused rather than repeatedly collected, banks can reduce the surface area available for fraud. It means fewer documents moving around and far less material for AI-enabled forgers to exploit. This delivers the high assurance regulators demand while respecting the privacy boundaries customers expect.
If you’re exploring solutions to synthetic identity theft and fraud, reach out to DigiChek today to learn more about how we keep your business and your users safe.