An old-school approach could be the answer for finservs

For many people, video conferencing apps like Zoom made work, school, and other everyday activities possible amid the global pandemic—and more convenient. Remote workers commuted from sleeping position to upright position. Business meetings resembled “Hollywood Squares.” Business-casual meant a collared shirt up top and pajama pants down low.

Fraudsters were also quite comfortable during this time. Unprecedented numbers of people sheltering in place naturally caused an ungodly surge in online traffic and a corresponding increase in security breaches. Users were easy prey, and so were many of the apps and companies they transacted with.

In the financial services (finserv) sector, branches closed down and ceased face-to-face customer service. Finserv companies relied on Zoom for document verification and manual reviews, and bad actors, armed with stolen credentials and improved deepfake technology, took full advantage.

Even in the face of AI-generated identity fraud, most finservs still use remote identity verification to comply with regulatory KYC requirements and to vet applicants when it comes time to offer a loan. It’s easier than meeting in person, and what customer doesn’t prefer verifying their identity from the comfort of their couch?

But AI-powered synthetic identities are getting smarter and, while deepfake deterrents are closing the gap, a return to an old-school approach remains the only foolproof option for finservs.

Deepfakes and the SuperSynthetic™ quandary

Gen AI platforms such as ChatGPT and Bard, coupled with nefarious brethren like FraudGPT and WormGPT, are so accessible it’s scary. Everyday users can create realistic deepfaked images and videos with little effort. Voices can be cloned and manipulated to say anything and sound like anyone. The rampant spread of misinformation across social media isn’t surprising given that nearly half of people can’t identify a deepfaked video.

More disturbing: the deepfaked Mona Lisa itself, or the fact that someone made it 3+ years ago?

Finserv companies are especially susceptible to deepfaked trickery, and bypassing remote identity verification will only get easier as deepfake technology continues to rapidly improve.

For SuperSynthetics, the new generation of fraudulent deepfaked identities, fooling finservs is quite easy. SuperSynthetics—a one-two-three punch of deepfake technology, synthetic identity fraud, and legitimate credit histories—are more humanlike and individualistic than any previous iteration of bot. The bad actors who deploy these SuperSynthetic bots aren’t in a rush; they’re willing to play the long game, depositing small amounts of money over time and interacting with the website to convince finservs they’re prime candidates for a loan or credit application.

When it comes time for the identity verification phase, SuperSynthetics deepfake their documents, selfie, and/or video interview…and they’re in.

An overhaul is in order

Deepfake technology, which first entered the mainstream in 2018, is still in its infancy, yet it already pokes plenty of holes in remote identity verification.

The “ID plus selfie” process, as Gartner analyst Akif Khan calls it, is how most finservs verify loan and credit applicants these days. The user takes a picture of their ID or driver’s license, the document’s authenticity is confirmed, and then the user snaps a picture of themselves. The system checks the selfie for liveness and makes sure the biometrics line up with the photo ID document. Done.
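Reduced to code, the flow is three sequential checks. Here’s a minimal sketch in Python; the three check functions are hypothetical stand-ins for whatever document-authenticity, liveness, and face-matching services a finserv actually plugs in:

```python
from typing import Callable

# Hypothetical check functions: each stands in for a vendor's
# document-authenticity, liveness-detection, or face-matching service.
DocumentCheck = Callable[[bytes], bool]
LivenessCheck = Callable[[bytes], bool]
FaceMatch = Callable[[bytes, bytes], bool]

def verify_applicant(
    id_image: bytes,
    selfie: bytes,
    doc_is_authentic: DocumentCheck,
    selfie_is_live: LivenessCheck,
    faces_match: FaceMatch,
) -> bool:
    """Run the three-step 'ID plus selfie' flow."""
    if not doc_is_authentic(id_image):    # 1. Is the ID document genuine?
        return False
    if not selfie_is_live(selfie):        # 2. Is the selfie a live capture?
        return False
    return faces_match(id_image, selfie)  # 3. Do the biometrics line up?
```

Every check in that chain is only as trustworthy as the pixels it’s fed, which is exactly the weakness fraudsters exploit.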

The process is convenient for legitimate customers and fraudsters alike, thanks to the growing availability of free deepfake apps. Using these free tools, fraudsters can deepfake images of documents and successfully pass the selfie step, most commonly by executing a “presentation attack”: aiming their primary device’s camera at the screen of a second device displaying a deepfake.

Khan advocates for a layered approach to deepfake mitigation, including tools that detect liveness and check for certain types of metadata. This is certainly on point, but there’s an old-school, far less technical way to ward off deepfaking fraudsters. Its success rate? 100%.

The good ol’ days

Remember handshakes? How about eye contact that didn’t involve staring into a camera lens? These are merely vestiges of the bygone in-person meetings that many finservs used to hold with loan applicants pre-COVID.

Outdated and less efficient as face-to-face meetings with customers might be, they’re also the only rock-solid defense against deepfakes.

Not even advanced liveness detection is a foolproof deepfake deterrent.

Sure, the upper crust of finserv companies likely have state-of-the-art deepfake deterrents in place (e.g., 3D liveness detection solutions). But liveness detection doesn’t account for deepfaked documents or, more importantly, deepfaked video, nor for the fact that the generative AI tools available to fraudsters are advancing just as fast as vendor solutions, if not faster. It’s a full-blown AI arms race, and with it comes a lot of question marks.

In-person verification (reserved for high-risk activities) puts these fears to bed. Is it frictionless? Obviously far from it, though workarounds, such as traveling notaries who meet customers at their residence, help ease the burden. But if heading down to a local branch for a quick meet-and-greet is what it takes to snag a $10K loan, will a customer care? They’d probably fly across state lines if it meant renting a nicer apartment or finally moving on from their decrepit Volvo.

Time to layer up

Khan’s recommendation that finservs assemble a superteam of anti-deepfake solutions is sound, so long as companies can afford to do so and can orchestrate the many solutions into a frictionless consumer experience. Vendors have access to AI in their own right, powering tools that directly identify deepfakes through telltale generation patterns, or that key in on metadata such as the resolution of a selfie. Combine these with the most crucial layer, liveness detection, and the result is a stack that can at the very least compete against deepfakes.
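As a rough sketch of how those layers might be combined, assume each tool emits a 0-to-1 “likely fake” score; the weights and threshold below are illustrative assumptions, not tuned values from any vendor:

```python
from dataclasses import dataclass

@dataclass
class DeepfakeSignals:
    artifact_score: float  # score from a model trained on deepfake generation patterns
    metadata_score: float  # anomaly score from metadata checks (e.g., selfie resolution)
    liveness_score: float  # failure-of-liveness score, the most crucial layer

def is_likely_deepfake(signals: DeepfakeSignals, threshold: float = 0.6) -> bool:
    # Liveness gets the heaviest weight, per the layering advice above.
    combined = (0.25 * signals.artifact_score
                + 0.25 * signals.metadata_score
                + 0.50 * signals.liveness_score)
    return combined >= threshold
```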

SuperSynthetics aren’t as easy to neutralize. In previous posts, we’ve advocated for a “top-down” anti-fraud solution that spots these types of identities before the loan or credit application stage. Unlike individualistic fraud prevention tools, this bird’s-eye view reveals digital fingerprints—concurrent account activities, simultaneous social media posts, etc.—that would otherwise go undetected.

In the meantime, it doesn’t hurt to consider the upside of an in-person approach to verifying customer identities (prior to extending a loan, not during onboarding). No, it isn’t flashy, nor is it flawless. However, it is reliable and, if finservs effectively articulate the benefit to their customers—protecting them from life-altering fraud—chances are they’ll understand.

Customer or AI-generated identity? The lines are as blurry as ever.

Today’s fraudsters are truly, madly, deeply fake.

Deepfaked identities, which use AI-generated audio or visuals to pass for a legitimate customer, are multiplying at an alarming rate. Banks and other fintech companies, which collectively lost nearly $2 billion to bank transfer or payment fraud in 2022, are firmly in their crosshairs.

Sniffing out deepfaked chicanery isn’t easy. One study found that 43% of people struggle to identify a deepfaked video. It’s especially concerning that this technology, still in its infancy, is already capable of luring consumers and businesses into fraudulent transactions.

Over time, deepfakes will seem increasingly less fake and much harder to detect. In fact, an offshoot of deepfaked synthetic identities, the SuperSynthetic™ identity, has already emerged from the pack. Banks and financial organizations have no choice but to stay on top of developments in deepfake technology and swiftly adopt a solution to combat this unprecedented threat.

Rise of the deepfakes

Deepfakes have come a long way since exploding onto the scene roughly five years ago. Back then, deepfaked videos aimed to entertain. Most featured harmless superimpositions of one celebrity’s face onto another, such as this viral Jennifer Lawrence-Steve Buscemi mashup.

The trouble started when users began deepfaking sexually explicit videos, opening up a massive can of privacy- and ethics-related worms. Then a 2018 video of a deepfaked Barack Obama speech showed just how dangerous the technology could be.


The proliferation and growing sophistication of deepfakes over the past five years can be attributed to the democratization of AI and deep learning tools. Today, anyone can doctor an image or video with just a few taps. FakeApp, Lyrebird, and countless other apps enable smartphone users to seamlessly integrate someone’s face into an existing video, or to generate a new video that can easily pass for the real deal.

Given this degree of accessibility, the threat of deepfakes to banks and fintech companies will only intensify in the months and years ahead. The specter of new account fraud, perpetrated by way of a deepfaked synthetic identity, looms large in the era of remote customer onboarding.

This is a stickup

Synthetic identity fraud, in which bad actors invent a new identity using a combination of stolen and made-up credentials, has already cost banks upwards of $6 billion. Deepfake technology only adds fuel to the fire.

A deepfaked synthetic identity heist doesn’t require any heavy lifting. A fraudster crops someone’s face from a social media picture and they’re well on their way to spawning a lifelike entity that speaks, blinks, and moves its head on screen. Image- or video-based identity verification, the KYC protocol designed to deter fraud before an account is opened or credit is extended, is rendered moot. The fraudster’s uploaded selfie will be a dead ringer for the face on the ID card. Even a live video conversation with an agent is unlikely to ferret out a deepfaked identity.

Not even Columbo can spot a deepfaked synthetic identity.

Audio-based verification processes are circumvented just as easily. Exhibit A: the vulnerability of the voice ID technology used by banks across the US and Europe, ostensibly another layer of login security that prompts users to say some iteration of, “My voice is my password.” This sounds great in theory, but AI-generated audio solutions can clone anyone’s voice and create a virtually identical replica. One user, for example, tapped voice creation tool ElevenLabs to clone his own voice using an audio sample. He accessed his account in one try.

In this use case, the bad actor would also need a date of birth to access the account. But, thanks to frequent big-time data leaks—such as the recent Progress Software breach—dates of birth and other Personally Identifiable Information (PII) are readily available on the dark web.

Here come the SuperSynthetics

In deepfaked synthetic identities, banks and financial services platforms clearly face a formidable foe. But this worthy opponent has been in the gym, protein-shaking and bodyhacking itself into something stronger and infinitely more dangerous: the SuperSynthetic identity.

SuperSynthetic identities, armed with the same deepfake capabilities as regular synthetics (and then some), bring an even greater level of Gen AI-powered smarts to the table. There’s no need for a brute force attack. SuperSynthetics operate with a sophistication and discernment so lifelike it’s spooky. One need only look at the patience of these bots.

SuperSynthetics are all about the long con. Their aged and geo-located identities play nice for months, engaging with the website and making small deposits here and there, enough to appear human and innocuous. Once enough of these transactions accumulate and trust is gained from the bank, a credit card or loan is extended. Any additional verification is bypassed via deepfake, of course. When the money is deposited into their SuperSynthetic account, the bad actor immediately withdraws it, along with their seed money, before finding another bank to swindle.

How prevalent are SuperSynthetics? Deduce estimates that between 3% and 5% of financial services accounts onboarded within the past year are in fact SuperSynthetic “sleepers” waiting to strike. That certainly warrants a second look at how customers are verified before obtaining a loan or credit card, including consideration of in-person verification to rule out deepfake activity.

No time like the present

If deepfaked synthetic identities don’t call for a revamped cybersecurity solution, deepfaked SuperSynthetic identities will certainly do the trick. Our money is on a top-down approach that views synthetic identities collectively rather than individually. Analyzing synthetics as a group uncovers their digital footprints—signature online behaviors and patterns too consistent to suggest mere coincidence.

Whatever banks choose to do, kicking the can down the road only works in favor of the fraudsters. With every passing second, the deepfakes are looking (and sounding) more real.

Time is a-tickin’, money is a-burnin’, and customers are a-churnin’.

How SuperSynthetic identities carry out modern-day bank robberies

The use cases for generative AI continue to proliferate. Need a vegan-friendly recipe for chocolate cookies that doesn’t require refined sugar? Done. Need to generate an image of Chuck Norris holding a piglet? You got it.

However, not all Gen AI use cases are so innocuous. Fraudsters are joining the party, developing tools like WormGPT and FraudGPT to launch sophisticated cyberattacks that are significantly more dangerous, and more accessible, than anything that came before. Consumer and enterprise companies alike are on high alert, but fintech organizations really need to upgrade their “bot-y” armor.

Each new wave of bots grows stronger and brings its own share of challenges to the table—none more than synthetic “Frankenstein” identities consisting of real and fake PII. But, alas, the next evolution of synthetic identities has entered the fray: SuperSynthetic™ identities.

Let’s take a closer look at how these SuperSynthetic bots came to be, how they can effortlessly defraud banks, and how banks need to change their account opening workflows.

The evolution of bots

Before we dive into SuperSynthetic bots and the danger they pose to banks, it’s helpful to cover how we got to this point.

Throughout the evolution of bots we’ve seen the good, the bad, and the downright nefarious. Well-behaved bots like web crawlers and chatbots help improve website or app performance; naughty bots crash websites, harm the customer experience and, worst of all, steal money from businesses and consumers.

The evolutionary bot chart looks like this:

Generation One: These bots are capable of basic scripting and automated maneuvers. Primarily they scrape, spam, and perform fake actions on social media apps (comments, likes, etc.).

Generation Two: These bots leveraged web analytics, user interface automation, and other tools that enable the automation of website development.

Generation Three: This wave of bots adopted complex machine learning algorithms, allowing for the analysis of user behavior to boost website or app performance.

Generation Four: These bots laid the groundwork for SuperSynthetics. They’re highly effective at simulating human behavior while staying off radar.

Generation Five: SuperSynthetic bots with a level of sophistication that negates the need to execute a brute force attack hoping for a fractional chance of success. Individualistic finesse, combined with the bad actor’s willingness to play the long game, makes these bots undetectable by conventional bot mitigation and synthetic fraud detection strategies.

Playing the slow game

So, how have SuperSynthetics emerged as the most formidable bank robbers yet? It’s more artifice than bull rush.

Over time, a SuperSynthetic bot uses its AI-generated identity to deposit small amounts of money via Zelle, ACH, or another digital payment method while interacting with various website functions. The bot’s meager deposits accumulate over the course of several months, and regular access to its bank account to “check its balance” earns it the reputation of a customer in good standing. Its creditworthiness score increases, and an offer of a credit card or a personal, unsecured loan is extended.

At this point it’s hook, line, and sinker. The bank deposits the loan amount or issues the credit card, the fraudster transfers the money out, along with their seed funds, and moves on to the next unsuspecting bank. This is a cunning, slow-burn operation that only a SuperSynthetic identity can carry out at scale. Deduce estimates that between 3% and 5% of accounts onboarded within the past year at financial services and fintech institutions are in fact SuperSynthetic “sleeper” identities.
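To make the timeline concrete, here’s a toy heuristic for the sleeper pattern viewed in isolation. The thresholds are invented for illustration, and, as the next section argues, per-account checks like this are precisely what SuperSynthetics are engineered to slip past:

```python
from datetime import timedelta

def looks_like_sleeper(deposits, credit_extended_at, drained_at,
                       small_amount=50.0, min_months=3):
    """deposits: list of (datetime, amount) pairs for one account.
    Flags months of small deposits followed by a rapid post-credit drain."""
    if len(deposits) < 2:
        return False
    times = [t for t, _ in deposits]
    # Nearly all deposits are small, spread over a months-long buildup...
    mostly_small = sum(1 for _, amt in deposits if amt <= small_amount) >= 0.9 * len(deposits)
    long_game = max(times) - min(times) >= timedelta(days=30 * min_months)
    # ...then the funds vanish within days of credit being extended.
    fast_drain = drained_at - credit_extended_at <= timedelta(days=3)
    return mostly_small and long_game and fast_drain
```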

Such patience and craftiness is unprecedented in a bot. Stonewalling SuperSynthetics takes an equally novel approach.

A change in philosophy

Traditional synthetic fraud prevention solutions won’t detect SuperSynthetic identities. Built around static data, these tools lack the dynamic, real-time data and scale needed to sniff out an AI-generated identity. Even manual review processes and document verification tools like DocV are no match, as deepfake AI methods can fabricate realistic documents and even fake live video interviews.

An individualistic approach offers little resistance to SuperSynthetic bots.

Fundamentally, these static-based tools take an individualistic approach to stopping fraud. The data pulled from a range of sources during the verification phase only analyzes one identity at a time. In this context, a SuperSynthetic identity appears legitimate and passes every verification check. Fraudulent patterns are missed. Digital forensic footprints are overlooked.

A philosophical change in fraud prevention is essential if banks are to keep SuperSynthetic bots out of their pockets. Verifying identities as a collective group, or signature, is the only viable option.

A view from the top

Things always look different from the top floor. In the case of spotting and neutralizing SuperSynthetic identities, a big-picture perspective reveals digital footprints otherwise obscured by an individualistic anti-fraud tool.

A bird’s-eye view that groups identities into a single signature uncovers suspicious evidence such as simultaneous social media posts, concurrent account actions, matching time-of-day and day-of-week activities, and other telltale signs of fraud. Considering the millions of fraudulent identities in the mix, it’s illogical to attribute this evidence to mere happenstance.
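Here’s a minimal sketch of what that grouping could look like in code; the window size and account threshold are illustrative assumptions, not any vendor’s actual logic:

```python
from collections import defaultdict

def find_lockstep_clusters(events, window_minutes=5, min_accounts=20):
    """events: iterable of (account_id, timestamp) records, where timestamp
    is a datetime. Returns time windows in which suspiciously many distinct
    accounts acted in unison."""
    buckets = defaultdict(set)
    for account_id, ts in events:
        # Snap each action to the start of its time window.
        window = ts.replace(minute=(ts.minute // window_minutes) * window_minutes,
                            second=0, microsecond=0)
        buckets[window].add(account_id)
    return {window: accounts for window, accounts in buckets.items()
            if len(accounts) >= min_accounts}
```

No single account in a lockstep cluster looks suspicious on its own; the coincidences only surface when you scan across accounts rather than within them.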

There’s no denying that SuperSynthetic identities have arrived. No prior iteration of bot has ever appeared so lifelike and operated with such precision. If banks want to protect their margins and user experience, verifying identity via a signature approach is a must. This does require bundling existing fraud prevention stacks with ample (and scalable) real-time identity intelligence, but the first step in thwarting SuperSynthetics is an ideological one: adopt the signature strategy.