College students are lifelong learners. So are AI-powered fraudsters.

With each passing day AI grows more powerful and more accessible. This gives fraudsters the upper hand, at least for now, as they roll out legions of AI-powered fake humans that even governmental countermeasures—such as the Biden administration’s recent executive order—will be lucky to slow down.

Among other nefarious activities, bad actors are leveraging AI to peddle synthetic bank and online sports betting accounts, swing elections, and spread disinformation. They’re also fooling banks with another clever gimmick: posing as college freshmen.

College students, particularly underclassmen, have long been a target demographic for banks. Fraudsters are well aware and know that banks’ yearning for customer acquisition, coupled with their inadequate fraud prevention tools, presents an easy cash-grab opportunity (and, perhaps, a chance to revisit their collegiate years).

Early bank gets the bullion

The appeal of a new college student from a customer acquisition perspective can’t be overstated.

A young, impressionable kid is striking out on their own for the first time. They need a credit card to pay for both necessary and unnecessary things (mostly the latter). They need a bank. And their relationship with that bank? There’s a good chance it will outlast most of their romantic relationships.

This could be their bank through college, through their working years, the bank they procure a loan from for their first house, the bank they encourage their kids and grandkids to bank with. In a college freshman banks don’t just land one client, but potentially an entire generation of clients. Lifetime value up the wazoo.

Go to any college move-in day and you’ll spot bank employees at tables, using giveaway gimmicks to attract students to open up new accounts. According to the Consumer Financial Protection Bureau, 40% of students attend a college that’s contractually linked to a specific bank. However, as banks shovel out millions so they can market their products at universities, a fleet of synthetic college freshmen lie in wait, with the potential to collectively steal millions of their own.

Playing the part

Today’s fraudsters are master identity-stealers who can dress up synthetic identities to match any persona.

In the case of a fake college freshman, building the profile starts off in familiar fashion: snagging a dormant social security number (SSN) that’s never been used or hasn’t been used in a while. Like many forms of Personally Identifiable Information (PII), stolen SSNs from infants or deceased individuals are readily available on the dark web.

From here, fraudsters can string together a combination of stolen and made-up PII to create a synthetic college freshman identity that qualifies for a student credit card. No branch visit necessary, and IDs can be deepfaked. The synthetic identity makes small purchases and pays them off on time—food, textbooks, phone bill—building trust with the bank and improving their already respectable credit score of around 700. They might sign up for an alumni organization and/or apply for a Pell Grant to further solidify their collegiate status.

Pell Grants, of course, require admission to a college—a process that, similar to acquiring a credit card from a bank, is easy pickings for synthetic fraudsters.

The ghost student epidemic

Any bank that doesn’t take the synthetic college freshman use case seriously should study the so-called “ghost student” phenomenon: fake college enrollees that rob universities of millions. 

In California, these synthetic students, who employ the same playbook as bank-swindling synthetics, comprise 20% of community college applications alone (more than 460K). Thanks to an increased adoption of online enrollment and learning post-pandemic, relaxed verification protocols for household income, and the proliferation of AI-powered fake identities, ghost students can easily grab federal aid and never have to attend class.

Like ghost students, synthetic college freshmen can apply for a credit card without ever setting foot inside a bank branch. Online identity verification is a breeze for the seasoned bad actor. Given the democratization of powerful generative AI tools, ID cards and even live video interviews over Zoom or another video client can be deepfaked.

A (SuperSynthetic) tale old as time

Both the fake freshmen and ghost student problems are symptomatic of a larger issue: SuperSynthetic™ identities.

SuperSynthetic bots are the most sophisticated yet. Forget the brute force attacks of yore; SuperSynthetics are incredibly lifelike and patient. These identities play nice for several months or even years, building trust by paying off credit card transactions on time and otherwise interacting like a real human would. But, once the bank offers a loan and a big payday is in sight, that SuperSynthetic is out the door.

An unorthodox threat like a SuperSynthetic identity can’t be thwarted by traditional fraud prevention tools. Solutions reliant on individualistic, static data won’t cut it. Instead, banks (and universities, in the case of ghost students) need a solution powered by scalable and dynamic real-time data. The latter approach verifies identities as a group or signature: the only way to pick up on the digital footprints left behind by SuperSynthetics.

Humanlike as SuperSynthetic identities are, they aren’t infallible. With a bird’s-eye view of identities, patterns of activity quickly emerge: SuperSynthetics commenting on the same website at the exact same time every week over an extended period, for example.

Fake college students are one of the many SuperSynthetic personas capable of tormenting banks. But stopping them isn’t the uphill battle it appears to be. If banks change their fraud prevention philosophy and adopt a dynamic, bird’s-eye approach, they can school the SuperSynthetics right back.

Synthetic fraud remains the elephant in the room

The Biden administration’s recent executive order “on Safe, Secure, and Trustworthy Artificial Intelligence” naturally caused quite a stir among the AI talking heads. The security community also joined the dialog and expressed varying degrees of confidence in the executive order’s ability to protect the federal government and private sector against bad actors.

Clearly, any significant effort to enforce responsible and ethical AI use is a step in the right direction, but this executive order isn’t without its shortcomings. Most notable is its inadequate plan of attack against synthetic fraudsters—specifically those created by Generative AI.

With online fraud reaching a record $3.56 billion through the first half of 2022 alone, financial institutions are an obvious target of AI-based synthetic identities. A Wakefield report commissioned by Deduce found that 76% of US banks have synthetic accounts in their database, and a whopping 86% have extended credit to synthetic “customers.”

However, the shortsightedness of the executive order also carries with it a number of social and political ramifications that stretch far beyond dollars and cents.

Missing the (water)mark

A key element of Biden’s executive order is the implementation of a watermarking system to differentiate between content created by humans and AI, a topical development in the wake of the SAG-AFTRA strike and the broader artist-versus-AI clash. Establishing the provenance of an object via a digital image or signature would seem a sensible enough way to identify AI-generated content and synthetic fraud, if only the watermarking mechanisms currently at our disposal weren’t utterly unreliable.

A University of Maryland professor, Soheil Feizi, as well as researchers at Carnegie Mellon and UC Santa Barbara, circumvented watermark verification by adding watermarks to images that contained none; they were able to remove legitimate watermarks just as easily.

It’s also worth noting that the watermarking methods laid out in the executive order were developed by big tech. This raises concerns around a walled-garden effect in which these companies are essentially regulating themselves while smaller companies follow their own set of rules. And don’t forget about the fraudsters and hackers who, of course, will gladly continue using unregulated tools to commit AI-powered synthetic fraud, as well as overseas bad actors who are outside US jurisdiction and thus harder to prosecute.

The deepfake dilemma

Another element of many synthetic fraud attacks, deepfake technology, is addressed in the executive order but a clear-cut solution isn’t proposed. Deepfaking is as complex and democratized as ever—and will only grow more so in the coming years—yet the executive order falls short of recommending a plan to continually evolve and keep pace.

Facial recognition verification is employed at the government and state level, but even novice bad actors can use AI to deepfake their way past these tools. Today, anyone can deepfake an image or video with a few taps. Apps like FakeApp can seamlessly integrate someone’s face into an existing video, or generate an entirely new one. As little as a cropped face from a social media image can spawn a speaking, blinking, head-moving entity. Uploaded selfies and live video calls pass with flying colors.

In this era of remote customer onboarding, coinciding with unprecedented access to deepfake tools, it behooves executive orders and other legislation to offer a more concrete solution to deepfakes. Finserv (financial services) companies are in the crosshairs, but so are social media platforms and their users; the latter scenario poses its own litany of dangers.

Synthetic fraud: multitudes of mayhem

The executive order’s watermarking notion and insufficient response to deepfakes don’t squelch the multibillion-dollar synthetic fraud problem.

Synthetic fraudsters still have the upper hand. With Generative AI at their disposal, they can create patient and incredibly lifelike SuperSynthetic™ identities that are extremely difficult to intercept. Worse, “fraud-as-a-service” organizations peddle synthetic mule accounts from major banks, and also sell synthetic accounts on popular sports betting sites—new, aged, geo-located—for as little as $260.

More worrisome, amid the rampant spread of disinformation online, is the potential for synthetic accounts to cause social panic and political upheaval.

Many users struggle to identify AI-generated content on X (formerly Twitter), much less any other platform, and social networks that charge a nominal fee to “verify” an account offer synthetic identities a cheap way to appear even more authentic. All it takes is one post shared hundreds of thousands or millions of times for users to mobilize against a person, nation, or ideology. A single doctored image or video could spook investors, incite a riot, or swing an election.

“Election-hacking-as-a-service” is indeed another frightening offshoot of synthetic fraud, to the chagrin of politicians (or those on the wrong side of it, at least). These fraudsters weaponize their armies of AI-generated social media profiles to sway voters. One outfit in the Middle East interfered in more than 33 elections.

Whether the target is banks or betting sites, and the result social uprisings or rigged elections, unchecked synthetic fraud, buttressed by AI, will continue to wreak havoc in multitudinous ways unless it is combated by an equally intelligent and scalable approach.

The best defense is a good offense

The executive order, albeit an encouraging sign of progress, is too vague in its plan for stopping AI-generated content, deepfakes, and the larger synthetic fraud problem. The programs and tools it says will find and fix security vulnerabilities aren’t clearly identified. What do these look like? How are they better than what’s currently available?

AI-powered threats grow smarter by the second. Verbiage like “advanced cybersecurity program” doesn’t say much; will these fraud prevention tools be continually developed so they’re in lockstep with evolving AI threats? To its credit, the executive order does mention worldwide collaboration in the form of “multilateral and multi-stakeholder engagements,” an important call-out given the global nature of synthetic fraud.

Aside from an international team effort, the overarching and perhaps most vital key to stopping synthetic fraud is an aggressive, proactive philosophy. Stopping AI-generated synthetic and SuperSynthetic identities requires a preemptive, not reactionary, approach. We shouldn’t wait for authenticated—or falsely authenticated—content and identities to show up, but rather stop synthetic fraud well before infiltration can occur. And, given the prevalence of synthetic identities, they should have a watermark all their own.

76% of finservs are victims of synthetic fraud

In 1938, Orson Welles’ infamous radio broadcast of The War of the Worlds convinced thousands of Americans to flee their homes for fear of an alien invasion. More than 80 years later, the public is no less gullible, and technology unfathomable to people living in the 1930s allows fake humans to spread false information, bamboozle banks, and otherwise raise hell with little to no effort.

These fake humans, also known as synthetic identities, are ruining society in myriad ways: tampering with electorate polls and census data, disseminating misleading social media posts with real-world consequences, sharing fake articles on Reddit that subsequently skew Large Language Models that drive platforms such as ChatGPT. And, of course, bad actors can leverage fake identities to steal millions from financial institutions.

The bottom line is this: synthetic fraud is prevalent; financial services companies (finservs), social media platforms, and many other organizations are struggling to keep pace; and the impact, both now and in the future, is frighteningly palpable.

Here is a closer look at how AI-powered synthetic fraud is infiltrating multiple facets of our lives.

Accounts for sale

If you need a new bank account, you’re in luck: obtaining one is as easy as buying a pair of jeans and, in all likelihood, just as cheap.

David Maimon, a criminologist and Georgia State University professor, recently shared a video from Mega Darknet Market, one of the many cybercrime syndicates slinging bank accounts like Girl Scout Cookies. Mega Darknet and similar “fraud-as-a-service” organizations peddle mule accounts from major bank brands (in this case Chase) that were created using synthetic identity fraud, in which scammers combine stolen Personally Identifiable Information (PII) with made-up credentials.

But these cybercrime outfits take it a step further. With Generative AI at their disposal, they can create SuperSynthetic™ identities that are incredibly patient, lifelike, and difficult to catch.

Aside from bank accounts, fraudsters are selling accounts on popular sports betting sites. The verified accounts—complete with name, DOB, address, and SSN—can be new or aged and even geo-located, with a two-year-old account costing as little as $260. Perfect for money launderers looking to wash stolen cash.

Fraudsters are selling stolen bank accounts as well as stolen accounts from sports betting sites.

Cyber gangs like Mega Darknet also offer access to the very Generative AI tools they use to create synthetic accounts. This includes deepfake technology which, besides fintech fraud, can help carry out “sextortion” schemes.

X-cruciatingly false

Anyone who’s followed the misadventures of X (formerly Twitter) over the past year, or used any social media since the late 2010s, knows that Elon’s embattled platform is a breeding ground for bots and misinformation. Generative AI only exacerbates the problem.

A recent study found that X users couldn’t distinguish AI-generated content (GPT-3) from human-generated content. Most alarming is that these same users trusted AI-generated posts more than posts from real humans.

In the US (where 20% of the population famously can’t locate the country on a world map) and elsewhere, these synthetic accounts and their large-scale misinformation campaigns pose myriad risks, especially if said accounts are “verified.” It wouldn’t take much to incite a riot, or stoke anger and subsequent violence toward a specific group of people. How about sharing a bogus picture of an exploded Pentagon that impacts the stock market? Yep. That, too.

This fake image of an explosion near the Pentagon exemplifies the danger of synthetic accounts spreading misinformation.

Election-hacking-as-a-service

Few topics are timelier, or rile up users more, than election interference, another byproduct of the fake-human (and fake social media) epidemic. Indeed, the spreading of false information in service of a particular political candidate or party existed well before social media, but now the stakes have increased exponentially.

If fraud-as-a-service isn’t ominous-sounding enough, election-hacking-as-a-service might do the trick. Groups with access to armies of fake social media profiles are weaponizing disinformation to sway elections any which way. Team Jorge is just one example of these election meddling units. Brought to light via a recent Guardian investigation, Team Jorge’s mastermind Tal Hanan claimed he manipulated upwards of 33 elections.

The rapid creation and dissemination of fake social media profiles and content is far more harmful and widespread with Generative AI in the fold. Flipping elections is one of the worst possible outcomes, but grimmer consequences will arise if automated disinformation isn’t thwarted by an equally intelligent and scalable solution.

Finservs in the crosshairs

Cash is king. Synthetic fraudsters want the biggest haul, even if it’s a slow-burn operation stretched out over a long period of time. Naturally, that means finservs, who lost nearly $2 billion to bank transfer or payment fraud last year, are number one on their hit list. 

Most finservs today don’t have the tools to effectively combat AI-generated synthetic and SuperSynthetic fraud. First-party synthetic fraud—fraud perpetrated by existing “customers”—is rising thanks to SuperSynthetic “sleeper” identities that can imitate human behavior for months before cashing out and vanishing at the snap of a finger. SuperSynthetics can also use deepfake technology to evade detection, even if banks request a video interview during the identity verification phase.

It’s not like finservs are dilly-dallying. In a study from Wakefield, commissioned by Deduce, 100% of those surveyed had synthetic fraud prevention solutions installed along with sophisticated escalation policies. However, more than 75% of finservs already had synthetic identities in their customer databases, and 87% of those respondents had extended credit to fake accounts.

Fortunately for finservs and others trying to neutralize synthetic fraud, it’s not impossible to outsmart generative AI. With the right foundation in place—specifically a massive and scalable source of real-time, multicontextual, activity-backed identity intelligence—and a change in philosophy, even a foe that grows smarter and more humanlike by the second can be thwarted.

This philosophical change is rooted in a top-down, bird’s-eye approach that differs from traditional, individualistic fraud prevention solutions that examine identities one by one. A macro view, on the other hand, sees identities collectively and groups them into a single signature which uncovers a trail of digital footprints. Behavioral patterns such as social media posts and account actions rule out coincidence. The SuperSynthetic smokescreen evaporates.
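To make the signature idea concrete, the kind of pattern a macro view surfaces can be sketched in a few lines of Python. Everything here is hypothetical: the event log, account IDs, and thresholds are invented for illustration, not drawn from any real product or dataset.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical activity log: (account_id, timestamp) pairs. A real feed
# would supply millions of events from many contexts in real time.
events = [
    ("acct_001", "2024-01-01 09:15"), ("acct_002", "2024-01-01 09:15"),
    ("acct_003", "2024-01-01 09:15"),
    ("acct_001", "2024-01-08 09:15"), ("acct_002", "2024-01-08 09:15"),
    ("acct_003", "2024-01-08 09:15"),
    ("acct_004", "2024-01-03 14:02"),  # ordinary, unsynchronized activity
]

def suspicious_signatures(events, min_accounts=3, min_weeks=2):
    """Flag groups of accounts active at the same weekday/time, week after week."""
    slots = defaultdict(lambda: defaultdict(set))  # time slot -> week -> accounts
    for acct, ts in events:
        dt = datetime.strptime(ts, "%Y-%m-%d %H:%M")
        slot = (dt.weekday(), dt.hour, dt.minute)
        slots[slot][dt.isocalendar()[1]].add(acct)
    flagged = []
    for slot, weeks in slots.items():
        recurring = set.intersection(*weeks.values())  # active every observed week
        if len(weeks) >= min_weeks and len(recurring) >= min_accounts:
            flagged.append((slot, sorted(recurring)))
    return flagged

# Three accounts posting at 09:15 every Monday form one signature;
# the lone Wednesday account does not.
print(suspicious_signatures(events))
# → [((0, 9, 15), ['acct_001', 'acct_002', 'acct_003'])]
```

Viewed one account at a time, each 09:15 post looks unremarkable; only grouping the identities exposes the clockwork coordination, which is the crux of the signature approach described above.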

Whether it’s bad actors selling betting accounts, social media platforms stomping out disinformation, or finservs protecting their bottom lines, fake humans are more formidable than ever with generative AI and SuperSynthetic fraud at their disposal. Most companies seem to be aware of the stakes, but singling out bogus users and SuperSynthetics requires a retooled approach. Otherwise, revenue, users, and brand reputations will dwindle, and the ways in which fake accounts wreak havoc will multiply.

That rise in first-party synthetic fraud is no fluke. You have a SuperSynthetic identity problem.

Online fraud in the US totaled a record-breaking $3.56 billion through the first half of last year. Most consumer-facing companies have done the sensible thing and spent six or seven figures fortifying their perimeter defenses against third-party fraud.

But another effective, and seemingly counterintuitive, strategy for stopping today’s fraudsters is to think inside-out, not just outside-in. In other words, first-party synthetic fraud—or fraud perpetrated by existing “customers”—is threatening bottom lines in its own right, by way of AI-generated synthetic “sleeper” identities that play nice for months before executing a surprise attack.

Banks and other finserv (financial services) companies shouldn’t be surprised if their first-party synthetic fraud is off the charts. Deduce estimates that between 3% and 5% of new customers acquired in the past year are actually synthetic identities, specifically SuperSynthetic™ identities, created using generative AI.

The good news is that a simple change in philosophy will go a long way in neutralizing synthetic first-party fraudsters before they’re offered a loan or credit card.

First-party problems

Third-party fraud is when bad actors pose as someone else. It’s your classic case of identity theft. They leverage stolen credit card info and/or other credentials, or combine real and fake PII (Personally Identifiable Information) to create a synthesized identity, for financial or material gain. Consequently, the victims whose identities were stolen notice fraudulent transactions on their bank statements, or debt collectors track them down, and it’s apparent they’ve been had.

First-party synthetic fraud is even more cunning—and arguably more frustrating—because the account information and activity appear genuine, complicating the fraud detection process. The aftermath is where it hurts the most. Since, unlike third-party fraud, there isn’t an identifiable victim, finservs have no one to collect the debt from and are forced to bite the bullet.

Image Credit: Experian

One hallmark of first-party synthetic fraud is its patience. These sleeper identities appear legitimate for months, sometimes more than a year, making small deposits every now and then while interacting with the website or app like a real customer. Once they bump up their creditworthiness score and qualify for a loan or line of credit, it’s game over. The fraudster executes a “bust-out,” or “hit-and-run,” spending the money and leaving the bank with uncollectible debt.
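That timeline, a long quiet stretch followed by a sudden near-limit draw, lends itself to a simple screening heuristic. The sketch below is illustrative only: the snapshot format, function name, and thresholds are all invented for the example, and a production system would weigh far more signals than utilization alone.

```python
def bust_out_risk(history, quiet_months=6, small_ratio=0.1, spike_ratio=0.9):
    """Flag a near-limit draw that follows a long stretch of tiny utilization.

    `history` is a list of (month, balance_used, credit_limit) snapshots,
    oldest first. Names and thresholds here are hypothetical.
    """
    if len(history) <= quiet_months:
        return False  # not enough quiet history to establish a sleeper pattern
    ratios = [used / limit for _, used, limit in history]
    # Were the months immediately before the latest snapshot all near-zero usage?
    quiet = all(r <= small_ratio for r in ratios[-quiet_months - 1:-1])
    return quiet and ratios[-1] >= spike_ratio

# A sleeper identity: six months at ~5% utilization, then a 95% draw.
sleeper = [(f"2024-{m:02d}", 50, 1000) for m in range(1, 7)] + [("2024-07", 950, 1000)]
# A normal account: steady, moderate usage every month.
regular = [(f"2024-{m:02d}", 400, 1000) for m in range(1, 8)]

print(bust_out_risk(sleeper))  # True
print(bust_out_risk(regular))  # False
```

A rule this crude would generate false positives on its own (a real customer might legitimately max out a card after a quiet year), which is why the bird’s-eye, cross-identity signals discussed elsewhere in this piece matter: a lone spike is ambiguous, while hundreds of accounts spiking on the same schedule are not.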

This isn’t the work of your average synthetic identity. Such a degree of calculation and human-like sophistication can only be attributed to SuperSynthetic identities.

That escalated quickly

An Equifax report found that nearly two million consumer credit accounts, over the span of a year, were potentially synthetic identities. More than 30% of these accounts represented a major delinquency risk, with cases averaging $8K to $10K in losses.

The blame for rising first-party synthetic fraud—and the finservs left in its wake—can be placed squarely on the shoulders of SuperSynthetic identities. These AI-generated bots are proliferating worldwide, scaling their sleeper networks to execute bust-outs on a grand scale.

SuperSynthetics—featuring a three-pronged attack of synthetic identity fraud, legitimate credit history, and deepfake technology—need not brute-force their way into a bank’s pockets. Aside from a SuperSynthetic’s patient approach and aged, geo-located identity, its deepfake capability, a benefit of the recent generative AI explosion, is key to securing the long-awaited loan or credit card.

Selfie verification? A video interview? No problem. Deepfake tools, some of them free, are advanced enough to trick finservs even if they have liveness detection in their stack. Document verification? There’s a deepfake for that, too.

SuperSynthetics don’t have a kryptonite, per se. But analyzing identities from a different angle boosts the chances of a finserv spotting SuperSynthetics before they can circumvent the loan or credit verification stage.

Dusting for fingerprints

If finservs want to sniff out SuperSynthetic identities and successfully combat first-party synthetic fraud, they can’t be afraid of heights.

A top-down, bird’s-eye view is the best way to uncover the digital fingerprints or signatures of SuperSynthetics. Individualistic fraud prevention tools overlook these behavioral patterns, but a macro approach, which studies identities collectively, illuminates forensic evidence like a black light.

A top-down view reveals digital fingerprints that otherwise would go undetected.

Grouping identities into a single signature—and examining them alongside millions of fraudulent identities—reveals indisputable evidence of SuperSynthetic activity, such as social media posts and account actions that consistently occur on the same day and at the same time each week across a group, or signature, of identities. Coincidence is out of the question.

Of course, not every finserv has the firepower to adopt this strategy. In order to enable a big-picture view, companies’ anti-fraud stacks need a large and scalable source of real-time, multicontextual, activity-backed identity intelligence.

There are other avenues. Consider, for example, the only 100-percent foolproof solution to first-party synthetic fraud: in-person identity verification. Even if this approach were used exclusively at the pre-loan juncture, it seems unlikely that many companies would take on the added friction, though driving down to the bank is a small price to pay for a five or ten thousand-dollar loan.

If finservs don’t wish to revisit the good old days of face-to-face verification, the top-down, signature approach is the only other viable deterrent to first-party synthetic fraud. Solutions that analyze identities one by one won’t stop SuperSynthetics before a loan or credit card is granted, and by that point it’s already over.

An old-school approach could be the answer for finservs

For many people, video conferencing apps like Zoom made work, school, and other everyday activities possible amid the global pandemic—and more convenient. Remote workers commuted from sleeping position to upright position. Business meetings resembled “Hollywood Squares.” Business-casual meant a collared shirt up top and pajama pants down low.

Fraudsters were also quite comfortable during this time. Unprecedented numbers of people sheltering in place naturally caused an ungodly surge in online traffic and a corresponding increase in security breaches. Users were easy prey, and so were many of the apps and companies they transacted with.

In the financial services (finserv) sector, branches closed down and ceased face-to-face customer service. Finserv companies relied on Zoom for document verification and manual reviews, and bad actors, armed with stolen credentials and improved deepfake technology, took full advantage.

Even in the face of AI-generated identity fraud, most finservs still use remote identity verification to comply with regulatory KYC requirements, and when it comes time to offer a loan. It’s easier than meeting in person, and what customer doesn’t prefer verifying their identity from the comfort of their couch?

But AI-powered synthetic identities are getting smarter and, while deepfake deterrents are closing the gap, a return to an old-school approach remains the only foolproof option for finservs.

Deepfakes, and the SuperSynthetic™ quandary

Gen AI platforms such as ChatGPT and Bard, coupled with their nefarious brethren FraudGPT and WormGPT and the like, are so accessible it’s scary. Everyday users can create realistic, deepfaked images and videos with little effort. Voices can be cloned and manipulated to say anything and sound like anyone. The rampant spread of misinformation across social media isn’t surprising given that nearly half of people can’t identify a deepfaked video.

Which is more disturbing: the deepfaked Mona Lisa, or the fact that someone made this 3+ years ago?

Finserv companies are especially susceptible to deepfaked trickery, and bypassing remote identity verification will only get easier as deepfake technology continues to rapidly improve.

For SuperSynthetics, the new generation of fraudulent deepfaked identities, fooling finservs is quite easy. SuperSynthetics—a one-two-three punch of deepfake technology, synthetic identity fraud, and legitimate credit histories—are more humanlike and individualistic than any previous iteration of bot. The bad actors who deploy these SuperSynthetic bots aren’t in a rush; they’re willing to play the long game, depositing small amounts of money over time and interacting with the website to convince finservs they’re prime candidates for a loan or credit application.

When it comes time for the identity verification phase, SuperSynthetics deepfake their documents, selfie, and/or video interview…and they’re in.

An overhaul is in order

Deepfake technology, which first entered the mainstream in 2018, is still in its relative infancy, yet it already pokes plenty of holes in remote identity verification.

The “ID plus selfie” process, as Gartner analyst Akif Khan calls it, is how most finservs are verifying loan and credit applicants these days. The user takes a picture of their ID or driver’s license, authenticity is confirmed, then the user snaps a picture of themselves. The system checks the selfie for liveness and makes sure the biometrics line up with the photo ID document. Done.

The process is convenient for legitimate customers and fraudsters alike thanks to the growing availability of free deepfake apps. Using these free tools, fraudsters can deepfake images of docs and successfully pass the selfie step, most commonly by executing a “presentation attack” in which their primary device’s camera is aimed at the screen of a second device displaying a deepfake.
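The flow, and the gap a presentation attack exploits, can be sketched as follows. The checker functions are deliberately naive stand-ins, not any real vendor’s API; the point is that when every stage consumes deepfaked input, passing each stage proves nothing.

```python
# Minimal sketch of the "ID plus selfie" flow described above.
# All checks are hypothetical stand-ins, not a real verification API.

def verify_applicant(id_image, selfie, check_doc, check_liveness, match_faces):
    """Run the three stages in order; any failure rejects the applicant."""
    if not check_doc(id_image):            # 1. document authenticity
        return "rejected: document"
    if not check_liveness(selfie):         # 2. liveness on the selfie
        return "rejected: liveness"
    if not match_faces(id_image, selfie):  # 3. biometric match against the ID
        return "rejected: face mismatch"
    return "approved"

# A presentation attack targets stage 2: the primary camera is aimed at a
# second screen replaying a deepfake, so a naive liveness check sees
# plausible motion and passes it through.
result = verify_applicant(
    id_image="deepfaked_license.png",
    selfie="screen_capture_of_deepfake.png",
    check_doc=lambda img: True,       # fooled by a deepfaked document
    check_liveness=lambda img: True,  # fooled by replayed deepfake video
    match_faces=lambda a, b: True,    # both images show the same fake face
)
print(result)  # → approved, despite every input being fraudulent
```

Each stage only validates the artifact put in front of it; none establishes that a real, unique human is behind the session, which is why layered metadata and behavioral checks come up later in this piece.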

Khan advocates for a layered approach to deepfake mitigation, including tools that detect liveness and check for certain types of metadata. This is certainly on point, but there’s an old-school, far less technical way to ward off deepfaking fraudsters. Its success rate? 100%.

The good ol’ days

Remember handshakes? How about eye contact that didn’t involve staring into a camera lens? These are merely vestiges of the bygone in-person meetings that many finservs used to hold with loan applicants pre-COVID.

Outdated and less efficient as face-to-face meetings with customers might be, they’re also the only rock-solid defense against deepfakes.

Not even advanced liveness detection is a foolproof deepfake deterrent.

Sure, the upper crust of finserv companies likely have state-of-the-art deepfake deterrents in place (e.g., 3D liveness detection solutions). But liveness detection doesn’t account for deepfaked documents or, more importantly, video, or for the fact that the generative AI tools available to fraudsters are advancing just as fast as vendor solutions, if not faster. It’s a full-blown AI arms race, and with it comes a lot of question marks.

In-person verification (only for high-risk activities) puts these fears to bed. Is it frictionless? Obviously far from it, though workarounds, such as traveling notaries that meet customers at their residence, help ease the burden. But if heading down to a local branch for a quick meet-and-greet is what it takes to snag a $10K loan, will a customer care? They’d probably fly across state lines if it meant renting a nicer apartment or finally moving on from their decrepit Volvo.

Time to layer up

Khan’s recommendation, for finservs to assemble a superteam of anti-deepfake solutions, is sound, so long as companies can afford to do so and can figure out how to orchestrate the many solutions into a frictionless consumer experience. Vendors indeed have access to AI in their own right, powering tools that directly identify deepfakes through patterns, or that key in on metadata such as the resolution of a selfie. Combine these with the most crucial layer, liveness detection, and the final result is a stack that can at the very least compete against deepfakes.

SuperSynthetics aren’t as easy to neutralize. In previous posts, we’ve advocated for a “top-down” anti-fraud solution that spots these types of identities before the loan or credit application stage. Unlike individualistic fraud prevention tools, this bird’s-eye view reveals digital fingerprints—concurrent account activities, simultaneous social media posts, etc.—that would otherwise go undetected.

In the meantime, it doesn’t hurt to consider the upside of an in-person approach to verifying customer identities (prior to extending a loan, not onboarding). No, it isn’t flashy, nor is it flawless. However, it is reliable and, if finservs effectively articulate the benefit to their customers—protecting them from life-altering fraud—chances are they’ll understand.

Customer or AI-Generated Identity? The lines are as blurry as ever.

Today’s fraudsters are truly, madly, deeply fake.

Deepfaked identities, which use AI-generated audio or visuals to pass for a legitimate customer, are multiplying at an alarming rate. Banks and other fintech companies, which collectively lost nearly $2 billion to bank transfer and payment fraud in 2022, are firmly in their crosshairs.

Sniffing out deepfaked chicanery isn’t easy. One study found that 43% of people struggle to identify a deepfaked video. It’s especially concerning that this technology is still in its relative infancy yet already capable of luring consumers and businesses into fraudulent transactions.

Over time, deepfakes will seem increasingly less fake and much harder to detect. In fact, an offshoot of deepfaked synthetic identities, the SuperSynthetic™ identity, has already emerged from the pack. Banks and financial organizations have no choice but to stay on top of developments in deepfake technology and swiftly adopt a solution to combat this unprecedented threat.

Rise of the deepfakes

Deepfakes have come a long way since exploding onto the scene roughly five years ago. Back then, deepfaked videos aimed to entertain. Most featured harmless superimpositions of one celebrity’s face onto another, such as this viral Jennifer Lawrence-Steve Buscemi mashup.

The trouble started when users began deepfaking sexually explicit videos, opening up a massive can of privacy- and ethics-related worms. Then a 2018 video of a deepfaked Barack Obama speech showed just how dangerous the technology could be.

Image Credit: DHS

The proliferation and growing sophistication of deepfakes over the past five years can be attributed to the democratization of AI and deep learning tools. Today, anyone can doctor an image or video with just a few taps. FakeApp, Lyrebird, and countless other apps enable smartphone users to seamlessly integrate someone’s face into an existing video, or generate a new video that can easily pass for the real deal.

Given this degree of accessibility, the threat of deepfakes to banks and fintech companies will only intensify in the months and years ahead. The specter of new account fraud, perpetrated by way of a deepfaked synthetic identity, looms large in the era of remote customer onboarding.

This is a stickup

Synthetic identity fraud, in which bad actors invent a new identity using a combination of stolen and made-up credentials, has already cost banks upwards of $6 billion. Deepfake technology only adds fuel to the fire.

A deepfaked synthetic identity heist doesn’t require any heavy lifting. A fraudster crops someone’s face from a social media picture and they’re well on their way to spawning a lifelike entity that speaks, blinks, and moves its head on screen. Image- and video-based identity verification, the KYC protocol designed to deter fraud before an account is opened or credit is extended, is rendered moot. The fraudster’s uploaded selfie will be a dead ringer for the face on the ID card. Even a live video conversation with an agent is unlikely to ferret out a deepfaked identity.

Not even Columbo can spot a deepfaked synthetic identity.

Audio-based verification processes are circumvented just as easily. Exhibit A: the vulnerability of the voice ID technology used by banks across the US and Europe, ostensibly another layer of login security that prompts users to say some iteration of, “My voice is my password.” This sounds great in theory, but AI-generated audio solutions can clone anyone’s voice and create a virtually identical replica. One user, for example, tapped voice creation tool ElevenLabs to clone his own voice using an audio sample. He accessed his account in one try.

In this use case, the bad actor would also need a date of birth to access the account. But, thanks to frequent big-time data leaks—such as the recent Progress Corp breach—dates of birth and other Personally Identifiable Information (PII) are readily available on the dark web.

Here come the SuperSynthetics

In deepfaked synthetic identities, banks and financial services platforms clearly face a formidable foe. But this worthy opponent has been in the gym, protein-shaking and bodyhacking itself into something stronger and infinitely more dangerous: the SuperSynthetic identity.

SuperSynthetic identities, armed with the same deepfake capabilities as regular synthetics (and then some), bring an even greater level of Gen AI-powered smarts to the table. No need for a brute force attack. SuperSynthetics operate with a sophistication and discernment so lifelike it’s spooky. In this regard, one need only look at the patience of these bots.

SuperSynthetics are all about the long con. Their aged and geo-located identities play nice for months, engaging with the website and making small deposits here and there, enough to appear human and innocuous. Once enough of these transactions accumulate, and trust is gained from the bank, a credit card or loan is extended. Any additional verification is bypassed via deepfake, of course. When the money is deposited into their SuperSynthetic account the bad actor immediately withdraws it, along with their seed money, before finding another bank to swindle.

How prevalent are SuperSynthetics? Deduce estimates that between 3% and 5% of financial services accounts onboarded within the past year are in fact SuperSynthetic “sleepers” waiting to strike. It certainly warrants a second look at how customers are verified before obtaining a loan or credit card, including the consideration of in-person verification to rule out any deepfake activity.

No time like the present

If deepfaked synthetic identities don’t call for a revamped cybersecurity solution, deepfaked SuperSynthetic identities will certainly do the trick. Our money is on a top-down approach that views synthetic identities collectively rather than individually. Analyzing synthetics as a group uncovers their digital footprints—signature online behaviors and patterns too consistent to suggest mere coincidence.

Whatever banks choose to do, kicking the can down the road only works in favor of the fraudsters. With every passing second, the deepfakes are looking (and sounding) more real.

Time is a-tickin’, money is a-burnin’, and customers are a-churnin’.

How SuperSynthetic identities carry out modern-day bank robberies

The use cases for generative AI continue to proliferate. Need a vegan-friendly recipe for chocolate cookies that doesn’t require refined sugar? Done. Need to generate an image of Chuck Norris holding a piglet? You got it.

However, not all Gen AI use cases are so innocuous. Fraudsters are joining the party and developing tools like WormGPT and FraudGPT to launch sophisticated cyberattacks that are significantly more dangerous and accessible. Consumer and enterprise companies alike are on high alert, but fintech organizations really need to upgrade their “bot-y” armor.

Each new wave of bots grows stronger and brings its own share of challenges to the table—none more so than synthetic “Frankenstein” identities consisting of real and fake PII data. But, alas, the next evolution of synthetic identities has entered the fray: SuperSynthetic™ identities.

Let’s take a closer look at how these SuperSynthetic bots came to be, how they can effortlessly defraud banks, and how banks need to change their account opening workflows.

The evolution of bots

Before we dive into SuperSynthetic bots and the danger they pose to banks, it’s helpful to cover how we got to this point.

Throughout the evolution of bots we’ve seen the good, the bad, and the downright nefarious. Well-behaved bots like web crawlers and chatbots help improve website or app performance; naughty bots crash websites, harm the customer experience and, worst of all, steal money from businesses and consumers.

The evolutionary bot chart looks like this:

Generation One: These bots are capable of basic scripting and automated maneuvers. Primarily they scrape, spam, and perform fake actions on social media apps (comments, likes, etc.).

Generation Two: These bots brought web analytics, user interface automation, and other tools that enabled the automation of website development.

Generation Three: This wave of bots adopted complex machine learning algorithms, allowing for the analysis of user behavior to boost website or app performance.

Generation Four: These bots laid the groundwork for SuperSynthetics. They’re highly effective at simulating human behavior while staying off radar.

Generation Five: SuperSynthetic bots with a level of sophistication that negates the need to execute a brute force attack hoping for a fractional chance of success. Individualistic finesse, combined with the bad actor’s willingness to play the long game, makes these bots undetectable by conventional bot mitigation and synthetic fraud detection strategies.

Playing the slow game

So, how have SuperSynthetics emerged as the most formidable bank robbers yet? It’s more artifice than bull rush.

Over time, a SuperSynthetic bot uses its AI-generated identity to deposit small amounts of money via Zelle, ACH, or another digital payments app while interacting with various website functions. The bot’s meager deposits accumulate over the course of several months, and regular access to its bank account to “check its balance” earns it a reputation as a customer in good standing. Its creditworthiness score increases, and an offer of a credit card or a personal, unsecured loan is extended.

At this point it’s hook, line, and sinker. The bank deposits the loan amount or issues the credit card, and the fraudster transfers it out, along with their seed funds, and moves on to the next unsuspecting bank. This is a cunning, slow-burn operation only a SuperSynthetic identity can successfully carry out at scale. Deduce estimates that between 3% and 5% of accounts onboarded within the past year at financial services and fintech institutions are in fact SuperSynthetic “sleeper” identities.
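The slow-burn pattern above is distinctive enough to sketch as a simple rule: months of tiny deposits, then one withdrawal that dwarfs them all once credit is extended. This is a hedged toy heuristic; the record fields and thresholds are illustrative assumptions, not anything a production fraud model would rely on alone.

```python
# Toy rule flagging the "slow burn": many small deposits over months,
# then a single outsized withdrawal. Fields and thresholds are illustrative.

from datetime import date

def looks_like_slow_burn(events):
    """events: list of {"type", "amount", "date"} transaction records."""
    deposits = [e for e in events if e["type"] == "deposit"]
    withdrawals = [e for e in events if e["type"] == "withdrawal"]
    if len(deposits) < 6 or not withdrawals:
        return False  # not enough history to judge
    avg_deposit = sum(e["amount"] for e in deposits) / len(deposits)
    biggest_out = max(e["amount"] for e in withdrawals)
    span_days = (max(e["date"] for e in events) - min(e["date"] for e in events)).days
    # Months of tiny deposits, then one withdrawal dwarfing them all.
    return span_days >= 90 and avg_deposit < 50 and biggest_out > 20 * avg_deposit

# Six months of $20 deposits, then a $12K drain the day the loan lands.
history = [{"type": "deposit", "amount": 20, "date": date(2023, m, 1)} for m in range(1, 7)]
history.append({"type": "withdrawal", "amount": 12_000, "date": date(2023, 7, 1)})
print(looks_like_slow_burn(history))  # → True
```

Of course, a rule like this only fires after the money is gone; the point of the next section is that SuperSynthetics must be caught before the loan stage.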

Such patience and craftiness are unprecedented in a bot. Stonewalling SuperSynthetics takes an equally novel approach.

A change in philosophy

Traditional synthetic fraud prevention solutions won’t detect SuperSynthetic identities. Built around static data, these tools lack the dynamic, real-time data and scale needed to sniff out an AI-generated identity. Even manual review processes and tools like DocV are no match, as deepfake AI methods can create realistic documents and even live video interviews.

An individualistic approach offers little resistance to SuperSynthetic bots.

Fundamentally, these static-based tools take an individualistic approach to stopping fraud. The data pulled from a range of sources during the verification phase is analyzed one identity at a time. As a result, a SuperSynthetic identity will appear legitimate and pass all the verification checks. Fraudulent patterns missed. Digital forensic footprints overlooked.

A philosophical change in fraud prevention is foundational to banks keeping SuperSynthetic bots out of their pockets. Verifying identities as a collective group, or signature, is the only viable option.

A view from the top

Things always look different from the top floor. In the case of spotting and neutralizing SuperSynthetic identities, a big-picture perspective reveals digital footprints otherwise obscured by an individualistic anti-fraud tool.

A bird’s-eye view that groups identities into a single signature uncovers suspicious evidence such as simultaneous social media posts, concurrent account actions, matching time-of-day and day-of-week activities, and other telltale signs of fraud. Considering the millions of fraudulent identities in the mix, it’s illogical to attribute this evidence to mere happenstance.

There’s no denying that SuperSynthetic identities have arrived. No prior iteration of bot has ever appeared so lifelike or operated with such precision. If banks want to protect their margins and user experience, verifying identity via a signature approach is a must. This does require bundling existing fraud prevention stacks with ample (and scalable) real-time identity intelligence, but the first step in thwarting SuperSynthetics is an ideological one: adopt the signature strategy.

How a top-down approach can unmask AI-generated fraudsters

Whichever side of the AI debate you’re on, there’s no denying that AI is here to stay, and it has barely started to tap its potential.

AI makes life easier on consumers and businesses alike. However, the proliferation of AI-based tools helps fraudsters as well.

As the AI arms race heats up, one emerging threat tormenting businesses is AI-generated identity fraud. With help from generative AI, fraudsters can easily use previously acquired PII (Personally Identifiable Information) to establish a credible online identity that appears human, replete with a passable credit history, then leverage deepfakes to legitimize the synthetic identity with documents, voice, and video. As of April 2023, audio and video deepfakes alone have duped one-third of companies.

Without the proper fortification in place, financial services and fintech businesses are prime targets for AI-generated identities, new account opening fraud, and the resultant revenue loss.

The (multi)billion-dollar question is, how do these companies fight back when AI-generated identities are seemingly indistinguishable from real customers?

Playing the long game

There are several ways in which AI helps create synthetic identities.

For one, social engineering and phishing with AI-powered tools is as easy as “PII.” Generative AI can crank out a malicious yet convincing email, or deepfake a document or voice to obtain personal info. In terms of scalability, fraudsters can now manage thousands of fake identities at once thanks to AI-assisted CRMs, marketing automation software, and purpose-built fraud platforms such as FraudGPT and WormGPT. Thousands of synthetics creating “aged” and geo-located email addresses, signing up for newsletters, and making social media profiles and other accounts—all on autopilot. This unparalleled sophistication is the hallmark of an even more formidable synthetic identity: the SuperSynthetic™ identity.

Thanks to AI’s automation and effective utilization of previously stolen PII data, SuperSynthetic identities can assemble a credible trail of online activity. These SuperSynthetics have a credible (maybe not an 850, but a solid 700) credit history, too. Therein lies the other challenge with AI-generated identity fraud: the human bad actors behind the computer or phone screen, pulling the strings, are remarkably patient. They’ll invest actual money by making deposits over time into a newly opened bank account, or make small purchases on a retailer’s website to build “existing customer” status, gradually forging a bogus identity that lands them north of $15K (according to the FTC, a net ROI of thousands of dollars). AI-generated fraud is a very profitable business.

The chart above shows how a fraudster boosts credibility for an identity both online and with credit history before opening a credit card or loan, or even transacting via BNPL (Buy Now, Pay Later). They sign up for cheap mobile phone plans, such as Boost, Mint, or Cricket, or make small pre-paid debit card donations to charities linked to their Social Security number. They can even use AI to find rental vacancies in MLS listings in a geography that maps to their aged and geo-located legend, in order to establish an online activity history of paying utility bills. The patience, calculation, and cunning of these fraudsters is striking—and just as dangerous as the AI that fuels their SuperSynthetic identities.

Looking at the big picture

Neutralizing AI-generated identity fraud requires a new approach. Traditional bot mitigation and synthetic fraud prevention solutions reliant upon static data about a single identity need some extra oomph to stonewall persuasive SuperSynthetics.

These static data-based tools lack the dynamic, real-time data and scale necessary to pick up the scent of AI-generated identity fraud. Patterns and digital forensic footprints get overlooked, and the sophistication of these fake identities even outflanks manual review processes and tools like DocV.

The bigger problem is that, when today’s anti-fraud solutions pull data from a range of sources during the verification phase, they’re doing so on an individual identity basis. Why is this problematic? Because a SuperSynthetic identity on its own will look legitimate and pass all the verification checks—including a manual review, the last bastion of fraud prevention. However, analyzing that same identity from a high-level vantage point changes everything. The identity is revealed to be a member of a larger signature of SuperSynthetic identities. Like a black light, this bird’s-eye view uncovers previously obscured digital forensic evidence.

But what does this evidence even look like? And what does it take to transition from an individualistic to a signature-centered approach?

The key to the evidence locker

AI-generated SuperSynthetic identities leave behind a variety of digital fingerprints or signatures. A top-down view reveals suspicious patterns across millions of fraudulent identities that are too identical to be a coincidence. 

For example, if the same three identities post a comment on the New York Times website every Tuesday morning at 7:32 a.m. PST, the chances that these are three humans are infinitesimally small; it’s clear that each is in fact SuperSynthetic.
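That Tuesday-at-7:32 example can be sketched in a few lines: bucket activity events by date and minute, then flag any group of identities that acts in lockstep on multiple days. The event shape and thresholds here are assumptions for the demo, not a description of any real detection pipeline.

```python
# Toy "signature" detector: find groups of identities that repeatedly act
# at the exact same minute. Event format and thresholds are illustrative.

from collections import defaultdict

def lockstep_signatures(events, min_identities=3, min_repeats=3):
    """events: iterable of (identity_id, date_str, hh_mm) activity records."""
    buckets = defaultdict(set)            # (date, minute) -> identities active then
    for ident, day, hh_mm in events:
        buckets[(day, hh_mm)].add(ident)

    togetherness = defaultdict(int)       # identity group -> days seen in lockstep
    for group in buckets.values():
        if len(group) >= min_identities:
            togetherness[frozenset(group)] += 1

    # The same trio at 07:32 on three separate Tuesdays is no coincidence.
    return [set(g) for g, n in togetherness.items() if n >= min_repeats]

# Three "identities" commenting at 07:32 a.m. on three consecutive Tuesdays.
events = [(i, day, "07:32")
          for day in ("2023-10-03", "2023-10-10", "2023-10-17")
          for i in ("id_a", "id_b", "id_c")]
events.append(("real_human", "2023-10-03", "09:15"))
print(lockstep_signatures(events))  # flags the trio; the lone human is ignored
```

At real scale this grouping runs over hundreds of millions of identities and far richer signals than timestamps, which is exactly why the identity graph described below matters.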

Switching over to a top-down approach isn’t merely a philosophical change. Unlocking the requisite evidence to thwart AI-generated identities demands premium identity intelligence at scale, combined with sophisticated ML that gathers and analyzes large swaths of real-time data from diverse sources.

In short, an activity-based, real-time identity graph capable of sifting through hundreds of millions of identities.

Protect your margins (and UX)

A ginormous real-time identity graph rivaling the likes of big tech? This may seem like an unrealistic path to stopping AI-generated identities. It isn’t.

Deduce employs the largest identity graph in the US: 780 million US privacy-compliant identity profiles and 1.5 billion daily user events across 150,000+ websites and apps. Additionally, Deduce has previously seen 89% of new users at the account creation stage—where AI-generated synthetics typically pass through undetected—and 43% of these users hours before they enter the new account portal.

Deduce’s premium identity intelligence, patented technology, and formidable ML algorithms enable a multi-contextualized, top-down approach. Identities are analyzed against signatures of synthetic fraudsters—hundreds of millions of them—to ensure they’re the real McCoy. It’s a far superior alternative to overtightening existing risk models and causing unnecessary friction followed by churn, reputational harm, and revenue loss.

Want to outsmart AI-generated identity fraud while preserving a trusted user experience? Contact us today.