76% of finservs are victims of synthetic fraud

In 1938, Orson Welles’ infamous radio broadcast of The War of the Worlds convinced thousands of Americans to flee their homes for fear of an alien invasion. More than 80 years later, the public is no less gullible, and technology unfathomable to people living in the 1930s allows fake humans to spread false information, bamboozle banks, and otherwise raise hell with little to no effort.

These fake humans, also known as synthetic identities, are ruining society in myriad ways: tampering with election polls and census data, disseminating misleading social media posts with real-world consequences, and sharing fake articles on Reddit that skew the large language models (LLMs) behind platforms such as ChatGPT. And, of course, bad actors can leverage fake identities to steal millions from financial institutions.

The bottom line is this: synthetic fraud is prevalent; financial services companies (finservs), social media platforms, and many other organizations are struggling to keep pace; and the impact, both now and in the future, is frighteningly palpable.

Here is a closer look at how AI-powered synthetic fraud is infiltrating multiple facets of our lives.

Accounts for sale

If you need a new bank account, you’re in luck: obtaining one is as easy as buying a pair of jeans and, in all likelihood, just as cheap.

David Maimon, a criminologist and Georgia State University professor, recently shared a video from Mega Darknet Market, one of the many cybercrime syndicates slinging bank accounts like Girl Scout Cookies. Mega Darknet and similar “fraud-as-a-service” organizations peddle mule accounts from major bank brands (in this case Chase) that were created using synthetic identity fraud, in which scammers combine stolen Personally Identifiable Information (PII) with made-up credentials.

But these cybercrime outfits take it a step further. With Generative AI at their disposal, they can create SuperSynthetic™ identities that are incredibly patient, lifelike, and difficult to catch.

Aside from bank accounts, fraudsters are selling accounts on popular sports betting sites. The verified accounts—complete with name, DOB, address, and SSN—can be new or aged and even geo-located, with a two-year-old account costing as little as $260. Perfect for money launderers looking to wash stolen cash.

Fraudsters are selling stolen bank accounts as well as stolen accounts from sports betting sites.

Cyber gangs like Mega Darknet also offer access to the very Generative AI tools they use to create synthetic accounts. This includes deepfake technology which, besides fintech fraud, can help carry out “sextortion” schemes.

X-cruciatingly false

Anyone who’s followed the misadventures of X (formerly Twitter) over the past year, or used any social media since the late 2010s, knows that Elon’s embattled platform is a breeding ground for bots and misinformation. Generative AI only exacerbates the problem.

A recent study found that X users couldn’t distinguish AI-generated content (GPT-3) from human-generated content. Most alarming is that these same users trusted AI-generated posts more than posts from real humans.

In the US (where 20% of the population famously can't locate the country on a world map) and elsewhere, these synthetic accounts and their large-scale misinformation campaigns pose myriad risks, especially if said accounts are "verified." It wouldn't take much to incite a riot, or stoke anger and subsequent violence toward a specific group of people. How about sharing a bogus picture of an exploded Pentagon that impacts the stock market? Yep. That, too.

This fake image of an explosion near the Pentagon exemplifies the danger of synthetic accounts spreading misinformation.

Election-hacking-as-a-service

Few topics are timelier or rile up users more than election interference, another byproduct of the fake human (and fake social media) epidemic. To be sure, spreading false information in service of a particular political candidate or party existed well before social media, but the stakes have since increased exponentially.

If fraud-as-a-service isn’t ominous-sounding enough, election-hacking-as-a-service might do the trick. Groups with access to armies of fake social media profiles are weaponizing disinformation to sway elections any which way. Team Jorge is just one example of these election meddling units. Brought to light via a recent Guardian investigation, Team Jorge’s mastermind Tal Hanan claimed he manipulated upwards of 33 elections.

The rapid creation and dissemination of fake social media profiles and content is far more harmful and widespread with Generative AI in the fold. Flipping elections is one of the worst possible outcomes, but grimmer consequences will arise if automated disinformation isn’t thwarted by an equally intelligent and scalable solution.

Finservs in the crosshairs

Cash is king. Synthetic fraudsters want the biggest haul, even if it's a slow-burn operation stretched over a long period of time. Naturally, that means finservs, which lost nearly $2 billion to bank transfer and payment fraud last year, are number one on their hit list.

Most finservs today don’t have the tools to effectively combat AI-generated synthetic and SuperSynthetic fraud. First-party synthetic fraud—fraud perpetrated by existing “customers”—is rising thanks to SuperSynthetic “sleeper” identities that can imitate human behavior for months before cashing out and vanishing at the snap of a finger. SuperSynthetics can also use deepfake technology to evade detection, even if banks request a video interview during the identity verification phase.

It’s not like finservs are dilly-dallying. In a study from Wakefield, commissioned by Deduce, 100% of those surveyed had synthetic fraud prevention solutions installed along with sophisticated escalation policies. However, more than 75% of finservs already had synthetic identities in their customer databases, and 87% of those respondents had extended credit to fake accounts.

Fortunately for finservs and others trying to neutralize synthetic fraud, it’s not impossible to outsmart generative AI. With the right foundation in place—specifically a massive and scalable source of real-time, multicontextual, activity-backed identity intelligence—and a change in philosophy, even a foe that grows smarter and more humanlike by the second can be thwarted.

This philosophical change is rooted in a top-down, bird’s-eye approach that differs from traditional, individualistic fraud prevention solutions that examine identities one by one. A macro view, on the other hand, sees identities collectively and groups them into a single signature which uncovers a trail of digital footprints. Behavioral patterns such as social media posts and account actions rule out coincidence. The SuperSynthetic smokescreen evaporates.
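As a rough illustration of that macro view, consider grouping accounts by their behavioral fingerprint and flagging clusters that act in lockstep. This is a minimal sketch with hypothetical data and field names, not Deduce's actual implementation:

```python
from collections import defaultdict

# Hypothetical activity log: (identity_id, action, minute-of-week when it occurred).
events = [
    ("id_001", "post_comment", 2012), ("id_002", "post_comment", 2012),
    ("id_003", "post_comment", 2012), ("id_004", "post_comment", 2012),
    ("id_105", "post_comment", 1430), ("id_106", "login", 88),
]

# Group identities by an identical (action, timing) fingerprint.
signatures = defaultdict(set)
for identity, action, minute in events:
    signatures[(action, minute)].add(identity)

# Clusters of 3+ identities acting at exactly the same moment are
# unlikely to be independent humans; treat them as one signature.
suspicious = {sig: ids for sig, ids in signatures.items() if len(ids) >= 3}
print(suspicious)
```

Examined one at a time, each of the four flagged identities would look like an ordinary commenter; only the collective view exposes the shared signature.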

Whether it’s bad actors selling betting accounts, social media platforms stomping out disinformation, or finservs protecting their bottom lines, fake humans are more formidable than ever with generative AI and SuperSynthetic fraud at their disposal. Most companies seem to be aware of the stakes, but singling out bogus users and SuperSynthetics requires a retooled approach. Otherwise, revenue, users, and brand reputations will dwindle, and the ways in which fake accounts wreak havoc will multiply.

How a top-down approach can unmask AI-generated fraudsters

Whichever side of the AI debate you're on, there's no denying that AI is here to stay, and it has barely begun to tap its potential.

AI makes life easier for consumers and businesses alike. However, the proliferation of AI-based tools helps fraudsters, too.

As the AI arms race heats up, one emerging threat tormenting businesses is AI-generated identity fraud. With help from generative AI, fraudsters can easily use previously acquired Personally Identifiable Information (PII) to establish a credible, human-looking online identity, replete with an OK credit history, then leverage deepfakes to legitimize the synthetic identity with documents, voice, and video. As of April 2023, audio and video deepfakes alone had duped one-third of companies.

Without the proper fortification in place, financial services and fintech businesses are prime targets for AI-generated identities, new account opening fraud, and the resultant revenue loss.

The (multi)billion-dollar question is, how do these companies fight back when AI-generated identities are seemingly indistinguishable from real customers?

Playing the long game

There are several ways in which AI helps create synthetic identities.

For one, social engineering and phishing with AI-powered tools is as easy as "PII." Generative AI can crank out a malicious yet convincing email, or deepfake a document or voice, to obtain personal info. As for scalability, fraudsters can now manage thousands of fake identities at once thanks to AI-assisted CRMs, marketing automation software, and purpose-built fraud platforms such as FraudGPT and WormGPT. Thousands of synthetics creating "aged" and geo-located email addresses, signing up for newsletters, and making social media profiles and other accounts—all on autopilot. This unparalleled sophistication is the hallmark of an even more formidable synthetic identity: the SuperSynthetic™ identity.

Thanks to AI's automation and effective use of previously stolen PII, SuperSynthetic identities can assemble a credible trail of online activity. These SuperSynthetics have a credible credit history, too (maybe not an 850, but a solid 700). Therein lies the other challenge with AI-generated identity fraud: the human bad actors behind the computer or phone screen, pulling the strings, are remarkably patient. They'll invest actual money, making deposits over time into a newly opened bank account or small purchases on a retailer's website to build "existing customer" status, to gradually forge a bogus identity that lands them north of $15K (according to the FTC), a net ROI of thousands of dollars. AI-generated fraud is a very profitable business.

The chart above shows how a fraudster boosts credibility for an identity both online and with credit history before opening a credit card or loan, or even transacting via BNPL (Buy Now Pay Later). They sign up for cheap mobile phone plans, such as Boost, Mint, or Cricket, or make small pre-paid debit card donations to charities linked to their Social Security number. They can even use AI to find rental vacancies in MLS listings in a geography that maps to their aged and geo-located legend, in order to establish an online activity history of paying utility bills. The patience, calculation, and cunning of these fraudsters are striking—and just as dangerous as the AI that fuels their SuperSynthetic identities.

Looking at the big picture

Neutralizing AI-generated identity fraud requires a new approach. Traditional bot mitigation and synthetic fraud prevention solutions reliant upon static data about a single identity need some extra oomph to stonewall persuasive SuperSynthetics.

These static data-based tools lack the dynamic, real-time data and scale necessary to pick up the scent of AI-generated identity fraud. Patterns and digital forensic footprints get overlooked, and the sophistication of these fake identities even outflanks manual review processes and tools like DocV.

The bigger problem is that, when today’s anti-fraud solutions pull data from a range of sources during the verification phase, they’re doing so on an individual identity basis. Why is this problematic? Because a SuperSynthetic identity on its own will look legitimate and pass all the verification checks—including a manual review, the last bastion of fraud prevention. However, analyzing that same identity from a high-level vantage point changes everything. The identity is revealed to be a member of a larger signature of SuperSynthetic identities. Like a black light, this bird’s-eye view uncovers previously obscured, digital forensic evidence. 

But what does this evidence even look like? And what does it take to transition from an individualistic to a signature-centered approach?

The key to the evidence locker

AI-generated SuperSynthetic identities leave behind a variety of digital fingerprints, or signatures. A top-down view reveals suspicious patterns across millions of fraudulent identities, patterns too uniform to be coincidence.

For example, if the same three identities post a comment on the New York Times website every Tuesday morning at 7:32 a.m. PST, the odds that they are three separate humans are infinitesimally small; each is almost certainly SuperSynthetic.
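A back-of-the-envelope calculation shows why such lockstep timing rules out coincidence. Assuming (purely for illustration) that independent users post at uniformly random minutes of the week:

```python
# Probability that three independent users each happen to post in the
# same one-minute window of a week, four weeks in a row.
minutes_per_week = 7 * 24 * 60            # 10,080 one-minute slots per week
p_one_week = (1 / minutes_per_week) ** 2  # two others land in the first user's slot
p_four_weeks = p_one_week ** 4            # repeated independently across 4 weeks

print(f"{p_four_weeks:.3e}")
```

The result is on the order of 10^-32: real users simply don't behave this way, so identical schedules are strong evidence of a single automated operator.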

Switching over to a top-down approach isn’t merely a philosophical change. Unlocking the requisite evidence to thwart AI-generated identities demands premium identity intelligence at scale, combined with sophisticated ML that gathers and analyzes large swaths of real-time data from diverse sources.

In short, an activity-based, real-time identity graph capable of sifting through hundreds of millions of identities.

Protect your margins (and UX)

A ginormous real-time identity graph rivaling the likes of big tech? This may seem like an unrealistic path to stopping AI-generated identities. It isn’t.

Deduce employs the largest identity graph in the US: 780 million US privacy-compliant identity profiles and 1.5 billion daily user events across 150,000+ websites and apps. Additionally, Deduce has previously seen 89% of new users at the account creation stage—where AI-generated synthetics typically pass through undetected—and 43% of these users hours before they enter the new account portal.

Deduce’s premium identity intelligence, patented technology, and formidable ML algorithms enable a multi-contextualized, top-down approach. Identities are analyzed against signatures of synthetic fraudsters—hundreds of millions of them—to ensure they’re the real McCoy. It’s a far superior alternative to overtightening existing risk models and causing unnecessary friction followed by churn, reputational harm, and revenue loss.

Want to outsmart AI-generated identity fraud while preserving a trusted user experience? Contact us today.