How cybercriminals are using gen AI to scale their scams

When it comes to AI’s impact on the global economy, the projections are staggering. 

Generative AI could inject up to the equivalent of $4.4 trillion annually into the world economy, according to McKinsey. PwC puts AI’s potential contributions to the globe’s coffers at $15.7 trillion by 2030 — more than China’s and India’s current output combined.

Driving those dollar signs are AI’s undeniable productivity-boosting powers, which business leaders are eager to harness. With generative AI applications such as ChatGPT, Midjourney, and GitHub Copilot, companies can increase productivity, spark innovation, and improve efficiencies.

"It's this very exciting time right now, with all this potential of what we can do with AI: how we can make our organisations more successful, how we can take advantage of it to serve our mission. But the flip side of that is it has risks," Okta CEO Todd McKinnon said during Axios’ The Future of Cybersecurity in the AI Era roundtable in October.

You’ll find that flip side on full display in the world of cybercrime. Just as legitimate businesses are using generative AI to scale quickly and boost productivity, so are bad actors. 

It now takes fewer than 3 seconds of audio for cybercriminals using generative AI to clone someone’s voice, which they use to trick family members into thinking a loved one is hurt or in trouble or banking staff to transfer money out of a victim’s account. Generative AI applications have also been used to create celebrity deepfakes, using the likeness of famous people to produce videos and photo-realistic images that funnel unsuspecting fans to scams. 

And those examples are the new kids on the block. Phishing, a type of social engineering that’s almost as old as the internet itself, is also on the rise. In 2022, phishing attacks increased 47% when compared to the previous year, according to Zscaler. A major factor behind that jump? Generative AI.

“The increased prevalence of phishing kits sourced from black markets and chatbot AI tools like ChatGPT has seen attackers quickly develop more targeted phishing campaigns,” the report states. 

Why have generative AI-assisted attacks increased?

With generative AI, it’s easier than ever for cybercriminals to separate people and companies from their money and data. Low-cost, easy-to-use tools coupled with a proliferation of public-facing data (i.e. photos of individuals, voice recordings, personal details shared on social media, etc.) and improved computation to work with that data make for an expanding threat landscape. Someone with no coding, design, or writing experience can level up in seconds as long as they know how to prompt: feeding natural language instructions into an AI large language model, or LLM (think ChatGPT), or into a text-to-image model (for example, StableDiffusion) to elicit the creation of net-new content.

AI’s automation capabilities also mean that bad actors can more easily scale operations, such as phishing campaigns, which until recently were tedious, manual, and expensive undertakings. As the volume of attacks increases, so does the probability of an attack’s success, the fruits of which are then rolled into more sophisticated cybercrimes.

What is the economic impact of generative AI-enhanced fraud?

While it’s hard to pin down exactly how much generative AI–fuelled attacks alone will cost us, consider this: In 2015, CyberSecurity Ventures predicted the global annual cost of cybercrime to run about $3 trillion a year. Fast forward to its October 2023 report: “We expect global cybercrime damage costs to grow by 15 percent per year over the next two years, reaching $10.5 trillion USD annually by 2025.” Generative AI is only helping to move the needle. 

The concern surrounding AI is also shared by the highest levels of government. On Oct. 30, 2023, President Joe Biden issued an executive order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence

“As AI’s capabilities grow, so do its implications for Americans’ safety and security. With this Executive Order, the President directs the most sweeping actions ever taken to protect Americans from the potential risks of AI systems,” reads a statement from the White House.

Consumers’ concerns are also growing. The percentage of U.S. adults who report being “very concerned” about their data being hacked and stolen from companies they regularly use grew to 41% in October 2023 from a quarterly average of 36% by the end of 2022, reports CivicScience. When it comes to AI, 52% of Americans reported feeling “more concerned than excited” about the increased use of the technology in daily life, according to an August 2023 Pew Research Centre survey.

With so much at stake, from both financial and consumer confidence standpoints, it’s important for businesses and individuals alike to stay informed about how generative AI is being used for nefarious purposes. 

Here’s a look at some of the ways bad actors are using generative AI to scale their attacks, what the future holds, and how to defend against those efforts.

How is generative AI helping cybercriminals work smarter and faster?

While artificial intelligence isn’t new, the availability of powerful generative AI applications to the public is. Since November 2022, when OpenAI released ChatGPT into the world, we’ve seen this powerful technology leveraged for both legitimate and fraudulent purposes.

Voice cloning

Voice authentication once appeared to be the next big secure identification method, but that’s been rattled with generative AI’s voice cloning capabilities. As mentioned earlier, bad actors need only a short snippet of audio of someone speaking to output a voice replica that sounds natural and can be prompted to say anything. How realistic are these voice clones? In May 2023, ethical hackers used a voice clone of a “60 Minutes” correspondent to trick one of the show’s staffers into handing over sensitive information in about five minutes — all as the cameras were rolling. Efforts are underway to combat these clones: Okta recently published a patent on detecting AI-generated voices.

Image and video manipulation

Gayle King, Tom Hanks, MrBeast: these are just some of the celebrities whose names have made the headlines recently — and not for their latest project. AI deepfakes of the celebs hit the internet earlier this fall, with scammers using their likenesses to deceive an unsuspecting public. And it’s not just a celebrity’s brand that’s at stake; deepfakes obfuscate the truth, causing chaos and uncertainty where facts matter the most, like on the global stage or in the courtroom. A proliferation of relatively inexpensive and easy-to-use generative AI applications is making the creation of deepfakes easy and cheap.

Text creation

Guidance for spotting a phishing email used to be relatively simple: Is the message rife with grammatical and punctuation errors? Then it could be the first stop in a scam pipeline. But in the AI era, those signals have gone the way of the pilcrow. Generative AI can create convincing and flawless text across countless languages, leading to more wide-spread, sophisticated, and personalised phishing schemes. 

Code generation

With generative AI, the phrase “do more with less” doesn’t just apply to people power. It also pertains to practical knowledge. Generative AI’s coding and scripting abilities makes it easier for cybercriminals with little or no coding prowess to develop and launch attacks. This reduced barrier to entry could draw more individuals into the cybercrime ecosystem and improve operational efficiencies. 

Password cracking 

Passwords have a problem: the humans who create and use them. Privacy experts have long advised the public to create strong passwords and to never reuse them. Not everyone is listening. The word password was the top most common password used in 2022, according to NordPass. People also tend to select passwords that have a special meaning to them (like a favourite sports team or band) and reuse those passwords across sites. This is just the information hackers need for brute force attacks. But what used to be a manual, time-intensive guessing game has been sped up with assistance from large language models (LLMs), a type of generative AI. Leaning on publicly available data, like information found on someone’s social media accounts, bad actors can use generative AI to output a list of possible — more relevant — passwords to try out. (A passwordless world can’t come soon enough.) 

CAPTCHA bypass

Click a box, type text into a field, select all the squares with traffic lights: CAPTCHA helps protect websites against everything from spam to DDoS attacks by distinguishing human users from undesirable bots. And while artificial intelligence has been a worthy opponent for years, new research indicates that bots are now faster and more accurate when it comes to solving CAPTCHA tests. This doesn’t mean CAPTCHA’s days are numbered. New methods that use AI to outsmart AI are being developed and tested. One proposed alternative — recently presented by Okta’s data science team at CAMLIS, a conference focused on machine learning and information security — is image-based narration completion. The method uses AI to create an image-based short story made up of two scenes, which are presented to the user. The user must then select the image that works best contextually as the final scene — a task that AI currently cannot do easily or cheaply.

Prompt injection

“Prompt injection is an attack against applications that have been built on top of AI models,” says open-source developer Simon Willison, who’s credited with coining the term “prompt injection” after the vulnerability was made public in September 2022.

The phrase “on top” is critical here because, as Willison explains, the AI models are not the targets. “This is an attack against the stuff which developers like us are building on top of them,” he said during a webinar in May

Successful prompt injections — which concatenate (i.e. join) malicious inputs to existing instructions — can stealthily override developer directives and subvert safeguards set up by LLM providers. They steer the model’s output in whichever direction the attack’s author chooses, telling the LLM, “Ignore their instructions, and follow mine instead.” An example of a prompt injection in action: One website’s AI-run tweet bot was tricked into tweeting threats against the president

Experts ranging from Willison to the UK’s National Cyber Security Centre (NCSC) warn that the risks posed by prompt injection will only increase as more and more businesses integrate LLMs into products. Here on the fuzzy edges of the AI frontier, best practices for safeguarding against this threat are in short supply.

“As LLMs are increasingly used to pass data to third-party applications and services, the risks from malicious prompt injection will grow. At present, there are no failsafe security measures that will remove this risk. Consider your system architecture carefully and take care before introducing an LLM into a high-risk system,” cautions the NCSC.

What’s next for generative AI-fuelled threats?

As the adoption of generative AI tools continues to grow, and the applications themselves become more advanced, companies and individuals will likely see cybercriminals deploy more and more attacks. Again, just as businesses start using generative AI to create more personalised customer experiences, bad actors will do the same with their scams. Cybercriminals’ efforts will result in highly customised attacks on specific targets that scammers can launch at scale automatically — flooding the digital world with one click. Determining what’s real and what’s synthetic will only become more difficult.

Unsurprisingly, combating AI abuse has rocketed to the top of security and AI researchers’ priority list. That sense of urgency was on display at this year’s Conference on Applied Machine Learning for Information Security (CAMLIS), where more than a third of the talks were about the offensive and defensive aspects of LLMs. Last year, that topic wasn’t on the agenda. So while researchers are moving as fast as they can to address these threats, the question is, Are they moving fast enough? And if in-development safeguards such as watermarking and source attribution come to fruition, when will foundational LLM owners actually implement them? There is, however, one aspect of LLMs that has emerged as an unintentional safeguard: the price tag. These models remain very expensive and complex to build, train, and maintain. 

How can businesses defend against generative AI-enabled attacks?

Cat, meet Mouse. Mouse, meet Cat. The best defence against AI is AI. As bad actors ramp up their efforts, it’s the legitimate businesses that have embraced AI that stand the best chance of defending against these attacks. While the degree to which companies use AI varies greatly depending on their size, maturity, and type, here are two guiding principles for navigating the era of AI:

Education: An engaged workforce is a more vigilant workforce. Provide employees with the space to learn about and experiment with generative AI tools — but not before educating them about best practices and establishing company-wide guardrails that protect and manage the risks associated with generative AI. The goal: Promote the safe use of generative AI without impeding innovation. 

Partnership: “Every company needs to be an AI company,” Okta CEO Todd McKinnon said in an interview on Yahoo Finance Live. Even if AI doesn’t factor into a particular business’s core offerings, that company is likely using tools and platforms where AI capabilities feature prominently. This is where strong partnerships come in. 

“This whole new AI revolution is going to mean more capabilities …. But the threat actors can use this technology too,” McKinnon said. “So you need this ecosystem of companies that are working together to protect all of your applications, services, and infrastructure. And it can't be done by one company. It's got to be this collective ecosystem.”

Okta is no stranger to AI, which powers many of its Workforce Identity Cloud and Customer Identity Cloud products. Built on more than a decade’s worth of data, these capabilities — known as Okta AI — help organisations harness this incredible technology so they can build better experiences and defend against cyberattacks. 

Want to learn about how Okta AI can help you protect your business and drive innovation? Read more from CEO Todd McKinnon about Okta’s AI vision.