Deepfakes and deception: Building a human firewall against AI-powered attacks

17 June 2025

Signs of a fake email were once relatively obvious: misspellings, grammatical errors, awkward phrasings, and suspicious sender addresses. But what happens when an email doesn’t contain those obvious signs and comes from a familiar sender with a familiar greeting and request? Worse yet, what happens when this illicit communication takes the form of a deepfake video call?

Unfortunately for businesses, AI-powered threat activity has turned this possibility into a reality. Recently, Okta's Threat Intelligence team reported on IT job scams run by operatives from the Democratic People's Republic of Korea who leverage generative AI in various ways, including creating convincing cover letters and CVs, to gain and maintain employment in remote IT roles.

The rise of generative AI, technology that creates original content such as text, images, and audio, enables threat actors to quickly launch multi-channel attacks and lends credibility and urgency to their messages. Rather than a single phishing email, a target may receive an email, a deepfake video, and a text message. AI can also bolster these efforts by automating the collection of publicly available data used to build fraudulent content.

Call it social engineering on steroids. Last year, multinational engineering firm Arup notified Hong Kong police that a finance employee had been duped into transferring millions out of the company's coffers by a deepfake video call. According to reports, the worker was initially skeptical after receiving a message about a secret transaction that needed to be performed. However, the deepfake video call, which included digitally created versions of the company's chief financial officer and others, sold the scam. It wasn't until later, when the employee contacted the main corporate office, that they realized they had been tricked.

It’s not just the business sector that’s being targeted. On May 15, the FBI warned that senior US officials were being impersonated in a malicious text and voice messaging campaign. The campaign targeted "individuals, many of whom are current or former senior US federal or state government officials and their contacts."

As AI makes these scams more effective, security awareness training and identity management will become increasingly critical elements of enterprises' frontline defense.

Building a human firewall 

So, what should security awareness look like in an age of AI-powered attacks? When enterprises build a culture of security, the result is a human firewall, where employees become a shield against cyberattacks.

According to Ben King, Vice President for Security Trust and Culture at Okta, the key to building a strong security culture is to combat these attacks on three fronts:

  • Policies — Organizations need clear policies, such as mandatory multi-factor authentication, direct verbal confirmation of major transactions over known, existing channels, and verification of digital identities (a minimal sketch of such a policy follows this list).

  • Training — Employees should receive regular, scenario-based training that covers common, current, and evolving threats.

  • Culture — Leaders need to build a culture of skepticism that encourages and rewards the reporting of suspicious activity, surfacing new risks as they emerge.
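
To make the policy point concrete, here is a minimal sketch, in Python, of how a transaction-verification rule might be encoded. The threshold, field names, and workflow are illustrative assumptions for this article, not an Okta product or API.

```python
from dataclasses import dataclass

# Illustrative threshold -- an assumption for this sketch, not a value
# from Okta or from the article.
HIGH_VALUE_THRESHOLD = 10_000

@dataclass
class TransferRequest:
    amount: float
    requester: str
    channel: str                  # e.g., "email", "video_call"
    mfa_verified: bool            # requester passed multi-factor authentication
    confirmed_out_of_band: bool   # verbal confirmation on a known, existing channel

def required_checks(request: TransferRequest) -> list[str]:
    """Return the policy checks that still need to pass for this request."""
    missing = []
    if not request.mfa_verified:
        missing.append("multi-factor authentication")
    # Major transactions always require direct confirmation on a channel the
    # organization already trusts (e.g., calling a known phone number), never
    # the channel the request arrived on -- a deepfake video call is still
    # just one channel.
    if request.amount >= HIGH_VALUE_THRESHOLD and not request.confirmed_out_of_band:
        missing.append("out-of-band verbal confirmation")
    return missing

if __name__ == "__main__":
    request = TransferRequest(
        amount=250_000,
        requester="cfo@example.com",
        channel="video_call",
        mfa_verified=True,
        confirmed_out_of_band=False,
    )
    checks = required_checks(request)
    if checks:
        print("Hold transaction; still required:", ", ".join(checks))
    else:
        print("All policy checks passed.")
```

The value of encoding the rule is that the out-of-band confirmation cannot be waived under pressure: the workflow refuses to proceed until someone has verified the request on a channel the attacker does not control.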

Recognizing deepfakes and fraudulent emails is inherently tricky. Subtle differences, such as inconsistent resolution, may allow vigilant employees to spot a digitally created image. King warned about several telltale signs of bad actors in real-time meetings or applicant interviews. These include reluctance to appear on camera, blurred or virtual backgrounds, unexpected voice changes or unusual background noises, and trouble engaging in small talk or answering personal questions. 

"Training needs to include identifying subtle linguistic cues, contextual inconsistencies, and verification protocols rather than just spotting obvious misspellings or poor grammar," King advised. "Adaptive learning platforms that personalize training based on individual roles, experience, and past threats, as well as simulated AI-driven attacks, will be more useful. Likewise, driving awareness of  the reality of mass-produced, commodity, AI-powered phishing and other convincing synthetic media will, over time, reduce blind trust and enhance wariness of employees."

Depending on the risk profile of the person or business unit involved, additional verification steps may be necessary. A significant financial transaction, for example, may warrant multiple security challenges, like MFA and cross-channel communication. While social engineering will always pose a risk, compensating controls, including phishing-resistant MFA such as Okta FastPass, can greatly reduce the risk of compromise due to phishing, King said.
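
As a rough illustration of risk-tiered verification, the sketch below escalates the set of required challenges with the value of the transaction. The tiers, amounts, and challenge names are hypothetical, chosen for this example rather than drawn from any Okta feature.

```python
# Hypothetical challenge tiers: higher-value transactions must clear more
# independent checks before they proceed. Amounts and names are assumptions.
CHALLENGE_TIERS = [
    (1_000_000, ["phishing-resistant MFA", "callback on a known number", "second approver"]),
    (100_000,   ["phishing-resistant MFA", "callback on a known number"]),
    (0,         ["phishing-resistant MFA"]),
]

def challenges_for(amount: float) -> list[str]:
    """Return the challenge set for the first (highest) tier the amount meets."""
    for threshold, challenges in CHALLENGE_TIERS:
        if amount >= threshold:
            return challenges
    return []

print(challenges_for(250_000))
# -> ['phishing-resistant MFA', 'callback on a known number']
```

Because each added challenge lives on a different channel, an attacker who convincingly compromises one channel still cannot clear the full set.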

This marriage of technology and security-first attitudes is pivotal as threat activity becomes more sophisticated. As always, a healthy dose of skepticism can save the day. AI-generated phishing emails often still bear the hallmarks of traditional phishing attacks, such as a sense of urgency and a request for personal or business information. If a message or video feels unusual, it may very well be, and employees should speak up.

"Dealing with AI-driven attacks will be extremely challenging due to their increasing realism and ability to bypass standard defenses," King said. "As they evolve in complexity and realism, employees need to be able to fall back on their security training and trust their intuition to act on anything suspicious with confidence and clarity."

To learn more about how to protect yourself, your workforce, your business, and your customers from phishing attacks, check out our Ultimate guide to phishing prevention.

 

This posting does not necessarily represent Okta's position, strategies, or opinion.

About the Author

Brian Prince

Brian Prince is a marketing content creator and former journalist who has been focused on cybersecurity for more than 15 years.
