Deepfake Fraud Threatens CFOs: Protecting Corporate Finance


Multifactor verification and other precautions are becoming essential as AI enables more sophisticated scams.

Video and phone call freezes are typically attributed to poor service or some other external cause. But if you notice unusual white hairs around the edge of your CFO’s beard just before a freeze, and when the call resumes seconds later the beard is once again jet black, should you follow his instructions to transfer funds?

Perhaps, but not without further verification. Fraudsters, aided by AI applications, may one day—soon, even—perfect so-called deepfake audio and video calls. But even now, “tells” can indicate something is amiss, and the temporary freeze could actually be AI’s doing.

“I was recently testing a platform that had a feature designed to help hide artifacts, glitches, or synching issues,” recalls Perry Carpenter, chief human risk management strategist at KnowBe4, a security awareness and behavior change platform. “The program would freeze the video on the last good deepfake frame to protect the identity of the person doing the deepfake. It’s clear that some attackers are using adaptive strategies to minimize detection when their deepfakes start to fail.”


“There should never be an immediate need to wire a large amount of money without first verifying [it].” 

Perry Carpenter, Chief Human Risk Management Strategist, KnowBe4


To what extent such attacks are successful or even attempted is unclear, since companies typically keep that information under wraps. A significant attack reported last year by CNN and others involved a Hong Kong-based corporate finance executive at UK-based engineering firm Arup, who warily eyed an email requesting a secret $25 million payment. He sent the money anyway, after a video call with several people who looked and sounded like colleagues but were, in fact, deepfakes.

In another incident reported by The Guardian last year, scammers used a publicly available photo of Mark Read, CEO of advertising giant WPP, to establish a fake WhatsApp account. That account in turn was used to set up a Microsoft Teams meeting that used a voice clone of one executive and impersonated Read via a chat window to target a third executive, in an attempt to solicit money and personal details.

A WPP spokesperson confirmed the accuracy of The Guardian’s account but declined to explain how the scam was foiled, noting only, “This isn’t something we are eager to relitigate.”

Self-Correcting Deepfakes

Unlike deepfake video clips, which are extremely difficult to detect, real-time voice and video via social messaging platforms are still prone to errors, says Carpenter. Whereas earlier deepfakes had obvious tells, like facial warping, unnatural blinking, or inconsistent lighting, newer models are starting to self-correct those irregularities in real time.

Consequently, Carpenter doesn’t train clients on the often-fleeting technical flaws, because that can lead to a false sense of security. “Instead, we need to focus on behavioral cues, context inconsistencies, and other tells such as the use of heightened emotion to try to get a response or reaction,” he says.

Rapid deepfake evolution poses an especially significant risk for corporate finance departments, given their control over the object of the fraudsters’ desire. Distributing a new code word to verify identities, perhaps daily or even per transaction, is one approach, says Stuart Madnick, professor of information technology at MIT Sloan School of Management. There are various ways to do so safely.
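As a rough illustration of the code-word approach (a sketch only, not a method Madnick prescribes), both sides could derive the day’s word from a secret shared out of band, so the word itself never has to travel over email, chat, or a call. The word list and secret below are hypothetical.

    import datetime
    import hashlib
    import hmac

    # Hypothetical word list; in practice it would be much longer.
    WORD_LIST = ["harbor", "willow", "granite", "falcon", "amber",
                 "juniper", "cobalt", "meridian"]

    def daily_code_word(shared_secret: bytes, day: datetime.date) -> str:
        """Both parties compute the same word for the same day, so the
        code word never needs to be transmitted where it could be stolen."""
        digest = hmac.new(shared_secret, day.isoformat().encode(),
                          hashlib.sha256).digest()
        index = int.from_bytes(digest[:4], "big") % len(WORD_LIST)
        return WORD_LIST[index]

    if __name__ == "__main__":
        secret = b"exchanged-in-person-or-via-a-secure-vault"  # hypothetical
        print(daily_code_word(secret, datetime.date.today()))

A per-transaction variant could hash a transaction ID instead of the date; the point is simply that the word is computed independently by both parties, not sent.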

When executives in corporate finance who deal with large fund transfers are well acquainted, they can test their voice or video counterparts by asking semi-personal questions. Madnick has asked alleged colleagues what their “brother Ben” thinks about an issue, when no such brother exists.

A clever but not permanent solution, Madnick cautions: “The trouble is that the AI will learn about all of your siblings.” Ultimately, all companies should use multifactor authentication (MFA), which bolsters security by requiring verification from multiple sources; most large companies have broadly implemented it. But even then, some critical departments may not consistently use MFA for certain tasks, notes Katie Boswell, US Securing AI leader at KPMG, leaving them susceptible.

“It’s important for corporate leadership to collaborate with their IT and technology teams to make sure that effective cybersecurity solutions, like MFA, are in the hands of those most likely to be exposed to deepfake attacks,” she urges.


Identifying Multifaceted Scams

Even with MFA, devious fraudsters can mine social media and other online resources, use AI to conjure authentic-looking invoices and other documents, and, combined with deepfake video or audio, build backstories persuasive enough to convince executives to make decisions they later regret. That makes training critical: executives handling large sums of money must be conditioned to automatically pause when they receive unusual requests and to demand additional verification.

“There should almost never be an immediate need to wire a large amount of money without first verifying through a known internal channel,” says Carpenter. An interlocutor who communicates over a private phone or email account is also problematic, especially if they resist moving the conversation to the company’s secure systems. Ploys like adopting a tone of urgency, authority, or high emotion are also red flags, “so it’s critical that people give themselves permission to pause and verify,” he says.
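As a hedged sketch of how that pause-and-verify discipline might be encoded in a payment workflow (the field names, channels, and dollar threshold are hypothetical, not drawn from Carpenter), a simple screen could hold any request that is large and unconfirmed, arrives outside company channels, or leans on urgency:

    from dataclasses import dataclass

    @dataclass
    class WireRequest:
        amount_usd: float
        channel: str                 # e.g. "company_email", "personal_whatsapp"
        urgent_language: bool        # "must go out today", "keep this quiet"
        verified_out_of_band: bool   # confirmed via a known internal channel

    # Hypothetical set of channels the company treats as its own systems.
    APPROVED_CHANNELS = {"company_email", "company_teams", "erp_workflow"}

    def requires_manual_verification(req: WireRequest,
                                     threshold: float = 10_000) -> bool:
        """Hold the transfer unless it is small, arrived through a company
        channel, carries no pressure tactics, and was confirmed out of band."""
        if req.amount_usd >= threshold and not req.verified_out_of_band:
            return True
        if req.channel not in APPROVED_CHANNELS:
            return True
        if req.urgent_language:
            return True
        return False

No rule set like this replaces human judgment; it simply forces the pause Carpenter describes.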

While two or more verifications help, companies must still ensure their verification sources are secure. Madnick recalls a client company losing money when a fraudster passed a phony check. Suspicious, the bank called the company’s corporate finance department to verify the transaction, but the fraudster had already instructed the phone company to reroute the department’s calls to a number they controlled, where they validated the check themselves.

“Companies can set up procedures with their phone company that require them never to reroute calls without further verification with the company,” Madnick says. “Otherwise, it’s at the discretion of the phone company.”

Given corporate finance’s allure for fraudsters, KPMG’s Boswell stresses the importance of keeping abreast of emerging threats. Since CFOs and other top finance leaders must focus on their immediate duties, they can’t be expected to read the latest research on deepfake attacks. But companies can establish policies and procedures that ensure IT or other experts regularly update them, raising finance’s awareness of the latest types of attacks, both internally and at other companies.

Madnick regularly asks corporate finance executives to raise their hands if they know their departments have faced cyberattacks. Many do not.


“The trouble is that cyberattacks on average continue over 200 days before they’re discovered,” he says. “So, they may think they haven’t experienced an attack, but they’re just not aware of it yet.”

Corporate finance can also include deepfake scenarios in its risk assessments, including tabletop exercises incorporated in the company’s security initiatives. And employees should be encouraged to report even unsuccessful attacks, or what they believe may have been attacks, that they might otherwise dismiss, Boswell advises.

“That way, others in the organization are aware that it has potentially been targeted, and what to look out for,” she says.

In addition, while C-suite executives at large companies may have significant public profiles, information available externally about lower-level executives and departments such as accounts payable and accounts receivable should be limited. “Threat actors use that type of information more frequently, using AI, to help manipulate targets through social engineering,” Boswell notes. “If they don’t have access to that data, they can’t incorporate it in attacks.”

Such precautions are only becoming more important as deepfake fraudsters broaden and deepen their reach. While the scams have spread fastest in major markets such as the US and Europe, countries whose languages are less widely spoken are increasingly exposed as well.

“Most criminals may not know Turkish, but what’s great about AI systems is that they can speak just about any language,” Madnick cautions. “If I were a criminal, I would target companies in countries that have been targeted less in the past, because they are probably less prepared.”
