Treasury warns banks deepfake fraud is on the rise


On Wednesday, a bureau of the Department of the Treasury warned that deepfakes are increasingly playing a role in fraud targeting banks and credit unions.

The Financial Crimes Enforcement Network (FinCEN) issued an alert designed to help financial institutions identify fraud schemes associated with deepfake media, joining a rising chorus of government agencies warning about the threats posed by deepfakes.

Common definitions of deepfakes encompass AI-generated videos, images and audio. In Wednesday's alert, FinCEN also included text under its definition, focusing on deepfakes that fraudsters might use to mislead a bank about their identity. These include manipulated photos of identity documents and AI-generated text in customer profiles or in responses to prompts.

Alerts such as the one FinCEN released this week typically precede reports documenting the extent of the subject's impact (in this case, deepfakes) on financial institutions, helping to quantify various risks. For example, FinCEN released an analysis in September, following an alert last year, detailing exactly how criminals are stealing money from banks and customers using check fraud.

While no data currently exists to quantify the financial impact of deepfakes on U.S. financial institutions, anecdotal evidence and warnings from law enforcement suggest they pose a major threat. Last year, the FBI, National Security Agency (NSA) and Cybersecurity and Infrastructure Security Agency (CISA) released a joint report documenting the impacts deepfakes can have on various organizations.

Bad actors are actively exploiting deepfake technology to defraud U.S. businesses and consumers, according to Andrea Gacki, director of FinCEN.

"Vigilance by financial institutions to the use of deepfakes, and reporting of related suspicious activity, will help safeguard the U.S. financial system and protect innocent Americans from the abuse of these tools," Gacki said.

While deepfakes have existed for years, they have become more prominent recently thanks to advances in AI technology that make them more convincing and to products that make the technology more widely available, according to Rijul Gupta, CEO and co-founder of AI communications company DeepMedia.

"Deepfakes have gotten more sophisticated — not to mention easier to create — over the years," Gupta said. "Today, a hacker can manipulate a person's voice using just seconds of audio."

Indeed, deepfake audio has recently become a particular concern for banks, especially those that use voiceprinting technology to authenticate customers by their voices. Even when banks do not authenticate customers using voiceprints, companies that specialize in deepfake detection have observed AI-generated audio being used against banks' call centers to trick employees.

In its alert, FinCEN warned about audio deepfakes but also highlighted that fraudsters can manipulate and synthesize images, and even live video, of a person's face or identity documents. Banks sometimes use live verification checks of this kind to authenticate users.

While the methods for generating these deepfakes are often advanced, they can leave artifacts that banks and credit unions can use to detect the use of generative AI (GenAI). For example, a customer's photo might contain internal inconsistencies (visual tells that the image has been altered), be inconsistent with other identifying information (such as the customer's date of birth), or conflict with other identity documents belonging to the customer.
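
To make the cross-checking idea concrete, here is a minimal sketch in Python. The `IdDocument` type and all field names are hypothetical; a real onboarding system would extract these values from document images (for example, via OCR) and apply far richer checks.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class IdDocument:
    # Fields as a verification system might extract them from a submitted ID image.
    full_name: str
    date_of_birth: date
    document_number: str

def consistency_flags(profile_dob: date, docs: list[IdDocument]) -> list[str]:
    """Flag mismatches between a customer's profile and their submitted documents."""
    flags = []
    for doc in docs:
        if doc.date_of_birth != profile_dob:
            flags.append(f"DOB on document {doc.document_number} conflicts with the profile")
    # A fraudster reusing generated documents may not keep names consistent.
    if len({d.full_name.strip().lower() for d in docs}) > 1:
        flags.append("Name differs across submitted identity documents")
    return flags
```

Checks like these only catch the second and third classes of inconsistency FinCEN describes; spotting visual tells within a single altered image requires image-forensics tooling.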

FinCEN highlighted other red flags that a fraudster is using deepfake technology. For example, the "customer" might use a third-party webcam plugin during a live verification check (suggesting software is creating the live images rather than an actual camera feed), or might attempt to change communication methods during a live verification, citing supposed glitches. A reverse-image lookup of the customer's photo might also match an online gallery of GenAI-produced faces.
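
The reverse-image-lookup idea can be approximated locally with perceptual hashing. The sketch below assumes a bank maintains its own gallery of known GenAI-produced face images and uses the open-source ImageHash library; the directory layout and distance threshold are illustrative assumptions.

```python
from pathlib import Path
from PIL import Image          # pip install Pillow
import imagehash               # pip install ImageHash

def matches_genai_gallery(photo_path: str, gallery_dir: str, max_distance: int = 8) -> bool:
    """Return True if the submitted photo is perceptually close to a known GenAI face."""
    candidate = imagehash.phash(Image.open(photo_path))
    for known in Path(gallery_dir).glob("*.png"):
        # Subtracting two image hashes yields their Hamming distance.
        if candidate - imagehash.phash(Image.open(known)) <= max_distance:
            return True
    return False
```

Commercial reverse-image services query far larger indexes, but the underlying comparison is similar.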

Red flags that generally apply to fraud schemes also apply to deepfake schemes. For example, the customer's geographic or device data may be inconsistent with their identity documents, or a newly opened account, or one with little prior transaction history, may suddenly see high payment volumes to potentially risky payees, such as gambling websites or digital asset exchanges.
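
A fraud-monitoring system might encode those generic signals as simple rules. In this sketch, the payee categories, thresholds, and payment record shape are all assumptions for illustration, not values from FinCEN's alert.

```python
RISKY_PAYEE_CATEGORIES = {"gambling", "digital_asset_exchange"}  # assumed labels

def account_red_flags(id_country: str, device_country: str,
                      account_age_days: int, payments: list[dict]) -> list[str]:
    """Apply two of the generic red flags FinCEN describes as simple rules."""
    flags = []
    if id_country != device_country:
        flags.append("Geographic/device data inconsistent with identity documents")
    risky_total = sum(p["amount"] for p in payments
                      if p.get("payee_category") in RISKY_PAYEE_CATEGORIES)
    # Illustrative thresholds: 30 days and $10,000 are not from the alert.
    if account_age_days < 30 and risky_total > 10_000:
        flags.append("New or quiet account with high payment volume to risky payees")
    return flags
```

In practice, rules like these would feed a case queue for human review rather than trigger automatic action.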

Whenever a financial institution files a suspicious activity report (SAR) involving deepfakes, FinCEN requests that it include the key term "FIN-2024-DEEPFAKEFRAUD" in SAR field 2 ("Filing Institution Note to FinCEN") to ensure the report is included in the data analysis the bureau is expected to release.
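
For illustration, a filing workflow might attach the key term programmatically before submission. The helper below is a hypothetical sketch, not part of any official filing interface; actual SARs are submitted through FinCEN's BSA E-Filing System, and only the key term itself comes from the alert.

```python
FINCEN_DEEPFAKE_KEY_TERM = "FIN-2024-DEEPFAKEFRAUD"  # key term specified in the alert

def build_field2_note(existing_note: str, deepfake_related: bool) -> str:
    """Prepend the deepfake key term to SAR field 2 when applicable."""
    if deepfake_related and FINCEN_DEEPFAKE_KEY_TERM not in existing_note:
        return f"{FINCEN_DEEPFAKE_KEY_TERM} {existing_note}".strip()
    return existing_note
```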

