On Wednesday, a bureau of the Department of the Treasury warned that deepfakes are playing an increasing role in fraud that targets banks and credit unions.
The Financial Crimes Enforcement Network (FinCEN) issued an alert designed to help financial institutions identify fraud schemes associated with deepfake media, joining a rising chorus of government agencies warning of the threats posed by deepfakes.
Common definitions of deepfakes encompass AI-generated videos, images, audio, and text that convincingly mimic real people or authentic documents.
Alerts such as the one FinCEN released this week typically precede reports documenting the extent of the impact the subject (in this case, deepfakes) has on financial institutions, helping to quantify various risks.
While no data currently exists to quantify the financial impact of deepfakes on U.S. financial institutions, anecdotal evidence and warnings from law enforcement suggest they pose a major threat. Last year, the FBI, National Security Agency (NSA) and Cybersecurity and Infrastructure Security Agency (CISA) jointly warned organizations about the security threats posed by deepfakes.
Bad actors are actively exploiting deepfake technology to defraud U.S. businesses and consumers, according to Andrea Gacki, director of FinCEN.
"Vigilance by financial institutions to the use of deepfakes, and reporting of related suspicious activity, will help safeguard the U.S. financial system and protect innocent Americans from the abuse of these tools," Gacki said.
While deepfakes have existed for years, they have become far more convincing, and far easier to produce, as generative AI tools have matured.
"Deepfakes have gotten more sophisticated — not to mention easier to create — over the years," Gupta said. "Today, a hacker can manipulate a person's voice using just seconds of audio."
Indeed, deepfake audio has already been used to impersonate company executives and dupe employees into transferring funds to fraudsters.
In its alert, FinCEN warned about deepfake audio, but it also highlighted that fraudsters can manipulate and synthesize images, and even live video, of a person's face or identity documents. Banks sometimes rely on these live verification checks to authenticate users.
While the methods for generating these deepfakes are often advanced, they can leave behind artifacts that banks and credit unions can use to detect the use of generative AI (GenAI). For example, a customer's photo might contain internal inconsistencies (visual tells that the image has been altered), conflict with other identifying information (such as the customer's date of birth), or contradict other identity documents belonging to the customer.
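FinCEN's alert describes these cross-document checks without prescribing any tooling, but the core of the idea can be reduced to a field-comparison pass over extracted document data. The sketch below is purely illustrative; the CustomerRecord and ExtractedDocument types and the find_inconsistencies helper are hypothetical names, not part of any FinCEN guidance or vendor API.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CustomerRecord:
    """What the institution already knows about the customer."""
    name: str
    date_of_birth: date

@dataclass
class ExtractedDocument:
    """Fields pulled (e.g., via OCR) from a submitted identity document."""
    doc_type: str
    name: str
    date_of_birth: date

def find_inconsistencies(record: CustomerRecord,
                         documents: list[ExtractedDocument]) -> list[str]:
    """Flag mismatches between the customer record and submitted documents."""
    flags = []
    for doc in documents:
        if doc.name.strip().lower() != record.name.strip().lower():
            flags.append(f"{doc.doc_type}: name does not match customer record")
        if doc.date_of_birth != record.date_of_birth:
            flags.append(f"{doc.doc_type}: date of birth does not match record")
    # The submitted documents should also agree with one another.
    if len({doc.date_of_birth for doc in documents}) > 1:
        flags.append("submitted documents disagree on date of birth")
    return flags

# Example: an altered license whose printed DOB conflicts with the account.
record = CustomerRecord("Jane Doe", date(1985, 3, 14))
docs = [ExtractedDocument("drivers_license", "Jane Doe", date(1991, 7, 2))]
for flag in find_inconsistencies(record, docs):
    print("REVIEW:", flag)
```

In practice, any single mismatch would trigger manual review rather than an automatic rejection, since OCR errors and legal name changes produce false positives.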
FinCEN highlighted other red flags suggesting a fraudster is using deepfake technology. For example, the "customer" might use a third-party webcam plugin during a live verification check (indicating they may be generating the live images with software rather than an actual video feed), or might attempt to change communication methods mid-verification, citing supposed glitches. A reverse-image lookup might also match the customer's face to an online gallery of GenAI-produced faces.
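As a rough illustration of how such red flags might be codified, here is a minimal rule-based scorer. The VerificationSession fields and the RED_FLAG_RULES table are assumptions made for the sketch; a real system would derive these signals from the institution's actual verification stack.

```python
from dataclasses import dataclass

@dataclass
class VerificationSession:
    used_virtual_camera: bool       # third-party webcam plugin detected
    changed_channel_midcheck: bool  # switched methods, citing glitches
    reverse_image_hit: bool         # face matched a known GenAI face gallery

# Each rule pairs a predicate with the red flag it represents.
RED_FLAG_RULES = [
    (lambda s: s.used_virtual_camera,
     "third-party webcam plugin in use during live check"),
    (lambda s: s.changed_channel_midcheck,
     "attempted to change communication method, citing glitches"),
    (lambda s: s.reverse_image_hit,
     "reverse-image lookup matched a gallery of GenAI-produced faces"),
]

def score_session(session: VerificationSession) -> list[str]:
    """Return every red flag raised during the live verification check."""
    return [reason for check, reason in RED_FLAG_RULES if check(session)]

session = VerificationSession(used_virtual_camera=True,
                              changed_channel_midcheck=False,
                              reverse_image_hit=True)
flags = score_session(session)
if flags:
    print(f"Escalate for manual review ({len(flags)} red flags):")
    for f in flags:
        print(" -", f)
```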
Red flags that generally apply to fraud schemes also apply to deepfake schemes. For example, the customer's geographic or device data may be inconsistent with their identity documents, or a newly opened account (or one with little prior transaction history) may suddenly see high payment volumes to potentially risky payees, such as gambling websites or digital asset exchanges.
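The transaction-side check could look something like the sketch below, assuming hypothetical thresholds: the 30-day window and $10,000 figure are placeholders, not numbers from the alert.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Payee categories the alert singles out as potentially risky.
RISKY_PAYEE_CATEGORIES = {"gambling", "digital_asset_exchange"}
# Hypothetical thresholds, not figures from the alert.
NEW_ACCOUNT_WINDOW = timedelta(days=30)
HIGH_VOLUME_THRESHOLD = 10_000.00  # USD

@dataclass
class Payment:
    amount: float
    payee_category: str

def flag_new_account_activity(opened: date, today: date,
                              payments: list[Payment]) -> bool:
    """Flag newly opened accounts sending high volumes to risky payees."""
    if today - opened > NEW_ACCOUNT_WINDOW:
        return False  # account is no longer "new" under this rule
    risky_total = sum(p.amount for p in payments
                      if p.payee_category in RISKY_PAYEE_CATEGORIES)
    return risky_total >= HIGH_VOLUME_THRESHOLD

payments = [Payment(6_500, "gambling"), Payment(4_200, "digital_asset_exchange")]
print(flag_new_account_activity(date(2024, 11, 1), date(2024, 11, 13), payments))
# True -> escalate alongside the identity and device-data checks above
```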
Whenever a financial institution files a suspicious activity report (SAR) involving deepfakes, FinCEN requests that it include the key term "FIN-2024-DEEPFAKEFRAUD" in SAR field 2 ("Filing Institution Note to FinCEN") to ensure the report is captured in the data analysis the bureau is expected to release.
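An institution's SAR-preparation tooling could enforce that tagging mechanically, as in the sketch below. The build_filer_note helper is hypothetical; only the key term itself and the field 2 label come from FinCEN's alert.

```python
# Key term specified in FinCEN's alert for deepfake-related SARs.
DEEPFAKE_KEY_TERM = "FIN-2024-DEEPFAKEFRAUD"

def build_filer_note(narrative: str, involves_deepfakes: bool) -> str:
    """Compose SAR field 2 ("Filing Institution Note to FinCEN"),
    prepending the deepfake key term when the scheme involves one."""
    if involves_deepfakes and DEEPFAKE_KEY_TERM not in narrative:
        return f"{DEEPFAKE_KEY_TERM} {narrative}".strip()
    return narrative

print(build_filer_note("Customer selfie matched a GenAI face gallery.", True))
# -> FIN-2024-DEEPFAKEFRAUD Customer selfie matched a GenAI face gallery.
```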