How AI-powered scams are targeting real estate deals


The real estate industry is facing a $173 million problem. That, according to the FBI, is how much consumers lost to real estate fraud in 2023. It's a 19% increase from the previous year, and many analysts expect that number to keep rising in the coming years, spurred on in part by AI.

AI is reshaping how mortgage companies do business, powering personal assistants and customer chatbots, and even delivering more accurate appraisals. But it's also opening new doors for fraudsters.

New tools of AI-driven fraud

AI is making business faster and smarter — but it's doing the same for fraud.

"AI has helped criminals when it comes to scale, speed, and sophistication," said Elizabeth Blosser, chief strategy, communications and innovation office at the American Land Title Association. "They are able to more quickly target more people and make it look more legitimate."

Phishing and fake email scams are nothing new. But advances in generative AI like ChatGPT have made it easier for fraudsters to pen convincing emails and text messages that recipients won't question.

Before, a criminal might need to know the lingo or writing style in order to craft a believable email. But now, AI can whip up a well-written fake in seconds. This can be anything from creating a more authentic-sounding text message to scanning LinkedIn and copying the writing style of a specific CFO.

All of this has made it harder for everyone – buyers, sellers, and agents – to know whether an email is legitimate or not.

Voice cloning and deepfakes

AI isn't just writing better fake emails – it's enabling fake phone calls and videos.

With new voice cloning apps, users can make a realistic copy of someone's voice with just a few seconds of audio. This is one of the biggest concerns for Sarah Frano, vice president and real estate fraud risk expert at First American Title Insurance Company.

"Fraudsters can use voice cloning to call a transaction participant and provide them with fake wire instructions to misdirect the money to the fraudsters' bank accounts," she said.

Videocalls aren't safe, either. New deepfake technology means criminals can take just a few images and turn them into a synthetic virtual "mask" that they can use on video calls.

This is opening up new avenues for title fraud, in which criminals either forge documents or pretend to be the owner of a home in order to trick buyers into paying them for a property that's not actually for sale. Many experts worry that the use of AI deepfakes will make it harder to know if the person on the other side of the Zoom call is the real owner.

Some criminals have already tried this. In Florida last year, scammers created a deepfake video in order to impersonate a seller on a Zoom call. At first, the "seller" on the screen looked like the same elderly woman from the ID documents. The title agent caught on to the scheme when she realized it was just a looping video and the "seller" didn't respond to her questions.

While a quick eye and good intuition may have foiled the fraudsters in that instance, experts say the technology is evolving fast, and criminals are looking for any advantage they can get.

Andy White is co-founder and CEO of Closinglock, a digital platform where users can close real estate transactions securely. He has seen first-hand how much AI has advanced in just a few years, and how criminals are trying to use it to defraud buyers. In one case, a scammer used AI to generate side images based on a stolen driver's license. The goal, White said, was to beat the selfie verification that Closinglock uses to match a user with their ID (White said that their system wasn't fooled and blocked the transaction).

"There's no end to what fraudsters will try to do to get through these systems," he said.

Risks to consumers and companies

Despite the risks, some companies are still behind in shoring up their cyber defenses. Arnel Manalo, chief security officer & VP assurance Americas at cybersecurity firm ConvergentDS, said companies need to start taking these threats seriously rather than treating them as a "boogeyman in the closet."

"It's not just a cyber risk," Manalo said. "It's a business risk."

Cyberattacks can be expensive for companies, with the average fraud case costing around $143,000, according to the ALTA. Typically, whoever transfers the funds, whether it's the homebuyer or a lender, is out the money. But in some cases, consumers have sued title companies and lenders for failing to protect them. In 2021, a couple in Ohio sued their title company and realtor after they lost more than $289,000 in a wire fraud case, claiming that the firms were negligent and did not do enough to keep them safe.

"We've seen companies unfortunately go out of business," White said, citing one company in Utah that closed down after a seven-figure loss.

How the industry is fighting back

Security experts recommend that companies take a two-pronged approach to cyberdefense, combining human checks with better technology. If a title agent or real estate broker suspects the person on the other end of a Zoom call is fake, for instance, they can ask them to put their hand in front of their face or stand up and turn around in order to disrupt any AI filters.

If something seems off, Manalo suggests calling or texting the person on the videocall to make sure it's really them, a kind of real-life two-factor authentication. Colleagues may even want to have passwords or security questions they can ask to ensure the person they're talking to is really who they appear to be.

There's also a range of new technology that mortgage and title companies can use to keep transactions safe. Verisoul's Email Deep Research program can help verify a seller's email address and weed out possible scammers, while Closinglock offers a single platform where buyers and sellers can prove their identities, sign documents, and transfer money in one secure place. Both systems use biometric facial scanning that is hard for AI deepfakes to fool – at least for now.

Still, everyone agrees that it's an arms race between the companies trying to keep transactions safe and the criminals who want to steal the money. Joey Maddox, chief strategy officer at Verisoul, describes it as a game of whack-a-mole, with companies like his always trying to stay one step ahead.

"If you don't do these things, you will eventually be targeted if you're not already being targeted," he said. "And the issue is you might never know because you're not even trying to detect it."

With new technology making it harder than ever to trust what we read, hear, or see, the best strategy might be to bring a dose of healthy skepticism to every online transaction. Sarah Frano recommends that everyone take a "defensive posture" and verify everything, rather than rely on gut instinct.

"Assume someone in the transaction is compromised," she said, "and act accordingly."

Best practices for spotting AI fraud

  • If using videocalls for closings, ask clients to raise their arms, stand up, or move around. This can help ensure you're seeing a live video and that the person isn't using a deepfake filter.
  • If a video or email seems suspicious or "off", use a second method of contact, such as a phone call or text message, to confirm the sender's identity.
  • Utilize knowledge-based verification by asking the person questions only they are likely to know the answer to. This isn't just for clients — if you receive an unusual request from a colleague (such as for passwords or money transfers), you can use this to confirm their identity then, too.
  • Use two-factor authentication for emails and other systems to keep them secure from hacks.
  • There are many new systems that can help verify identities and keep closings secure, including Proof, Verisoul, and Closinglock. Invest in one of these platforms and use it in every transaction.
  • Set up clear protocols and follow them. Everyone in the transaction — buyers, sellers, and agents — should know what to look for and what steps to take to keep the transaction safe.
