As mortgage companies turn to artificial intelligence, questions about how fair lending rules apply to the technology are coming to the fore.
"There's a lot of noise around AI and fair lending," said Tori Shinohara, a partner at Mayer Brown. "If you look at the consensus of interagency pronouncements around the use of responsible AI or the White House blueprint for an AI Bill of Rights, those all have anti-discrimination components."
However, while there's guidance in this area, there hasn't been formal regulation, she noted.
"Federal regulators, including the prudential regulators, whenever they put guidance on AI, it almost always has some sort of anti-bias component to it. But in terms of true regulation, there isn't anything out there yet that is specifically regulating the use of AI in mortgage lending for anti-discrimination purposes," Shinohara said.
Whether there is formal regulation in this area on the horizon remains to be seen.
"I think the thought is the existing regulatory framework for fair lending: the Fair Housing Act, Equal Credit Opportunity Act, and state laws are sufficient to prevent discrimination in connection with AI and in mortgage lending or servicing, because they're so broad and cover discrimination as a result of a model in addition to discrimination as the result of an individual decision," she added.
These two federal laws are the ones mortgage companies may want to prioritize in their compliance efforts, but mortgage professionals should take note that public officials and agencies are looking at fair lending in new ways too.
"Both laws require any aspects of a credit transaction to be fair, and historically that was interpreted as being just underwriting and pricing," said Kareem Saleh, founder and CEO of Fairplay AI, in a separate interview. "But if you pay attention to the statements coming out of the federal regulators, there also now seem to be concerns about digital marketing and fraud."
This means more layers of potential scrutiny for fintech providers in areas where AI is being applied, such as customer outreach, Saleh said.
"I think that is a big consequence of this move toward alternative data and advanced predictive models," Saleh said. "As those systems are being used at more and more touchpoints in the customer journey, we're seeing fair lending risks and obligations grow commensurately."
It's scrutiny that could apply to servicers as well as originators as use cases emerge for AI to determine settlements, modification offers, or which customers to call, when, and how often, according to Saleh.
What some new dimensions of risk look like
To get a sense of where AI and fair lending rules veer into areas like marketing regulation and potential fraud allegations, consider the following examples. While these lie outside the traditional owner-occupied single-family mortgage market, the situations involved are applicable.
One cautionary tale to be aware of when it comes to compliance for generative AI, a type of machine learning that draws on patterns in the existing data it's fed to create new outputs, involves an Air Canada chatbot. (Several other airlines have used chatbots as well.)
In 2022, the chatbot responded to a consumer asking about a bereavement discount with a "hallucination": the AI interpreted the airline's data in such a way that it made an offer that didn't exist at the airline and that the airline didn't intend. Earlier this year, the British Columbia Civil Resolution Tribunal forced the airline to make good on the offer.
In the United States, that kind of development might lead to violations of laws against unfair, deceptive or abusive acts or practices, Shinohara said.
"I think those would equate to UDAAP concerns if there was something that was provided and was inaccurate, raising questions about whether the company is still on the hook for those types of miscommunications," Shinohara said.
The Consumer Financial Protection Bureau, Office of the Comptroller of the Currency and other prudential regulators enforce UDAAP, and the Federal Trade Commission enforces laws against unfair or deceptive acts and practices, which might also be relevant in such a circumstance.
Meanwhile, exemplifying the kind of new scrutiny of fair lending risks that might arise when AI gets used for marketing purposes is a recent missive the Department of Housing and Urban Development delivered to real estate agents and lenders.
HUD directed them to "carefully consider the source, and analyze the composition, of audience datasets used for custom and mirror audience tools for housing-related ads" in conjunction with recently issued guidance on advertising housing and credit through digital platforms.
Demetria McCain, principal deputy assistant secretary for fair housing and equal opportunity, warned in a related press release that "the Fair Housing Act applies to tenant screening and the advertising of housing," suggesting that officials are watching any customer outreach and approvals in this area for signs of redlining.
Marketing may currently be the bigger concern of the two for housing finance companies in the single-family owner-occupied market.
For now, borrower qualification and other core processes are determined primarily by the major government-related secondary market players, so mortgage companies in that business are most likely to confine AI, and any related compliance efforts, to customer outreach, according to Shinohara.
"I think there's more focused interest on adopting AI or machine learning tools for things like marketing and how you make your marketing dollars go further. In action with marketing, you run into risks, like digital redlining," she said. "If you've got tools that are being used to select who you're going to market to, and you are marketing credit products, you should look at whether those tools inadvertently exclude or only give preference to certain communities."
The path to compliant use of AI
The aforementioned examples of new scrutiny applied to AI-driven tools raise a key question: are newer technologies like generative AI better at addressing existing inequities than their predecessors, or are they further entrenching systemic biases?
"On the one hand, some of the disparate outcomes are likely the result of non-AI models, so you've kind of got a modernization issue," Saleh said. "But also behind some of the disparities are AI issues which basically encode the disparities that were the result of the conventional techniques to begin with, and so it's a very interesting time to be doing this work."
AI could be viewed as a constructive force in a lot of the advanced data analysis the government-sponsored enterprises are doing with the aim of safely opening up the underwriting or marketing box in ways that could make lending more equitable.
"In theory, that should allow you to paint a kind of a finer portrait of a borrower, or the ability and willingness to repay a loan," Saleh said.
But with AI currently confined to limited use in the customer experience, and with other challenges to qualifying for a loan persisting in the market, applying AI to the point where it actually allows lenders to extend more loans to more people in an equitable manner is tricky.
"There are a lot of headwinds to write related to affordability in particular. So it's a tough time to do fair lending, because on the one hand, you've got more resources than ever on the other hand, the macroeconomic environment is kind of working against you."
How to address 'the compliance officer's lament'
When asked how a mortgage company can best address the aforementioned challenges, Saleh said, "This is the compliance officer's lament, which is: what do you want me to do? If I don't do things exactly to the letter, am I going to get in trouble?"
Doing things to the letter may not even be possible, because the regulators themselves face a conundrum when it comes to giving companies guidance that's too specific.
"There have been a lot of requests from the industry for more guidance and I think in some ways, the regulators have wanted to give more guidance. However, in other ways, they've been reluctant because they want to maintain their optionality," he added. "They're concerned that if they give guidance that's too specific that people will game the system."
So the industry is left to navigate what Saleh calls a "strategic ambiguity."
"The thing about judgment is that you can always be second guessed, but if you can document that you take fairness seriously and why you feel the approach you've chosen doesn't pose a threat to the consumers that you serve, I think that is your best option," Saleh said.
Because legacy data that fuels generative AI may be biased and its outputs have to be watched for hallucinations, the answer to how to make it a constructive and compliant tool may be ongoing monitoring, a phrase common in consent orders.
The approach is in line with what Saleh suggests: applying analytics, which may themselves be AI-driven, that can be examined on a regular basis such as monthly, or even more frequently where unpredictable generative models are utilized.
Although the aforementioned ambiguity from regulators and the opt-in nature of borrower information around race can be hurdles to building the kind of robust fair-lending data sets that AI has the capacity to help ingest, Saleh advises doing so. He also advised keeping in mind that regulators generally want an understanding and explanation of any model used, no matter how complex it is, as HUD noted in its aforementioned directive.
"Have the benefit of evidence that's informed by data so that you can comply and explain," Saleh said.
Adjustments may not be necessary each time the statistics get examined as aberrations may occur in the short-term. But if counterproductive rather than productive patterns start to appear regularly in analyses, they need to be addressed, he said.
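That distinction between a short-term aberration and a regular pattern can be encoded directly in a monitoring job. The sketch below, with a hypothetical metric, threshold and review window, escalates only after a fairness ratio stays below its floor for several consecutive monthly reviews.

```python
# Minimal sketch of the "regular pattern vs. short-term aberration" rule:
# escalate only when a monthly fairness metric breaches its floor for
# several consecutive reviews. The metric, threshold and window are
# placeholders for whatever a compliance team actually tracks.
monthly_ratio = [0.91, 0.78, 0.88, 0.76, 0.74, 0.72]  # e.g., approval-rate ratio
THRESHOLD = 0.80
WINDOW = 3  # consecutive breaches before escalating

streak = 0
for month, ratio in enumerate(monthly_ratio, start=1):
    streak = streak + 1 if ratio < THRESHOLD else 0
    if streak >= WINDOW:
        print(f"month {month}: {streak} consecutive breaches; escalate for model review")
```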
"I think a key part of what originators can do to navigate this environment gets back to saying, 'Hey, we're going to monitor frequently to make sure that these models and our decisions are performing reasonably and don't pose a threat to consumers," Saleh said.