While there is clearly tremendous excitement and optimism surrounding AI in the mortgage industry, there is also real fear about what it could mean for the privacy of consumers' personal and financial information.
This fear is not unfounded. A team of researchers at Carnegie Mellon University studying actual AI privacy incidents across industries found that AI exacerbated existing privacy risks and created new privacy risks across 12 different categories.
The highest AI and privacy risks for mortgage companies fall into two categories:
- Unintentional disclosure of consumer data. This can occur directly, as the result of a breach or when a mortgage company's data is inadvertently included in widely used AI training sets, or indirectly, through AI's ability to infer additional information about an individual from a limited set of data points (see the sketch after this list).
- Secondary use, in particular when customer data is used to train an AI application and that AI is then used for purposes the consumer never consented to, such as affiliate marketing or the training of other financial products. For example, two years ago an artist in California found private medical photos of herself in a set of images used to train many of the largest AI image generators. A resulting investigation identified thousands of other patients' medical photos in the same data set. Imagine that instead of photos, an individual's default history, credit report or an assessment of their likelihood of default found its way into a data set used to train a proprietary AI.
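To make the indirect path concrete, here is a minimal sketch of how even a trivial rule can combine a few innocuous data points into a sensitive inference. The borrower records, field names and thresholds are all hypothetical, chosen only to illustrate the mechanism:

```python
# Illustrative sketch only: how a few non-sensitive data points can be
# combined into a sensitive inference. All names, fields and thresholds
# here are hypothetical.

borrowers = [
    {"id": "A-001", "zip": "94110", "late_payments_12mo": 0, "credit_utilization": 0.22},
    {"id": "A-002", "zip": "94110", "late_payments_12mo": 2, "credit_utilization": 0.91},
]

def infer_financial_distress(b: dict) -> bool:
    """Naive rule combining limited data points into a sensitive label.

    Neither input field is especially sensitive on its own; the
    inference drawn from their combination is.
    """
    return b["late_payments_12mo"] >= 2 and b["credit_utilization"] > 0.8

for b in borrowers:
    if infer_financial_distress(b):
        # This inferred label is new personal information the borrower
        # never disclosed: the "indirect" disclosure described above.
        print(f"{b['id']}: inferred financial distress")
```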
Regulatory Risk
From a regulatory standpoint, mortgage companies must comply with two primary federal privacy laws. The first is the Gramm-Leach-Bliley Act (GLBA), which requires financial institutions to explain their information-sharing practices to consumers and to safeguard nonpublic personal information.
The second is the Fair Credit Reporting Act (FCRA), which governs how consumer credit information is collected, shared and used.
There is also a third privacy regulation that mortgage servicers, in particular, should take into consideration: the Fair Debt Collection Practices Act (FDCPA), which restricts how, and with whom, those collecting on delinquent loans may communicate about a borrower's debt.
In addition to these federal laws, state regulations can place additional requirements on mortgage companies to protect their borrowers' personal information. The California Consumer Privacy Act (CCPA), for example, grants California residents specific rights over how businesses collect and use their personal information, and a growing number of states have enacted similar laws.
Reputational Risk
Obviously, a major data breach can create lasting reputational damage, eroding the trust that borrowers place in their lender or servicer.
AI has an unprecedented ability to create detailed profiles of individuals by aggregating and analyzing data at scale, and consumers can experience that profiling as invasive even when no breach ever occurs.
AI can also infer the answers to sensitive questions based on limited data. It is easy to imagine how a well-meaning attempt to anticipate which borrowers may need loss mitigation assistance could result in invasive and inaccurate assumptions, such as flagging current, performing customers as "at risk of default." These assumptions are reminiscent of the concept of "pre-crime" in the novella and film Minority Report, in which authorities use psychic technology to accuse people of crimes before any crime is committed. The parallel is drastic science fiction, but it drives home the concern with well-meaning anticipatory assumptions. Within the mortgage industry, acting on such inferences could easily result in fair servicing violations if, for example, they prompted repeated outreach attempts to borrowers based on protected characteristics.
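The following sketch illustrates the anticipatory-flagging problem with a deliberately naive scoring rule. Every field, weight and threshold is invented; the point is how easily a proxy feature like ZIP code can pull protected characteristics into an "at risk" label:

```python
# Hypothetical sketch of the "pre-crime" problem described above: a
# well-meaning early-warning rule that flags current, performing
# borrowers as "at risk of default". All fields and weights are invented.

customers = [
    {"id": "B-101", "months_current": 36, "zip": "10001", "recent_credit_inquiries": 4},
    {"id": "B-102", "months_current": 60, "zip": "60601", "recent_credit_inquiries": 0},
]

# ZIP code correlates strongly with race and national origin, so using
# it as a risk signal can quietly encode protected characteristics.
HIGH_RISK_ZIPS = {"10001"}  # hypothetical

def at_risk_of_default(c: dict) -> bool:
    score = 0
    score += 2 if c["zip"] in HIGH_RISK_ZIPS else 0  # proxy feature: danger
    score += c["recent_credit_inquiries"]
    return score >= 4

for c in customers:
    if c["months_current"] >= 12 and at_risk_of_default(c):
        # A performing customer is now labeled "at risk" on the basis of
        # inference, not behavior; outreach driven by this flag could
        # raise fair servicing concerns.
        print(f"{c['id']}: flagged 'at risk of default' despite being current")
```

Note that the flag rests entirely on inferred signals, not on anything the performing borrower has actually done.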
Business Risk
Many AI vendors today offer services that use one of the major AI tools (e.g., ChatGPT, Google Gemini) as the underlying system. Companies will want to ensure that their proprietary business data isn't used to train the model, unless they are comfortable with that data becoming available to their competitors.
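One partial safeguard, sketched below under the assumption of a simple text-based integration, is to scrub identifying data before it ever leaves your systems. The regex patterns and the send_to_vendor_model placeholder are hypothetical, not any vendor's actual API:

```python
import re

# Minimal sketch of one safeguard when a vendor product is a "bolt-on"
# over a third-party model: redact identifying business and customer
# data before sending it out. Patterns and functions are hypothetical.

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
LOAN_NUMBER = re.compile(r"\bLN-\d{8}\b")  # hypothetical internal format

def scrub(text: str) -> str:
    text = SSN.sub("[REDACTED-SSN]", text)
    text = LOAN_NUMBER.sub("[REDACTED-LOAN]", text)
    return text

def send_to_vendor_model(prompt: str) -> str:
    # Placeholder for the actual vendor API call. Contractual terms,
    # such as a no-training clause, still matter; redaction alone is
    # not sufficient.
    return f"(vendor response to: {prompt!r})"

raw = "Borrower SSN 123-45-6789 on loan LN-00042137 asked about forbearance."
print(send_to_vendor_model(scrub(raw)))
```

Redaction of this kind complements, rather than replaces, contractual no-training clauses and the vendor due-diligence questions below.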
Before employing AI, lenders and servicers should, at the very minimum, have clear and definitive answers to the following questions:
For your organization:
- Does my organization have an internal AI governance framework in place to prevent unauthorized use of AI models?
- What are our obligations with respect to the use of an AI model trained on our customers' data?
- Is privacy a priority in our organization's software development process?
For your AI vendor:
- Is the AI that we are considering a proprietary AI system or a third-party, bolt-on model?
- What data was used to train the AI?
- Will our company's data be used to train the AI model?
- What are the vendor's information security policies?
- What options does the vendor offer to keep our customers' data segregated from the broader model?
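One practical way to operationalize these questions is to record each vendor's answers as structured, auditable data rather than scattered meeting notes. The sketch below is a minimal illustration; the field names and the ExampleAI answers are hypothetical:

```python
from dataclasses import dataclass

# Record vendor due-diligence answers as structured data so the review
# is auditable. Fields mirror the questions above; answers are invented.

@dataclass
class AIVendorAssessment:
    vendor: str
    proprietary_model: bool        # or a bolt-on over a third-party model
    training_data_disclosed: bool  # "What data was used to train the AI?"
    trains_on_our_data: bool       # should usually be False
    infosec_policy_reviewed: bool
    data_segregation_offered: bool

    def acceptable(self) -> bool:
        return (self.training_data_disclosed
                and not self.trains_on_our_data
                and self.infosec_policy_reviewed
                and self.data_segregation_offered)

assessment = AIVendorAssessment(
    vendor="ExampleAI",  # hypothetical vendor
    proprietary_model=False,
    training_data_disclosed=True,
    trains_on_our_data=False,
    infosec_policy_reviewed=True,
    data_segregation_offered=True,
)
print(assessment.acceptable())
```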
Whether, as some observers expect, AI fundamentally reshapes the mortgage industry or proves to be a more incremental tool, the companies best positioned to benefit will be those that adopt it with their customers' privacy firmly in mind.