How banks' 'hidden' AI could cause problems

Artificial intelligence seems to be everywhere. Sometimes it is even hidden in plain sight.

Technologies and processes that banks rely on, including customer service call transcription, marketing tools, credit decisioning, cybersecurity tools and fraud prevention, may incorporate AI in ways not every user or employee at the bank understands. Other products are in a gray or "it depends" area, such as chatbots, which can be static with pre-programmed questions or more conversational.

The way generative AI burst onto the scene less than three years ago means "anyone with access to the internet today can get access to tools like ChatGPT or Google's Gemini, for free and with tremendous processing power they couldn't have had before," said Chris Calabia, a senior advisor to the Alliance for Innovative Regulation. "It's possible your staff is experimenting with ChatGPT to help them write reports and analyze data."

These "hidden" aspects of AI matter because banks must be aware of where AI is embedded in their operations and where it is not. A wave of legislation is closing in on the risks of AI, notably Colorado's Consumer Protections for Artificial Intelligence in the U.S. and the Artificial Intelligence Act in the European Union.

"Banks need to pay attention and have a definition that aligns with those regulations or they could find themselves afoul of being able to meet them," said Scott Zoldi, the chief analytics officer at FICO.

There is also the question of maintaining customer trust and ensuring responsible usage.

When deploying AI, "there has to be a parallel process to make sure you've got the right guardrails, compliance, and risk governance, so you're not developing solutions that will be toxic or infringe on personally identifiable information," said Larry Lerner, a partner at McKinsey.

Understanding what AI is

The history of AI in banking stretches back decades.

Basic AI-type systems, then called "expert systems," existed in financial services as early as the 1980s, said Calabia, helping financial planners devise plans tailored to individuals' needs.

"These systems were designed to mimic human decision-making processes," he said.

As AI has evolved, so have its definitions. Even now, pinning down a common understanding of AI is hard.

"People talk about AI when they mean software or analytics," said Zoldi.

The October 2023 White House Executive Order on AI defines it as a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. 

The U.S. Treasury's March report on AI and cybersecurity highlights the problem of identifying and defining AI in the banking system, said Rafael DeLeon, senior vice president of industry engagement at Ncontracts, a risk performance management software provider.

The report notes that "there is no uniform agreement among the participants in the study on the meaning of 'artificial intelligence,'" and while the White House's definition "is broad, it may still not cover all the different concepts associated with the term 'artificial intelligence.'"

The report also recognizes that the terms are often conflated.

"Recent commentary around advancements in AI technology often uses 'artificial intelligence' interchangeably with 'Generative AI,'" it notes.

Without a common lexicon, banks may struggle to assess and manage the risks associated with AI systems, comply with emerging AI-related regulations, communicate effectively with regulators and third-party vendors about AI use, and make informed decisions about AI adoption and implementation, said DeLeon.

"AI is a wave that has taken us over and now we are trying to swim our way to the top," said DeLeon.

At FICO, Zoldi defines AI as any process or software that can perform a task at superhuman levels. Machine learning is a subclass of AI that, unlike traditional AI, is not explicitly programmed by humans. It refers to algorithms that self-learn relationships in data and are not necessarily explainable or transparent.

"Very often when people say AI in regulatory circles, they are talking about machine learning and models that learn for themselves," said Zoldi.

Whether they are using traditional AI, generative AI or machine learning, Zoldi finds, "some banks are not in a good position to explain models to a certain level of scrutiny that would meet credit regulations that exist today."  

Although generative AI is all the rage, "it's a very small fraction of what banks use," said Zoldi. "Underneath the hood, 90 to 95% of AI in banks are models that use neural networks and stochastic gradient boosted trees." Both neural networks and tree-based models self-learn non-linear relationships based on historical data to come up with future predictions.
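
To make that concrete, here is a minimal, hypothetical sketch of the kind of model Zoldi describes: a stochastic gradient boosted trees classifier (scikit-learn's GradientBoostingClassifier with subsample below 1.0) that self-learns a non-linear relationship from historical data. The data, feature meanings and threshold values are invented purely for illustration, not drawn from any bank's actual models.

```python
# Illustrative sketch: a stochastic gradient boosted trees model that
# self-learns a non-linear relationship from synthetic "historical" data
# instead of following hand-written rules. All data here is made up.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000
# Hypothetical features: e.g. credit utilization and payment delays.
X = rng.uniform(0, 1, size=(n, 2))
# Non-linear ground truth: risk spikes only when BOTH features are high,
# a pattern a simple linear model would miss.
y = ((X[:, 0] > 0.7) & (X[:, 1] > 0.6)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# subsample < 1.0 is what makes the boosting "stochastic": each tree
# is fit on a random fraction of the training rows.
model = GradientBoostingClassifier(
    n_estimators=100, subsample=0.8, random_state=0
)
model.fit(X_train, y_train)
print(f"holdout accuracy: {model.score(X_test, y_test):.3f}")
```

The point of the sketch is the one Zoldi makes: nothing in the code spells out the "both features high" rule; the trees discover it from the historical examples, which is also why such models demand the explainability scrutiny he describes.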

How banks can find clarity

"Banks need to make sure the models they use are fair and ethical," said Zoldi.

To start, banks can take an inventory across their business lines of which processes and operations use AI or machine learning. They should also establish a set of usage standards that govern model development and define when models have become harmful and should be removed.
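
As one illustration, here is a minimal sketch of what a single entry in such a model inventory might record. The field names and example values are assumptions made for illustration, not a regulatory standard or any bank's actual schema.

```python
# Hypothetical model-inventory entry; fields and values are illustrative
# assumptions only, not a regulatory standard.
from dataclasses import dataclass


@dataclass
class ModelInventoryEntry:
    name: str                    # internal identifier for the model
    business_line: str           # which operation relies on it
    technique: str               # "neural network", "boosted trees", ...
    uses_machine_learning: bool  # self-learning vs. explicitly programmed
    explainability_method: str   # how decisions are justified to regulators
    retirement_criteria: str     # when the model is deemed harmful/removed


entry = ModelInventoryEntry(
    name="card-fraud-scoring-v3",
    business_line="payments",
    technique="gradient boosted trees",
    uses_machine_learning=True,
    explainability_method="per-decision reason codes",
    retirement_criteria="fairness drift beyond agreed threshold",
)
print(entry)
```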

It is easier said than done.

Bankwell Bank in New Canaan, Connecticut, is experimenting with AI and generative AI for small business lending, sales and marketing, underwriting and more. The $3.2 billion-asset bank has brought up discussions of AI and generative AI at its town halls, so "everyone from branch associates up to senior management are starting to think about some of these use cases," said chief innovation officer Ryan Hildebrand. "But we haven't [said], 'here is the guidebook with definitions of AI and how it is used and how to talk about it.' We're still early."

Kim Kirk, the chief operations officer of Queensborough National Bank & Trust Company in Louisville, Georgia, has asked her check-fraud monitoring provider for data flow diagrams to understand where information is residing and how it is being manipulated. She finds that cybersecurity and fraud prevention are two areas where AI is commonly used.

"Bankers should understand the underlying architecture of solutions that they are purchasing from third party service providers, because ultimately that's our responsibility to protect our customer information," she said.

The recent CrowdStrike outage served as a reminder that banks must grasp where the vulnerabilities and security gaps of their third- and fourth-party vendors lie.

The $2.1 billion-asset Queensborough was not a direct purchaser of CrowdStrike, "but they were a fourth party to us," said Kirk. "When there are problems, the bank needs to understand if it is impacted."

The government's focus on AI also underscores the need for banks to keep a record of their usage.

"Bankers have to be knowledgeable about the macro enviro of what is happening with AI," said Kirk. "The explosion of AI in the last several years has been enormous. Everyone is trying to get their arms around it from a governance perspective to make sure we are protecting our customer information appropriately."

A financial institution's size does not always correlate with its sophistication around AI usage. Lerner, for example, was impressed when he recently spoke with a midsize credit union about AI.

"I was pleasantly surprised that they already have a center of excellence stood up," he said. "They are beginning to experiment with code acceleration with generative AI, and they've already been talking to the risk and regulatory group to develop an initial set of guardrails."

