Treasury report examines gaps in banks' AI risk management


On Wednesday, the U.S. Department of the Treasury released a report on AI and cybersecurity that surveys the cybersecurity risks AI poses for banks, outlines methods for managing those risks, and emphasizes the divide between large and small banks in their ability to detect fraud.

The report discusses shortcomings in financial institutions' management of AI risk, chief among them the failure to address AI risks specifically in their risk management frameworks, and how those shortcomings have held financial institutions back from broader adoption of emerging AI technologies.

AI is redefining cybersecurity and fraud in the financial services sector, according to Nellie Liang, under secretary for domestic finance, which is why Treasury authored the report at the direction of President Joe Biden's October executive order on AI security.

"Treasury's AI report builds on our successful public-private partnership for secure cloud adoption and lays out a clear vision for how financial institutions can safely map out their business lines and disrupt rapidly evolving AI-driven fraud," Liang said in a press release.

The report is based on 42 in-depth interviews with representatives from banks of all sizes; financial sector trade associations; cybersecurity and anti-fraud service providers that include AI features in their products and services; and others.

Among the top-line conclusions drawn in the report, Treasury found that "many financial institution representatives" believe their existing practices align with the National Institute of Standards and Technology AI Risk Management Framework, which was released in January 2023. But those participants also ran into challenges establishing practical, enterprisewide policies and controls for emerging technologies like generative AI, specifically large language models.

"Discussion participants noted that while their risk management programs should map and measure the distinctive risks presented by technologies such as large language models, these technologies are new and can be challenging to evaluate, benchmark, and assess in terms of their cybersecurity," the report reads.

To that end, the report suggests expanding the NIST AI risk framework "to include more substantive information related to AI governance, particularly as it pertains to the financial sector." That mirrors the way NIST updated its separate cybersecurity risk management framework last month.

The latest draft emphasizes integrating cybersecurity into core governance functions and broadens its scope beyond just critical infrastructure sectors. It also offers guidance on dealing with novel threats, such as newer strains of ransomware.


"Treasury will assist NIST's U.S. AI Safety Institute to establish a financial sector-specific working group under the new AI consortium construct with the goal of extending the AI Risk Management Framework toward a financial sector-specific profile," the report reads.

On the subject of banks' cautious approach to large language models, interviewees for the report said these models are "still developing, currently very costly to implement, and very difficult to validate for high-assurance applications," which is why most firms have opted for "low-risk, high-return use cases, such as code-generating assistant tools for imminent deployment."

The Treasury report indicates that some small institutions are not using large language models at all for now, and that the financial firms that are using them are not accessing them through public APIs. Rather, where banks are using these models, it is via an "enterprise solution deployed in their own virtual cloud network, tenant, or multi-tenant" deployment.

In other words, to the extent possible, banks are keeping their data private from AI companies.
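To make the deployment pattern concrete, here is a minimal, hypothetical sketch of what calling a privately hosted model looks like in practice: the application talks to an endpoint inside the institution's own cloud tenant rather than a public AI vendor's API. The internal URL, model name, and payload shape below are illustrative assumptions (the payload assumes an OpenAI-compatible serving layer, which many self-hosted stacks expose); none of them come from the report itself.

```python
# Hypothetical sketch: querying an LLM served inside the bank's own
# cloud tenant instead of a public AI vendor's API. The URL, model
# name, and response schema are illustrative assumptions.

import requests

# An internal endpoint; prompts and data never leave the bank's network.
INTERNAL_LLM_URL = "https://llm.internal.examplebank.com/v1/chat/completions"


def ask_private_llm(prompt: str) -> str:
    """Send a prompt to the privately hosted model and return its reply."""
    payload = {
        "model": "in-house-llm",  # a model deployed in the bank's own tenant
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,  # conservative setting for assistance tools
    }
    resp = requests.post(INTERNAL_LLM_URL, json=payload, timeout=30)
    resp.raise_for_status()
    # Assumes an OpenAI-compatible response shape from the serving layer.
    return resp.json()["choices"][0]["message"]["content"]
```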

Banks are also investing in technologies that can provide greater confidence in the outputs their AI products yield. For example, the report briefly discusses retrieval-augmented generation, or RAG, an advanced approach to deploying large language models that several institutions reported using.

RAG enables firms to search and generate text grounded in their own documents, reducing hallucinations, i.e., text generation that is entirely fabricated and false, and limiting the degree to which outdated training data can skew LLM responses.
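For readers unfamiliar with the technique, the following is a minimal, self-contained sketch of the RAG pattern the report mentions. It substitutes a toy bag-of-words retriever for the embedding models and vector databases that production systems use, and the corpus, function names, and prompt wording are all invented for illustration.

```python
# Minimal retrieval-augmented generation (RAG) sketch in plain Python.
# Illustrative only: real deployments use embedding models and vector
# stores rather than this bag-of-words scoring.

import math
from collections import Counter

# The institution's own documents -- the private corpus RAG retrieves from.
DOCUMENTS = [
    "Wire transfers over $10,000 require dual approval by compliance.",
    "Customer PII must never be sent to third-party AI services.",
    "Quarterly fraud-model validation is due 30 days after quarter end.",
]


def _vector(text: str) -> Counter:
    """Tokenize text into a lowercase bag-of-words count vector."""
    return Counter(text.lower().split())


def _cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return dot / norm if norm else 0.0


def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    qv = _vector(query)
    ranked = sorted(DOCUMENTS, key=lambda d: _cosine(qv, _vector(d)), reverse=True)
    return ranked[:k]


def build_prompt(query: str) -> str:
    """Ground the model in retrieved text and instruct it not to guess."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        f"context, say so.\n\nContext:\n{context}\n\nQuestion: {query}"
    )


if __name__ == "__main__":
    # The assembled prompt would be sent to an LLM hosted inside the
    # bank's own cloud tenant; here we simply print it.
    print(build_prompt("What approvals do large wire transfers need?"))
```

Because the model is told to answer only from the retrieved snippets, answers stay anchored to the bank's current internal documents rather than whatever the model memorized during training, which is the confidence-building property the report highlights.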

The report covers many other topics, including the need for firms across the financial sector to develop standardized strategies for managing AI-related risk; the need for adequate staffing and training to implement advancing AI technologies; the need for risk-based regulation of the financial sector; and how banks can counteract adversarial AI.

"It is imperative for all stakeholders across the financial sector to adeptly navigate this terrain, armed with a comprehensive understanding of AI's capabilities and inherent risks, to safeguard institutions, their systems, and their clients and customers effectively," the report concludes.

