WASHINGTON — Consumer Financial Protection Bureau Director Rohit Chopra outlined his concerns about artificial intelligence and financial stability in testimony before the Senate Banking Committee, saying the technology could escalate existing problems into destabilizing events.
Chopra made his comments on the second day of an unusually tame pair of hearings before the Senate Banking Committee and House Financial Services Committee. While past oversight hearings have bordered on hostile toward Chopra, lawmakers' ire over financial regulation has recently centered more on the proposed Basel III endgame capital rules. Republican senators offered their share of criticism of Chopra and the CFPB, but the tone was much milder than in previous hearings.
During the hearing, Chopra voiced his concern that AI could disrupt financial stability, saying that certain opaque AI models could worsen market disruptions, turning "tremors into earthquakes."
"We actually have seen some of this in the past with high-frequency trading and securities, but I could see it being dramatically magnified — particularly if many firms are depending on the same foundational model, which … I think [has] potential to occur," Chopra said.
Chopra also pointed to AIs that deliberately mimic human communication as a potential vector for creating a financial panic at a particular institution, a financial market utility or an exchange.
"There are many ways this could happen," Chopra said. "Even a credit reporting agency [could be affected]. I think we have to look very hard about the financial stability effects of this because this may not be an accident. This may actually be a purposeful way to disrupt the U.S. financial system, and we should look at it with that mindset."
Any measures regulators would use to counter this risk, Chopra said, would have to rest on a standard more stringent than "intent," because AI tools can cause significant financial harm even when their creators intend none.
"One of the reasons why the U.S. has always had — for over a century — prohibitions on things like deception [and] unfairness that have multiple prongs, but don't necessarily require intent is because you can create a huge amount of harm," Chopra said. "It's in some ways like data breaches. You've put some obligations on firms to make sure they're secure, to stop the downstream harm."