The hands-off approach to regulating artificial intelligence (AI) in financial services could be causing harm to consumers, MPs have said.
The Treasury Select Committee today said the Financial Conduct Authority (FCA), Bank of England and Treasury have a “wait-and-see” approach to AI and are “not doing enough” to offset any risks.
The committee’s report said that 75% of financial firms in the UK are now using AI, a far greater proportion than in any other sector.
The report said that AI could bring “considerable benefits” to consumers, and that financial firms and the FCA should explore this.
But it added: “However, the FCA, the Bank of England and HM Treasury are not doing enough to manage the risks presented by AI. By taking a wait-and-see approach to AI in financial services, the three authorities are exposing consumers and the financial system to potentially serious harm.”
The Treasury Select Committee said it had found a significant volume of evidence that AI can harm consumers of financial services.
For example, AI can provide unregulated financial advice, which risks misleading consumers or steering them towards poor decisions, the committee’s report said.
Other harms identified by the committee included vulnerable customers being excluded from financial services because of AI.
The UK has no specific laws or regulations to manage AI in financial services, the report said, with existing laws and regulations being used instead.
However, the FCA and Bank of England said these pre-existing regulations and laws are sufficient to handle any risk from AI to financial services consumers.
Treasury Select Committee chair Meg Hillier said: “Firms are understandably eager to try and gain an edge by embracing new technology, and that’s particularly true in our financial services sector, which must compete on the global stage.
“The use of AI in the City has quickly become widespread and it is the responsibility of the Bank of England, the FCA and the government to ensure the safety mechanisms within the system keep pace.
“Based on the evidence I’ve seen, I do not feel confident that our financial system is prepared if there was a major AI-related incident, and that is worrying. I want to see our public financial institutions take a more proactive approach to protecting us against that risk.”
EY UK & Ireland financial services technology consulting leader Preetham Peddanagari said: “Today’s report from the Treasury Select Committee on AI in financial services reinforces the importance of a clearly defined regulatory approach.
“EY research shows strong industry adoption of advanced models – such as agentic AI – and found that over a third of UK financial services firms have fully embedded AI into their operations. The challenge now is governance, with many firms admitting they lack sufficient controls to protect customers and ensure compliance with this new tech.
“As AI continues to move from experimentation to large-scale rollout, it is critical that firms have robust governance and accountability processes in place.”
Many mortgage brokers are already using AI. At a recent Mortgage Strategy MIT Live event in London, MAB said broker accuracy with documentation was around 80% without AI, rising to 99% when the technology was used to assist.