If you’re thinking about deploying AI in triage, advice and lending decisions, you need to be aware of ‘explainability’.
This is where the world of AI meets regulation. On the face of it, explainability means being able to show why an AI tool has reached a particular conclusion. For example, why has that applicant with significant property portfolio income been refused a mortgage on their main home?
But it’s more than that. Explainability is about preserving accountability. It’s the opposite of just declaring ‘the computer says no’. The onus on brokers and lenders in the UK is determined by the FCA and, because the regulator has already indicated it isn’t going to create new rules for AI, all roads lead back to the Consumer Duty.
Therefore, AI tools must satisfy the rules we already have but, given how broad Consumer Duty is, this also opens up new risks. We may even see related changes in borrower behaviour, with outright challenges and complaints more likely.
In principle, AI must not interfere with the ability of customers to understand how decisions have been made about them. However, consider a black-box scenario in which advanced AI models, ingesting ever-larger volumes of open banking and property data, are typically superseded every six months: you can’t ask a newer AI to explain what its predecessor did.
That’s not explainability; that’s a guess where the real answer should be. Reconstructing a decision after the fact might have been possible with simpler criteria, but we’re moving into a world where underwriting and affordability measures are far more complex.
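One practical mitigation is to treat every model version as an artefact to be archived, so the exact model that made a decision can be reloaded and re-queried later rather than asking its successor to guess. Below is a minimal sketch, assuming scikit-learn-style models persisted with joblib; the paths and metadata fields are illustrative assumptions, not a production model registry.

```python
# Sketch: archive each model version with enough metadata to reproduce its
# behaviour when a past decision is challenged. Illustrative only.
import json
from pathlib import Path

import joblib  # a common choice for persisting scikit-learn models


def archive_model(model, version: str, training_data_ref: str,
                  archive_dir: str = "model_archive") -> Path:
    """Store the fitted model plus the metadata needed to explain
    decisions it made, long after it has been superseded."""
    folder = Path(archive_dir) / version
    folder.mkdir(parents=True, exist_ok=True)
    joblib.dump(model, folder / "model.joblib")
    (folder / "metadata.json").write_text(json.dumps({
        "version": version,
        "training_data_ref": training_data_ref,  # e.g. a dataset snapshot ID
    }))
    return folder


def load_archived_model(version: str, archive_dir: str = "model_archive"):
    """Reload the model that actually made the decision in question."""
    return joblib.load(Path(archive_dir) / version / "model.joblib")
```

Kept alongside per-decision records, an archive like this lets a firm answer ‘why’ with the original model rather than a guess.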
Transparency
Lenders and brokers will need to show that AI recommendations were justified at the time they were made. That means demonstrating that AI-informed risk-based pricing or segmentation was transparent and non-discriminatory. No matter how big the dataset, borrowers must still be able to understand what has happened in layman’s terms, even when it applies to something as complex as deep-learning credit scoring.
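To make that concrete, here is a minimal sketch of how a per-decision explanation might be generated as plain-language ‘reason codes’. The scorer is an illustrative logistic regression on made-up features; a real deep-learning scorer would need an attribution method such as SHAP instead, but the plain-language record produced per decision would look much the same.

```python
# Sketch: plain-language "reason codes" for one credit decision.
# The model, features and data are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["loan_to_value", "debt_to_income", "portfolio_income_share"]

# Toy history: 200 past applications with a simple approve/decline rule.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 3))
y = (X[:, 0] + X[:, 1] < 1.0).astype(int)  # 1 = approved

model = LogisticRegression().fit(X, y)
baseline = X.mean(axis=0)  # a "typical applicant" reference point


def reason_codes(applicant: np.ndarray, top_n: int = 2) -> list[str]:
    """The features that most reduced this applicant's approval score,
    measured against a typical applicant."""
    contributions = model.coef_[0] * (applicant - baseline)
    negative = [i for i in np.argsort(contributions) if contributions[i] < 0]
    return [
        f"{FEATURES[i]} of {applicant[i]:.2f} (typical: {baseline[i]:.2f}) "
        "reduced the approval score"
        for i in negative[:top_n]
    ]


applicant = np.array([0.9, 0.7, 0.8])  # e.g. the portfolio landlord above
print(f"approval probability: {model.predict_proba([applicant])[0, 1]:.0%}")
for reason in reason_codes(applicant):
    print("-", reason)
```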
Discrimination is another example of how AI demands additional processes. The real risk is slow, unnoticed harm. If model bias or drift chips away at outcomes for a protected group over time, the regulator will not accept a defence based on black-box complexity. Brokers and lenders need to identify flaws in AI-informed decisioning early, in a way they never had to with human staff.
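As a sketch of what that early-warning discipline could look like, the check below compares approval rates for a protected group with the rest of the population over each monitoring window. The Decision structure and the four-fifths threshold are illustrative assumptions, not a UK regulatory standard.

```python
# Sketch: a periodic fairness check that surfaces slow drift in outcomes
# for a protected group before it accumulates across model versions.
from dataclasses import dataclass


@dataclass
class Decision:
    approved: bool
    protected_group: bool  # held for monitoring, never fed to the model


def approval_rate(decisions: list[Decision], in_group: bool) -> float:
    group = [d for d in decisions if d.protected_group == in_group]
    return sum(d.approved for d in group) / len(group) if group else float("nan")


def disparity_alert(window: list[Decision], threshold: float = 0.8) -> bool:
    """True if the protected group's approval rate falls below `threshold`
    times the rest of the population's rate in this window."""
    return approval_rate(window, True) < threshold * approval_rate(window, False)


# Example: a month where 55% of the protected group is approved vs 80% of
# everyone else, breaching the illustrative four-fifths threshold.
window = (
    [Decision(True, False)] * 80 + [Decision(False, False)] * 20
    + [Decision(True, True)] * 55 + [Decision(False, True)] * 45
)
print(disparity_alert(window))  # True
```

Run on, say, each month’s decisions, a check like this surfaces drift while it is still small, rather than years later in a complaint.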
So the Consumer Duty, in the context of AI, effectively elevates explainability from a technical attribute to a board-level responsibility. Senior managers covered by the Senior Managers & Certification Regime (SM&CR) will need to treat AI governance as a horizontal discipline that spans data, risk and compliance. And that’s before we consider the fact that AI tools aren’t just for service providers; customers have them too.
A matter that hasn’t received much air-time so far is the ability of consumers to use AI to challenge and complain. Use of AI will become common knowledge, so expect the equivalent of a data subject access request for AI: an AI explainability request, or AIER. Some consumers, unhappy that they didn’t get the answer they wanted, will then use their own AI tools to hunt for flaws in the response.
And it will cut both ways. With digital transformation and what’s known as Horizontal Digital Integration (HDI) accelerating, feeding ever more personal and financial data into lending decisions, more data won’t always mean a greater chance of approval. For some it will, but it’s possible the industry won’t always get that right, and we’ll see more complaints from people who were allowed to borrow too much.
Proactive approach
These new risks can be anticipated and mitigated, but brokers and lenders will need to take a more proactive approach to underwriting. That means anticipating edge cases even when they don’t present as such, and recording, at the outset, all the information needed to answer an explainability challenge.
High-risk areas include creditworthiness, complex income and debt liabilities. It is only a matter of time before a complaint is founded on explainability. Whether it succeeds will come down largely to the quality of the firm’s data collection, real-time record keeping and AI audit trail.
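A minimal sketch of what that real-time record might capture at the moment of decision, so the answer to a challenge never depends on a successor model. All field names are illustrative assumptions.

```python
# Sketch: an immutable, timestamped record of one AI-informed decision,
# written at decision time. Field names are illustrative.
import hashlib
import json
from datetime import datetime, timezone


def decision_record(model_version: str, inputs: dict, score: float,
                    outcome: str, reasons: list[str]) -> dict:
    """Capture everything needed to answer an explainability challenge:
    the exact model, the data as seen, the outcome and the reasons given."""
    payload = {
        "model_version": model_version,  # pins the archived model, not "latest"
        "inputs": inputs,                # JSON-serialisable snapshot of the data
        "score": score,
        "outcome": outcome,
        "reasons": reasons,              # plain-language reason codes
        "decided_at": datetime.now(timezone.utc).isoformat(),
    }
    # A content hash lets the firm show later that the record is unaltered.
    payload["record_hash"] = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    return payload
```

Paired with the archived model version, a record like this is the difference between answering a challenge with evidence and answering it with a guess.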
The final question concerns when you need to implement all this. Here, there’s a danger that the General Purpose AI (GPAI) Code of Practice requirements may be misleading. There’s an August 2027 deadline for AI models in use prior to August 2025 to meet explainability requirements.
However, this is EU law and, while similar rules might be introduced in the UK next year, the Consumer Duty is already here. As we’ve seen, the Consumer Duty is easily broad enough to act as a proxy for the GPAI Code of Practice. Its deadline also offers a false sense of security to firms operating in both the UK and EU, because any model first put into use after August 2025 has to comply from day one; there’s no extra breathing space for those systems.
Given models usually have a six-month lifespan, there’s a good chance that, post-February 2026, cross-border firms will be working with AI tools that don’t have that breathing space.
Either way, there’s a strong business case for getting ahead on explainability in every jurisdiction, as it begins to reshape the relationship between customers, brokers and lenders.
Pete Gatenby is AI Partner at Novus Strategy