The mortgage industry has long treated compliance as a capacity problem. There aren’t enough hours in the day to review every file, so you review a sample and call it a risk-based approach.
Nobody decided that 10% was the right sampling rate. It became the standard because 50- to 90-minute manual reviews made full coverage financially impossible. Sampling, in other words, was never a strategy; it was a workaround dressed up as one.
Consumer Duty changed the question: you can no longer evidence a process and consider the matter closed; you have to demonstrate outcomes. That change is one of the things that pushed mortgage networks toward AI-assisted review, and it’s why many are now on a path to running checks on 100% of files without adding headcount.
What’s becoming clear as that coverage expands is that the fraud picture looks different when you’re actually looking at everything. The difference isn’t simply that more fraud is being caught; it’s that two distinct things are now happening that weren’t before.
The first is the systematic detection of things that human reviewers occasionally catch but can’t reliably catch at scale. A good example is altered bank statements: to a reviewer scanning a document under time pressure, a missing transaction isn’t obvious, because the document still looks complete.
But when a system is running dozens of checks across every page of a case file, including reconciling the running balance figures, an omission becomes visible in a way it simply isn’t to a human eye moving at reading speed. These cases were sometimes caught before; now they can be caught consistently.
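The running-balance reconciliation described above is mechanically simple, which is exactly why software does it reliably and a time-pressured reviewer doesn’t. A minimal sketch, assuming the statement has already been extracted into (amount, running balance) rows; the function name and the figures are invented for illustration, not taken from any real system:

```python
# Hypothetical sketch: detect a removed transaction by reconciling the
# running-balance column of an extracted bank statement.
def find_balance_gaps(rows):
    """rows: list of (amount, running_balance) tuples in statement order.

    Returns the indices where the stated balance doesn't follow from the
    previous balance plus the transaction amount, which is the arithmetic
    signature left behind when a line is deleted from a statement.
    """
    gaps = []
    for i in range(1, len(rows)):
        prev_balance = rows[i - 1][1]
        amount, balance = rows[i]
        if round(prev_balance + amount, 2) != round(balance, 2):
            gaps.append(i)
    return gaps

rows = [
    (0.00, 1200.00),    # opening balance
    (-450.00, 750.00),  # rent
    (2100.00, 2850.00), # salary
    (-60.00, 2710.00),  # stated balance implies an unlisted 80.00 debit
]
print(find_balance_gaps(rows))  # -> [3]
```

A human reads each line in isolation and sees nothing wrong; the arithmetic across consecutive lines is what exposes the gap, and that is cheap for a machine to run on every page of every file.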
The second is checks that were previously reserved for high-risk cases being applied to everything. Staged income is the clearest example: when we review payslips, we run searches on Companies House to identify whether the employer appears to be a shell company, and whether the applicant shares a surname or address with a company director.
That doesn’t mean fraud has occurred, but it gives the review team meaningful context to assess a case properly. Previously, that kind of check was too time-intensive to do routinely. It happened when something already looked suspicious. Now it can happen on every file.
What’s coming through is predominantly opportunistic rather than organised: payslip figures that don’t reconcile with bank statement entries, employer names that differ subtly between documents, formatting inconsistencies, like fonts that change mid-document, PDF metadata that doesn’t hold up, or even income figures on self-employed accounts that don’t stack up against other submissions.
It’s rarely one smoking gun; rather, it’s a pattern of micro-inconsistencies across a file that a first-line reviewer, processing high volumes against a defined checklist, doesn’t have the time or remit to cross-reference.
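One of those micro-inconsistencies, an employer name that differs subtly between documents, is cheap to flag programmatically. The sketch below is purely illustrative (the function name and the 0.80 threshold are invented, and the standard library’s difflib stands in for whatever matching a real system would use): identical names pass, wholly different names are a separate data-entry question, and the near-identical band in between is what warrants a second look.

```python
# Illustrative sketch: flag employer names that are nearly, but not
# exactly, identical across two documents (e.g. a payslip vs a bank
# statement narrative). Threshold and names are invented for the example.
from difflib import SequenceMatcher

def subtle_name_mismatch(name_a: str, name_b: str, lo: float = 0.80) -> bool:
    """True when two names fall in the suspicious near-identical band.

    An exact match (ratio 1.0) is fine; a wholly different name is a
    different kind of problem; the band just below exact match is where
    subtly doctored documents tend to sit.
    """
    ratio = SequenceMatcher(None, name_a.lower(), name_b.lower()).ratio()
    return lo <= ratio < 1.0

print(subtle_name_mismatch("Acme Holdings Ltd", "Acrne Holdings Ltd"))  # True
print(subtle_name_mismatch("Acme Holdings Ltd", "Acme Holdings Ltd"))   # False
```

The point isn’t this particular metric; it’s that a check like this costs milliseconds per file, so there is no longer a capacity reason to run it only on cases that already look suspicious.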
That’s important context for where lenders currently are.
First-line review checks that cases meet criteria and that documents are present and plausible. Second-line review is where the deeper, investigative checking happens, but it only touches a fraction of cases. So you have thorough first-line checking on everything, deeper second-line review on a sample, and a gap in between where cases that passed the first line but contain subtle inconsistencies never get a second look.
Lenders have operated this way because it was industry standard, and because the alternative wasn’t considered feasible. When we show lenders that sampling no longer needs to be the framework, that the depth of scrutiny previously reserved for second-line review can be applied to every case, the reaction is usually the same: genuine surprise that the problem is solvable.
The regulatory question that follows is straightforward. Post-Consumer Duty, if a case passes first-line review, doesn’t fall into the second-line sample, and fraud is later discovered, a lender can reasonably say their process was followed. But the question a regulator may increasingly ask is whether the oversight model was designed to be effective at catching this type of risk: not whether the process was followed, but whether the process was adequate given what’s now possible with AI and other technologies. That’s a different standard, and it’s one both lenders and the brokers packaging cases for them need to be thinking about.
For any lender compliance director reading this, there are three things worth doing:
First, quantify the gap between your first-line and second-line checking: how many cases passed first line last year that weren’t selected for second-line review, and of the issues found at second line, how many were present in the first-line pass and not picked up?
Second, run known-bad cases back through your first-line process honestly and ask whether they would have been caught.
Third, have a genuine conversation about what effective oversight looks like post-Consumer Duty – not what’s minimum-compliant, but what you’d want to say to the FCA if asked to demonstrate your model works.
The broader opportunity goes beyond fraud detection. When deeper scrutiny applies to every case rather than a sample, advisers get consistent feedback on every submission. The quality of packaged cases improves because the feedback loop is comprehensive rather than selective, and oversight shifts from auditing history after completion to intervening before a problematic case reaches the consumer at all.
This could be a different model of oversight – and for an industry still working out what Consumer Duty actually demands in practice, it’s a conversation that needs to happen now.
Dawid Robert Kotur is co-founder and CEO of Curvestone AI