Probability is not a bug: How AI risk challenges FinTech regulation and legal accountability
Traditional software can fail. Code contains bugs. Systems produce errors. But when traditional software fails, something has gone wrong: the output has deviated from a defined, testable, guaranteed result. That is a bug. AI is different because the uncertainty is not a deviation. It is the design.
Generative and predictive AI systems operate on probability, not rules. They do not execute instructions written by human programmers; they generate statistically plausible outputs based on their training data. In FinTech, where AI increasingly informs credit decisions, fraud detection, insurance pricing and consumer outcomes, this distinction matters deeply. Probability is not a bug. It is the architecture.
Financial regulation has always managed uncertainty, but it has assumed that uncertainty can be bounded, measured and explained. AI challenges that assumption at its core. That gap runs deeper than contract language or regulatory design. It cuts through an assumption built into every system we have ever trusted: that technology, when it speaks, speaks accurately. We have spent decades learning to trust what technology tells us. AI exploits that trust without earning it. And the law, the contracts and the people relying on both have not yet fully reckoned with what that means.
This article covers the following headings:
- Built to predict, not to guarantee
- Why FinTech is uniquely exposed
- Deploying AI well is as important as deploying AI at all
Built to predict, not to guarantee
An AI model is not a set of rules written by a human programmer. It is trained on vast amounts of data, learning statistical patterns that are encoded as internal weightings and applied to new inputs. It does not follow explicit, human-written rules. It predicts.
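For readers who want to see the distinction in concrete terms, the sketch below contrasts a human-written rule with a toy statistical model. The figures, feature names and thresholds are invented for illustration, not a representation of any real credit model.

```python
# Illustrative only: a human-written rule versus a trained statistical model.
# The data, thresholds and feature names are invented for this sketch.

from sklearn.linear_model import LogisticRegression

# Rules-based system: an explicit, auditable rule with a defined output.
def rules_based_decision(income: float, debt: float) -> bool:
    return (debt / income) < 0.4  # the rule is its own documentation

# AI-style system: weightings learned from historical data (toy example).
X_train = [[30_000, 5_000], [60_000, 40_000], [45_000, 10_000], [25_000, 20_000]]
y_train = [1, 0, 1, 0]  # 1 = repaid, 0 = defaulted

model = LogisticRegression().fit(X_train, y_train)

# The model does not apply a rule; it estimates a probability, and it returns
# a probability with equal fluency for inputs far outside anything it has seen.
print(model.predict_proba([[40_000, 15_000]]))
print(model.predict_proba([[2_000_000, 1_999_000]]))  # confident, not verified
```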
This design gives rise to two risk categories that existing legal and regulatory frameworks were not designed to catch.
Hallucinations, when confidence and accuracy diverge
A hallucination occurs when an AI system generates factually incorrect information and presents it confidently. Unlike a database query or a rules-based system, an AI model has no intrinsic mechanism for verifying whether its outputs are factually correct: it produces what is statistically most plausible, with the same fluent certainty, whether it is right or wrong.
In FinTech, hallucinations are not a theoretical problem. They can:
- Misinform credit or affordability assessments
- Distort insurance underwriting decisions and
- Generate flawed explanations to consumers or regulators
A confident error is not an edge case to be tolerated. It is a foreseeable and persistent risk present from deployment.
Model drift, the silent degradation problem
Model drift describes the gradual degradation of an AI system as the real world diverges from the historical data on which it was trained. A model calibrated in one economic environment may continue producing confident outputs long after its underlying assumptions no longer hold true.
Unlike conventional software errors, which represent a deviation from a defined correct output, drift has no clean failure point and no line visibly crossed. A model may pass every pre-deployment validation and then degrade silently in production over months or years. Point-in-time contractual warranties are poorly equipped to capture this reality.
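Drift can, however, be measured. One common approach is to compare the distribution of inputs the model was trained on with the distribution it now sees in production. The sketch below uses the Population Stability Index as one illustrative statistic; the data and the escalation threshold are invented, and a real monitoring framework would track many metrics, not one.

```python
# Illustrative drift monitoring using the Population Stability Index (PSI).
# The data and the 0.25 threshold are invented for this sketch.

import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Compare a feature's distribution at training time with production."""
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    cuts[0], cuts[-1] = -np.inf, np.inf
    e_pct = np.clip(np.histogram(expected, cuts)[0] / len(expected), 1e-6, None)
    a_pct = np.clip(np.histogram(actual, cuts)[0] / len(actual), 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train_income = rng.normal(40_000, 10_000, 5_000)  # the world at calibration
live_income = rng.normal(47_000, 15_000, 5_000)   # the world two years later

score = psi(train_income, live_income)
if score > 0.25:  # illustrative trigger for re-validation
    print(f"PSI {score:.2f}: escalate for re-validation")
```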
RAG, mitigation not solution
Retrieval Augmented Generation, or RAG, partially addresses hallucinations by anchoring the AI to a verified knowledge base before it responds. But it does not solve them and does little to address drift at the level of the model’s underlying logic. Mitigations are not solutions.
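The retrieval step is simple to illustrate. In the sketch below, the system may only answer from a small, verified knowledge base; the content, the similarity matching and the prompt wording are all invented, and a production system would use embedding models and a governed document store rather than the toy matching shown here.

```python
# Illustrative RAG retrieval step. The knowledge base, matching logic and
# prompt are invented; production systems use vector embeddings and an LLM.

from difflib import SequenceMatcher

KNOWLEDGE_BASE = [
    "Product X has a representative APR of 21.9% (verified 1 October 2025).",
    "Product Y is not available to customers under 21.",
]

def retrieve(question: str, k: int = 1) -> list[str]:
    # Toy similarity search standing in for a vector search.
    return sorted(KNOWLEDGE_BASE,
                  key=lambda doc: SequenceMatcher(None, question, doc).ratio(),
                  reverse=True)[:k]

def grounded_prompt(question: str) -> str:
    context = "\n".join(retrieve(question))
    return ("Answer using ONLY the sources below. If they do not contain the "
            f"answer, say so.\n\nSources:\n{context}\n\nQuestion: {question}")

print(grounded_prompt("What is the APR on Product X?"))
```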
Which brings us to the harder question. You cannot sue a vendor for delivering a probabilistic system when that is precisely what you contracted for. The legal challenge is not how to enforce the warranty. It is how to write contracts, governance frameworks and regulatory obligations fit for a system that was never designed to be certain.
Why FinTech is uniquely exposed
FinTech sits at the intersection of regulatory obligation, consumer exposure and personal accountability, which is where that tension is most acute.
Consumer Duty, the disclaimer that does not travel
The FCA’s Consumer Duty requires regulated firms to demonstrate they are delivering good outcomes for retail customers. The obligation is proactive, continuous and non-delegable. While AI vendors may rely on “as is” and “not for sole reliance” disclaimers, those limitations do not follow the output downstream. Regulatory responsibility remains with the regulated firm that deploys the system, regardless of who built it.
SMCR, the human who cannot hide behind the algorithm
The Senior Managers and Certification Regime requires model risk governance to be allocated to a named Senior Management Function holder. Delegating decisions to an AI system does not dilute accountability. If an AI system causes consumer harm and the responsible senior manager cannot evidence reasonable steps to govern it, they face personal regulatory consequences. The algorithm is not a shield. It is a delegation, and delegations carry accountability.
Explainability, a compliance gap hiding in plain sight
Regulated firms must explain AI driven decisions to regulators, consumers and courts. Yet the PRA has explicitly questioned the mathematical reliability of commonly used explainability tools when applied to complex, correlated financial datasets. A firm that cannot reliably explain why an AI model declined a loan or altered a customer’s credit limit may be unable to defend that decision in enforcement or litigation. The tools intended to provide that defence may themselves be part of the problem. That is a live compliance gap and most firms have not resolved it.
Deploying AI well is as important as deploying AI at all
Acknowledging the problem is the easier half. The harder half is building governance architecture genuinely fit for probabilistic systems.
The question firms should be asking is not only how to govern AI once it is live, but whether the deployment was right in the first place. Many of the risks in this article are not governance failures. They are deployment failures. Asking whether a deterministic system would serve better is where the analysis should begin. Purposeful deployment is not a constraint on innovation. It is a precondition for it.
Monitor continuously, including with AI itself
Static validation at procurement is insufficient for a technology that changes after deployment. Good governance requires ongoing monitoring against defined thresholds, with clear triggers for escalation, re-validation, or withdrawal. Drift does not wait for a contract renewal cycle. The ICO’s January 2026 Tech Futures report goes further, identifying the possibility of using AI agents to monitor other AI systems, agentic controls watching agentic deployments. The ICO is careful to present this as emerging thinking rather than settled guidance. But the direction of travel is clear.
Fighting fire with fire may be where governance is heading.
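The "defined thresholds with clear triggers" point lends itself to a simple illustration. The sketch below maps hypothetical monitoring metrics to governance actions; the metric names, thresholds and actions are invented and are not regulatory guidance, but they show the kind of explicit, pre-agreed escalation logic that continuous monitoring implies.

```python
# Illustrative monitoring policy: observed metrics mapped to pre-agreed
# governance actions. Metric names, thresholds and actions are invented.

from dataclasses import dataclass

@dataclass
class Threshold:
    metric: str
    warn: float       # notify the accountable SMF holder, schedule a review
    withdraw: float   # pull the model pending re-validation

POLICY = [
    Threshold("population_stability_index", warn=0.10, withdraw=0.25),
    Threshold("approval_rate_shift",        warn=0.05, withdraw=0.15),
]

def evaluate(observed: dict[str, float]) -> list[str]:
    actions = []
    for t in POLICY:
        value = observed.get(t.metric)
        if value is None:
            actions.append(f"{t.metric}: not reported - escalate to model owner")
        elif value >= t.withdraw:
            actions.append(f"{t.metric}={value:.2f}: withdraw pending re-validation")
        elif value >= t.warn:
            actions.append(f"{t.metric}={value:.2f}: notify SMF holder, review")
    return actions

print(evaluate({"population_stability_index": 0.31, "approval_rate_shift": 0.02}))
```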
Authority must have limits
For AI systems that execute rather than merely inform, governance must define the boundaries of authority explicitly. What is the system permitted to do? What requires human review? A system without defined authority limits has no meaningful accountability. In a regulated sector where a named individual is personally responsible for model risk governance, undefined authority is not just a design flaw. It is a personal liability.
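What defined authority limits can look like in practice is, again, easy to sketch. The example below uses an explicit allow-list in which anything not expressly permitted is routed to a human; the action names and permissions are invented for illustration.

```python
# Illustrative authority limits for a system that acts rather than recommends.
# Action names and permissions are invented; the point is that anything
# undefined defaults to "not permitted" and routes to a named human.

AUTHORITY = {
    "flag_transaction_for_review": {"autonomous": True},
    "freeze_card":                 {"autonomous": True},
    "decline_credit_application":  {"autonomous": False},  # human decision required
    "change_credit_limit":         {"autonomous": False},
}

def authorised(action: str) -> bool:
    rule = AUTHORITY.get(action)
    if rule is None:
        return False              # undefined actions are out of scope by default
    return rule["autonomous"]     # False -> route to a named human reviewer

assert authorised("flag_transaction_for_review")
assert not authorised("decline_credit_application")
assert not authorised("initiate_chargeback")  # never defined, never permitted
```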
The firms getting this right are not waiting for a UK AI Act that may never come. They understand that Consumer Duty, SMCR and the PRA’s model risk principles are not holding off for new legislation. Enforcement will come through the frameworks that already exist, applied to failures that are already happening. The question is not whether firms will be held accountable. It is whether they will have built AI governance structures robust enough to withstand scrutiny when they are.
How we can help
To reiterate, AI exploits the trust we have placed in technology for decades. The legal architecture of FinTech was built on that trust. Updating it is not a future obligation. It is a present one. Closing that gap is the work.
If you are deploying, procuring, or governing AI within a regulated FinTech environment, now is the time to reassess whether your legal frameworks are fit for probabilistic systems.
Freeths advises banks, FinTechs, payment service providers and technology vendors on:
- AI and model risk governance
- Consumer Duty and SMCR exposure
- AI contracting and vendor liability and
- Regulatory investigations and enforcement risk
To discuss how your organisation can deploy AI responsibly while managing regulatory and litigation risk, contact Mark Neale or Conor McDonagh or a member of Freeths’ Financial Services team.
The content of this page is a summary of the law in force at the date of publication and is not exhaustive, nor does it contain definitive advice. Specialist legal advice should be sought in relation to any queries that may arise.