
AI bias isn't just an ethics problem – it's a legal one

Andy Williamson 4 April 2026

Most people assume AI bias is something for technology companies and ethics committees to worry about. A fairness issue, certainly, but not something that creates legal exposure for their own organisation.

That assumption is wrong. If your business uses AI systems that affect people – recruitment tools, credit scoring, customer profiling, automated decision-making of any kind – you already have specific legal obligations around bias. Not under some future regulation. Under laws that are in force right now.

Three of them, in fact. And they don't all come from the same place.

Three laws, two regulators, one problem

AI bias in the UK creates legal exposure under three separate frameworks, and responsibility for enforcing them is split between different bodies.

UK GDPR requires that personal data is processed fairly. The ICO – the Information Commissioner's Office, the UK's data protection regulator – interprets "fairly" broadly. Organisations must not process data in ways that are unduly detrimental, unexpected, or misleading to individuals. An AI system doesn't need to discriminate against a protected group to breach the fairness principle. Any unjustified adverse impact on individuals can be enough.

The Equality Act 2010 catches AI that produces discriminatory outcomes linked to protected characteristics – age, disability, race, sex, and five others. Section 19 – indirect discrimination – is the provision most likely to apply. It covers any AI system that applies criteria putting people who share a protected characteristic at a particular disadvantage. An organisation can defend an indirect discrimination claim, but the burden shifts to it: it would need to show the AI tool was a proportionate means of achieving a legitimate aim, and that less discriminatory alternatives were genuinely considered.

The Data Use and Access Act 2025 (DUAA) – which replaced UK GDPR's old Article 22 rules on automated decision-making – now permits solely automated decisions with significant effects, but only if specific safeguards are met. Those safeguards include informing the individual, enabling them to make representations, providing meaningful human intervention, and allowing them to contest the decision.

The critical point: these three frameworks are not policed by a single regulator. The ICO oversees both UK GDPR and the DUAA; Equality Act claims run through the courts and tribunals, backed by the Equality and Human Rights Commission (EHRC). Compliance with one does not guarantee compliance with the others. An AI system could satisfy UK GDPR transparency requirements while still producing outcomes that breach the Equality Act.

The liability sits with you, not your vendor

Here's the detail that changes the conversation. When an AI recruitment tool, credit scoring model, or customer profiling system produces discriminatory outcomes, it's the deploying organisation that faces the claim – not the vendor who built or sold the tool.

Under UK law, the organisation using the AI system is the data controller. The legal responsibility for how that system processes personal data, and for the fairness of its outcomes, belongs to the organisation that decided to use it.

This means buying an AI tool "off the shelf" and relying on the vendor's assurance that it's been tested for bias is not sufficient. If the outcomes are discriminatory, the liability is yours.

Vendor due diligence matters – and it needs to happen before procurement, not after a complaint.

What's already happened

No UK regulator has yet issued a fine specifically for biased AI outcomes. That's the honest position. But the enforcement direction is unmistakable.

In November 2024, the ICO published the results of its AI recruitment audit – the most significant UK regulatory action specifically addressing AI bias to date. The consensual audits, conducted between August 2023 and May 2024, examined AI recruitment tool providers and found serious problems. Tools that allowed recruiters to filter candidates by protected characteristics. Tools that inferred gender and ethnicity from applicants' names – processing special category data without a lawful basis. Scraping of social media profiles well beyond what data minimisation principles allow. The ICO made 296 recommendations, all accepted or partially accepted. No fines were issued – but the message was clear.

The Manjang v Uber Eats case brought the issue into sharper focus. Pa Edrissa Manjang, a Black Uber Eats driver, was permanently deactivated from the platform after Microsoft-powered facial recognition repeatedly failed to verify his identity. He lost his livelihood because an algorithm couldn't recognise his face. The Equality and Human Rights Commission funded the litigation – a significant step, signalling that the EHRC considers AI discrimination a priority worth actively supporting. The case settled in early 2024 with a financial payout; Uber did not admit liability, and there is no binding judgment on the merits. But the EHRC's willingness to fund these cases tells organisations everything they need to know about where equality law enforcement is heading.

Meanwhile, the ICO is consulting on updated automated decision-making guidance (the consultation closes on 29 May 2026); Thompson v Metropolitan Police, a judicial review of live facial recognition, is awaiting judgment; and the Law Commission has added public sector automated decision-making to its programme of law reform.

The gap between documented AI bias and formal enforcement is narrowing. The question is whether organisations act before or after the first major penalty.

What to do about it

Here's what matters: proportionate action is achievable for any UK organisation. You don't need a data science team or a six-figure compliance budget. You need a clear starting point.

For any organisation using AI that affects people:

Conduct a DPIA – a Data Protection Impact Assessment – before deploying any AI system likely to result in high risk to individuals. The ICO explicitly includes AI, machine learning, and automated decision-making on its list of processing that requires a DPIA. Maintain an inventory of AI systems, noting what decisions they affect and what personal data they process. Provide privacy information that includes meaningful explanation of how AI systems work, what factors they consider, and what consequences they may have for individuals. And build a process for individuals to challenge automated decisions – a meaningful one, not a form that disappears into an inbox.
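To make the inventory concrete, here's what a single register entry might look like as a minimal Python sketch. The field names are our illustration, not a format the ICO prescribes – the point is simply that each system gets a record of what it decides, whose data it processes, and whether a DPIA exists.

```python
# A minimal sketch of an AI system register entry. Field names are
# illustrative, not a prescribed ICO format. Python 3.9+.
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str                      # e.g. "CV screening tool"
    vendor: str                    # who built or supplies it
    decisions_affected: list[str]  # what it decides about people
    personal_data: list[str]       # categories of data processed
    solely_automated: bool         # True triggers DUAA safeguards
    dpia_completed: bool           # has a DPIA been done?
    last_bias_review: str          # ISO date of last outcome check

register = [
    AISystemRecord(
        name="CV screening tool",
        vendor="ExampleVendor Ltd",  # hypothetical vendor
        decisions_affected=["shortlisting for interview"],
        personal_data=["CV content", "work history"],
        solely_automated=False,
        dpia_completed=True,
        last_bias_review="2026-01-15",
    ),
]

# Anything solely automated without a completed DPIA should jump out.
gaps = [r.name for r in register
        if r.solely_automated and not r.dpia_completed]
print(gaps)  # empty here; any entries demand immediate attention
```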

For mid-size businesses without technical AI expertise:

Demand transparency from your vendors – contractual clauses on bias testing, ongoing monitoring obligations, audit rights, and notification when models change. Complete the ICO's free AI and Data Protection Risk Toolkit, a structured self-assessment that requires no technical expertise and gives you a clear picture of where you stand. And monitor your own outcomes. If your AI recruitment tool is consistently filtering out candidates from particular backgrounds, that pattern is visible in your data before any regulator sees it.
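Monitoring outcomes doesn't require data science. Here's a minimal Python sketch of the idea: export your decisions, compute selection rates by group, and flag any group selected at well below the rate of the best-performing group. The column names and the 0.8 threshold are illustrative – the "four-fifths" ratio is a US regulatory rule of thumb, not a UK legal test – but any persistent disparity is a signal to investigate.

```python
# A minimal sketch of outcome monitoring for a recruitment tool.
# Assumes you can export decisions with a group label (e.g. from
# your applicant tracking system); the 0.8 threshold is the US
# "four-fifths" rule of thumb, not a UK legal test.
from collections import Counter

def selection_rates(records):
    """records: iterable of (group, shortlisted: bool) pairs."""
    totals, selected = Counter(), Counter()
    for group, shortlisted in records:
        totals[group] += 1
        if shortlisted:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def flag_disparities(records, threshold=0.8):
    """Return groups selected at under `threshold` times the best rate."""
    rates = selection_rates(records)
    benchmark = max(rates.values())
    return {g: round(r / benchmark, 2)
            for g, r in rates.items() if r / benchmark < threshold}

# Example: group B is shortlisted at half the rate of group A.
data = ([("A", True)] * 40 + [("A", False)] * 60
        + [("B", True)] * 20 + [("B", False)] * 80)
print(flag_disparities(data))  # {'B': 0.5}
```

A run like this once a quarter, on real decision data, is the kind of proportionate governance that turns up problems while they're still yours to fix.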

For public sector bodies:

Everything above, plus a formal equality impact assessment under the Public Sector Equality Duty. The Bridges case established in 2020 that failure to assess whether an AI tool could have discriminatory impact is itself a breach of the duty – regardless of whether actual discrimination occurs. Public bodies face the highest legal standard and should expect the closest scrutiny.

The direction of travel is clear

UK businesses with EU customers should also be aware that the EU AI Act classifies recruitment AI and credit scoring AI as high-risk, with the most demanding compliance obligations applying from August 2026.

Within the UK, the regulatory trajectory points one way. Guidance is maturing. Enforcement activity is increasing. The ICO, EHRC, and sector regulators like the FCA – the Financial Conduct Authority – are all signalling that AI bias is a current priority, not a future one.

Organisations that understand their obligations now and build proportionate governance aren't just reducing legal risk. They're building the kind of accountability that clients, partners, and regulators increasingly expect – and they're doing it while the regulatory framework is still maturing, when the cost of getting it right is manageable.

The organisations that wait for the first major penalty to force the issue will find catching up considerably more expensive than getting ahead.

