
Automated decisions about people: your legal obligations explained

Andy Williamson 7 April 2026

If your organisation uses AI tools that help make decisions about people – recruitment screening, credit checks, HR assessments, customer access – you no doubt have a human who reviews the outputs. You may believe that makes you fully compliant. It doesn't.

The law changed in February 2026, and the ICO has already found that much of what organisations call "human oversight" fails to meet the legal standard. The ICO examined over 30 employers and wrote to 16 directly, with the same finding in each case: the organisations believed they had human oversight in place, when in practice their processes constituted solely automated decision making. All 16 have committed to making changes.

AI automation isn't a future risk; it's where we are today. And the gap between what organisations think they're doing and what the law actually recognises is wider than most of us realise.

The law changed – and many businesses failed to notice

UK GDPR, the UK's data protection law, has always included rules about automated decisions. The old Article 22 was the main one. On 5 February 2026, the Data (Use and Access) Act 2025 – known as the DUAA – replaced it with a completely new framework: Articles 22A to 22D. The change is significant and prone to misinterpretation.

Under the old rules, making significant decisions about people using solely automated processing was prohibited by default, with three narrow exceptions. Under the new framework, it's permitted, provided mandatory safeguards are in place. The shift is from "no, unless you can justify an exception" to "yes, if you meet all of the requirements."

On the surface, that sounds like a relaxation in regulation. In practice, it may be the exact opposite.

The old prohibition was poorly understood, rarely invoked, and almost impossible to enforce. It functioned as a theoretical safeguard rather than a real one: the ICO never once issued a penalty directly citing Article 22. The new framework makes the obligations explicit and the safeguards mandatory. Requirements that were previously buried in a prohibition many organisations didn't feel applied to them are now clearly stated in law. That makes it harder to argue you didn't understand them, and easier for the ICO to enforce them.

The rubber-stamp problem

This is where many organisations will recognise their own current processes.

A decision counts as "solely automated" where there is "no meaningful human involvement in the taking of the decision." That's the statutory definition. The ICO's operative test: can the human exercise "real influence" over the decision before it's applied? Do they have "the authority, discretion and competence to alter it"?

Rubber-stamping doesn't qualify. Neither does reviewing only the AI-approved results, nor – as happens more often than not – clicking the "approve" button on a screen without independently assessing the decision.

The Uber case is the clearest illustration. In April 2023, the Amsterdam Court of Appeal found that Uber's human reviews of automated driver deactivation decisions were "little more than a purely symbolic act." Multiple automated processes – ride assignment, pricing, driver rating, fraud scoring, and account deactivation – all qualified as solely automated decision making, despite Uber's claim that humans were in the loop.

The ICO's March 2026 recruitment investigation found the same pattern across UK employers. Organisations were using AI to screen CVs, rank candidates, and filter applications. They believed their processes included human oversight. In practice, the humans were there for comfort, not for real scrutiny.

If AI screens 200 CVs down to 20 and a human reviews only the shortlist, the 180 rejections are still solely automated decisions. Those 180 people applied for a job and were rejected by AI without a human being ever looking at their application. That multi-stage blind spot is the single most common misunderstanding the ICO encountered.

Many organisations we speak to haven't considered this thoroughly enough. They've focused on making sure the shortlist review is rigorous, and their intentions are good – they've genuinely tried to do the right thing. But the law doesn't care about the 20 you reviewed carefully. It cares about the 180 you didn't.

What the law now requires

Article 22C sets out four mandatory safeguards for any significant decision made solely by automated processing. These are statutory requirements, not optional best practice.

Information. The person affected must be told about the automated decision before it's made, not after. That timing matters. A rejection email that says "after careful consideration" tells the applicant nothing about how the decision was actually made. Under the new framework, the organisation has to be upfront about the automation before it runs, which means rethinking how most screening processes are communicated from the outset.

Representations. The person must have the opportunity to make their case before the decision is applied.

Human intervention. The person must be able to request, and receive, a genuine human review of the decision. Not a rubber stamp, but a real review by someone with authority to change the outcome.

Contestation. The person must be able to challenge the decision. In practice, the representations and contestation rights overlap, but the law treats them as separate obligations, one before, one after.

These safeguards are the baseline, whatever type of personal data is involved. For decisions based on special category data – health, ethnicity, biometrics – the rules are stricter still: the original prohibition remains, with only narrow exceptions.

One more detail worth flagging. Organisations cannot rely on the new "recognised legitimate interests" legal basis – Article 6(1)(ea), introduced by the same Act – for automated decision making. Article 22B(4) explicitly prevents this streamlined legitimate interests route from being used for automated decisions about people. A lot of the current commentary on the DUAA has missed this carve-out entirely, which is a problem, because it's exactly the kind of thing an organisation might wrongly assume it can rely on.

The second legal layer most businesses don't know about

Even if you get all of the above right, there's a separate problem. The Equality Act 2010 applies to automated decisions independently of UK GDPR, and the ICO says so explicitly.

Indirect discrimination under the Equality Act arises where a practice puts people sharing a protected characteristic at a particular disadvantage. Intent doesn't matter; the outcome does. And automated systems can produce discriminatory outcomes through proxy variables that have nothing to do with protected characteristics on their face. A postcode can correlate with ethnicity. An employment gap can correlate with gender. A graduation year can correlate with age.

This creates a complex data processing challenge. Organisations that want to test their AI systems for bias may need to process special category data – race, disability, and gender – to run those tests. But processing that data itself requires justification under UK GDPR. There is no neat resolution to this in current law, and anyone who tells you otherwise is selling certainty that simply doesn't exist yet. It's one of the areas where organisations genuinely need to make a judgment call and document their reasoning carefully, because a regulator may eventually ask to see it.

The practical point is this: an organisation can satisfy every UK GDPR requirement for automated decision-making and still face an Equality Act claim if its systems produce discriminatory outcomes it hasn't monitored for.
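What outcome monitoring can look like in practice comes down to simple arithmetic: compare selection rates across groups and flag large gaps. The sketch below is illustrative only – the group labels, the data and the 0.8 flag threshold are all assumptions (the threshold echoes the US "four-fifths" heuristic; the Equality Act sets no fixed statistical cut-off) – and it presupposes you've resolved the special category data question above.

```python
# Minimal sketch of outcome monitoring across groups. Illustrative only:
# group labels, data and the 0.8 threshold are assumptions, not legal tests.
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: iterable of (group_label, was_selected) pairs."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

# Hypothetical screening run: 100 applicants in each group
run = ([("group_a", True)] * 12 + [("group_a", False)] * 88
       + [("group_b", True)] * 30 + [("group_b", False)] * 70)

rates = selection_rates(run)
ratio = min(rates.values()) / max(rates.values())  # 1.0 means parity

print(rates)                            # {'group_a': 0.12, 'group_b': 0.3}
print(f"disparity ratio: {ratio:.2f}")  # 0.40
if ratio < 0.8:  # illustrative threshold, not an Equality Act test
    print("Flag for review: possible particular disadvantage")
```

None of this replaces legal analysis. It simply produces the evidence of monitoring that the Equality Act exposure described above turns on.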

What happens next

The ICO's consultation on new automated decision making guidance closes on 29 May 2026. Specific guidance on automated decisions in recruitment is expected over the summer. A statutory Code of Practice on AI and automated decision making is in development. The direction of travel is more scrutiny, not less, regardless of what you may have read.

The 16 employers the ICO wrote to in March all believed they were handling this properly and ethically. What they lacked was human oversight that fully met the legal definition – and under the new framework, that definition is written into statute rather than buried in guidance that most organisations never read.

The ICO isn't waiting for the Code of Practice to be finalised before it investigates. The obligations are already in force. If your organisation uses AI to make or support decisions about people, the question worth asking now is a specific one: could the person affected by that decision point to the moment where a human being, with the authority and information to reach a different conclusion, actually considered their case? If the honest answer is no, the human oversight process needs to be hardened before the guidance arrives, not after.
