If you asked your senior management team how many AI tools your business uses, you may well get a number in single figures. A chatbot here, an analytics platform there, maybe a pilot project someone in IT is running. That's the number of AI tools your senior management actually governs.
Now consider a different number. According to Microsoft and Censuswide research from October 2025, 71% of UK employees have used unapproved consumer AI tools at work. More than half do so every week. Meanwhile, a SAP and Oxford Economics study from February 2026 found that only 7% of organisations have an enterprise-wide AI strategy in place.
The majority of organisations are governing the AI they formally adopted while ignoring the AI their employees are actually using. Yet UK GDPR doesn't distinguish between the two. It applies to all processing of personal data, regardless of how the technology arrived.
This isn't a problem to look out for at some point in the future. The law already covers it. What's changed is the sheer scale of AI use, and with it, the compliance drift that most organisations haven't begun to measure. Every organisation I talk to thinks they have a handle on their AI exposure. Many of them don't.
UK GDPR already applies to AI processing
It's easy to assume that AI regulation is something on the horizon. A future Act of Parliament, perhaps, or something the EU is doing that might eventually affect UK businesses. That assumption is completely wrong.
UK GDPR applies to any processing of personal data, whatever technology is used. AI processing of personal data is processing under Article 4(2). The ICO has been unambiguous about this: "There is no 'AI exemption' to data protection law."
In practice, that means every Article 5 principle, the foundational obligations that govern all personal data processing, already applies to how your organisation uses AI.
Transparency requires your privacy notices to disclose AI processing, its purposes, and meaningful information about the logic involved. If your organisation is using AI and your privacy notice doesn't say so, the ICO's own audit framework puts it plainly: "If AI is in use and that is not communicated by privacy information, this may breach UK GDPR articles 5(1)(a), 12–15."
Purpose limitation means each stage of the AI lifecycle, from data collection to training to deployment, is a distinct processing purpose. Data collected for one purpose can't simply be repurposed for AI without a compatibility assessment. The ICO's December 2024 generative AI consultation made the point sharply: "common practice does not equate to meeting people's reasonable expectations."
Data minimisation creates an additional challenge with AI. These systems are designed to consume large volumes of data. UK GDPR requires you to justify that volume and consider alternatives.
Accuracy is where things get particularly interesting, and it's the principle I think most organisations are furthest from understanding. AI-generated inferences and predictions about individuals are themselves personal data. The ICO draws a distinction here that matters: statistical accuracy, how well the model performs overall, is not the same as data protection accuracy, whether the data about a specific individual is correct. An AI system can be statistically impressive while getting the details about individual people wrong. Understanding the difference between "the model works" and "the model is fair to this person" is exactly where regulatory risk concentrates.
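To put rough numbers on that distinction, here is a minimal sketch. The figures are hypothetical, purely for illustration, not drawn from any ICO case:

```python
# Hypothetical figures, purely illustrative: aggregate model performance
# versus the correctness of the data held about each individual.
population = 10_000
statistical_accuracy = 0.95  # "the model works": 95% of predictions correct

# Data protection accuracy is assessed per person, not per model.
individuals_with_wrong_inference = round(population * (1 - statistical_accuracy))
print(individuals_with_wrong_inference)  # 500

# Each of those 500 incorrect inferences is personal data about a specific
# person, and each can be challenged under UK GDPR, however well the model
# scores in aggregate.
```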
The obligations most organisations are already breaching
If your organisation has deployed AI tools that process personal data without first completing a data protection impact assessment, a DPIA, you are in most cases already in breach of UK GDPR.
The ICO's Article 35(4) mandatory DPIA list includes "innovative technology" as its very first item, defined as "processing involving the use of new technologies, or the novel application of existing technologies (including AI)." The ICO has stated that "in the vast majority of cases, the use of AI will involve a type of processing likely to result in a high risk" and will require a DPIA.
The Snap investigation shows what happens when organisations get this wrong. In 2023, Snap launched "My AI," a generative AI chatbot, to its entire user base, including children aged 13–17, without completing an adequate DPIA. Snap produced four successive attempts that the ICO rejected before a fifth finally satisfied the regulator. The ICO called it "a warning shot for industry."
Then there's the ICO's recruitment audit from November 2024, which exposed something more systemic. Consensual audits of a handful of AI recruitment providers produced 296 recommendations and 42 advisory notes. Some tools were inferring gender and ethnicity from candidates' names, processing special category data without lawful basis or the candidates' knowledge. Special category data, the sensitive categories like health, ethnicity, and political opinions, receives the highest level of protection under UK GDPR. One provider's accuracy standard amounted to "at least better than random," which the ICO stated "would usually not be sufficient to comply with data protection law." Think about that for a moment. Recruitment decisions affecting real people's careers, based on a system whose own accuracy bar was "better than the flip of a coin."
The ICO's March 2026 report on automated decision making in recruitment confirmed what practitioners have long suspected: "many employers do not acknowledge that they are carrying out ADM. As a result, employers fail to ensure sufficient safeguards are in place."
Lawful basis is another area where the gap between what organisations think they've done and what the law actually requires is wider than most realise. Legitimate interests, the realistic lawful basis for most commercial AI processing, requires a formal three-part test: a specific, evidenced interest, a demonstration that alternatives were insufficient, and a balancing assessment that considers the impact on individuals. A bare citation of "legitimate interests" in a privacy notice, without a documented assessment behind it, is not compliance.
The AI you don't know you have
Shadow AI, employees using consumer AI tools without organisational approval, is now the norm rather than the exception. Free-tier ChatGPT is the clearest example. Input data can be used to train OpenAI's models by default. There's no data processing agreement, a DPA, in place. None of the formal contractual safeguards that UK GDPR requires between a controller and any organisation processing personal data on its behalf. And here's the point that catches most organisations off guard: the employer is the data controller for personal data processed by employees in the course of their work. Even on personal accounts. Even on tools nobody in the organisation approved or was even aware of.
If someone in your team is pasting customer details, employee records, or contract terms into ChatGPT to get a quick summary or draft an email, your organisation is the controller for that processing. There's no audit trail. No record in your processing activities. No way for a data subject to exercise their rights over data that's been fed into a system your organisation doesn't even know is in use.
Shadow AI is only part of the picture. AI is now embedded in the business software your organisation already uses. Salesforce Einstein, HubSpot AI, Slack AI, Zoom AI Companion, Microsoft 365 Copilot. Enabling these features, sometimes with a single toggle in an admin panel, may constitute new processing activities. That means updated records of processing, revised privacy notices, and in many cases new DPIAs. The obligation falls on your organisation as the controller, not the software vendor. I've seen organisations switch on Copilot across 500 seats without anyone asking whether it changed their processing register. It did.
What to do about it today
These obligations are serious, but they're manageable. The first step is understanding what you're actually dealing with.
Build an AI register. Map every AI tool in use across your organisation, including shadow AI. For each one, document what personal data it processes, who the controller is, what lawful basis applies, and whether a DPIA has been completed. This is the single most valuable thing you can do right now, because you can't govern what you haven't mapped.
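A register doesn't need special tooling to start. As a minimal sketch, with field names and example values that are mine rather than an ICO template, one entry might capture:

```python
from dataclasses import dataclass

@dataclass
class AIRegisterEntry:
    tool: str                  # the AI tool or embedded AI feature
    approved: bool             # False marks shadow AI
    personal_data: list[str]   # categories of personal data processed
    controller: str            # who the controller is for this processing
    lawful_basis: str          # e.g. "legitimate interests (LIA documented)"
    dpia_completed: bool
    dpa_in_place: bool         # data processing agreement with the vendor

# A shadow AI entry often looks like this once you map it honestly:
entry = AIRegisterEntry(
    tool="ChatGPT (free tier, personal account)",
    approved=False,
    personal_data=["customer contact details", "contract terms"],
    controller="us",           # the employer is controller even here
    lawful_basis="none documented",
    dpia_completed=False,
    dpa_in_place=False,
)
```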
Triage by risk. Prioritise any system that makes or influences decisions about people. Recruitment tools, customer service automation, credit scoring, employee monitoring. Systems processing special category data need the most urgent attention.
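The triage logic itself can be stated simply. This is a rough rule of thumb, not an ICO methodology; the tiers and criteria are assumptions to adapt to your own risk framework:

```python
def triage_priority(makes_decisions_about_people: bool,
                    processes_special_category_data: bool,
                    dpia_completed: bool) -> str:
    """Assign a rough review priority to one register entry."""
    if processes_special_category_data:
        return "urgent: special category data in scope"
    if makes_decisions_about_people and not dpia_completed:
        return "high: likely mandatory DPIA outstanding"
    if makes_decisions_about_people:
        return "high: review safeguards and keep the DPIA live"
    return "standard: document, then monitor"

# A recruitment screening tool with no DPIA:
print(triage_priority(True, False, False))
# -> high: likely mandatory DPIA outstanding
```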
Complete DPIAs for high risk processing. The ICO's own DPIA for its Microsoft Copilot pilot is a useful reference for what good practice looks like: phased rollout, detailed data mapping, access controls, and a commitment to treat the DPIA as a living document.
Update your privacy notices. If your organisation uses AI and your privacy notice doesn't say so, that's a transparency breach you can fix today.
Set clear rules for generative AI. Adopt an acceptable use policy that specifies which tools are approved, what data categories may be input, and what the consequences of unapproved use are. The distinction between enterprise tools with DPAs and consumer tools without them is critical.
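Part of such a policy can even be made machine-checkable. A minimal sketch, with hypothetical tool names and data categories:

```python
# Illustrative allowlist: which tools are approved, and what may go into them.
APPROVED_TOOLS = {
    "Microsoft 365 Copilot (enterprise)": {
        "dpa_in_place": True,
        "allowed_inputs": {"internal documents", "customer contact details"},
    },
    "ChatGPT (free tier)": {
        "dpa_in_place": False,
        "allowed_inputs": set(),  # no DPA: no personal data, full stop
    },
}

def input_allowed(tool: str, data_category: str) -> bool:
    policy = APPROVED_TOOLS.get(tool)
    return policy is not None and data_category in policy["allowed_inputs"]

print(input_allowed("ChatGPT (free tier)", "customer contact details"))  # False
```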
Ask your vendors the right questions. Is our data used to train your models? Where is it processed? Can you provide a DPA? How do you handle data subject rights? How do you test for bias?
The law hasn't changed. The scale of non-compliance has.
UK GDPR has applied to AI processing since it came into force. The organisations building AI governance now aren't early adopters of a new obligation. They're catching up with an existing one.
Go back to the question this article opened with. Ask your senior management how many AI tools the business uses. Then ask how many have been through a DPIA. How many appear in your records of processing. How many are covered by a privacy notice. The difference between those numbers is the difference between where your organisation is and where the law already requires it to be. Fixing that compliance drift starts with the register.