A company director gets in touch because their business needs 'an AI governance policy,' and they want help writing one. A few minutes into the conversation, the question changes. It turns out they don't just need a policy. They need to know what their business is actually doing with AI, where the legal obligations now sit, and who inside the business is supposed to be handling governance.
The policy wasn't the right thing to ask for at that stage, because the real question hadn't yet been worked out. Many conversations we have on this subject follow the same pattern. That moment, between knowing something is needed and not knowing exactly what, is where many UK businesses now sit on AI governance.
That moment is also where a claim you'll read a lot in 2026 comes in: that the UK has a significant AI skills gap. The claim is true, but it's being misdiagnosed in much of the commentary, and the market response is following the misdiagnosis. The AI skills gap isn't simply a hiring problem. It's a problem of knowing what capability you actually need, and much of the employment market is trying to sell you something else.
What has actually changed
A reasonable first answer to 'do we need this capability?' is to look at what the law now asks of you that it didn't ask a year ago.
The most significant change for UK AI use came in with the Data (Use and Access) Act 2025. Section 80 of the DUAA replaced the old automated decision making framework in UK GDPR Article 22 with new Articles 22A to 22D. The main provisions took effect on 5 February 2026.
Strip out the article numbers and the picture is this. If your business uses automation alone to make a decision with a legal or similarly significant effect on a person (a credit decision, a hiring decision, a fraud block, an insurance assessment), you now owe four specific things to the person affected: information about the decision, a chance to make representations, meaningful human intervention where they want it, and the right to contest the outcome.
One piece of precision matters for anyone already running systems of this kind. The new recognised legitimate interests lawful basis, the one inserted into UK GDPR as Article 6(1)(ea), cannot underwrite any solely automated significant decision. Article 22B(4) says so directly. Several AI use cases that businesses have been quietly leaning on legitimate interests for now either need restructuring or need to move onto a different lawful basis entirely. That re-engineering won't appear in the board report, but someone inside the business still has to tackle it.
On top of that sits the EU AI Act. Its reach into UK businesses is narrower than the commentary often suggests. Article 2(1)(c) catches UK providers and deployers whose AI output is used in the European Union, with main applicability from 2 August 2026. Not 'any UK business using AI,' and not every incidental touchpoint either. Alongside all that sit the sector regulators: the FCA on consumer duty and model risk, the PRA's SS1/23 for banks, the MHRA on AI as a medical device, the SRA on legal practice, and Ofcom on online safety.
The pattern underneath all of this is straightforward. AI governance is no longer an ethics committee nice-to-have. For most UK businesses using AI at any scale, it's now a statutory operational requirement under at least one regulatory regime. Whether the provision of people capable of delivering it has kept pace is a different question.
The scale of the shortfall, and the kind of skills gap it actually is
The supply side numbers have become impossible to ignore. The DSIT AI Labour Market Survey, published on 27 January 2026, found that 97% of the 119 UK organisations it spoke to had at least one AI skills gap. 57% reported a technical gap: engineers, MLOps specialists, and data scientists. 30% reported a non-technical gap in governance, ethics, and compliance. That second number is the one that matters most for the people making hiring decisions about AI oversight, and it's consistently the smaller headline.
The Public Accounts Committee's Use of AI in Government report, HC 356 from March 2025, gives the point its sharpest edge. Around half of civil service digital and data roles advertised in 2024 went unfilled, and 70% of government departments now report difficulty recruiting and retaining staff with AI skills. If the most powerful single buyer of AI governance capability in the country is losing the hiring fight, smaller private organisations aren't going to find it easier.
From the professional body side, the International Association of Privacy Professionals (IAPP) disclosed 1,000 certified holders of its AI Governance Professional (AIGP) credential globally as of September 2024. The IAPP hasn't disaggregated that number by country. The UK AIGP cohort isn't yet large enough to be reported on separately. The IAPP's own AI Governance Profession Report 2025 found that 17% of organisations have only one person tasked with AI governance, and 23.5% identify finding qualified AI governance professionals as a key delivery challenge.
These numbers describe two overlapping gaps that look similar from the outside. The first is a technical AI talent gap: model engineers, MLOps, prompt engineering. That one is real but is being partly addressed, with the apprenticeship share of UK AI hires rising from 3% in 2020 to 19% in 2025. The second is a governance gap: people who combine regulatory fluency with AI specific literacy and the operational discipline to run an AI governance function inside an organisation under UK conditions. That second gap isn't being addressed by the apprenticeship channel, and it isn't visible in the generic 'AI skills' headlines. It's the one that most directly affects the business leader reading this, and in practice it's the one regulators will see first when anything goes wrong.
Why a competent DPO is not necessarily an AI governance practitioner
The easiest mistake to make in 2026 is to assume AI governance is just a short extension of the work a data protection officer, or DPO, already does. It looks adjacent. The acronyms overlap. A competent UK DPO has the regulatory spine: Articles 13 and 14 transparency, Article 35 Data Protection Impact Assessments (DPIAs), Article 5 accountability, lawful basis analysis. Surely governing AI is the same work with a new label?
It isn't. And this isn't a criticism of DPOs. It's a description of a profession that didn't use to include this work, and of a workload that's already full before anything new is added.
The technical fluency alone is an entire second territory: training data composition, model architecture at a conceptual level, evaluation metrics, drift, bias testing, red teaming. The standards form another layer: ISO/IEC 42001, 23894, 42005, and the NIST AI Risk Management Framework. Beyond those, the generative AI controls that didn't exist two years ago: hallucination management, prompt injection defence, output filtering, content provenance, the lawfulness of training data. AI specific vendor governance follows, the discipline of reading model cards, negotiating AI specific processor addenda, and assessing an API based AI service you can't actually inspect. And then the sector rules sitting on top of all of it. The DPOs who have actually started to cover this territory are usually the ones who've carved out time to do it against the rest of their workload, not because their organisation freed them up to.
The asymmetry runs the other way too. An AI ethics researcher who has spent five years thinking about fairness, transparency, accountability, and harm taxonomies has produced frameworks. Frameworks are useful. What ethics doesn't produce on its own is regulatory grounding: UK GDPR, the DUAA, the sector rules, the EU AI Act. It doesn't produce enforcement literacy, the muscle of reading ICO reprimands, FCA Final Notices, and tribunal judgments. And it doesn't produce the operational discipline to turn a framework into the policies, records, contract clauses, and procedures that hold up under scrutiny.
AI governance as a discipline sits at the intersection of those two territories. Regulatory spine plus technical literacy plus applied controls plus sector awareness. The clearest public description of what the intersection actually contains is the IAPP AIGP Body of Knowledge version 2.1, which took effect on 2 February 2026 and organises the discipline into four domains: Foundations; Laws, Standards and Frameworks; Governing AI Development; and Governing AI Deployment and Use. It's a long list, and it's a realistic list. Most DPOs don't yet have all of it. Most AI ethicists haven't got there either, and most privacy lawyers are some distance off. The people who do have those skills are the ones your business is competing for, against government, financial services, and the Big Four.
Which is what actually sits underneath the hiring problem. It isn't a shortage of privacy lawyers, or ethicists, or compliance people. It's a shortage of people who can straddle the whole territory and understand how to run the work inside an organisation.
What the market is selling, and how to read it
Walk through the options your procurement team is currently looking at. Each can be the right answer in the right circumstance. Each, more often than not, is being bought before the organisation has worked out what it actually needs.
The AI policy pack. Cheap, fast, gives the appearance of governance. The trouble is that every ICO reprimand for governance failures, past and present, has reached organisations that had policies. Reddit's £14.47 million fine in February 2026 wasn't for the absence of a policy. It was for a failure to govern the intersection of user data and AI training arrangements. When the ICO investigates, it doesn't read your policy pack. It reads your records of processing, your DPIAs, your breach log, and the documented decisions made inside the business about the AI it actually uses. That's where the governance question is settled. Policies without the operational layer, the data map, the DPIA discipline, the vendor controls, the review cadence, are a receipt, not a control.
Adding AI to the existing DPO's remit. A reasonable answer if the DPO has actually trained on AI governance (AIGP, ISO 42001, sector specific training) and has the hours available in the week. A DPO already at 90% capacity doesn't become an AI governance practitioner simply by having the topic added to their job description.
Retaining a law firm. Excellent for advice on a specific question. Most firms are advisory rather than operational, and won't themselves run an AI governance function for a mid market business between engagements. The costs also tend to scale unpleasantly once the specific question turns into a long series of them.
Big Four and large consultancy engagements. Strong on methodology and assurance. Priced for large regulated organisations. Pivoting from advisory to independent third party assurance in response to DSIT's Trusted Third-Party AI Assurance Roadmap (September 2025), and the £18.8 billion market the government projects by 2035. For a mid market organisation, usually the right tool when the governance question is already answered and the next question is assurance.
Fractional DPO or DPO-as-a-Service with combined credentials. Works when the provider is genuinely practitioner qualified in both data protection and AI governance, is available operationally rather than advisor only, and is honest about the limits of what they cover. This is the category Penby sits in, alongside The DPO Centre, Securys, Evalian, Data Protection People, Trust Keith, Aphaia, Mishcon DPO, and PrivacySolved, among others.
Internal build. Realistic for organisations of scale and sometimes the best long term answer. Slower than the market wants you to believe. Three to six months of continuing professional development gets a DPO to baseline AI governance literacy. Applied competence is a different order. It's the difference between passing a driving theory test and being the person you want behind the wheel on the M25 in the rain: literacy takes months; real competence is a matter of years of doing the work.
The point of the list isn't to rank the options. It's to notice a pattern. The right answer depends on what AI your organisation is actually using, under which regime, with which data, at what scale. The wrong answer is choosing any of these before that picture is clear.
What to actually look for
When you do start assessing providers or candidates, there are five signals that separate the capable from the merely credentialed.
The first is the ability to describe, in plain language, how your AI use maps against UK GDPR and DUAA Articles 22A to 22D. The practitioner who starts reciting article numbers at you without context is performing competence. The one who asks what AI you have, what it decides, and who is affected by those decisions is identifying the real work.
The second is familiarity with ISO/IEC 42001 and the ICO's mandatory DPIA list: not just name recognition, but a grasp of what each actually asks and how they relate.
The third is willingness to talk about specific enforcement signals and what they indicate. For example, Serco Leisure received an enforcement notice for unlawful biometric processing of over 2,000 employees in February 2024. Reddit, for failing to govern user data feeding AI training. Ofcom's November 2025 confirmation decision against Itai Tech over the Undress.cc age assurance failure, the first significant Online Safety Act action against a generative AI product. The Divisional Court in Ayinde and Al-Haroun [2025] EWHC 1383 (Admin), where Dame Victoria Sharp stated plainly that current generative AI is 'not capable of conducting reliable [legal] research.'
The fourth, credentialing that sits behind the practice rather than instead of it. AIGP, MBCS, BCS Practitioner Certificate in Data Protection, ISO 42001 Lead Implementer, combined with actual case experience.
The fifth, and in some ways the most telling, is the capacity to admit what they don't yet know. AI governance is an unsettled discipline. We're living through something closer to an industrial revolution than an ordinary regulatory update, and nobody has a complete road map. Anyone who claims total mastery of this field is selling, not advising. I'd include myself in that warning: I'm currently studying for the AIGP alongside my BCS Practitioner Certificate and IAPP membership, and I'd be wary of anyone telling you this field can be mastered in a single sitting.
The gap is real, and it's worth being picky about
The AI skills shortage isn't going to resolve quickly. The realistic trajectory is a staggered easing, from late 2027 for large regulated organisations and 2028 to 2030 for the UK mid market. Organisations buying AI governance capability in 2026 still have the advantage, provided they're disciplined about what they're actually acquiring.
The DUAA hasn't made this harder than it needed to be. It's made the shape of what a business owes a little clearer than it was a year ago. The company director who made that initial call didn't need a policy. They needed to fully understand what their business was actually doing with AI, where the legal obligations sat, and who inside the company should be watching. Most of the conversations we're having in 2026 end up in the same place. Once you have the complete picture, the hiring or service requirement question tends to answer itself.