
Building an AI governance framework: where to start

Andy Williamson 9 April 2026

If you've been told your business needs an AI governance framework, you may well be wondering where to start. The term itself sounds like a major undertaking: a new set of policies, a new compliance structure, possibly a new role to manage it all. Every guide you've read seems to assume you already have a team in place and a clear sense of what you're going to be governing.

There's strong evidence that you're not alone in wondering where to start. The Trustmarque AI Governance Index 2025, which surveyed 507 UK IT decision-makers, found that 93% of UK organisations are now using AI in some capacity. Yet only 7% have a fully embedded AI governance framework. More than half have minimal governance or none at all. And 19% have no clear governance owner: nobody is responsible for overseeing how AI is used in their organisation.

Those figures look shocking, but they can be deceptive. If your organisation processes personal data, and nearly every organisation does, you already have legal obligations under UK GDPR that cover most of what AI governance requires. You almost certainly don't need to build a complete new framework from scratch. For most organisations, the first mystery to unravel is what AI tools your people are already using; until you have that knowledge, your existing data protection obligations aren't being met.

This article starts where most guides don't: not with a framework document, but with the practical steps that actually get you on track.

You already have most of the legal framework

The perception that AI governance requires a whole new set of rules is understandable but wrong. UK GDPR contains few AI-specific provisions, but its existing requirements apply in full to any AI system processing personal data. That includes lawful basis for processing, transparency obligations, data protection impact assessments (DPIAs) for high-risk processing, data protection by design, processor due diligence for third-party tools, and accountability.

In practice, the legal scaffold for governing AI already exists. The task is not learning a new regulatory language. It's recognising that the obligations your organisation already meets apply equally to the AI tools you're now adopting. Most of the organisations we talk to are surprised by how much ground their existing UK GDPR compliance already covers; the gap is normally in application, not in law.

The most significant recent development is the Data Use and Access Act 2025, which replaced the automated decision making provisions of UK GDPR with new Articles 22A through 22D, commencing on 5 February 2026. The old position under Article 22 was that solely automated decisions with legal or similarly significant effects on people were prohibited by default, with narrow exceptions. The new regime reverses that: automated decision making using non-special-category data is now permitted, provided specific safeguards are in place.

Those safeguards are concrete. Under Article 22C, if your organisation makes a significant decision based solely on automated processing, you must inform the individual, allow them to make representations, provide human intervention on request, and enable them to contest the decision. These are statutory requirements that apply right now to any organisation using AI systems to make or support decisions about people.

One critical term remains undefined: "meaningful human involvement." The DUAA grants the Secretary of State power to define this through secondary legislation, but no such regulations had been made as of April 2026. The ICO published draft guidance on automated decision making for consultation in March 2026. In the meantime, organisations making automated decisions about people need to form their own defensible view of what meaningful human involvement looks like in their context, document that view, and be prepared to adjust when guidance finally arrives.

Waiting for perfect clarity before acting is tempting, but it isn't a realistic governance strategy. The ICO expects organisations to be making reasonable progress, with documented decisions happening now, not sitting on their hands until every detail is finalised.

Start with what you don't know you're using

If the legal framework already exists, why is the AI governance gap so large?

Because AI adoption is moving so fast, with new systems arriving on an almost weekly basis, most organisations don't know what AI they're governing. The evidence is consistent: between 38% and 71% of UK employees use AI tools their organisations haven't approved. The range reflects different survey methodologies, but the direction is the same everywhere you look. CybSafe found 38% of employees share sensitive work information with AI platforms without their employer's knowledge. Microsoft UK put the figure for unapproved AI tool use at 71%. And 57% of UK organisations cannot track sensitive data exchanges involving AI.

This is shadow AI: tools adopted by individuals and teams without formal organisational knowledge or approval. It's the single biggest reason governance frameworks designed on paper fail to address real risk. An AI acceptable use policy that covers your formally adopted tools is governing what you know about and missing everything else.

The first step in building an AI governance framework is not writing a policy. It's finding out what AI is actually in use. Don't just ask: run anonymous staff surveys. The anonymity matters, because employees won't disclose unofficial AI tool use if they expect to be disciplined for it. Audit your technology stack through SaaS management platforms that can identify services with AI features. And monitor network traffic for AI tool connections.
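If you can export proxy or DNS logs, even a rough scan for known AI-service domains will surface tools that surveys miss. The sketch below is a minimal illustration only, assuming a CSV export with a "domain" column; the file name, column name and domain list are placeholder assumptions to adapt to your own environment, not a definitive tool.

# Minimal sketch: flag AI-related services in an exported proxy or DNS log.
import csv
from collections import Counter

# Illustrative, not exhaustive: extend with the services relevant to you.
AI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "claude.ai", "gemini.google.com",
    "copilot.microsoft.com", "perplexity.ai",
}

def find_ai_traffic(log_path: str) -> Counter:
    # Count hits per AI-related domain found in the log export.
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row.get("domain", "").strip().lower()
            if any(domain == d or domain.endswith("." + d) for d in AI_DOMAINS):
                hits[domain] += 1
    return hits

if __name__ == "__main__":
    for domain, count in find_ai_traffic("proxy_export.csv").most_common():
        print(f"{domain}: {count} connections")

The output is only a starting point: it tells you which conversations to have, not which tools to ban.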

The people using these tools aren't doing anything malicious. They're trying to work more efficiently with whatever's available. But every AI tool that processes personal data, customer information, or commercially sensitive material creates obligations your organisation needs to meet. You can't meet obligations you don't know about.

Any consultancy that starts an AI governance engagement by designing a framework rather than auditing what AI is actually in use is building governance around assumptions. Discovery comes first. Everything else follows from it.

Six practical steps from discovery to monitoring

There is no single authoritative guide from the ICO or any UK body that tells organisations exactly where to start with AI governance. But practitioner consensus converges on a clear sequence, one that follows the logic and reality of how AI governance actually works. You cannot assess what you haven't found, and you cannot prioritise what you haven't assessed.

Step 1: Discover. Conduct an AI inventory and shadow AI audit. Map every AI tool in use, formal and informal, purchased and free, embedded in enterprise software and downloaded by individuals. This is your foundation. Everything else builds on it.

Step 2: Assess. Not every AI tool carries the same risk. A grammar checker processes far less risky data than a recruitment screening tool. An AI chatbot handling customer enquiries creates different obligations from an internal document summariser. Focus your governance effort on the highest-impact deployments first: any AI processing that impacts decisions about people, any processing of special category data such as health information or biometric data, and anything customer-facing comes first. (A simple triage sketch follows this sequence of steps.)

Step 3: Policy. Draft an AI acceptable use policy covering which tools are approved, what uses are prohibited, how data should be handled when using AI tools, and what to do if something goes wrong. Connect it to your existing data protection and IT security policies rather than creating a standalone document. Policies that sit apart from the rest of your governance don't get followed; they get ignored.

Step 4: DPIA. For any AI processing that's high-risk to individuals, a data protection impact assessment is legally required under Article 35 of UK GDPR. The ICO's enforcement action against Snap over its "My AI" chatbot is worth knowing about. Snap produced five successive DPIAs before the ICO accepted the fifth, and the ICO described the case as a "warning shot for industry." The lesson: DPIAs must be completed before an AI system launches, must be detailed rather than cursory, and must specifically address risks to vulnerable groups. The ICO's AI and Data Protection Risk Toolkit provides a starting point for this assessment.

Step 5: Accountability. Assign clear ownership. Someone needs to be made responsible, not in an abstract "governance is everyone's responsibility" sense, but named, accountable, and given the authority to make decisions. For smaller organisations, this might be the data protection officer or a senior manager picking up AI governance alongside existing responsibilities. For mid-size organisations, a designated governance lead with cross-functional visibility. The Trustmarque data showing 19% of organisations have no clear governance owner tells you how common this lack of structure is.

Step 6: Monitor. AI systems are not static. Models drift, regulations evolve, new tools get adopted. Set a review cadence, quarterly at minimum, to reassess your AI inventory, check whether governance measures are working in practice, and track regulatory developments. The ICO's forthcoming statutory Code of Practice on AI and automated decision making will be the most significant near-term development to watch for.

One thing worth saying about this sequence: the numbering makes it look more rigid than it is. In practice, you'll find yourself looping back and forth: a new tool surfaces during monitoring that sends you back to assessment, or a DPIA reveals a policy gap you hadn't previously considered. That isn't a failure of the process; it's a sign your governance is keeping pace with the fast-changing reality of AI adoption.
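To make the assessment step concrete, here is a minimal sketch of how the Step 2 triage criteria could be applied to an AI inventory, written in Python purely for illustration. The record fields and priority bands are assumptions to adapt to your own risk appetite, not a prescribed methodology.

# Hypothetical inventory record and triage pass mirroring the Step 2 criteria.
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    name: str
    processes_personal_data: bool
    informs_decisions_about_people: bool
    uses_special_category_data: bool
    customer_facing: bool

def triage(tool: AIToolRecord) -> str:
    # Decisions about people and special category data come first,
    # then customer-facing or personal-data processing.
    if tool.uses_special_category_data or tool.informs_decisions_about_people:
        return "high: DPIA before launch, named owner sign-off"
    if tool.customer_facing or tool.processes_personal_data:
        return "medium: review against acceptable use policy"
    return "low: record in inventory and monitor"

inventory = [
    AIToolRecord("CV screening assistant", True, True, False, False),
    AIToolRecord("Grammar checker", True, False, False, False),
]
for tool in inventory:
    print(f"{tool.name}: {triage(tool)}")

However you record it, the point is the same: a consistent, documented way of deciding which AI deployments get governance attention first.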

What proportionate governance looks like

The six-step sequence applies to any organisation, but proportionate implementation looks different depending on scale. A company of 30 people does not need what a company of 3,000 needs.

For smaller organisations, the minimum viable starting point is an AI inventory, a basic acceptable use policy, DPIAs for any customer-affecting AI, and a named person responsible for AI governance alongside their existing role. Enterprise-grade AI tools, such as Microsoft Copilot or Google Gemini for Business, are preferable to free consumer versions because they come with contractual data processing terms your organisation can actually govern.

Mid-size organisations typically need a designated governance lead, a cross-functional governance group, and a more structured risk assessment process. ISO 42001, the world's first certifiable AI management system standard, published in 2023, provides a useful reference framework. For most mid-size organisations, the proportionate approach is to use it as a gap analysis tool rather than pursuing formal certification, which carries significant cost and faces an immature auditor market.

Larger organisations, 250 employees and above, will typically need a dedicated AI governance role, board-level reporting on AI risk, and a more comprehensive bias monitoring programme. ISO 42001 certification becomes worth considering where commercially justified, particularly for organisations bidding for public sector contracts or operating in regulated sectors.

Across all of these: a well-enforced basic policy is more effective than a comprehensive framework nobody follows. Start with what matters most, enforce it properly, and build from there.

The hardest part isn't the framework

AI governance is not a completely new governance burden built from scratch. For UK organisations, it's an extension of obligations that already exist under UK GDPR, applied to AI tools your team are probably already using.

The real challenge is visibility: knowing what AI tools people are using, what data those tools are processing, and what decisions are being informed or made by automated systems. That's the work that makes governance functional rather than theoretical.

The regulatory direction is clear, even if the detail isn't yet settled. The ICO's forthcoming statutory Code of Practice will add specificity, but the core principles of transparency, accountability and proportionate risk management are not going to change. And the first step is simpler than most people expect: find out what AI you're using, assess where the risks lie, and start closing compliance gaps. The organisations that act now, even imperfectly, will be much better positioned than those left waiting for perfect guidance to arrive.

