The governance gap
Somewhere in your firm right now, someone is pasting client data into ChatGPT.
That is not speculation. Microsoft's UK research from October 2025 found that 71% of employees use unapproved AI tools at work. More than half do so every week. A separate study from UpGuard found that over 80% of workers use AI tools their employer has not approved, and IBM's 2025 Cost of a Data Breach Report attributed 20% of all breaches to shadow AI, with those incidents costing an average of $4.63 million – significantly more than breaches from other causes.
The adoption numbers are clear. The Bank of England and FCA's joint survey, published in November 2024, found that 75% of UK financial services firms are already using AI. PwC's 2025 survey of Top 100 UK law firms reported that nearly 90% have implemented or trialled AI tools. The ICAEW's global survey found that 83% of chartered accountants aged 18–24 use AI at least once a week.
Yet when Thomson Reuters surveyed legal organisations in late 2024, only 8% had generative AI covered under an existing technology policy. Three quarters had no policy at all.
The AI is already inside the building. The governance, for most firms, is not.
Why this matters for regulated firms
For a technology company, ungoverned AI use is a risk. For a regulated professional services firm, it is a category of risk that touches almost every obligation you already have.
Client confidentiality
When an employee enters client information into a consumer AI tool, that data leaves the firm's control. OpenAI's consumer ChatGPT interface uses input data to train its models unless the user opts out through the data controls settings. Enterprise API arrangements are different, but most shadow AI use involves the consumer product, not the enterprise one.
This is not a theoretical concern. In March 2023, Samsung employees entered proprietary source code and internal meeting content into ChatGPT within 20 days of the company granting access. Once submitted, the data sat on OpenAI's servers and was deemed irrecoverable. Samsung temporarily banned all generative AI tools on company devices.
For law firms, the confidentiality obligation is absolute. SRA Code of Conduct Rule 6.3 requires solicitors to keep client affairs confidential. Norton Rose Fulbright warned in 2025 that information entered into "open models" should be seen as "published to all the world." DLA Piper has explored whether existing privilege frameworks protect AI-assisted work product, and Stirling & Rose cautioned that importing legal advice into a large language model can "potentially inadvertently and unintentionally waive legal professional privilege." No UK test case has been decided on this point, which makes it a risk that cannot yet be sized – only managed.
For FCA-regulated firms, the Senior Managers and Certification Regime (SM&CR) makes individuals personally accountable for the activities within their area of responsibility. If a team member uses AI to draft client communications and the output is misleading, the accountability does not rest with the AI tool.
Data protection
Entering personal data into an AI system engages the UK GDPR. Questions that most firms have not answered: Who is the data controller for this processing? What lawful basis applies? Have data subjects been informed that their data may be processed by AI? Given that most AI providers are US-based, have international transfer requirements been met?
The ICO's five-part generative AI consultation, concluded in December 2024, confirmed that organisations deploying AI must assess whether the model was developed lawfully and cannot assume that outputs are anonymised simply because the model processes data at scale. The European Data Protection Board's ChatGPT Task Force (May 2024) added that controllers cannot transfer responsibility to data subjects via terms of service that prohibit personal data input – the controller's obligation to protect data stands regardless.
Professional indemnity insurance
The insurance market is adjusting to AI risk, and not in your favour. Kennedys Law identified a "silent AI" coverage gap in March 2025: AI-related claims that are neither explicitly included nor excluded by standard PI policies, leaving coverage ambiguous. Some insurers have responded by adding exclusions. W. R. Berkley introduced an "Absolute" AI exclusion for directors' and officers', errors and omissions, and fiduciary liability policies. Philadelphia Indemnity and Hamilton Select have excluded claims arising from AI use.
Others are building new products. Hiscox added specific AI risk protection to its technology PI product in June 2025. Armilla AI launched an AI liability insurance product underwritten by Lloyd's syndicates in April 2025, explicitly covering hallucinations, degrading model performance, and algorithmic failures.
The direction is clear: insurers want to know whether your firm governs its AI use. A firm with no governance framework faces a harder conversation with its insurer – and potentially a gap in coverage precisely when it matters.
What regulators expect
No regulator has yet prescribed a definitive AI governance framework. But every relevant regulator has made clear that they expect you to have one – and a joint FCA/ICO statutory code is now being written.
The FCA
The FCA has chosen not to introduce AI-specific rules. Nikhil Rathi, FCA CEO, confirmed this position at the FT Global Banking Summit in December 2025: "We are not going to come after you for everything that goes wrong. What we will be concerned about is egregious failures that are not dealt with."
That sentence carries more weight than it might first appear. The FCA's position is that existing obligations – the Consumer Duty, SM&CR, operational resilience requirements – already cover AI risk. The regulator will not tell you how to govern AI. But when something goes wrong, it will ask whether you governed it at all.
Sheldon Mills, FCA Executive Director, announced in January 2026 a board-commissioned review into the long-term impact of AI on retail financial services, due to report in summer 2026. His framing was pointed: "Fraud models, trading systems, credit decisioning – nothing new… But the last two years have been different. Generative AI. Multimodal systems. Emerging AI agents."
For financial advisory firms, the Consumer Duty overlay is particularly relevant. If AI is used in any part of the client journey – drafting communications, analysing portfolios, generating recommendations – the firm must demonstrate that the use delivers fair outcomes. That requires governance, documentation, and oversight.
The FCA is also moving beyond general expectations toward specific guidance. In June 2025, following a roundtable where firms requested clearer direction, the FCA and ICO announced they are jointly developing a statutory code of practice for AI and automated decision-making in financial services. The code will cover transparency, explainability, bias, discrimination, and consumer redress – essentially defining what good AI governance looks like for regulated firms.
The ICO is developing the code over the 2025/26 period and plans to consult on updated automated decision-making and profiling guidance. The initiative sits within the Digital Regulation Cooperation Forum (DRCF) workplan, ensuring the two regulators present a joined-up position. For firms that have been waiting for regulators to show their hand, this is it: the framework is being written now, and firms that have already built governance will be ahead of it rather than scrambling to catch up.
The SRA
The SRA published its Risk Outlook report on AI in the legal market in November 2023, identifying hallucinations, bias, confidentiality, and accountability as key risks. The SRA Chief Executive, Paul Philip, framed the obligation in terms any managing partner would understand: "Just as a solicitor should always appropriately supervise a more junior employee, they should be overseeing the use of AI."
The SRA has been criticised for what Legal Futures termed "regulatory silence" – a lack of substantive guidance on how the duty of competence applies to AI tools. No formal Warning Notice on AI has been issued, and no disciplinary action specifically for AI misuse has been reported as of February 2026.
But the courts have not been so quiet. In June 2025, Ayinde v London Borough of Haringey became the first UK High Court case directly addressing AI-generated fabrications in legal documents. Five fabricated cases were cited in grounds for judicial review. Dame Victoria Sharp, President of the King's Bench Division, warned: "Lawyers who do not comply with their professional obligations in this respect risk severe sanction." Wasted costs orders were made, and the matter was referred to the SRA and Bar Standards Board.
That case was not isolated. In the same month, Al-Haroun v Qatar National Bank saw a solicitor submit AI-fabricated case citations, accepted by the court as "misplaced trust" rather than intent to deceive – but the solicitor and firm were still referred to the SRA. By October 2025, Ndaryiyumvire v Birmingham City University became the first known UK case where AI errors in legal software (not a consumer tool, but the AI research feature in LEAP practice management software) led to sanctions and the claim being struck out.
Counsel Magazine reported over 50 documented incidents of AI hallucinations in courts in July 2025 alone.
The message from the courts is ahead of the SRA's formal guidance: govern your AI use or face professional consequences.
The ICO
The ICO designated AI as a key focus area for 2024–25 and has committed to publishing a statutory code of practice for AI. Its AI and Biometrics Strategy (June 2025) prioritises transparency, bias, and individual rights. For any firm processing personal data through AI tools – which means virtually every professional services firm using generative AI – the ICO's existing data protection toolkit applies now, not when the code is published.
The ICAEW
The ICAEW's Ethics and AI Roundtable report (2024) proposed a pragmatic test for AI use: "Can we?" (is it lawful and competent?) and "Should we?" (does it align with professional values?). The ICAEW Code of Ethics was revised effective 1 July 2025 to include technology-centred provisions, and seven professional bodies published joint Professional Conduct in Relation to Taxation (PCRT) guidance on AI in tax work in January 2026.
What is coming: the EU AI Act and the UK direction
The regulatory window for voluntary governance is narrowing.
The EU AI Act
The EU AI Act entered into force on 1 August 2024. It is not a future concern; parts of it are already law.
Since 2 February 2025, the Act's prohibited practices and AI literacy obligation have been applicable. The prohibited practices include emotion recognition in the workplace (except for medical or safety purposes) and certain forms of manipulation in client engagement – both relevant to professional services. The AI literacy obligation (Article 4) requires organisations deploying AI to ensure their staff have a sufficient level of AI literacy, proportionate to their role and the risk level of the AI systems used. It applies to all AI systems regardless of risk classification.
Since 2 August 2025, obligations for general-purpose AI (GPAI) model providers – the companies behind ChatGPT, Copilot, and Gemini – have been in force. UK professional services firms are generally deployers, not providers, and do not bear these obligations directly. But the distinction matters: if a firm significantly modifies or fine-tunes a model and makes the modified version available in the EU market, it could assume provider obligations.
The high-risk AI system provisions are due to become applicable in August 2026, though a European Commission proposal from November 2025 (the Digital Omnibus) may delay this to December 2027. The categories most relevant to professional services include AI used in employment decisions (recruitment, performance monitoring, termination) and AI used in credit and insurance assessments.
The extraterritorial reach is the critical point for UK firms. The Act applies to organisations outside the EU if the output of their AI system is "used within the EU." A UK law firm using AI to draft advice for EU clients, or a UK financial advisory firm deploying AI analysis for EU-based portfolios, falls within scope. The obligations scale with risk, but the jurisdictional trigger is broad.
The UK direction
The UK has no AI-specific legislation as of February 2026. The Artificial Intelligence (Regulation) Bill reintroduced in the House of Lords in March 2025 is a Private Members' Bill without government backing. The government's own AI Bill has been delayed; Liz Kendall, Secretary of State for DSIT, signalled in December 2025 a preference for "specific areas where we may need to act rather than a big all-encompassing bill."
What the UK does have is an existing regulatory framework that applies to AI. The Data (Use and Access) Act 2025 (Royal Assent 19 June 2025) reforms automated decision-making rules under UK GDPR. The FCA, SRA, and ICO all exercise their existing powers over AI within their jurisdictions. The Digital Regulation Cooperation Forum (DRCF) provides cross-regulator coordination, and its Thematic Innovation Hub launched in October 2025 with agentic AI as its first focus area.
The practical implication: UK firms cannot wait for a single, comprehensive AI law. Governance needs to be built against the existing obligations, with an eye on the EU requirements that apply to firms with European clients or operations.
A practical starting framework
A minimum viable AI governance framework for a professional services firm does not require ISO certification or a dedicated AI team. It requires clear policy, consistent process, and named accountability.
Acceptable use: the traffic-light system
Classify AI use into three categories:
Green (permitted): Internal research, summarisation of non-confidential material, drafting support for internal documents, general knowledge queries. Permitted with approved tools only.
Amber (permitted with controls): Drafting client communications (subject to mandatory human review), analysis involving non-personal or adequately anonymised data, document review where the output will be verified. Requires approved tools, logging of use, and a named reviewer.
Red (prohibited): Entering client-identifiable personal data into any AI system without a data processing agreement in place. Using AI output in court filings, regulatory submissions, or formal client advice without independent verification. Using consumer-grade AI tools (personal ChatGPT accounts, free-tier products) for any client work. Making decisions on the basis of AI output alone where those decisions affect client outcomes.
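To make the policy checkable rather than purely aspirational, some firms encode it as data that internal tooling can query. The sketch below shows one minimal way to represent the three categories, with a lookup that returns the controls a given use requires; the category entries, use descriptions, and function name are illustrative assumptions, not a prescribed implementation.

```python
# Illustrative sketch: the traffic-light policy encoded as data.
# Use descriptions and control names are hypothetical examples.

POLICY = {
    "green": {
        "uses": ["internal research", "non-confidential summarisation",
                 "internal drafting", "general knowledge queries"],
        "controls": ["approved tool"],
    },
    "amber": {
        "uses": ["client communication drafting", "anonymised data analysis",
                 "document review with verification"],
        "controls": ["approved tool", "usage logged", "named reviewer"],
    },
    "red": {
        "uses": ["client-identifiable personal data without a DPA",
                 "unverified output in court or regulatory filings",
                 "consumer-grade tools for client work",
                 "decisions on AI output alone"],
        "controls": [],  # prohibited outright, no controls make it permissible
    },
}

def required_controls(use: str) -> list[str] | None:
    """Return the controls for a use, or None if the use is prohibited."""
    for category, rules in POLICY.items():
        if use in rules["uses"]:
            return None if category == "red" else rules["controls"]
    # Unclassified uses default to prohibited until someone reviews them.
    return None
```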
Data classification
Map your existing data classification to AI use. If your firm classifies data as public, internal, confidential, and highly restricted, define which categories may be used with which AI tools. Client names, case details, financial information, and personal data should never enter a tool unless the tool's data processing terms have been reviewed and a processing agreement is in place.
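As a sketch of how that mapping might be enforced, the hypothetical gate below refuses any tool that is unapproved, lacks a signed processing agreement, or is not cleared for the data classification in question. The tool names, tiers, and clearance ceilings are assumptions for illustration.

```python
# Illustrative sketch: gating AI use on the firm's data classification.
# Tool entries, DPA flags, and clearance ceilings are hypothetical.

APPROVED_TOOLS = {
    # tool -> (signed data processing agreement?, highest data class cleared)
    "enterprise-copilot": (True, "confidential"),
    "public-chatbot": (False, "public"),
}

# The firm's classification tiers, least to most sensitive.
DATA_CLASSES = ["public", "internal", "confidential", "highly restricted"]

def may_use(tool: str, data_class: str) -> bool:
    """True only if the tool is approved and cleared for this data class."""
    if tool not in APPROVED_TOOLS:
        return False  # unapproved tools are prohibited for all firm data
    has_dpa, ceiling = APPROVED_TOOLS[tool]
    if data_class != "public" and not has_dpa:
        return False  # no processing agreement, no non-public data
    return DATA_CLASSES.index(data_class) <= DATA_CLASSES.index(ceiling)

assert may_use("enterprise-copilot", "internal")
assert not may_use("enterprise-copilot", "highly restricted")
assert not may_use("public-chatbot", "confidential")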
Human review
Every piece of AI-generated content that reaches a client, a court, a regulator, or the public must be reviewed by a qualified professional before it leaves the firm. This is not about distrusting AI. It is about maintaining the same standard of review you would apply to work produced by a junior colleague. As Paul Philip put it: the supervision obligation is the same.
Record-keeping
Log which AI tools are used, for what purpose, and by whom. This does not need to be burdensome; a simple register maintained per matter or per project is sufficient. When a regulator asks how AI was used in a specific piece of work – and that question is coming – you need an answer.
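One minimal shape for such a register, assuming a per-matter CSV file and hypothetical field names, is sketched below; the point is the fields captured, not the storage format.

```python
# Illustrative sketch of a per-matter AI use register: one row per use,
# appended to a CSV file. Field names are hypothetical examples.

import csv
from dataclasses import asdict, dataclass, fields
from datetime import date

@dataclass
class AIUseRecord:
    matter: str    # matter or project reference
    tool: str      # approved tool used
    purpose: str   # what the AI was used for
    user: str      # who used it
    reviewer: str  # who reviewed the output
    used_on: str   # ISO date of use

def log_use(record: AIUseRecord, path: str = "ai_register.csv") -> None:
    """Append a record, writing a header row if the file is new."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=[fld.name for fld in fields(AIUseRecord)]
        )
        if f.tell() == 0:
            writer.writeheader()
        writer.writerow(asdict(record))

log_use(AIUseRecord("M-2026-014", "enterprise-copilot",
                    "summarise disclosure bundle", "a.solicitor",
                    "b.partner", date.today().isoformat()))
```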
Vendor due diligence
Before adopting an AI tool, ask the vendor: Where is the data processed? Is input data used for model training? What security certifications does the tool hold (SOC 2, ISO 27001)? Will the vendor sign a data processing agreement that meets UK GDPR requirements? If a vendor cannot answer these questions clearly, the tool is not suitable for regulated professional services work.
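A simple way to make that assessment auditable is to record the vendor's answers in a structured form alongside the resulting decision. The sketch below is illustrative: the fields mirror the questions above, and the pass/fail logic is an assumption a firm would tune to its own risk appetite.

```python
# Illustrative sketch: recording vendor due-diligence answers so the
# suitability decision is documented. Fields mirror the questions above.

from dataclasses import dataclass

@dataclass
class VendorAssessment:
    vendor: str
    processing_location: str         # e.g. "UK", "EU", "US"
    trains_on_input: bool            # is input data used for model training?
    certifications: tuple[str, ...]  # e.g. ("SOC 2", "ISO 27001")
    will_sign_ukgdpr_dpa: bool       # UK GDPR-compliant DPA offered?

    def suitable(self) -> bool:
        """A vendor that trains on input or refuses a DPA fails outright."""
        return (not self.trains_on_input
                and self.will_sign_ukgdpr_dpa
                and len(self.certifications) > 0)
```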
Training
The EU AI Act's Article 4 literacy obligation applies to firms with EU touchpoints, and good practice demands it regardless. Training does not need to be a week-long course. It needs to cover: what AI can and cannot reliably do, the firm's acceptable use policy, confidentiality obligations when using AI, and how to recognise and handle hallucinated outputs. The ICAEW's "Can we? Should we?" framework is a practical starting point for structuring the conversation.
Where to start: three immediate actions
If your firm has no AI governance framework today, these three steps will close the most significant gaps:
First, audit what is already happening. Survey your teams: which AI tools are in use, how often, and for what purposes? The Microsoft data tells us 71% of employees are using unapproved tools. Assume your firm is not the exception. You cannot govern what you have not mapped.
Second, publish an acceptable use policy. Use the traffic-light model above. It does not need to be perfect; it needs to exist. A clear, written policy that distinguishes between permitted, controlled, and prohibited AI use gives your people a framework and gives your firm a defensible position if something goes wrong.
Third, mandate human review of all AI-assisted client work. No AI-generated or AI-assisted output should reach a client, a court, or a regulator without review by someone qualified to assess its accuracy and appropriateness. This is the single control that addresses the widest range of AI risk.
AI governance is not a compliance project to be deferred until regulators prescribe the format. It is risk management for a technology that is already embedded in how your people work. The firms that build governance now will have a defensible position when the regulator asks – and a competitive advantage when clients start asking how their data is being handled.
The firms that wait will be explaining why they did not.