Making progress with AI governance (Part 1): creating an AI Policy

21/01/26 – As AI tools and systems scale from evaluations/POCs to live deployments, businesses will want to start thinking about AI governance – putting in place policies, practices and processes for directing, monitoring and managing their AI activities. A good starting point is a strategic, dynamic, cross-functional AI Policy, which can become a cornerstone of the organisation’s AI governance framework.

Although no two AI Policies will be the same, some elements of developing and rolling out an AI Policy will be common to most SMEs:

  1. Confirm who within the organisation is responsible for AI governance, including the implementation and periodic review of the AI Policy.  Consider establishing an AI Governance Group with representatives from key departments, e.g. Finance, HR, IT and Legal/Compliance, but with accountability assigned to individuals and not committees.
  2. Document the organisation’s processes for the evaluation and approval of third-party AI tools, as well as AI features being added to the organisation’s existing CRM or HR tools, to assess:
    • Regulatory compliance
    • Transparency and explainability
    • Bias and discrimination
    • IP/copyright ownership
    • Data privacy
    • Security risks.
  3. For agentic AI, consider additional or increased risks arising from the deployment and ongoing orchestration of multiple AI agents operating with significant autonomy, e.g. unauthorised access to systems by agents, agents sharing inaccurate or irrelevant data, and data leaks resulting from agents exchanging data without adequate oversight.  Depending on the use cases and workflows for the proposed agentic AI deployment, consider:
    • Carrying out capability testing, adversarial evaluations and red‑team exercises as part of the assessment.
    • Implementing additional guardrails, e.g. human-in-the-loop oversight, deployment-specific training of users, automated stop mechanisms, incident response plans and legal disclaimers.
  4. Core techniques of privacy compliance (such as data mapping, data protection impact assessments (DPIAs) and transparency disclosures) are directly relevant when evaluating the data privacy implications of AI tools. Where available, draw on the organisation’s internal know-how and experience of privacy compliance and DPIAs when carrying out AI impact assessments as part of the evaluation/approval processes.
  5. According to research by Microsoft in October 2025, 51% of employees use ChatGPT or other consumer-grade Gen AI tools for work purposes at least once a week. Given the risk of confidential or proprietary data being included in employees’ input data and being used for further training of the AI model, consider providing relevant employees with a subscription-based Gen AI tool and prohibiting the use of consumer-grade tools.
  6. Create and maintain a register of AI tools approved by the organisation, with details of any limitations or caveats on their use.
  7. Identify functions or specific business processes for which the use of AI is actively encouraged (or even mandated), e.g. first drafts of client reports, pre-screening of candidate CVs, notetaking for non-sensitive meetings.
  8. By the same token, identify activities for which AI must not be used (or must only be used with documented, human-in-the-loop oversight/verification), e.g. finalising client reports, making employee performance monitoring decisions, interview shortlisting, notetaking for commercially sensitive or legally privileged meetings.
  9. Confirm the organisation’s approach to transparency in its use of AI, including regulatory compliance and use of customer-provided data:
    • In addition to legally required notices where personal data is used, what types of AI usage will be disclosed to customers, employees and other stakeholders?
    • How will transparency disclosures be communicated?
    • Will AI-generated content be labelled or otherwise identified, and if so how?
    • Will the organisation publish an AI Usage Statement?
  10. Consider whether the organisation supports specific ethical standards (e.g. on algorithmic bias, the risk of job displacement, sustainability), and how the organisation’s approval, use and monitoring of AI tools align with those standards. This is likely to be particularly relevant for tools used in HR and recruitment, e.g. CV screening, automated shortlisting, psychometric testing and productivity tracking software.
  11. Encourage employees and other stakeholders to question or challenge the deployment, use and monitoring of AI tools within the organisation, as well as the outputs generated by the tools. Consider introducing a process for collecting employee/stakeholder feedback on the performance of AI tools.
  12. Check whether the organisation’s insurances (particularly professional indemnity (PI) and cyber) have any AI-specific requirements or exclusions, and address these in the AI Policy.
  13. Confirm training arrangements for employees to enable them to use AI confidently and safely, including regular refreshers/updates. If possible, tailor the training to job functions, e.g. employees with HR responsibilities will want training on the risks of bias and unconscious discrimination, whereas the IT team may benefit more from a bootcamp on library creation. For businesses which need to comply with the EU AI Act, consider Article 4, which requires deployers of AI systems to ensure their staff have a “sufficient level of AI literacy”.
  14. Finally, make sure that the AI Policy is:
    • Practical, user-friendly and written in plain English
    • Effectively communicated to employees and other relevant stakeholders
    • Reviewed regularly, e.g. every six or even three months.


Part 2 of Making progress with AI governance will look at procuring AI systems.
