AI risks for charities in the UK
Trustees and managers of charities in the UK face a number of legal and ethical challenges around the use of artificial intelligence (AI). As AI becomes increasingly integrated into charitable operations — such as fundraising and governance — those running the organisations will need to navigate a complex legal landscape of data protection, bias, transparency and accountability, reputation, and cybersecurity.
There is no doubt that the use of AI can greatly benefit charities, from fundraising through to data analysis, but it is important that Trustees never lose sight of the need to uphold their fundamental legal duties:
- Duty of care – to act with reasonable skill and diligence
- Duty of compliance – to follow the law and the charity’s governing document
- Duty to act in the charity’s best interests – which includes managing risk and ensuring public trust
An outline of some of the risks presented by AI
AI and Data Protection/GDPR
AI systems often rely on personal data. Improper handling of personal data can breach the UK General Data Protection Regulation (UK GDPR) and the Data Protection Act 2018.
AI and Discrimination/Bias
AI tools may be biased, so their use in areas such as recruitment or service eligibility may result in unintentional discrimination and could breach the Equality Act 2010.
AI and Accountability/Transparency
Trustees remain legally accountable for decisions, even if they rely on AI. Delegating decisions to AI without oversight may constitute a breach of their duties.
AI and Reputational/Ethical Risks
Misuse of AI, or errors arising from its use, can damage trust with donors, beneficiaries, and regulators.
AI Misuse and Cybersecurity
AI systems may introduce new vulnerabilities or be used for malicious purposes, such as deepfakes and fraudulent donor targeting.
Measures that Trustees should consider taking to avoid legal liability
Governance and Oversight
- Conducting risk assessments before adopting AI tools.
- Establishing policies and procedures for AI use, including oversight mechanisms. See our separate article for suggestions about what a charity’s AI Usage Policy may cover.
- Regularly reviewing AI systems and their outcomes for fairness, transparency, and alignment with charitable objectives.
- Ensuring that AI-generated content or decisions are subject to human review.
Data Protection
- Carrying out a Data Protection Impact Assessment (DPIA) before using AI to process personal data.
- Ensuring there is a lawful basis for data use (such as consent or legitimate interests).
- Avoiding fully automated decisions that have legal or similarly significant effects unless the conditions in the UK GDPR are met.
Transparency and Accountability
- Being able to explain AI-driven decisions, especially where service users are affected.
- Keeping clear records of decision-making processes and trustee oversight.
- Not delegating critical decisions entirely to AI.
Bias and Discrimination
- Monitoring and auditing AI tools for bias and fairness, particularly in sensitive areas such as grant allocation and recruitment.
- Involving diverse stakeholders when designing or choosing AI systems.
Staying Informed
- Keeping up to date with ICO guidance and Charity Commission updates, such as the Commission’s article, Charities and Artificial Intelligence.
- Seeking legal or technical guidance where appropriate. Taking professional advice can help Trustees to demonstrate compliance with their legal duties.
Free Legal Helpline
If you require guidance on the use of AI within a charity, or assistance with preparing an AI Usage Policy, contact our free helpline for a chat or send us an email.