This AI Governance Plan outlines the principles and strategies to ensure the ethical, responsible, and secure integration of AI into our digital platform via IBM Watsonx Assistant. It is rooted in five pillars of responsible AI: Fairness, Transparency, Explainability, Privacy, and Robustness. These principles guide our commitment to serving our beneficiaries with integrity, safeguarding user trust, and aligning our work with the charity’s mission.
Fairness is about making sure our virtual assistant treats everyone equally and avoids bias. It means the system works well for all groups of people, no matter who they are. AI can even help us make fairer decisions by spotting and countering human biases.
Bias happens when the system, or the data it’s trained on, is unintentionally unfair. This can show up if the system reflects cultural or institutional prejudices, wasn’t designed with enough care, or is used in ways no one planned for. Fairness also means thinking about diversity—like having a mix of voices on our team and listening to the people our service affects most.
Fairness is at the heart of what we do as a charity. We want our services to be accessible and beneficial for everyone. Making sure the virtual assistant is fair helps us stick to our mission of equity and inclusion.
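As an illustrative sketch of how we might spot unequal outcomes, one simple check is to compare how often the assistant successfully resolves queries for different user groups. The data and group labels below are made up for illustration:

```python
# Toy demographic-parity check: compare resolution rates across groups.
# Group labels and interaction data are hypothetical examples.
from collections import defaultdict

def success_rates(interactions):
    """interactions: list of (group, resolved) pairs -> per-group success rate."""
    totals, hits = defaultdict(int), defaultdict(int)
    for group, resolved in interactions:
        totals[group] += 1
        hits[group] += int(resolved)
    return {g: hits[g] / totals[g] for g in totals}

log = [("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False)]
rates = success_rates(log)
print(rates)  # a large gap between groups would prompt a manual review
```

A check like this does not prove or disprove bias on its own, but a persistent gap between groups would be a signal to investigate the training data and conversation design.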
Transparency is about making sure everyone understands how the virtual assistant works and why it makes the recommendations it does. It’s about being open—letting people see how the system operates, what it’s good at, and where its limits are.
When we’re transparent, we build trust. This means being clear about what data we collect, how we use and store it, and who has access. Transparency also means explaining the purpose of the system and giving users the tools to understand how decisions are made.
For example, technology companies should say who trained their AI systems, what data was used, and how the algorithms reach their conclusions. Sharing this kind of information helps users know if the system is right for their needs.
Trust is essential in everything we do. Transparency helps our users, beneficiaries, and trustees feel confident in the virtual assistant and the decisions it makes.
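As a sketch of what this openness could look like in practice, a simple "model card"-style disclosure might be published alongside the assistant. Every field name and value below is an illustrative assumption, not our actual policy:

```python
# Hypothetical transparency disclosure for the virtual assistant.
# All values are illustrative placeholders, not real policy.
MODEL_CARD = {
    "purpose": "Answer beneficiaries' questions about our services",
    "trained_by": "Charity digital team, using IBM Watsonx Assistant tooling",
    "data_collected": ["chat transcripts", "page visited when chat started"],
    "data_retention": "90 days, then deleted",
    "access": ["digital team", "data protection officer"],
    "known_limits": ["English only", "cannot give medical or legal advice"],
}

for field, value in MODEL_CARD.items():
    print(f"{field}: {value}")
```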
Explainability is about making sure the virtual assistant can clearly show how and why it makes its recommendations. It’s not enough for the system to work—it needs to be understandable.
An explainable system lets people see what went into its decisions, like what data it used and how confident it is in its answers. It means being able to explain things in simple, non-technical terms, so even someone without a technical background can understand how it works.
If a system has a big impact on someone’s life, it’s even more important to explain its reasoning. This might include sharing things like confidence levels, how consistent the system’s decisions are, or how often errors happen. A system that hides its workings can’t earn trust; being open about its reasoning is what builds it.
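As a rough sketch of how such reasoning signals might be surfaced, each answer could carry a confidence score, the sources it drew on, and any caveats, rendered in plain language. The class and field names below are hypothetical:

```python
# Hypothetical wrapper pairing an answer with its reasoning signals.
from dataclasses import dataclass

@dataclass
class ExplainedAnswer:
    answer: str
    confidence: float   # model confidence, 0.0-1.0
    sources: list       # data the answer drew on
    caveats: list       # known limits to share with the user

    def plain_summary(self) -> str:
        """Render the reasoning in simple, non-technical terms."""
        pct = round(self.confidence * 100)
        src = ", ".join(self.sources) or "no recorded sources"
        return f"{self.answer} (I'm about {pct}% confident; based on: {src})"

reply = ExplainedAnswer(
    answer="Our food bank is open Mondays and Thursdays.",
    confidence=0.92,
    sources=["opening-hours FAQ"],
    caveats=["Hours may change on bank holidays."],
)
print(reply.plain_summary())
```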
Explainability helps users, staff, and trustees trust the system. When people understand how the virtual assistant works, they’re more likely to feel confident using it. For our charity, this is essential to ensure the AI aligns with our values and serves our beneficiaries effectively.
Privacy is about keeping people’s data safe and making sure we comply with data protection law, such as the GDPR. It means being upfront about what data we collect, why we collect it, how it’s stored, and who can see it.
We need to collect only the data that’s absolutely necessary and make sure it’s used only for the purposes we stated. People should have control over their data, with clear, easy-to-use privacy settings.
Protecting data also means using strong security practices, like encryption and limiting access to only those who need it.
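A minimal sketch of the limited-access idea, with hypothetical roles and permissions: access is denied by default, and each role is granted only what it needs:

```python
# Least-privilege access sketch. Roles, permissions, and actions are
# hypothetical examples, not our actual access policy.
ROLE_PERMISSIONS = {
    "caseworker": {"read_contact", "read_notes"},
    "volunteer": {"read_contact"},
    "admin": {"read_contact", "read_notes", "export"},
}

def can_access(role: str, action: str) -> bool:
    """Allow an action only if the role was explicitly granted it (deny by default)."""
    return action in ROLE_PERMISSIONS.get(role, set())

assert can_access("volunteer", "read_contact")
assert not can_access("volunteer", "read_notes")   # volunteers can't see case notes
assert not can_access("unknown_role", "export")    # unrecognised roles get nothing
```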
As a charity, trust is everything. Protecting people’s sensitive information helps us maintain that trust and meet legal obligations like GDPR.
Robustness is about making sure the virtual assistant is secure, reliable, and able to handle unexpected situations. This means protecting the system from cyberattacks, preventing unauthorised access, and ensuring it works as expected—even when things don’t go as planned.
Robust systems are built to deal with unusual inputs or malicious attempts to interfere, like someone trying to corrupt the training data. They’re designed to keep running smoothly and safely, giving users confidence in their outcomes.
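As an illustrative sketch, one basic robustness measure is validating and cleaning user input before it reaches the assistant. The length limit and cleaning rules below are assumptions for illustration:

```python
# Input-validation sketch: reject or clean unexpected input before it
# reaches the assistant. MAX_LEN is an assumed limit, not a real setting.
import re

MAX_LEN = 500  # assumed cap for a single user message

def sanitize_message(text):
    if not isinstance(text, str):
        raise TypeError("message must be a string")
    # Strip control characters (except tab/newline) that could corrupt
    # logs or downstream parsing.
    text = re.sub(r"[\x00-\x08\x0b-\x1f\x7f]", "", text)
    text = text.strip()
    if not text:
        raise ValueError("empty message")
    return text[:MAX_LEN]  # truncate oversized input instead of failing

print(sanitize_message("  Hello\x00 there  "))  # → "Hello there"
```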
If the virtual assistant isn’t secure, it could lead to data breaches, technical failures, or even harmful decisions. Strong protections are vital to safeguard sensitive information, maintain trust, and keep the system running without interruptions.
Last updated: 2nd April 2025