Building Trustworthy AI Through Effective AI Governance
Watch the presentation on this topic by AI Governance expert James Winters.
Scientific discovery creates power, and power without discipline is dangerous. When we embark on AI innovation, we're embarking on scientific discovery. We can access open source models through libraries like Hugging Face. We can buy and procure existing technologies and build on top of them. We can draw on research from academic papers. Here we stand on the shoulders of giants. Unless discipline is learned (or enforced!), people and society face great risk.
Scientific discoveries can be made in a laboratory by a person in a white coat and glasses, focused on a particular biological substance, or by a person on a sofa in a hoodie and sweats, protected by an AI risk assessment, using a suite of technical tools to detect bias, with guidance from their organisation's Responsible AI principles.
There is a careful balance to be struck when building and adopting AI. On the one hand, we're looking to innovate and improve, identify new use cases, and make money for ourselves and our organisations. On the other, we need to make sure that these developments aren't negatively impacting people and society.
AI Governance is the vehicle through which organisations can manage their AI-related risks, through efforts in areas including policy, processes, training and the utilisation of technological tools.
Things have changed a lot in a short space of time. In the last few years, we have seen breakthroughs that have put the technology in the hands of the public and have made it more accessible to enterprises. But we've also seen massive advancements in the space of governance. We've seen global regulations come to fruition, and we've seen more frameworks from NGOs, consortia, and governments than you can shake a stick at.
This is a complex and fast-moving space. The rest of this article briefly outlines the key stakeholder groups and how AI Governance frameworks are structured so that policy and strategy can be operationalised.
The A(I) Team
When approaching AI Governance, some organisations are initially expanding the remit of existing roles in areas such as Privacy, Legal and GRC (Governance, Risk and Compliance). Others are opting to create new functional areas in the organisation's structure to manage this complex space (e.g. Responsible AI, AI Governance, AI Ethics or AI Assurance leaders and teams).
Whatever the approach taken, it is crucial that this team engages with cross-functional domains across the organisation. A wide array of factors and stakeholders matter to the success and operation of enterprise AI systems, and they need to be included in the governance journey. It is important to have both diversity of background and diversity of thought as the governance structure is built and AI risk is actively managed.
The essential high-level groups to include in the governance structure are:
- Business Stakeholders: Business leaders and users possess deep knowledge of organisational processes and the domain in which AI will be applied. They can identify both opportunities for AI application and potential risks. Their insights are crucial in aligning AI initiatives with business objectives, ensuring that the technology addresses real-world needs and that domain-specific risks are identified, effectively addressed and monitored.
- Legal Stakeholders: Legal experts understand the regulatory landscape and can ensure that AI implementations comply with relevant laws and regulations. Privacy officers and cybersecurity teams, who are already versed in protecting organisational data, can extend their expertise and operations to include AI-specific considerations. Their role is vital in navigating the complex legal requirements and safeguarding against regulatory risks.
- Technical Stakeholders: Technical leaders and developers have, or can build, a deep understanding of how AI integrates into the organisation's broader technical infrastructure. They are responsible for the practical aspects of AI development, including the selection of appropriate technologies and tools, and ensuring that AI systems are built and deployed correctly. Their expertise is critical in managing the technical risks associated with AI and in mitigating broader risks through technical mitigation routes (a minimal sketch of one such route follows this list).
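To make "technical mitigation routes" concrete, here is a minimal sketch of an automated bias check using the open-source fairlearn library. The example data, the metric threshold and the escalation step are illustrative assumptions rather than a prescribed approach:

```python
# Minimal sketch: measuring group fairness of a model's decisions with
# fairlearn. The data, threshold and escalation step are illustrative.
from fairlearn.metrics import demographic_parity_difference

# Hypothetical outcomes and decisions for a loan-approval model.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]                    # actual outcomes
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]                    # model decisions
groups = ["a", "a", "b", "b", "a", "b", "a", "b"]    # protected attribute

# Demographic parity difference: the gap in selection rates between groups
# (0.0 means parity; what gap is acceptable is a governance decision).
dpd = demographic_parity_difference(y_true, y_pred, sensitive_features=groups)

THRESHOLD = 0.1  # hypothetical limit, set by the organisation's risk appetite
if dpd > THRESHOLD:
    print(f"Bias check failed (DPD={dpd:.2f}): escalate per AI Governance policy")
else:
    print(f"Bias check passed (DPD={dpd:.2f})")
```

Checks like this are most valuable when wired into the development pipeline, so that a failing metric blocks deployment and triggers the organisation's escalation process.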
A well-rounded AI team is essential for effective governance. A cross-functional group helps to ensure that all aspects of AI development and implementation are considered, from technical feasibility to regulatory compliance and business alignment. By fostering collaboration among these diverse stakeholders, organisations can create a robust governance framework that supports responsible AI innovation.
This collaborative approach ensures that AI initiatives are not only innovative and profitable but also ethically sound and legally compliant, fostering long-term trust and sustainability in AI technologies.
Organisational AI Governance & AI Model Governance
In preparation for emerging regulations, organisations are developing frameworks to manage their AI risk and work towards regulatory compliance. An important distinction is between organisational AI governance and AI model governance.
Organisational AI governance is strategic, involving the creation and management of policies, processes and organisational structures that govern the use and development of AI technologies across an organisation. It requires engagement from senior leaders across a range of domains and includes developing organisational structures (e.g. ethics boards and escalation processes), identifying the values and risk appetite of the business, and creating AI Governance policies and processes.
AI model governance, by contrast, focuses on specific AI projects, models and systems. It covers the efforts to ensure that specific instances of AI are developed, deployed and managed in line with the organisational AI Governance framework and regulatory requirements.
The two are fundamentally interconnected. Organisational policies and processes provide the framework within which AI model governance operates and guide teams in their efforts. For example, risk assessment processes and mitigation strategies are developed at the organisational level and then applied to specific AI projects. Similarly, tools procured for organisational use are utilised in the development of specific projects and models.
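As a rough illustration of that link between the two levels, the sketch below shows a hypothetical model-level risk assessment record that references artefacts defined at the organisational level. The field names, risk tiers and policy identifiers are invented for illustration, not a standard schema:

```python
# Hypothetical sketch: a model-level risk record that references
# organisational-level artefacts (policy version, risk tiers).
from dataclasses import dataclass, field

@dataclass
class ModelRiskAssessment:
    model_name: str
    use_case: str
    risk_tier: str                        # tiers defined by the org-level framework
    org_policy_version: str               # ties model governance to org governance
    mitigations: list[str] = field(default_factory=list)
    approved_by: str | None = None        # e.g. ethics board sign-off

assessment = ModelRiskAssessment(
    model_name="credit-scoring-v2",
    use_case="consumer loan eligibility",
    risk_tier="high",                     # classified using org-level criteria
    org_policy_version="AI-GOV-2024.1",   # invented policy identifier
    mitigations=["bias testing", "human-in-the-loop review", "audit logging"],
)
print(assessment.risk_tier)  # -> high
```

The design point is the reference: the model-level record does not invent its own risk tiers or processes, it applies the ones defined once at the organisational level.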
Different stakeholder groups are involved at each level, and model governance requires more operationally focused teams. Organisations are quickly moving from policy to practice and process as regulations such as the EU AI Act approach.
Emerging regulations and standards set specific requirements at both of these levels.
Understanding the distinction helps organisations define their strategies, roles and responsibilities effectively, ensuring that day-to-day operations are linked to a broader strategy and direction.
Continuous Assessment and Adaptation
Regular assessment and horizon scanning, comparing AI strategies and operations against emerging requirements, are crucial. A dynamic approach allows organisations to adapt to the fast-changing global regulatory landscape and keep pace with technological advancements, supporting continuous innovation and compliance.
Here it is essential that organisations map and manage their use of AI technologies across their business functions in a centralised manner. This mapping should be an ongoing process, allowing for real-time adjustments as new AI applications (or tools with AI capabilities) are implemented and risks are identified.
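As a minimal, hypothetical sketch of what such a centralised mapping might look like (system names and fields are invented, and in practice this would live in a governance platform or asset register):

```python
# Hypothetical sketch of a centralised AI inventory mapped to business functions.
inventory = [
    {"system": "support-chatbot", "function": "Customer Service", "assessed": True},
    {"system": "demand-forecast", "function": "Supply Chain",     "assessed": True},
    {"system": "cv-screening",    "function": "HR",               "assessed": False},
]

# Ongoing mapping means knowing, at any time, which AI systems have not
# yet been through the risk assessment process.
unassessed = [s["system"] for s in inventory if not s["assessed"]]
print("Pending risk assessment:", unassessed)   # -> ['cv-screening']
```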
Compliance not only helps avoid falling foul of regulation, it also enhances the trustworthiness of the AI systems in use. Confidence in your governance approach allows you to innovate at pace and scale!
Final Thoughts
Incorporating AI into an organisation requires a balanced approach that prioritises both innovation and responsibility. By building a diverse, cross-functional AI team, establishing comprehensive governance frameworks, and continuously assessing AI initiatives, organisations can benefit from the power of AI whilst mitigating its risks.
As with any change or advancement, the use of AI should match each organisation's specific values, risk appetite and regulatory requirements.
A structured approach, with a clearly defined strategy and framework at the organisational level that is effectively deployed at the model level, helps ensure that AI technologies contribute positively to business objectives and, more generally, to society.