List of recommendations
Ethical principles and guidance
There are currently three different sets of ethical principles intended to guide the use of AI in the public sector – the FAST SUM Principles, the OECD AI Principles, and the Data Ethics Framework. It is unclear how these work together, and public bodies may be uncertain about which principles to follow.
a. The public needs to understand the high-level ethical principles that govern the use of AI in the public sector. The government should identify, endorse and promote these principles, and outline the purpose, scope of application and respective standing of each of the three sets currently in use.
b. The guidance by the Office for AI, the Government Digital Service and the Alan Turing Institute on using AI in the public sector should be made easier to use and understand, and promoted extensively.
Articulating a clear legal basis for AI
All public sector organisations should publish a statement on how their use of AI complies with relevant laws and regulations before AI systems are deployed in public service delivery.
Data bias and anti-discrimination law
The Equality and Human Rights Commission should develop guidance, in partnership with the Alan Turing Institute and the Centre for Data Ethics and Innovation (CDEI), on how public bodies can best comply with the Equality Act 2010.
Regulatory assurance body
Given the speed at which AI is being developed and implemented, we recommend the creation of a regulatory assurance body to identify gaps in the regulatory landscape and advise individual regulators and government on the issues associated with AI. We do not recommend the creation of a specific AI regulator; instead, all existing regulators should consider and respond to the regulatory requirements and impact of the growing use of AI in the fields for which they have responsibility. The Committee endorses the government’s intention for the CDEI to perform this regulatory assurance role. The government should act swiftly to clarify the CDEI’s overall purpose before setting it on an independent statutory footing.
Procurement rules and processes
Government should use its purchasing power in the market to set procurement requirements that ensure that private companies developing AI solutions for the public sector appropriately address public standards. This should be achieved by ensuring provisions for ethical standards are considered early in the procurement process and explicitly written into tenders and contractual arrangements.
The Crown Commercial Service’s Digital Marketplace
The Crown Commercial Service should introduce practical tools as part of its new AI framework that help public bodies, and those delivering services to the public, find AI products and services that meet their ethical requirements.
Impact assessment
Government should consider how an AI impact assessment requirement could be integrated into existing processes to evaluate the potential effects of AI on public standards. Such assessments should be mandatory and should be published.
Transparency and disclosure
Government should establish guidelines for public bodies about the declaration and disclosure of their AI systems.
Recommendations to front-line providers, both public and private, of public services
The Committee makes seven recommendations to front-line providers of public services to help establish effective risk-based governance for the use of AI.
Evaluating risks to public standards
Providers of public services, both public and private, should assess the potential impact of a proposed AI system on public standards at project design stage, and ensure that the design of the system mitigates any standards risks identified. Standards review will need to occur every time a substantial change to the design of an AI system is made.
Diversity
Providers of public services, both public and private, must consciously tackle issues of bias and discrimination by ensuring they have taken into account a diverse range of behaviours, backgrounds and points of view. They must take into account the full diversity of the population in order to provide a fair and effective service.
Responsibility
Providers of public services, both public and private, should ensure that responsibility for AI systems is clearly allocated and documented, and that operators of AI systems are able to exercise their responsibility in a meaningful way.
Monitoring and evaluation
Providers of public services, both public and private, should monitor and evaluate their AI systems to ensure they always operate as intended.
Oversight
Providers of public services, both public and private, should establish oversight mechanisms that allow their AI systems to be properly scrutinised.
Appeal and redress
Providers of public services, both public and private, must always inform citizens of their right of appeal against automated and AI-assisted decisions, and of how to exercise it.
Training and education
Providers of public services, both public and private, should ensure their employees working with AI systems undergo continuous training and education.
We conclude that the UK does not need a new AI regulator, but that all regulators must adapt to the challenges that AI poses to their specific sectors. We endorse the government’s intention to establish the CDEI as an independent, statutory body that will advise government and regulators in this area.
Committee on Standards in Public Life