Artificial Intelligence poses a challenge to our principles


Algorithms that help councils detect potholes, AI tools that help doctors know when patients are stable enough to return home, AI that scans and finds melanomas... innovations like these will become the norm. But there are other aspects to this tech, like the use of live facial recognition in our cities or automated decisions about benefit entitlement.


Artificial intelligence is starting to prove game-changing for some applications. It’s reasonable to think that the efficiency and effectiveness of our public services can be radically improved through its use. But it is clear that the public will need more reassurance about the way in which AI will be used by government, especially since, in the public sector, citizens will often have no choice but to be subject to an algorithm’s decision-making power.


Over the past year, the Committee on Standards in Public Life has been talking to tech specialists, policy professionals, academics, legal experts and private companies to examine the impact of artificial intelligence on public sector ethics.


The Nolan principles of honesty, integrity, objectivity, selflessness, leadership, accountability and openness are well known to most public sector employees, from nurses to government ministers. Written into codes of conduct, embedded in values statements and posted on walls across the public sector, they represent the standards the public expects of those who serve them.


How can a public sector employee working with an AI system demonstrate they are living up to these principles? What can government and regulators do to build trust and confidence in these new systems?


Of those Seven Principles of Public Life, we found that AI poses a particular challenge for three: openness, accountability, and objectivity.


On openness, government is currently failing. Public sector organisations are not sufficiently transparent about their use of AI and we found it almost impossible to find out where and how new technology is being used. We add our voice to that of the Law Society and the Bureau of Investigative Journalism in calling for better disclosure of algorithmic systems.


Explaining AI decisions is critical for accountability, and many have warned of the prevalence of “Black Box” AI. However, we found that explainable AI is a realistic and attainable goal for the public sector, so long as government and the private companies working for it understand and prioritise public standards when designing and building AI systems.


A lot has been written about data bias already. AI systems using unrepresentative data risk automating inbuilt discrimination and present a real challenge to the principle of objectivity; here the Committee has cause for serious concern. Technical “solutions” to deal with bias do not yet exist, and we do not yet know how our anti-discrimination laws will apply to data-driven systems. Public sector organisations are going to have to be fully alert to how their software solutions might affect different communities, and act to minimise any discriminatory impact.


While the UK is at the forefront of thinking on AI and ethics, our regulatory and governance framework for AI in the public sector remains a work in progress, and its deficiencies are notable. We have made a number of recommendations to push government in the right direction.


In our view, the risks posed by AI will not be solved by a new super-regulator. We believe all regulators and public bodies will need to step up.


AI does not require a major change in the governance of public sector organisations. As with all major technological change, standards issues raised by AI systems should be part of an organisation’s active risk management, during which any ethical risks are properly flagged and mitigated.


Government also needs to make greater use of its huge market power to demand more from the private sector. The procurement process should include a significant focus on public standards.


Ethical requirements should be made explicit to tech companies that want lucrative public sector contracts, so that provisions for upholding openness, accountability and objectivity can be built into their AI systems.


The other measures we recommend should not come as a surprise to those working in this sector. They include maximising workforce diversity, setting clear responsibility for the officials involved in an AI decision-making process, and establishing oversight of the whole AI process. We also recommend a clear system of redress for citizens against automated and AI-assisted decisions.


Putting standards at the heart of AI should unite the technology’s evangelists and its critics. The public will only trust government to introduce new technology if it is clear that standards will be upheld in doing so.


Our recommendations are not a barrier to change; quite the opposite. Meeting AI’s ethical challenges should accelerate, rather than delay, AI adoption and help ensure the benefits of the data revolution are shared by everyone.


You can read “Artificial Intelligence and Public Standards – A Report by the Committee on Standards in Public Life” in full here.

