In accordance with its own guidelines published in the recent National AI Strategy, HMG has today announced that it will launch an algorithmic transparency standard for government departments and public sector bodies.
The standard has been developed by the Cabinet Office's Central Digital and Data Office and will be trialled by several public sector organisations, which will then provide feedback on its effectiveness.
The aim is to promote the use of such algorithms in HMG's digital framework and to build public trust. Where AI is used in legal or financial decisions that affect people's lives in the UK, it should be transparent how those decisions were made, how the systems worked and how much human oversight there was.
The pilot phase should be completed in 2022, with suitable adjustments being made to the standard during this period.
This is arguably the first standard of its kind anywhere in the world, and its adoption comes as many AI experts and the wider scientific community focus on the need for the public to trust such systems. Without public trust, artificial intelligence applications cannot be deployed and used effectively to aid society as a whole.
The aim is for the UK government to use AI systems to make fairer decisions more quickly and efficiently, while reducing the associated costs. A laudable ambition - but it remains to be seen how such standards work in practice and whether society will accept a "computer says no" approach to the provision of public services.
What we can be sure of is that the application of AI algorithms to large datasets is fast becoming common practice, and these systems will only be used more frequently in the delivery of public services.