
Government launches AI white paper to guide the use of artificial intelligence in the UK

This AI white paper, to be published on Wednesday 29 March, seeks to build upon the National AI Strategy released by HMG in September 2021.

In particular, it is hoped that the white paper will shed more light on 'Pillar 3' of the National AI Strategy, which addresses how AI can be governed effectively.

The UK Government's sectoral approach is distinctly different to the 'one size fits all' approach that the EU has adopted under its AI Act and, in our opinion, is a far better way to deal with a technology that means different things to different industrial sectors.

For example, using AI in the UK to diagnose patient illness and prescribe treatment carries far more risk to individuals than using AI to suggest a playlist on a streaming app, so regulation tailored to specific sectors makes perfect sense.

The AI Act, on the other hand, seems to rely on a single definition of what AI is (and that in itself is problematic) to prescribe how regulators must deal with its risks across the board. We do not think that is an easily workable solution.

To understand what regulations need to be implemented, one has to understand the different risks to society and individuals in the specific areas where the technology is used. Current common law in England does a very good job of setting out what 'negligence' means in different circumstances, and there is a great deal of product liability and consumer protection legislation that could be applied to goods incorporating AI.

The AI white paper will contain five principles to guide each sector-specific regulator, including 'safety, security and robustness' and 'transparency and explainability'.

Against that backdrop, it is also very interesting to see, as reported in the Financial Times on 29 March (subscription required), that leading tech companies appear to be reducing the number of AI ethicists and others who make up internal 'responsible AI' teams tasked with reviewing risks and preventing the use of 'unsafe AI'.

This can only be a step backwards if, as HMG suggests, the aim is to increase the public's awareness and acceptance of AI across the country.

Having advised in this area for a number of years, we believe that good, proportionate regulation is long overdue in certain sectors and that ensuring the UK is ready for the AI explosion is a necessity.

This includes making it very clear to the public what risks the use of AI carries and addressing any lack of trust in the technology. Without that, goods and services looking to use AI for future development may not be as appealing to potential customers as initially thought.

“It is shocking how many members of responsible AI are being let go at a time when arguably, you need more of those teams than ever,” said Andrew Strait, former ethics and policy researcher at Alphabet-owned DeepMind and associate director at the Ada Lovelace Institute, a research organisation.

Tags

regulation, artificial intelligence, robotics, technology, digital disruption