
Are data protection governance frameworks a good model for AI governance?

What feels really British but isn’t? “Almost everything in the British Museum”, goes the old gag. Could a similar quip be made about data protection governance frameworks as a model for AI governance? Does data protection feel like a perfect fit for shaping AI governance when, in fact, it isn’t?

Reasons to be cheerful (“DP governance = AI governance”)

There is a school of thought that for effective AI governance, organisations could do worse than start with the data protection governance frameworks that many, particularly in Europe, already have in place. Here are some of the reasons we commonly hear in favour of this:

  • Many organisations in Europe now have a data protection governance framework, and a fast-growing number operating further afield do too. In Europe, a few of these pre-dated the GDPR, but many were created for it, so plenty of organisations have recent, first-hand experience of setting up such frameworks. Why waste the effort that has already been invested in setting up and maintaining these structures when they could instead be adapted for AI governance?
  • The features of typical data protection compliance and governance programmes (at least in the EU and UK) generally cover a lot of the ground needed for an effective AI governance programme. For example, a data inventory (including mapping of data flows) looks broadly like the kind of inventory that is needed for AI systems.
  • Similarly, the third-party supplier contract due diligence exercises that many organisations conducted during GDPR implementation sound useful for discovering whether your existing suppliers are already using AI to deliver services to you but have neglected to tell you. Other features of data protection governance only reinforce this view.
  • Data protection and AI are both “team sports”, requiring the participation of stakeholders with different qualifications and different organisational responsibilities to be effective. Privacy professionals are already adept at leading multi-disciplinary teams.
  • Many of the skill sets needed for effective governance look essentially similar in both fields. Take, for example, the concept of ongoing monitoring under the AI Act: doesn’t this look loosely similar, in its intended outcomes, to the GDPR concept of “accountability”?

Reasons to think twice

Where are the limits to the arguments above? How well do they withstand scrutiny? 

The primary purpose of the Act is to prevent the risks posed by “high-risk” AI systems (see our article 'Is my AI “high-risk” under the AI Act?’). For that reason, we confine ourselves below to assessing how high-risk AI systems measure up against these arguments.

At the heart of the AI Act is the Title III regime, which governs AI systems deemed to pose a “high risk”, as Recital 43 and Article 7(2) make clear, to “health, safety and fundamental rights”. As noted in this series (see the 'What is my role under the AI Act?' article), the twin pillars of the Act’s approach to “high-risk” AI systems are, on the one hand, the health and safety of individuals and, on the other, the fundamental rights and freedoms of individuals as enshrined in the EU’s Charter of Fundamental Rights.

This regime is based on a common EU approach, in use for decades, to regulating products where safety is of particular importance – medical devices and lifts, to name two examples.

Under this approach, the manufacturer must establish the safety of regulated products through conformity assessments against certain essential statutory requirements. This must be done before the products can be placed on the market (first commercialised). Thereafter, products may be marked “CE”, allowing their marketing and distribution across the EU. However, it should be stressed that this is not the end of the process: such products remain subject to continuous monitoring and vigilance obligations.

Providers of high-risk AI systems, on whom most of the AI Act’s essential requirements fall (see Chapter 2 of the Act), must create and operate a quality management system for the AI systems they have developed. The Act’s requirements for the quality management system are set out in Article 17.

If this regime sounds hardly anything like the EU’s legislative regime for data protection, that is because it is not. There is little similarity between these core features of the Act’s treatment of high-risk AI systems and the EU legislative approach to data protection as embodied in the GDPR.

Delving further, conformity assessment as a regulatory model depends for its efficacy on standardisation organisations and notified bodies. Broadly, providers that follow a standard developed by one of the European Standardisation Organisations do not need to interpret the essential requirements of the legislation themselves; they can instead simply follow the relevant standard. The capacity problems this approach creates in the EU’s system of regulation are ably described elsewhere, but that is another matter. The GDPR’s attempt to kickstart a market in mechanisms loosely akin to such standards – that is, certification schemes, certification bodies and codes of conduct for data protection – has been one of its notable failures. Six years on from GDPR implementation, the rate of adoption of such certification schemes and codes of conduct by data controllers remains underwhelming, and undoubtedly a disappointment to policymakers and market participants alike.

Conclusion

So why is the school of thought advocating data protection governance as a model for AI systems governance getting the traction it is? Notwithstanding its shaky assumptions about the legislative scheme of the AI Act (which, as we have seen, is very different from that of the GDPR), it seems that, at a high level at least, some aspects of governance programmes reflect business processes that do not change much whatever their subject matter. It seems that, for now at least, organisations are quick to seize the opportunity to increase the return on investment in frameworks they have already developed (for the GDPR) by recycling them for AI systems:

  • It is realistic to conclude that AI governance cannot safely be parked with one function or role (e.g. the CIO) and left to thrive there without input and oversight from, in all likelihood, several other functions. Seen in this light, the respective roles in data protection of the DPO’s office, the CIO, the CISO, Legal, Compliance & Ethics, Internal Audit and the business do not seem so over-engineered after all.
  • It does seem sensible for an organisation, early on in its governance process, to make an inventory of the AI systems that: (a) it is already using internally; (b) it wishes to deploy in the near or medium term; and (c) its suppliers are already using to deliver services to it (sometimes without having informed the client). Such inventories are, superficially at least, not unlike the data mapping and contract inventories maintained under the GDPR.
  • Documentation: perhaps this is one person’s icing on the cake and another’s “killer app”. What percentage of good governance is attributable simply to documenting your processes, your controls, your “guardrails” and your mission statement? Ask any data protection specialist and you will find that documentation is crucial. The requirements of the AI Act are very similar in this regard, as we wait with bated breath for a deluge of guidance from the Commission, ENISA and other bodies over the coming 12 to 18 months.

To hear more from our experts on AI, visit our dedicated page here and register now for our Tech Summit 2024!


Tags

spotlighton-euaiact, bristowsshorts, artificial intelligence, data protection and privacy, technology, article