What feels really British but isn’t? “Almost everything in the British Museum,” goes the old gag. Could something similar be said of data protection governance frameworks as a model for AI governance? Do they feel like a perfect fit for shaping AI governance when, in fact, they are not?
Reasons to be cheerful (“DP governance = AI governance”)
There is a school of thought that, for effective AI governance, organisations could do worse than start with the data protection governance frameworks that many, particularly in Europe, already have in place. Here are some of the reasons we commonly hear in favour of this:
Reasons to think twice
Where are the limits to the arguments above? How well do they withstand scrutiny?
The primary purpose of the Act is to prevent risks posed by “high-risk” AI systems (see the ‘Is my AI “high-risk” under the AI Act?’ article). For that reason, we confine ourselves below to assessing only how high-risk AI systems measure up against these arguments.
At the heart of the AI Act is the Title III regime, which governs AI systems deemed to pose a “high risk”, as Recital 43 and Article 7(2) make clear, to “health, safety and fundamental rights”. As noted elsewhere in this series (see the ‘What is my role under the AI Act?’ article), the twin pillars of the Act’s approach to high-risk AI systems are, on the one hand, the health and safety of individuals and, on the other, their fundamental rights and freedoms as enshrined in the EU’s Charter of Fundamental Rights.
This regime is based on a common EU approach, in place for decades, to regulating products where safety is of particular importance – medical devices and lifts, to name two examples.
Under this approach, the manufacturer must establish the safety of regulated products through conformity assessments against certain essential statutory requirements. This must be done before the products are placed on the market (first commercialised). Products that pass may then be marked “CE”, allowing their marketing and distribution across the EU. This is not the end of the process, however: such products remain subject to continuous monitoring and vigilance obligations.
Providers of high-risk AI systems, on whom most of the AI Act’s essential requirements fall (see Chapter 2 of Title III), must establish and operate a quality management system for the AI systems they develop. The Act’s requirements for the quality management system are set out in Article 17.
If this regime sounds nothing like the EU’s legislative regime for data protection, that is because it is not. There is little similarity between these core features of the Act’s treatment of high-risk AI systems and the EU’s legislative approach to data protection as embodied in the GDPR.
Delving further, conformity assessment as a regulatory model depends for its efficacy on standardisation organisations and notified bodies. Broadly, providers that follow a standard developed by one of the European Standardisation Organisations do not need to interpret the essential requirements of the legislation themselves; they can instead simply follow the relevant standard. The capacity problems this approach creates for the EU’s system of regulation are ably described in the following article, but that is another matter. The GDPR’s attempt to kickstart a market in mechanisms loosely akin to such standards (that is, certification schemes, certification bodies and codes of conduct for data protection) has been one of its notable failures. Six years on from GDPR implementation, the rate of adoption of such certification schemes and codes of conduct by data controllers remains underwhelming, and undoubtedly a disappointment to policymakers and market participants alike.
Conclusion
So why is the school of thought advocating data protection governance as a model for AI systems governance getting the traction it is? Notwithstanding its shaky assumptions about the legislative scheme of the AI Act (which, as we have seen, is very different from that of the GDPR), some aspects of governance programmes probably do, at a high level at least, reflect business processes that do not change much whatever their subject matter. For now at least, organisations are quick to see the opportunity to increase the return on investment in frameworks they have already developed (for the GDPR) by recycling them for AI systems:
- It is realistic to conclude that AI governance cannot safely be parked with one function or role (e.g. the CIO) and left to thrive there without input and oversight from, in all likelihood, several other functions. The data protection model, in which the DPO’s office works alongside the CIO, CISO, Legal, Compliance & Ethics, Internal Audit and the business, does not seem so over-engineered after all.
- It does seem sensible for an organisation, early in its governance process, to make an inventory of the AI systems that: (a) it is already using internally; (b) it wishes to deploy in the near or medium term; and (c) its suppliers are already using to deliver services to it (sometimes without having informed the client). Such inventories are, superficially at least, not unlike the data mapping and contract inventories maintained under the GDPR.
- Documentation: perhaps this is one person’s icing on the cake and another’s “killer app”. What percentage of good governance is attributable simply to documenting your processes, your controls, your “guardrails” and your mission statement? Ask any data protection specialist and you will find that documentation is crucial. The requirements of the AI Act are very similar in this regard, even as we wait with bated breath for a deluge of guidance from the Commission, ENISA and other bodies over the coming 12 to 18 months.
To hear more from our experts on AI, visit our dedicated page here and register now for our Tech Summit 2024!