
What employers need to know about the EU AI Act

Uses of AI systems in an employment context are to be tightly regulated by the AI Act given the potentially significant impact their use could have on a person’s career prospects and ability to earn a living. There is a real risk that AI systems used in recruitment may perpetuate historic biases, e.g. against women or people of certain races, and using AI systems to monitor an individual’s employment performance risks interfering with their fundamental rights and privacy. Note that, in common with the provisions governing high-risk AI systems, the key employment-related provisions will not apply for another two years from the date the Act enters into force. 

Why does the AI Act matter to UK employers now that we have left the EU?

Whilst the AI Act may not be relevant to all UK employers, those who have a European footprint and intend to utilise the same AI systems across all territories may well decide to put in place similar arrangements in the UK to those required in EU states. It is expected that the EU’s approach to AI will set a minimum global standard for regulation.

Further, the AI Act applies if the output of an AI system is to be used within the EU, so any UK-based employers who target, or accept applications from, EU-based candidates will need to comply with the AI Act to the extent they use AI as part of their recruitment process.

What is a high-risk AI system?

As described in this article, the AI Act identifies certain AI uses as “high-risk”, meaning they are considered to pose a significant threat to a person’s health, safety or fundamental rights, and are therefore subject to stricter regulation. Various employment-related uses of AI are deemed high-risk. They fall broadly into two categories:

  • Recruitment uses: targeted job adverts, screening of applications and evaluations/selection of candidates. 
  • Uses within an ongoing employment relationship: performance evaluation, work allocation on the basis of behavioural or personal traits, and promotion and termination decisions.

Are there any exceptions?

If an AI system does not “pose a significant risk of harm to the health, safety or fundamental rights of natural persons” then it will not be deemed high-risk. The AI Act gives four specific examples of when a system will fall within this derogation, all of which could potentially be relevant to HR-related AI systems. Examples include an AI system intended to improve the result of a previously completed human activity, and an AI system intended to perform only a narrow procedural task. 

Further guidelines on high-risk AI systems are to be published by the Commission 18 months from entry into force of the AI Act. These will include practical examples of what would and wouldn’t be considered a high-risk system, so this will hopefully provide clarity on what will fall within the high-risk derogation. 

What does this mean for employers using or considering the use of AI systems as part of their HR processes?

Most of the obligations relating to high-risk AI systems fall on providers, meaning those who develop AI systems or have them developed in their name. In summary, providers are required to design systems: 

  • that have appropriate risk management systems in place; 
  • that are developed using high-quality data sets; 
  • for which they can provide detailed technical documentation; 
  • that automatically keep adequate records; 
  • that are transparent;
  • that allow effective human oversight; and 
  • that are accurate, robust and secure.

Most employers will be “deployers” for the purpose of the AI Act. The key obligations for deployers of high-risk AI systems are:

  • ensuring compliance with the AI system’s instructions; 
  • assigning human oversight to competent individuals who have the necessary training, authority and support; 
  • monitoring the operation of the AI system and informing the system provider/distributor and, where relevant, the market surveillance authority of identified risks and serious incidents; 
  • retaining automatically generated logs if under their control; 
  • informing worker representatives that workers will be subject to the use of a high-risk AI system and informing individuals where an AI system is used to make or assist in making decisions about them; and 
  • co-operating with the relevant competent authorities in any action relating to the AI system. 

Is there anything else employers should be aware of?

Whilst the majority of HR AI use cases will fall within the high-risk category, there are also certain uses that are entirely prohibited. The prohibition most relevant to employers is the restriction on using AI systems to infer emotions in the workplace. Certain AI-powered video interview software already on the market that analyses a job candidate’s facial expressions may well fall within this prohibition. 

Keep an eye out for our future updates on the AI Act as further guidance is published by the Commission.

To hear more from our experts on AI, visit our dedicated page here and register now for our Tech Summit 2024!

