
AI in recruitment: the ICO lifts the curtain on providers of AI tools

The UK’s Information Commissioner’s Office (ICO) has released a report on AI in Recruitment, which asks helpful questions and provides recommendations for employers using AI recruitment tools.

The report is the output of consensual audit engagements with providers of AI recruitment tools, and the findings are insightful for both users (recruiters) and providers of such tools. Whilst the report focuses on the principles of UK data protection laws (the UK GDPR), its observations and findings have practical, although not binding, application internationally and in relation to laws beyond data protection, such as the EU AI Act and those covering discrimination.

The growing role of AI in recruitment

In recent years, AI tools have rapidly gained traction in recruitment due to their ability to streamline candidate sourcing, screening and selection. The report identifies the most common uses of AI in recruitment, including:

  • suggesting potential candidates from a database of profiles that match vacancies;
  • finding candidates in underrepresented groups to increase diversity;
  • scoring candidate skills and competencies from applications and CVs/resumes;
  • using AI-powered games and psychometric tests as part of the selection process; and
  • scoring candidate competencies and skills based on transcripts of interviews.

The report highlights the inherent risks of greater reliance on AI systems in such processes, namely that AI recruitment algorithms can be unfair, can learn to emulate human bias and can perpetuate the digital exclusion of minorities. These are not new risks, but by consulting with developers, the ICO has been able to provide some fascinating details as to how developers are tackling these issues. For in-house legal teams, recruiters and AI developers alike, the findings underscore the necessity of aligning technological advancements with robust privacy safeguards.

Key takeaways

Fairness

The ICO recommends that both AI providers and recruiters ensure that personal information processed using an AI tool is processed fairly. This includes monitoring for accuracy and bias issues in the AI and the output.

Somewhat alarmingly, at least one provider claimed an accuracy of just “at least better than random”, which the ICO, unsurprisingly, considers insufficient to demonstrate that the AI is processing personal information fairly.

The report also states that AI providers and recruiters should ensure any special category data (such as data about race or ethnicity) processed for use in monitoring for bias and discriminatory outputs is adequate and accurate enough to fulfil that purpose. The report reveals that there is a common practice of inferring certain characteristics, such as gender and ethnicity, from other data and using this to monitor for bias and discrimination. The ICO considers this practice to be inadequate and inaccurate and therefore not in compliance with data protection law.

Transparency and explainability

Transparency and explainability are common concepts across new and incoming AI laws around the world. In the report, the ICO focuses on the importance of providing transparency and explainability to candidates through privacy information which should clearly explain:

  1. what personal information is processed by AI and how;
  2. the logic involved in making predictions or producing outputs; and
  3. how personal information is used for training, testing or otherwise developing the AI.

Points 1 and 3 are consistent with information that should already be provided to candidates in traditional recruitment processes, but the report reveals some challenges in these respects. In particular, it observed a lack of clarity on whether personal information of candidates is used for training, testing or both (what the report describes as “secondary purposes”). The report identifies that many providers may be using data in ways not explained to candidates, particularly for training and testing. Point 2 can be more challenging because it relies on AI providers being able to explain to recruiters how the AI works, and to do so in a way that candidates can understand.

The ICO also found that responsibility for providing the privacy information to candidates is often unclear between provider and recruiter. A number of instances were identified in which both parties referred to the other’s privacy notice, with neither actually providing the information.

Data minimisation and purpose limitation

The recommendations in this area come as no surprise but are a useful reminder. Both providers and recruiters should collect only the minimum personal information needed to achieve the AI tool’s purpose and should process that information only in line with that purpose.

The ICO found examples of providers collecting and storing more information than they needed, such as photographs of candidates. It also found that the majority of AI providers had repurposed candidate personal information to train, test and maintain their AI tool and, in several instances, develop new products. In many cases, the providers weren’t able to demonstrate that this secondary use of personal information aligned with the original purpose of collection.

Data protection impact assessments (DPIA)

The ICO recommends completing a DPIA early in AI development and prior to processing, where AI is likely to result in a high risk to people. In our experience, DPIAs are a useful exercise to ensure that recruiters understand the details of the tools they are using, can knowledgably risk assess them and mitigate those risks.

The ICO found that, whilst the majority of providers had completed a DPIA, some had only done so shortly before the audit and many DPIAs were not sufficiently detailed.

Data controller and processor roles

The report sets out that AI providers and recruiters must define whether their role is controller, joint controller or processor. For example, if a provider uses the personal information it processes on the recruiter’s behalf to develop a central AI model that it deploys to other recruiters, it is a controller.

The report notes, however, that several of the providers audited had incorrectly determined their role as controller or processor, or had not determined their role at all.

Lawful basis and additional condition

The ICO emphasises that providers and recruiters must identify the lawful basis on which they rely for each instance of processing personal information where they are the controller. Where that personal information is special category data (such as gender, race or ethnicity), they must identify the additional condition for processing that data.

Where consent is relied on, it must be specific, granular and with a clear opt-in.

Using personal data to train and test AI

The ICO found that almost all AI providers had trained and tested their tools using candidate information that they had already collected from recruiters (usually pseudonymised or anonymised). Reassuringly, the ICO found providers were generally separating training and testing data. The report, however, includes a case study of a provider that was potentially training and testing with the same data, which meant that accuracy and bias issues could go undetected.
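
To illustrate why that separation matters, here is a minimal sketch in Python, assuming scikit-learn and entirely synthetic data (the model, features and figures are placeholders, not drawn from the report). It holds back a test set before training so that accuracy checks are run on records the model has never seen; scoring on the training data alone can overstate performance and let bias go unnoticed.

```python
# Minimal sketch: keep training and testing data separate.
# Entirely synthetic data and a placeholder model - illustrative only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for candidate features and shortlisting outcomes.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Hold back 20% of records as a test set the model never sees during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Scoring on the held-out test set gives a more honest picture than scoring
# on the training data, where weaknesses (including bias) can go undetected.
print("train accuracy:", accuracy_score(y_train, model.predict(X_train)))
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```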

Accuracy, fairness and bias mitigation in AI

The AI providers involved in the audit had usually considered and tested their tools for accuracy, but there appears to be no standard method of measurement or threshold (and, as mentioned above, at least one provider’s threshold was “at least better than random”).

The findings indicate that whilst most AI providers are aware of the risk of bias, they aren’t always adequately addressing it. Generally, providers were using the ‘four-fifths rule’ to assess bias: the selection rate for any group must be at least 80% of the selection rate for the group with the highest rate. (In the US, guidance issued by the US Equal Employment Opportunity Commission explores the four-fifths rule in more detail and casts some doubt on its suitability in defending discrimination claims.)
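
The arithmetic behind the four-fifths rule is straightforward. The short Python sketch below, using hypothetical group names and selection figures (not taken from the report), computes a selection rate for each group and flags any group whose rate falls below 80% of the highest group’s rate.

```python
# Illustrative check of the 'four-fifths rule' described above.
# Group names and counts are hypothetical examples, not real data.

selections = {
    # group: (candidates selected, candidates considered)
    "group_a": (40, 100),
    "group_b": (25, 80),
    "group_c": (12, 60),
}

# Selection rate per group: selected / considered.
rates = {group: selected / total for group, (selected, total) in selections.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    # Flag groups whose selection rate is below 80% of the highest group's rate.
    status = "OK" if ratio >= 0.8 else "potential adverse impact"
    print(f"{group}: rate {rate:.1%}, {ratio:.0%} of highest rate -> {status}")
```

As the report and the EEOC guidance suggest, a check like this is a monitoring heuristic rather than a complete answer to bias or a defence to discrimination claims.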

Most concerningly, the ICO found that a number of providers are using inferred or estimated information to measure and monitor bias. In particular, gender, ethnicity and age are being inferred from other information (such as names, education history, etc.). Most AI providers inferring such information were unable to demonstrate that it was reliable and accurate.

Human reviews in AI

The ICO looked into how AI outputs and decisions were being reviewed and quality checked, and whether the AI made automated decisions. Whilst it found that most AI providers included human intervention at some point in the AI process, this was not always formalised, and staff were typically not trained and did not complete reviews consistently and thoroughly.

The report notes that most of the tools audited were designed to support human recruiters rather than make automated decisions. The report also touches briefly on the risk of recruiting managers using the AI outputs to make automated decisions where this was not intended (e.g. if the tool gives suitability scores and the recruitment manager does not review those candidates with the lowest score, automated decision making will have taken place). In our view, this is a key area of concern that warrants further guidance. Recruiters are procuring AI tools specifically to increase efficiency and reduce human review time, so it would not be surprising if those using the tool day-to-day felt that it empowered them to skip the review of low-scoring candidates.

We note that solely automated decision making can be permitted in certain circumstances and with safeguards, but recruiters should be aware that it is restricted in recruitment and should take advice if candidates are being rejected without human review.

Third party relationships

The report highlights that AI systems can involve complex data supply chains. Some contracts reviewed by the ICO did not contain sufficient information about what personal information would be processed and how, the responsibilities of each party, and what would happen when the contract had ended.

Both providers and recruiters should consider contractual terms carefully and ensure they are tailored to the specific engagement and cover data protection issues adequately.

Practical checklist for employers using AI in recruitment

  • Ensure regular training to identify the use of AI tools. Raise awareness of AI internally so that teams can flag any proposed use of tools which can be assessed by relevant stakeholders, including the in-house legal and IT teams.
  • Conduct due diligence on the vendor. In particular, ask them how they test and monitor for bias; how they use candidate data for training and testing; whether they have classified themselves as a data controller or processor; whether data is being transferred to other third parties or to other countries; and how they keep information secure.
  • Consider legal risks and issues from a data protection and employment law perspective and get appropriate legal advice when reviewing and negotiating contracts with vendors. Ensure there is clarity in the contract about the obligations and responsibilities of the parties. Ensure advice is sought in all jurisdictions where the tool is used to identify all applicable laws.
  • Carry out a DPIA for the tool and ask to see your vendor’s DPIA (in countries where DPIAs are required or recommended).
  • Work with the vendor to understand accuracy and bias in the tool and have processes in place to monitor this.
  • Consider if your organisation will be using the tool for solely automated decision making (such as where there is no human review of low-scoring candidates) and, if so, consider how you will inform candidates and give them the opportunity to challenge the decision.
  • Ensure that candidate privacy notices explain the use of personal data by the AI in a transparent way.
  • Consider what global scope your AI tools may have and seek guidance on other laws and regulations that will apply. An AI recruitment tool that is either recruiting for roles in an EU member state or is open to applicants based in the EU may engage the EU AI Act.

Simmons & Simmons data protection, employment and AI specialists are experienced in helping clients navigate AI tools in recruitment, including preparing DPIAs and recommending contractual terms between recruiters and vendors. Please contact Emily Jones, Olivia Ward or Lauren Dickinson for more information.