Employment law implications when developing and implementing Artificial Intelligence in your business

For businesses, Artificial Intelligence (AI) can represent an attractive cost-saving measure. In many cases, well-implemented AI can improve the profitability of a business. However, the recent settlement in a case involving Uber Eats illustrates once again that this as-yet imperfect technology must be implemented with care. Two limitations are particularly well documented:

  1. Large language models (LLMs) “train” by consuming content. A November 2023 study by Briesch, Sobania and Rothlauf demonstrated that where subsequent generations of an LLM are trained on content which the model has itself produced, its output rapidly loses accuracy and diversity (following an initial improvement); and
  2. Facial recognition software is significantly less accurate when identifying non-white faces than when identifying white faces, and white male faces in particular.

In the case of Mr P E Manjang v Uber Eats UK Ltd and others, Mr Manjang, a black man, was suspended from the app which provides Uber Eats’ drivers with access to work following failed facial recognition checks and an automated process. He alleged indirect discrimination and the case ultimately settled in his favour.

Meanwhile, in March of this year the government published guidance on Responsible AI in Recruitment. Law firms are no strangers to receiving hundreds (and sometimes thousands) of applications for training contract positions. Again, the temptation to implement AI in the recruitment process is great: in a competitive labour market where employers receive far more applications than they have the resources to process, automating aspects of recruitment can vastly reduce the cost of finding the right candidate. But, with the Briesch et al. study in mind, consider what might happen if the tool in question were allowed to learn what the business wants in its employees by looking at the current crop of employees, some of whom (and an ever-increasing proportion of whom) it will have selected, or had a hand in selecting, itself.

While the government does not have immediate plans to regulate AI, the Uber Eats case illustrates the pitfalls under existing law which must be considered in the implementation and development of these AI tools. To avoid inadvertently discriminating, employers seeking to improve their business and recruitment processes must properly understand how these tools work, especially as more technology companies jump on the bandwagon of adding AI, machine learning and LLM technology to their products. Equally, companies which produce this technology will want to consider how their tools might expose their clients, and consequently themselves, to risk.

Whether you are a technology company or an employer (or both!), contact Andrew Firman or Kate Boguslawska in the corporate and employment departments today at andrewfirman@cartercamerons.com or kateboguslawska@cartercamerons.com for advice and support.

Author: Julian Smith
Position: Trainee Solicitor
Telephone: 020 7406 1000
Email: JulianSmith@cartercamerons.com