The pros and cons of the increasing use of AI in the workplace are hotly debated. Employers and employees alike have strong views on whether it should be encouraged and whether humans are still required to perform every task in light of this fast-moving and rapidly developing technology.
Tech-embracing companies such as Amazon are already taking big steps to replace employees with AI-driven alternatives: Amazon announced this week that it plans to cut around 14,000 roles globally in a drive to operate “more leanly”. Closer to home, earlier this year the CEO of BT confirmed that up to 55,000 roles will be cut by the end of the decade to reflect the potential of AI. The advancing technology does, however, present real opportunities for employees in other areas: with companies such as Google investing heavily in AI, having pledged up to £5 billion of UK investment over the next two years, additional jobs may well be created to support that growth.
On the employee side, workers are also reporting an increased reliance on AI tools to carry out their duties. In a recent poll undertaken by SnapLogic, an agentic integration company, 81% of employees surveyed reported using AI tools in their role.
This article considers some of the potential uses of AI in the UK employment law context and some of the dangers and pitfalls which employers need to consider.
How AI is being used in the workplace
AI in recruitment and hiring
There has been a rise in the use of AI recruiting software within the marketplace. AI tools are increasingly being used to draft job vacancies, prepare job descriptions and screen applicants.
When it comes to job offers, AI tools can be used to draft and populate offer letters and contracts of employment, and to facilitate the completion of onboarding steps such as providing right to work information.
AI for performance management
JP Morgan has recently confirmed that it is encouraging its staff to use its chatbot tool to answer prompts which, in turn, will draft a performance review. The hope is that this will streamline the process for line managers, as undertaking individual performance reviews can be time-consuming in large organisations.
We could also see AI’s big data analysis capabilities harnessed to analyse employee data and identify trends and patterns in performance, helping to flag performance concerns or dips in productivity. This may be particularly useful in industries where KPIs are more easily recorded, such as sales-based roles.
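By way of illustration only, the short Python sketch below shows one way such a trend check might work in practice. The figures, the 20% threshold and the use of pandas are all assumptions made for the purpose of the example, not a description of any particular product.

```python
# Hypothetical sketch: flag a dip in monthly sales KPIs by comparing each
# month against a rolling average of the preceding three months.
# All figures are invented for illustration.
import pandas as pd

monthly_sales = pd.Series(
    [42, 45, 44, 47, 46, 31, 30],  # units sold per month (fictional data)
    index=pd.period_range("2025-01", periods=7, freq="M"),
)

# Rolling mean of the previous three months, shifted so the current month
# is not included in its own baseline.
baseline = monthly_sales.rolling(window=3).mean().shift(1)

# Flag any month that falls more than 20% below its baseline.
dips = monthly_sales[monthly_sales < 0.8 * baseline]
print(dips)  # prints the months where output dropped sharply
```

Even with automated flagging of this kind, a human would still need to consider the context behind any dip before it fed into a formal performance process.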
AI for content creation and policy drafting
Generative AI tools such as ChatGPT and Gemini can prepare policies on relevant workplace topics such as grievances and dress codes. Employers may also choose to use them more broadly to assist with marketing content or thought leadership pieces.
AI to enhance employee experience
Employee satisfaction surveys will not be a new concept to many within the HR space. However, AI tools can now be used to analyse responses and report on trends, adapt to specific issues, concerns or points raised by employees, and identify skills or management gaps.
Some large organisations may also use AI-powered chatbots as the first point of contact for everyday employee relations queries.
Employment law risks of AI in the workplace
Whatever the benefits and use cases of AI in the workplace, the technology is not without issues and risks, both legal and practical. Employers who use, or allow their employees to use, AI without clear boundaries and a proper understanding of its limitations can expose themselves to legal risk.
We address some of the main employment law risks below:
Algorithmic bias and discrimination risks
Unconscious bias is a well-understood and, unfortunately, commonplace feature of human decision-making. AI technologies are built on, and “learn” from, human-generated data, and so inequalities within society can become coded into AI systems. Any employer who uses AI in decision-making processes, such as a paper sift of applicants for a new role, must be acutely aware of the discrimination risk which may follow.
The consequences of AI bias in the recruitment process were seen in the high-profile example of Amazon, which used a machine learning system to screen CVs within its hiring process. To teach the system the kind of experience and education it wished to recruit, Amazon used a training data set drawn from its existing engineers, who were mostly male. The result was that the system downgraded CVs which included the word “women’s” and therefore favoured male applicants. Consequently, Amazon’s recruiting processes became open to claims of indirect sex discrimination. Trade union bodies such as the TUC have since called for additional protection within the Equality Act 2010 to guard against “discrimination by algorithm”.
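To make the mechanism concrete, the deliberately simplified Python sketch below shows how a model trained on skewed historical hiring outcomes can learn to penalise a gendered term. The CVs, labels and choice of scikit-learn are invented for illustration; this is not a reconstruction of Amazon’s actual system.

```python
# Hypothetical sketch: a model trained on skewed historical hiring data
# learns a negative weight for the token "women". All data is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy historical data: past CVs and whether the (mostly male) cohort was hired.
cvs = [
    "captain of chess club, python developer",          # hired
    "software engineer, rugby team member",             # hired
    "data analyst, mathematics degree",                 # hired
    "backend developer, hackathon winner",              # hired
    "captain of women's chess club, python developer",  # not hired
    "women's football team, software engineer",         # not hired
    "data analyst, women's coding society",             # not hired
    "junior developer, no degree",                      # not hired
]
hired = [1, 1, 1, 1, 0, 0, 0, 0]

vectoriser = CountVectorizer()
X = vectoriser.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# Inspect the weight the model has learned for the token "women":
weights = dict(zip(vectoriser.get_feature_names_out(), model.coef_[0]))
print(f"learned weight for 'women': {weights['women']:.2f}")
# A negative weight means CVs containing the term are scored lower,
# reproducing the bias present in the historical outcomes.
```

The point is that the model never needs to be told about gender: the bias arrives through the historical outcomes on which it is trained.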
Transparency and accountability challenges
Defending a Tribunal claim relies heavily on a paper trail and/or documentary evidence in support of the actions taken and decisions made. Where decision-making AI or machine learning technology is used, however, accountability and the decision-making process can be difficult to evidence, which poses a challenge for respondents in the Tribunal when explaining how certain conclusions and decisions were reached. Employers may use AI as a tool to assist decision making, but they would be best advised to ensure that the actual decisions which affect employees, positively or negatively, are made by a human decision-maker. In the event of litigation, this gives the employer the comfort of a person who can give evidence explaining how AI-driven data was collated and used, and the extent to which it was relied upon in making decisions about an employee’s ongoing employment.
Skills shortages and workforce implications
As the power and capabilities of AI continue to evolve, this will inevitably have an impact on the job market as automation and human-machine collaboration displace workers. Some might assume that this will result in increasing levels of unemployment as the years progress. However, it is felt that the opposite is likely to be true and that AI technologies will create more jobs than they displace. The 2025 Future of Jobs Report by the World Economic Forum estimated that these technologies will create 11 million jobs globally while displacing 9 million workers between 2025 and 2030. As would be expected, the report also confirmed that the three job roles experiencing the fastest growth are big data specialists, fintech engineers, and AI and machine learning specialists.
For employers looking to embrace advancing technology and implement automation and machine learning within their business, skills shortages in the UK workforce pose a real risk. The skills and qualifications needed to implement and maintain new technologies are not currently present within many adopting businesses, and UK employers may find it difficult to bring those skills in from the job market due to the ever-increasing demand for specialists.
This may leave employers having to think about either upskilling their existing workforce, potentially retraining those who would otherwise be displaced by new technologies, or relying increasingly on outsourcing to meet new skills needs. Those working in HR will need to think about how employers can safeguard their investment in training staff, perhaps through the use of training fee agreements. For those who wish to bring in talent at a more junior level, AI and data science apprenticeships are currently offered by 12 training providers nationally. Where an employer chooses to outsource, it will need to consider carefully whether TUPE applies and ensure that compliant processes are followed for affected workers.
Ensuring legal compliance in AI use
Whilst AI technologies such as ChatGPT and Gemini can produce template contracts of employment and workplace policies, their interpretation of the sources on which they rely can be, and often is, incorrect. Employers must therefore have sufficient knowledge of employment law to sense-check any documents produced by AI and ensure they meet the legal requirements of England and Wales. For instance, employers must ensure that all the particulars required by section 1 of the Employment Rights Act 1996 are included within their contracts. Failure to do so can result in a Tribunal awarding an employee up to four weeks’ pay (subject to the statutory cap), albeit such a claim cannot be brought on its own.
Conclusion
For an article written on the dangers of AI, it felt fitting to let the technology itself conclude on the topic. I therefore asked ChatGPT to “write a conclusion to a professional article on the risks of AI” with a UK employment law focus. It wrote:
AI is already reshaping the way UK organisations recruit, manage, and assess employees - but it also brings new legal risks that can’t be ignored. Automated systems can unintentionally discriminate, breach data protection rules, or make decisions without proper human oversight, creating exposure under key legislation such as the Equality Act 2010 and UK GDPR.
Employers should take a proactive approach: carry out regular audits of AI tools, document decision-making processes, and ensure there is always a route for human review where employment rights could be affected. Clear internal policies, transparency with staff, and ongoing training for HR and compliance teams are also vital.
Ultimately, AI can be a valuable tool in the workplace - but only if it’s used responsibly. By building fairness, accountability, and transparency into every stage of AI deployment, UK employers can innovate with confidence while staying firmly on the right side of the law.