Artificial intelligence has rapidly become part of everyday business operations, and one of the most common uses is within HR and recruitment. Tools like ChatGPT can produce job descriptions, sift applications, draft policies and even conduct early-stage candidate screening. While AI can bring efficiency and clarity to HR processes, it also creates significant legal risks that employers must manage carefully. In the UK, these risks primarily involve discrimination, data protection, confidentiality and a growing expectation from regulators that employers fully understand how they use AI in decision making.
With the increasing reliance on AI tools, HR teams are looking for guidance on what is lawful, what is risky and how to use technology without exposing the organisation to employment claims. Understanding the potential pitfalls is essential for ensuring compliance, fairness and transparency across recruitment and HR practices.
How AI Can Create Discriminatory Outcomes
One of the most significant concerns around ChatGPT in recruitment is the risk of indirect discrimination. Although the tool does not make decisions independently in the way automated screening software does, it can still generate content that reflects biases found in the data it was trained on. This can occur when the system mirrors historical patterns in employment, meaning that certain groups may be favoured or excluded in subtle and unintended ways.
A job description created using AI may include language that discourages older applicants or emphasises characteristics that indirectly disadvantage women, disabled people or ethnic minority candidates. Phrases such as "energetic", "fast-paced" or "native English speaker" might seem harmless, yet tribunals have previously found that such terminology can amount to discriminatory practice.
Shortlisting processes can also be affected. If an HR professional asks ChatGPT to rank candidate CVs or identify the strongest applicants based on criteria the system has generated, the organisation risks delegating part of the selection decision to an unregulated third party. The Equality Act 2010 requires employers to ensure recruitment processes do not discriminate and that candidates are assessed fairly. If employers cannot demonstrate how decisions were reached or what influenced them, liability may follow.
Data Protection Concerns When Using AI
Another major legal risk lies in data protection compliance. HR professionals often work with highly sensitive information, including health data, criminal records, ethnicity, gender identity and financial details. Inputting any of this information directly into ChatGPT or other generative AI tools can amount to an unlawful disclosure of personal data.
Under the UK GDPR and the Data Protection Act 2018, employers must ensure that personal data is processed securely, lawfully and transparently. ChatGPT is not designed to store or process confidential HR information, and the organisation using the tool may not know where the data is held or whether it is deleted. This creates a significant risk of data breaches, regulatory enforcement and reputational damage.
The Information Commissioner’s Office has already issued several warnings to organisations about AI usage, emphasising that employers remain fully responsible for protecting personal data, even when technology is used to support operations. Any disclosure of employee or candidate information through AI tools could be reportable to the ICO and may lead to compensation claims from affected individuals.
Transparency and Accountability in HR Decisions
The use of AI also presents transparency challenges. Employees and job applicants have the right to understand how decisions affecting them are made. If AI has influenced shortlisting, policy development or internal investigations, employers must be able to explain how the tool was used.
There is also growing regulatory expectation that organisations adopt clear governance frameworks for the use of AI. The UK Government’s AI guidance and the ICO’s detailed recommendations emphasise that employers must understand the limitations of AI, ensure human oversight and document decision making.
If internal investigations or disciplinary actions are guided by AI generated content, there is a risk that allegations of unfair dismissal or procedural unfairness may arise. Tribunals have consistently held that employers must conduct full and reasonable investigations, relying only on information that is accurate, verifiable and fairly applied. AI created content does not always meet that standard.
Confidentiality and Intellectual Property Issues
Depending on its settings, ChatGPT may retain the prompts it is given and use them to improve its models. If an HR professional inputs details of a grievance, internal conflict, upcoming restructuring process or confidential business plan, they may inadvertently disclose commercially sensitive information. Once information is entered into a public AI tool, control over that information is lost.
This raises concerns not only around confidentiality but also intellectual property. If AI is used to draft policies, employment contracts or handbooks, the organisation must ensure it has full ownership of the material and that the content is legally accurate. AI generated documents may include outdated wording, incorrect statutory references or clauses that are unenforceable under UK law.
How Employers Can Use AI Responsibly
Despite the risks, there is no requirement for employers to avoid AI entirely. The key is to use it responsibly, transparently and with strong governance. AI can be helpful for producing early drafts, brainstorming ideas or generating neutral language, as long as a qualified professional reviews and refines the final content.
Employers should create internal guidelines explaining how AI may and may not be used by HR teams. This should include prohibitions on inputting personal data, restrictions on using AI to inform hiring decisions and clear requirements for human oversight. IT and legal teams should work together to ensure that policies reflect evolving guidance from regulators.
Training for HR and management teams is essential. Many legal issues arise because individuals misunderstand the capabilities of AI or over-rely on its output. Emphasising that AI is a support tool and not a decision maker helps mitigate the risk of discriminatory or unlawful practices.
How Penerley Can Support You
ChatGPT and other AI tools are here to stay, and many organisations will continue to integrate them into their HR and recruitment processes. However, without appropriate safeguards, businesses can unintentionally expose themselves to legal risk.
At Penerley, we advise employers on how to implement AI responsibly while remaining compliant with employment law and data protection regulations. We can review your recruitment processes, update your HR documentation, draft AI usage policies and provide training to ensure your teams understand the boundaries of lawful AI use.
If you would like advice on managing the legal risks of AI in HR and recruitment, please contact Penerley today. Our team is here to support you in building safe, compliant and future-ready HR practices.
