Artificial Intelligence (AI) continues to develop and expand into everyday life.
Many employers are interested in utilising AI to minimise overheads, increase automation and target client bases. As such, AI will inevitably influence employment: in obvious ways, such as the replacement of human workers in lower-skilled roles, and in less obvious ways, such as compliance with equality legislation.
AI & Employment: Rapid evolution
With AI, as is often the case in employment law, legislation and case law will play catch-up to real-life practices.
AI is already present in most businesses – for example, in facial recognition used to secure company devices, or in spam filters on email. Some firms use (or are considering using) AI at the recruitment stage to screen candidates. It has already been suggested that some AI programmes carry out this screening without regard to relevant employment law protections, or that they reflect the latent biases of their programmers – for example, by filtering out language patterns that are more prevalent among certain groups of people.
In Ireland, this could expose the employer to litigation risk – it is well established that our equality protections extend to the recruitment stage. Even apart from employment equality legislation, AI-led employment decisions are potentially unlawful: under the GDPR, individuals have the right not to be subject to automated decision-making. The GDPR also imposes heightened data protection obligations (complemented by the AI Act, discussed below). For example, employers must provide information about such processing, and certain types of data processing by AI systems may require a data protection impact assessment. Further, the risk of general litigation may increase due to the proposed “AI Liability Directive”, which changes the burden of proof in claims for damage caused by AI systems.
The protection against automated decision-making is an existing concept that future legislation, such as the draft EU regulation on AI (the AI Act), is likely to expand on. The AI Act is a draft piece of EU legislation expected to come into law in late 2023 or early 2024, and it is likely to have a significant effect in much the same way as the GDPR did in 2018.
The AI Act classifies AI systems which deal with employment as “high-risk”. High-risk systems will be subject to further scrutiny and control: regular formal risk assessments, data protection impact assessments and onerous record-keeping requirements.
The CJEU has also recently begun hearing a preliminary reference on the implications of automated decision-making for credit ratings – an interesting case which is expected to test the limits of what can be subject to AI-led decisions.
AI & Potential Dangers
As well as the risks above, there are potential dangers in respect of AI and employment where roles previously carried out by humans are automated and replaced with AI.
While redundancy legislation is designed to protect, engage, and compensate employees when their role is no longer required, it will be optically challenging for any employer to announce redundancies where AI will carry out the work in the future. A notable example of public backlash against AI carrying out “human” work is the story of a journalist from The Atlantic magazine using an AI-generated image in August 2022, which went viral and caused significant negative publicity for the magazine.
There is also the nebulous risk of AI creating intellectual property exposure where an AI programme creates valuable data – such as client lists, candidate databanks or creative works. If adequate IP provisions are not in place, it may not be clear who legally owns that data: the employee, the employer, or the AI provider. While employers can mitigate this risk with strong user agreements between the employer and the AI provider, this depends on employee compliance. For example, an employee “teaching” the AI might feed the algorithm material that is subject to copyright, which could put the employer in a compromised position.
Further, the law in Ireland and the UK is an outlier in how it treats IP in AI-generated works: where a computer-generated work has no human author, the author is taken to be the person who made the arrangements necessary for its creation. Therefore, an employer could own the copyright in a piece of work if an employee made the necessary arrangements to create that work.
However, most jurisdictions require a human author for copyright protection to apply. It remains to be seen how this area will evolve, and copyright ownership will, for now, need to be dealt with on a case-by-case basis. It is not clear whether IP created by AI systems will be capable of protection at all – but inserting AI-specific IP clauses into employment contracts may help address the issue.
How employers can prepare for AI adoption
While we await new rules, employers can prepare for the widespread adoption of AI in the workplace by reviewing their processes and determining which roles may be at risk, now or in the future, of being replaced or impacted by AI. This will help employers put in place strategies for the effective adoption of AI technologies and protect against potential challenges from disgruntled employees or an increasingly competitive market.