Industry Impacts? Council of the EU publishes new compromise text for the Artificial Intelligence Act

As the AI Act moves closer to becoming law, the new compromise text published by the Council of the EU offers an insight into what the end result may be, and into the significant legal and regulatory obligations facing industry and organisations using or providing AI systems in the EU market. The new compromise text, which may be the Council of the EU’s last for the AI Act, deals with General Purpose AI definitions, High-Risk AI amendments, the addition of “elements of autonomy”, user obligations and deep fakes.

On 19 October 2022, the Czech Presidency of the Council of the European Union published the new compromise text of the Artificial Intelligence Act (AIA). Due to the several amendments tabled through the EU’s trilogue legislative process, the AIA may take longer to complete than the anticipated Q4 2023/Q1 2024 timeframe. It is hoped that this new compromise text may be the last from the Council of the EU.

General Purpose AI Systems

The initial proposal for the AIA did not cover General Purpose AI (GPAI) systems; the new Title IA introduces regulations for these systems. Recital 12(c) states that “general purpose AI systems are AI systems that are intended by the provider to perform generally applicable functions, such as image/speech recognition, and in a plurality of contexts.” GPAI is defined in Article 3(1)(b) as:

“an AI system that – irrespective of how it is placed on the market or put into service, including as open source software – is intended by the provider to perform generally applicable functions such as image and speech recognition, audio and video generation, pattern detection, question answering, translation and others; a general purpose AI system may be used in a plurality of contexts and be integrated in a plurality of other AI systems.”

Title IA is potentially problematic and may have the effect of weakening the AIA. Article 4b states that a GPAI system which may be used as a high-risk AI system, or as a component of a high-risk AI system, must comply with the AIA requirements that the European Commission would set out under its committee procedure. One of the amendments requires the Commission to take the AI value chain into account when adopting implementing acts concerning GPAI systems. However, Article 4c states that Article 4b does not apply where the provider of the GPAI has excluded all high-risk uses in the instructions of use or information accompanying the GPAI. This means that if a GPAI was intended to be used to create pictures of cute cats but could also be used to programme drones to kill all cats in the world, provided that the GPAI’s instructions of use say “this AI system is not to be used to bring about the extinction of cats”, then Article 4b does not apply to that system.

If the language of Articles 4b and 4c remains in the final version of the AIA, one of the most significant legal and liability issues facing producers of GPAI systems will be ensuring that their instructions of use sufficiently disclaim any prohibited and high-risk uses. Similarly, users of GPAI systems will need to ensure that their use does not go beyond what is permitted in the usage instructions, and they may need legal assistance in setting those parameters.

High-Risk AI

A proposed amendment to Article 6(3) provides that an AI system will be considered high-risk “unless the output of the system is purely accessory in respect of the relevant action or decision to be taken and is not therefore likely to lead to a significant risk to the health, safety or fundamental rights.” Current high-risk activities include, for example, AI systems that screen resumés for job applications. However, an AI system that processes job applicants’ resumés merely to create a music playlist for each person based on their CV, rather than to identify whether they are suitable for a role, may not be high-risk. This suggests that the risk lies not in the fact that an AI system uses personal data, but in the purpose of the processing.

The European Parliament’s co-rapporteurs of the AIA have proposed expanding the European Commission’s powers to extend the list of high-risk systems and prohibited practices at a later date. Euractiv reported that the eighth batch of compromise amendments on the proposed Artificial Intelligence regulation was shared by the lead lawmakers, Dragoș Tudorache and Brando Benifei, with the representatives of the other political groups on 21 October 2022. Under the original proposal for the AIA, the European Commission could not change the list of high-risk areas other than by modifying or deleting the examples provided. It is now proposed that the European Commission be permitted to make further amendments and to extend the list of prohibited AI activities.

Elements of Autonomy

The Council has proposed amending the existing AI definition to provide that AI is a system that “is designed to operate with elements of autonomy”. Recital 6 of the proposed legislation states that the “notion of AI system should be clearly defined to ensure legal certainty, while providing the flexibility to accommodate future technological developments”. However, “elements of autonomy” does not give legal certainty: a cleverly designed formula in a spreadsheet might exhibit “elements of autonomy” (see the sketch below). An amendment to Recital 6 provides that the “concept of the autonomy of an AI system relates to the degree to which such a system functions without human involvement.”

The reference to “the degree to which such a system functions” in Recital 6 suggests that the “elements of autonomy” in the AI definition are qualitative rather than quantitative. However, it remains to be seen whether that is the case, and it is not yet apparent how such a qualitative assessment should be undertaken.
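To illustrate the definitional problem, consider the following minimal, purely hypothetical sketch (the scenario, names and threshold are invented for illustration). It contains a few lines of deterministic, spreadsheet-style logic that, once deployed, produce a decision for each individual case “without human involvement”, and so arguably exhibit “elements of autonomy” under the proposed wording, even though nothing about the logic is learned or adaptive:

```python
# Hypothetical example: a deterministic, spreadsheet-style rule.
# The function below is equivalent to an IF() formula in a spreadsheet,
# yet once deployed it acts on each input and produces a decision with
# no per-case human involvement - arguably "elements of autonomy"
# under the compromise text's wording.

def screen_application(income: float, debts: float) -> str:
    """Approve or refer a loan application using a fixed ratio test."""
    ratio = debts / income if income > 0 else float("inf")
    return "approve" if ratio < 0.4 else "refer to human"

if __name__ == "__main__":
    applications = [(52_000.0, 12_000.0), (30_000.0, 21_000.0)]
    for income, debts in applications:
        print(f"income={income:,.0f} debts={debts:,.0f} "
              f"-> {screen_application(income, debts)}")
```

Whether such a rule falls inside or outside the definition would seem to turn entirely on how the qualitative assessment of “autonomy” is ultimately framed.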

Obligations of Users

The introduction of Article 2(8) is potentially problematic. This proposed amendment states that the AIA would not apply to the “obligations of users” who are natural persons using AI in the course of a purely personal, non-professional activity (except for Article 52, which deals with transparency obligations). This means that, as long as you are acting for purely personal reasons, nothing stops you from deploying an AI that would otherwise be prohibited under the AIA. A person could set an AI loose on social media to manipulate emotions, an activity the AIA would otherwise prohibit, without legal repercussions. Similarly, an individual acting in their personal capacity would not be required to follow user instructions as required under the AIA and could deploy otherwise harmless AI systems for nefarious purposes, provided that those purposes are purely personal.

Hypothetically, if a supervillain were to create an evil AI system to unleash on the world, provided that the motive is simply personal revenge and not professional (supervillainy would need to be this individual’s hobby rather than their profession), this would not be caught by the AIA under the most recent proposal.

Deep Fakes

Article 52 introduces an exception which should give pause for thought. Article 52(3) of the AIA states that users of an AI system that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful (‘deep fakes’), shall disclose that the content has been artificially generated or manipulated. The exception is that “where the content is part of an evidently creative, satirical, artistic or fictional work or programme”, the content does not need to be labelled as a deep fake, “subject to appropriate safeguards for the rights and freedoms of third parties”.

This is potentially a significant failing. Poe’s law is an adage of Internet culture holding that, without a clear indicator of the author’s intent, any parody of extreme views can be mistaken by some readers for a sincere expression of the views being parodied. Suppose, for example, that a comedy show doctored an interview with President Biden to have him say he was a Russian secret agent. The clip would fall under Article 52’s satire exception and would not require labelling as a deep fake, yet it would likely be circulated in some corners of the internet as factual news. Whatever the original intention, deep fakes always carry the potential for harm.

Conclusion

As the final text of the AIA draws closer, it is becoming increasingly evident that it will have ramifications for industry on a scale not seen since the introduction of the GDPR. Once the new regulation becomes law, there will be 18 months to implement it. However, as experience with the GDPR showed, organisations may need more than 18 months of lead-in time, especially given that high-risk AI systems must be designed to incorporate certain transparency and explainability obligations. Organisations should review the systems they use (including related policies and procedures) to identify whether, and to what extent, the regulation may apply to them, and then put in place a legal, liability and risk management framework with policies and procedures that address the new legislation.