The Time to (AI) Act is Now: A Practical Guide to General Purpose AI Models and Systems Under The AI Act

The EU’s Artificial Intelligence Act (the AI Act) marks a pivotal step in regulating artificial intelligence (AI) by establishing a framework to ensure ethical AI use while safeguarding fundamental rights.

The AI Act, published in the Official Journal on 12 July 2024, introduces strict rules on the deployment and use of certain AI systems. This article provides a practical guide to help businesses navigate the AI Act, focusing on their obligations concerning general-purpose AI models and systems.

A. Overview of General-Purpose AI Models and Systems

1. Definition and Importance:

  • General-Purpose AI Model: An AI model trained with large datasets, capable of performing a wide range of tasks and integrated into various downstream systems or applications (Article 3(63)).
  • General-Purpose AI System: An AI system based on a general-purpose AI model that serves multiple purposes directly or when integrated into other AI systems (Article 3(66)).
  • Their importance lies in their versatility and broad applicability, which necessitates robust regulatory oversight to manage risks and ensure ethical deployment.

2. Classification of General-Purpose AI Models with Systemic Risk:

  • A general-purpose AI model is classified as having systemic risk if it has high-impact capabilities, in particular where the cumulative amount of computation used for its training exceeds 10^25 FLOPs, or where the European Commission designates it as such on the basis of other criteria (Article 51). FLOPs (floating-point operations) here count the total number of floating-point calculations performed during training, rather than processing speed. Crossing this threshold signals a model of unusual scale and capability, which is why such models attract the most stringent oversight. A rough illustration of how the threshold relates to model size is sketched below.
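
To put the 10^25 figure in context, the sketch below applies the widely used rule of thumb that training compute is roughly 6 × parameters × training tokens. Both the heuristic and the model figures are illustrative assumptions, not anything prescribed by the AI Act or drawn from a real model.

```python
# Back-of-the-envelope check against the AI Act's 10^25 FLOPs presumption
# (Article 51), using the common heuristic that training compute is about
# 6 * N * D floating-point operations (N = parameters, D = training tokens).
# The model figures below are hypothetical.

THRESHOLD_FLOPS = 1e25

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate total training compute via the 6 * N * D heuristic."""
    return 6 * parameters * training_tokens

# Hypothetical model: 100 billion parameters trained on 10 trillion tokens.
flops = estimated_training_flops(parameters=100e9, training_tokens=10e12)
print(f"Estimated training compute: {flops:.1e} FLOPs")
print("Presumed systemic risk under Article 51:", flops > THRESHOLD_FLOPS)
```

On these hypothetical figures the estimate comes to 6 × 10^24 FLOPs, just under the threshold; training a model of the same size on a few times more data would cross it.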

3. Responsibilities along the AI Value Chain:

  • Providers, deployers, and other third parties can be considered providers of high-risk AI systems if they modify a general-purpose AI system's intended purpose in such a way that it becomes a high-risk AI system (Article 25).
  • Providers of general-purpose AI models must cooperate with downstream providers to ensure compliance with the AI Act’s obligations (Article 25(4)).

4. Obligations for Providers of General-Purpose AI Models:

  • Draw up and maintain technical documentation of the model, including its training and testing process and the results of its evaluation (Article 53(1)(a)).
  • Make documentation and information available to downstream providers so that they can understand the model’s capabilities and limitations (Article 53(1)(b)).
  • Put in place a policy to comply with EU copyright law, ensuring that content used for training respects reserved rights (Article 53(1)(c)).
  • Publish a sufficiently detailed summary of the content used for training the model (Article 53(1)(d)). One way a provider might structure this documentation internally is sketched below.
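
The AI Act and its annexes set out in detail what this technical documentation must contain; the skeleton below is merely one illustrative way a provider could capture the core fields in an internal record. The field names are our own and are not taken from the Act.

```python
from dataclasses import dataclass, field

@dataclass
class GPAIModelDocumentation:
    """Illustrative internal record for general-purpose AI model documentation.

    Field names are illustrative only; the authoritative list of required
    information is set out in the AI Act and its annexes.
    """
    model_name: str
    intended_tasks: list[str]               # what the model is designed to do
    architecture_summary: str               # e.g. model family and parameter count
    training_data_description: str          # provenance and curation of training data
    training_compute_flops: float           # cumulative training compute
    evaluation_results: dict[str, float] = field(default_factory=dict)
    capabilities_and_limitations: str = ""
    copyright_policy_reference: str = ""    # link to the copyright compliance policy
    training_content_summary_url: str = ""  # public summary of training content
```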

5. Special Obligations for Providers of General-Purpose AI Models with Systemic Risk:

  • Perform model evaluations using state-of-the-art protocols, including adversarial testing (Article 55).
  • Mitigate systemic risks and ensure cybersecurity protections are in place (Article 55).

6. Transparency Obligations (Article 50):

  • Providers of AI systems, including general-purpose AI systems, that generate synthetic content must ensure the outputs are marked in a machine-readable format as artificially generated or manipulated.
  • The technical solutions used for these markings should be effective, interoperable, robust, and reliable, taking into account technical feasibility and cost. One illustrative marking approach is sketched below.
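
The AI Act does not prescribe a particular marking technique; in practice providers are considering approaches such as embedded provenance metadata and watermarking. Purely as an illustration, the sketch below writes a provenance note into a PNG text chunk using the Pillow library; the key names are our own, and metadata of this kind alone would not meet the robustness and interoperability expectations, since it is easily stripped.

```python
# Minimal illustration of a machine-readable "AI-generated" marker:
# embed a provenance note as a PNG text chunk using Pillow.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_ai_marker(image: Image.Image, path: str, generator: str) -> None:
    """Save an image with text chunks flagging it as AI-generated."""
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")   # illustrative key names
    metadata.add_text("generator", generator)
    image.save(path, pnginfo=metadata)

def read_ai_marker(path: str) -> dict:
    """Read back the text chunks embedded in a PNG file."""
    with Image.open(path) as img:
        return dict(img.text)

# Usage with a synthetic placeholder image:
img = Image.new("RGB", (64, 64), color="white")
save_with_ai_marker(img, "output.png", generator="example-model")
print(read_ai_marker("output.png"))
```

Robust approaches would typically combine standardised provenance metadata with watermarking designed to survive common transformations of the content.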

7. Deployer Obligations:

  • Deployers, defined as entities using an AI system under their authority (Article 3(4)), must ensure that the AI systems they use comply with the AI Act’s requirements.
  • Deployers must collaborate with providers to maintain compliance and report any substantial modifications that might change the AI system’s risk classification.

B. Key Dates

  • 12 July 2024: The AI Act was published in the Official Journal of the EU.
  • 1 August 2024: The AI Act enters into force.
  • 2 August 2025: The obligations for providers of general-purpose AI models begin to apply.

C. Enforcement and Penalties

1. Supervision and Enforcement:

  • The AI Office and national authorities will monitor compliance, with the European Commission, acting through the AI Office, having exclusive powers to enforce the obligations of providers of general-purpose AI models (Article 88).
  • Providers must respond to documentation requests and facilitate evaluations by the AI Office (Articles 91 and 92).

2. Fines and Penalties:

  • The European Commission can fine providers of general-purpose AI models up to 3% of their total worldwide turnover in the preceding financial year or €15 million, whichever is higher (a simple worked illustration of this cap follows below).
  • Fines may be imposed where the provider has intentionally or negligently infringed the relevant provisions of the AI Act, failed to comply with a request for documents or information under Article 91 or supplied incorrect, incomplete, or misleading information, failed to comply with a measure requested under Article 93, or failed to give the European Commission access to the general-purpose AI model (including a model with systemic risk) for evaluation under Article 92.
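
For concreteness, the "whichever is higher" cap can be expressed as a trivial calculation; the turnover figures below are hypothetical.

```python
def max_gpai_fine(worldwide_turnover_eur: float) -> float:
    """Upper limit of the fine for providers of general-purpose AI models:
    the higher of 3% of total worldwide annual turnover or EUR 15 million."""
    return max(0.03 * worldwide_turnover_eur, 15_000_000)

print(max_gpai_fine(200_000_000))    # 3% = EUR 6m, so the EUR 15m floor applies
print(max_gpai_fine(2_000_000_000))  # 3% = EUR 60m, which exceeds EUR 15m
```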

D. Steps to Compliance

1. Conduct an AI Inventory:

  • Begin by creating a comprehensive inventory of all AI systems currently in use within your organisation.
  • Categorise these systems based on their purpose, functionality, and the data they process; a minimal record structure for such an inventory is sketched below.
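
The AI Act does not prescribe a format for this inventory. As a starting point, the sketch below writes one row per AI system to a CSV file; the column names and the example entry are our own illustrations.

```python
import csv

# Illustrative AI inventory: one row per AI system, recording its purpose,
# the data it processes, and a first view on whether it may involve a
# general-purpose AI model or system.
FIELDS = ["name", "business_purpose", "provider", "data_categories",
          "possible_general_purpose_ai", "notes"]

rows = [
    {
        "name": "support-chat-assistant",   # hypothetical example
        "business_purpose": "drafting customer support replies",
        "provider": "external vendor",
        "data_categories": "customer data",
        "possible_general_purpose_ai": "yes",
        "notes": "built on a third-party general-purpose AI model",
    },
]

with open("ai_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
```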

2. Assess AI Systems Against General-Purpose AI Models and Systems Rules:

  • Review each AI system to determine if it could be classified as a general-purpose AI model or system.

3. Implement Compliance Measures:

  • If any AI system is identified as a general-purpose AI model or system, develop a plan to bring it into compliance.
  • Establish internal policies and procedures for ongoing monitoring and assessment of AI systems to prevent non-compliance.

4. Training and Awareness:

  • Educate employees, especially those involved in AI development and deployment, about the new regulations and the importance of compliance.
  • Provide specific training on identifying and mitigating the risks associated with general-purpose AI models and systems.

5. Documentation and Reporting:

  • Maintain detailed records of all general-purpose AI models or systems, assessments, and compliance measures undertaken.
  • Be prepared to provide documentation to regulatory authorities if required.

The AI Act represents a comprehensive effort by the EU to regulate AI technologies and protect fundamental rights. Ensuring compliance with the obligations for general-purpose AI models is crucial for fostering trust and promoting ethical AI practices. By taking proactive steps to assess, document, and monitor your AI systems, your organisation can navigate these regulations effectively and maintain a competitive edge in a rapidly evolving technological landscape.

For further guidance and support on AI compliance, please contact Barry Scannell, Leo Moore, Rachel Hayes or any member of the William Fry Technology Department.