
Just using AI? You may still have onerous AI Act obligations

Users of high-risk AI systems will bear a substantial set of obligations designed to ensure the safe and lawful use of such technologies.

Under the AI Act, high-risk AI systems include AI systems used as safety components of products covered by specified EU harmonisation legislation, such as legislation relating to machinery; they also include systems used in areas such as biometric identification, critical infrastructure management, education, employment, access to essential services, law enforcement, migration, and judicial and democratic processes.

In the AI Act, “users” are referred to as “deployers”. For the purposes of this article, however, we will refer to them as “users”. You are an AI system user if you use an AI system under your authority, except where the AI system is used in the course of a personal, non-professional activity.

User obligations under the AI Act

Users of high-risk AI systems have multiple obligations stemming from the AI Act:

  • Users must first take appropriate technical and organisational measures to ensure that they use the systems in accordance with the accompanying instructions for use. Users exercising control over high-risk AI systems are required by the AI Act to put human oversight in place, a measure seen as pivotal to avoiding the undue risks associated with the autonomous operation of AI systems. The individuals entrusted with this oversight must be suitably qualified, adequately trained, and given the resources necessary for effective supervision in line with Article 14 of the AI Act.
  • Users are also obliged to monitor and regularly update robustness and cybersecurity measures so as to maintain the system’s integrity over time. Users have a duty to ensure that the input data is relevant and sufficiently representative of the AI system’s intended purpose, supporting the reliability and impartiality of AI applications.
  • Users must monitor the system’s operation in accordance with its instructions for use and must promptly inform providers and the relevant authorities where the AI system presents a risk or malfunctions significantly. It is worth noting that credit institutions regulated by Directive 2013/36/EU can satisfy this monitoring obligation by adhering to the rules set out in Article 74 of that directive.
  • Users must retain the logs automatically generated by the high-risk AI system for a minimum of six months. This requirement promotes accountability and transparency in the system’s functioning.
  • The use of HR technologies which incorporate AI may be considered high-risk. Prior to putting into service or using a high-risk AI system at the workplace, users will need to consult workers’ representatives with a view to reaching an agreement and inform the affected employees that they will be subject to the system.
  • Moreover, users must inform individuals who are subject to decisions made or assisted by the high-risk AI systems listed in Annex III about the intended use of the system and the type of decisions it is configured to make, upholding a principle of transparency towards those affected by AI applications.
  • Users must undertake a fundamental rights impact assessment prior to deploying a high-risk AI system (in addition to any data protection impact assessment which they may need to carry out). This assessment should cover, among other things, the system’s intended purpose, the geographic and temporal scope of its deployment, and the categories of natural persons and groups likely to be affected. The assessment promotes careful scrutiny of the system’s adherence to existing laws and its potential impacts on fundamental rights and the environment.
  • If the identified risks cannot be mitigated, users must refrain from deploying the system and must inform the provider and the national supervisory authority, reflecting a precautionary approach to AI deployment. They are also required to notify and involve other stakeholders, including equality bodies and data protection authorities, in the impact assessment process, introducing a multi-stakeholder approach to AI governance. Users should also coordinate the fundamental rights impact assessment with any requisite data protection impact assessment, envisaging a harmonised approach to safeguarding rights in the AI ecosystem.

Users deemed to be Providers

The AI Act outlines the scenarios in which entities such as users, distributors, importers, or other third parties may be regarded as providers (i.e. the entities bearing the bulk of the AI Act’s regulatory obligations) of a high-risk AI system. This change in status triggers the application of the providers’ obligations specified under Article 16 to the newly designated providers.

  • Users may be deemed a provider if they place their name or trade mark on a pre-existing high-risk AI system that has already been placed on the market or put into service. This attribution of identity to the system directly engages the user in the regulatory obligations of a provider.
  • A user may also be redefined as a provider where it carries out a substantial modification to an already operational high-risk AI system that retains its high-risk classification. Furthermore, alterations to a general-purpose AI system that move it into a high-risk category would likewise make the user a provider.

Following this transition from user to provider, the initial provider of the AI system is relieved of the responsibilities of a provider in relation to that specific AI system under the AI Act. The original provider is, however, obliged to supply the new provider with all of the requisite technical documentation and other pertinent information to enable the latter to discharge its regulatory responsibilities effectively. This obligation also applies to foundation models which are directly integrated into high-risk AI systems.

When this occurs, a written agreement must be put in place between the new provider and any third party contributing tools, services, components, or processes integrated into the high-risk AI system, specifying the technical access, assistance, and information the provider requires in order to meet its obligations under the regulation.

Conclusion

Users of software cannot ignore the AI Act simply because they are not developing the product incorporating high-risk AI, or are merely subscribing to such a product. Users need to be able to identify when the systems they use are caught by the AI Act, and to understand what their obligations are.

The key takeaways from the user obligations of the AI Act are that users need to ensure that they have an adequate AI governance framework in place. This requires human oversight at its core, and continuous monitoring of the systems, their security, and their operation. Retention of logs and maintenance of a paper trail will be key in regulatory compliance, as will transparency with stakeholders and the provision of information to affected parties.

Organisations which incorporate high-risk AI systems into their business, even just by using products which include those systems, need to consider whether they have an adequate AI governance framework in place in advance of the AI Act coming into force.

This is an area in which William Fry is actively engaging with its clients.