Ireland Designates Nine Authorities to Safeguard Fundamental Rights Under the EU AI Act

On 31 October 2024, Ireland designated nine national authorities to oversee the protection of fundamental rights in the context of artificial intelligence (AI).

This move, announced by Minister of State for Trade Promotion, Digital and Company Regulation, Dara Calleary, positions Ireland as a proactive leader in ethical AI governance. The initiative aligns with the EU Artificial Intelligence Act (AI Act) and underscores Ireland’s commitment to ensuring that AI systems are developed and deployed responsibly, without compromising public rights.

Context: The EU AI Act’s Comprehensive Regulatory Framework

The AI Act establishes a multi-layered regulatory approach to AI oversight by designating specific roles for various regulatory bodies. Each role has a distinct focus, ensuring that the technical, ethical, and human rights implications of AI are fully addressed:

  • Notifying Authorities (Article 28): Responsible for assessing, designating, and monitoring conformity assessment bodies.
  • Notified Bodies (Article 31): Established under national law and required to meet stringent organisational, resource, and process standards to perform their tasks.
  • Market Surveillance Authorities (Article 74): Tasked with overseeing AI systems in the EU market to ensure compliance with the AI Act’s standards.
  • National Competent Authorities (Article 70): Each Member State must designate national competent authorities, and a single point of contact among them, to implement and enforce the AI Act’s provisions.

In contrast to these entities, Article 77 empowers ‘national public authorities or bodies’ specifically to protect fundamental rights in the context of high-risk AI applications. This focus distinguishes them from technical or market-based regulatory bodies and makes them essential players in the EU’s ethical oversight framework. The nine Irish authorities outlined below are designated under Article 77 of the AI Act.

The Designated Authorities and Their Mandates

Under Article 77 of the AI Act, national public authorities or bodies that supervise or enforce obligations under EU law protecting fundamental rights, including the right to non-discrimination, in relation to the use of high-risk AI systems referred to in Annex III are granted enhanced powers. These powers enable them to access documentation, request technical testing, and collaborate with market surveillance authorities to ensure that high-risk AI systems do not infringe fundamental rights.

In cases where documentation alone is insufficient, Article 77 allows these authorities to request technical testing of AI systems through the market surveillance authority, ensuring robust enforcement against AI systems that could harm public rights. This safeguard aligns with the AI Act’s wider objective of upholding health, safety, and fundamental rights, extending to high-risk and prohibited AI systems alike.

Each authority will play an important role in overseeing AI systems in sectors that intersect with their specific mandates:

  • Data Protection Commission (DPC): The DPC is expected to play a central role in AI governance, given its regulatory expertise in data protection under the General Data Protection Regulation (GDPR). It will likely oversee AI systems to ensure compliance with data protection standards, focusing on issues such as data minimisation, purpose limitation, and transparency, particularly for high-risk AI applications that process personal data.
  • Coimisiún na Meán (Media Commission): The Media Commission will likely focus on AI’s influence within media and public discourse. This could involve monitoring generative AI content, deepfakes, and misinformation to ensure that AI-generated media is appropriately labelled and transparent, protecting the public from potentially manipulative or misleading content.
  • Irish Human Rights and Equality Commission (IHREC): IHREC is expected to examine AI applications where human rights concerns may arise, especially in areas prone to discrimination, such as employment and financial services. Its role will likely include helping to ensure that AI-driven decisions are free from bias and do not unfairly impact individuals based on protected characteristics, such as gender, ethnicity, or socio-economic background.
  • An Coimisiún Toghcháin (Electoral Commission): The Electoral Commission is anticipated to focus on preserving the integrity of Ireland’s democratic processes in relation to AI. Its role may include monitoring AI technologies used in political campaigns, voting systems, or political advertisements to ensure these technologies do not undermine fairness or transparency in democratic activities.
  • Ombudsman for Children: The Ombudsman for Children is expected to safeguard children’s rights in the context of AI, for example, in sectors like education and healthcare. This oversight might involve ensuring that AI systems affecting children are fair, transparent, and designed with consideration for children’s privacy, safety, and welfare.
  • Environmental Protection Agency (EPA): The EPA is likely to oversee AI applications related to environmental monitoring and protection. This could include AI systems used in sectors like agriculture, energy, and industry, ensuring that AI-driven technologies adhere to environmental standards and do not contribute to ecological harm.
  • Financial Services and Pensions Ombudsman: This Ombudsman is expected to monitor the use of AI within financial services, focusing on consumer protection. AI-driven decisions, such as those used in credit scoring, loan approvals, or insurance underwriting, may be scrutinised to prevent unfair treatment or discriminatory outcomes for consumers.
  • Ombudsman: The Ombudsman is likely to handle complaints related to AI in public services, ensuring that AI systems used by government entities are transparent, fair, and accountable. This may include overseeing AI systems in sectors such as social services or healthcare and providing a pathway for citizens to address any adverse impacts.
  • Ombudsman for the Defence Forces: The Ombudsman for the Defence Forces is anticipated to oversee the use of AI within the defence sector, with a focus on safeguarding the rights and welfare of military personnel. This could involve monitoring AI applications to ensure that they are used ethically and do not infringe on the rights of service members or compromise their safety.

Legal Implications for Businesses  

For businesses providing or using high-risk AI systems in Ireland, these regulatory designations represent a new level of compliance oversight. From 2 August 2026, when the AI Act’s obligations for high-risk AI systems become generally applicable, businesses in Ireland must be prepared to meet its documentation, transparency, and accountability standards. Below are the primary legal implications to consider:

  1. Enhanced Documentation and Disclosure Obligations: Companies deploying high-risk AI will be required to maintain accessible documentation detailing their systems’ functionality, decision-making processes, and compliance measures. This documentation must be in a format that authorities can readily interpret and review, placing an increased burden on AI developers to ensure clarity and precision in their records (a minimal illustrative sketch of such a record appears after this list).
  2. Transparency and Fairness in AI Systems: The designated authorities will help enforce transparency requirements, especially in sectors where AI-driven decisions can significantly impact individuals’ rights. Companies involved in finance, healthcare, employment, and public services will need to demonstrate that their AI systems operate in a fair, transparent, and non-discriminatory manner. This may necessitate adjustments to algorithmic design, model training, and data usage to mitigate bias.
  3. Potential for Technical Testing of AI Models: In cases where authorities determine that documentation alone does not suffice, businesses may be subject to technical testing of their AI systems, coordinated by the market surveillance authority. This procedural mechanism allows regulators to examine the actual operation and outcomes of AI models, including high-stakes applications such as financial decision-making tools or predictive healthcare algorithms. Companies should be prepared for the possibility of such examinations and should verify in advance that their models would withstand comparable scrutiny.
  4. Legal Risks for Non-Compliance: The AI Act’s provisions grant these authorities significant powers to enforce compliance, including the ability to impose fines and, in severe cases, restrict or suspend the use of AI systems found to violate fundamental rights. Businesses that fail to meet the AI Act’s standards face considerable legal risks, including reputational harm, fines, and operational disruptions. This is particularly pertinent for sectors where AI misuse could lead to discrimination, privacy breaches, or safety risks.
  5. Proactive Compliance Strategies and Internal Audits: To mitigate legal exposure, companies should conduct regular internal audits of their AI systems and engage in proactive compliance strategies, including risk assessments and bias testing (a simple bias-check sketch also follows this list). Developing a robust compliance framework will be crucial for navigating this complex regulatory environment and demonstrating good-faith efforts in the event of regulatory scrutiny.
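To make the documentation obligation in point 1 concrete, the sketch below shows one way a deployer might structure an internal compliance record for a high-risk AI system so that it can be exported in a form an authority can readily review. This is a minimal illustration in Python: the record fields, the example system, and the export format are our own assumptions, not terminology or requirements prescribed by the AI Act.

    from dataclasses import dataclass, field, asdict
    from datetime import date
    import json

    # Hypothetical internal compliance record for a high-risk AI system.
    # Field names are illustrative assumptions, not AI Act terminology.
    @dataclass
    class AISystemRecord:
        system_name: str
        intended_purpose: str          # what the system is designed to do
        risk_category: str             # e.g. "high-risk (Annex III)"
        decision_logic_summary: str    # plain-language account of how outputs are produced
        training_data_sources: list[str] = field(default_factory=list)
        bias_mitigation_measures: list[str] = field(default_factory=list)
        last_reviewed: date = field(default_factory=date.today)

        def export(self) -> str:
            """Serialise the record so it can be shared with an authority on request."""
            payload = asdict(self)
            payload["last_reviewed"] = self.last_reviewed.isoformat()
            return json.dumps(payload, indent=2)

    record = AISystemRecord(
        system_name="credit-scoring-v2",
        intended_purpose="Assess consumer creditworthiness for loan approvals",
        risk_category="high-risk (Annex III)",
        decision_logic_summary="Gradient-boosted model over income and repayment history",
        training_data_sources=["internal loan book, 2015-2023"],
        bias_mitigation_measures=["annual demographic parity audit"],
    )
    print(record.export())

Keeping such records current and machine-readable is one plausible way to answer a documentation request quickly, whatever internal format a business ultimately adopts.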
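Point 5’s reference to bias testing can be made similarly concrete. The sketch below computes the demographic parity gap, one common fairness metric: the largest difference in favourable-outcome rates between protected groups. The choice of metric and the 0.10 tolerance are illustrative assumptions only; neither is mandated by the AI Act or any regulator.

    from collections import defaultdict

    def demographic_parity_gap(decisions: list[tuple[str, bool]]) -> float:
        """Largest gap in favourable-outcome rates between any two groups.

        Each element pairs a protected-group label with the model's
        binary outcome (True = favourable, e.g. loan approved).
        """
        totals: dict[str, int] = defaultdict(int)
        favourable: dict[str, int] = defaultdict(int)
        for group, outcome in decisions:
            totals[group] += 1
            if outcome:
                favourable[group] += 1
        rates = [favourable[g] / totals[g] for g in totals]
        return max(rates) - min(rates)

    # Illustrative audit data: (group, outcome) pairs.
    audit_sample = [("A", True), ("A", True), ("A", False),
                    ("B", True), ("B", False), ("B", False)]

    gap = demographic_parity_gap(audit_sample)
    TOLERANCE = 0.10  # arbitrary internal threshold, for illustration only
    print(f"demographic parity gap: {gap:.2f}")
    if gap > TOLERANCE:
        print("flag for review: outcome rates differ materially between groups")

A single metric like this is a starting point rather than a complete audit; in practice, businesses would combine several fairness measures and document the remediation steps taken when a check fails.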

Strategic Opportunities and Implications for International Business

The AI Act’s extraterritorial reach is by now well known: its obligations can apply to providers and deployers established outside the EU where their AI systems are placed on the EU market or their outputs are used within the Union. Ireland’s proactive stance in AI governance not only provides a clear framework for compliance but may also set a precedent for international regulatory standards. Companies operating across EU jurisdictions should monitor Ireland’s approach, as it may inform similar structures in other Member States. Additionally, for non-EU businesses with operations in Ireland, these new requirements highlight the growing need for alignment with EU-level standards on AI, particularly for companies seeking to establish a foothold in the European market.

The framework also offers a strategic advantage for businesses that prioritise ethical AI development. By aligning with Ireland’s standards, companies can demonstrate a commitment to responsible AI use, strengthen stakeholder trust, and potentially enhance their market position as leaders in ethical AI.

Conclusion

By designating these nine authorities to safeguard fundamental rights, Ireland is not only meeting its obligations under the AI Act but also setting a benchmark for ethical AI oversight. With a dual focus on innovation and accountability, Ireland is signalling its commitment to a balanced regulatory framework that protects citizens’ rights while encouraging responsible AI development. For companies operating within this framework, understanding and adhering to these regulatory requirements will be essential to ensure compliance and foster a sustainable approach to AI deployment.

The designation of these authorities reinforces the ethical and legal foundations of AI oversight and positions Ireland as a regulatory leader, helping to ensure that AI’s societal benefits do not come at the expense of fundamental human rights.

For further guidance and support on AI compliance, please contact Barry Scannell, Leo Moore, Susan Walsh, Rachel Hayes, or any member of the William Fry Technology Department.

Contributed by Cian Byrne.