
Building a Responsible AI Framework using the AI Act

Given the rapid advancement of artificial intelligence (AI) technologies, adopting a Responsible AI framework is crucial for companies to ensure that AI is developed and deployed ethically and sustainably.

The European Union’s Artificial Intelligence Act (AI Act) could play a pivotal role in assisting organisations to establish such a framework. By implementing the principles set out in the AI Act, companies can not only mitigate potential risks inherent in AI adoption but also enhance their reputation as responsible technology leaders. This article highlights the significance of a Responsible AI framework for businesses and outlines how the AI Act can facilitate the integration of these crucial practices within organisations.

What is Responsible AI?

Responsible AI refers to the development and deployment of AI systems in a manner that is ethical, transparent and accountable, and that respects human rights, values and safety. It aims to ensure that AI technologies are designed and used to benefit society as a whole, while minimising potential harm, risk, and unintended consequences.

A Responsible AI framework fosters trust with customers, partners, and stakeholders by instilling confidence that AI systems are designed and deployed ethically and safely.

The AI Act

The AI Act, proposed by the European Commission in April 2021, is a comprehensive draft legal framework that aims to ensure AI’s ethical use and compliance with fundamental rights. At the time of writing, the European Parliament is about to vote on the legislation. The AI Act will then enter the EU’s legislative trilogue process before becoming law, possibly by the end of 2023. The legislation applies to a wide range of AI systems, including those developed or deployed in the EU, as well as those imported into the region. Key aspects of the draft AI Act include:

  • The legislation categorises AI systems based on their potential impact on society, with a focus on high-risk AI systems that could have significant consequences if not properly managed. Article 6 and Annexes II and III list specific AI applications and sectors, such as biometric identification, critical infrastructure management, and employment-related decision-making, which are considered high-risk.
  • High-risk AI systems are subject to specific legal requirements, including transparency, accountability, and data governance. Chapter 2 of Title III of the draft Act sets out these requirements, which cover data quality, documentation, traceability, and human oversight.

Leveraging the AI Act for Responsible AI

The AI Act aims to provide the basis of a robust framework that organisations can follow to develop and deploy AI systems responsibly. Following the AI Act’s guidelines helps organisations ensure compliance with EU regulations, reducing the risk of non-compliance penalties and reputational damage. By adhering to the principles laid out in the legislation, companies can ensure that their AI systems are designed and operated ethically and safely. Key principles of Responsible AI that are complemented by the AI Act include:

  • Transparency: This means making the inner workings of AI systems, their decision-making processes, and their data usage clear and understandable to stakeholders and users. Article 13 of the AI Act mandates that AI systems provide users with clear, meaningful, and timely information about their capabilities and limitations. Article 52 requires that people be informed when they are interacting with an AI system and that AI-generated outputs be labelled accordingly. This ensures transparent operation and fosters trust with users and stakeholders.
  • Fairness: Fairness in AI means ensuring AI systems treat all users equitably, avoiding biases that may lead to unfair or discriminatory behaviours or outcomes. Recital 15 of the AI Act emphasises the importance of developing AI systems that respect fundamental rights, particularly non-discrimination. AI systems must be designed to minimise bias and avoid discriminatory outcomes, promoting equitable treatment for all users and stakeholders.
  • Accountability: Ensuring that AI developers, operators and users are held accountable for the consequences of their AI systems’ actions. Article 9 underlines the responsibility of AI providers and users to implement risk management systems and ensure compliance with the AI Act. This establishes a clear chain of responsibility for AI system development and deployment, so that any negative impacts can be addressed and mitigated. Regularly engaging in third-party audits to review AI systems and practices can help ensure compliance with the AI Act and provide an additional layer of accountability.
  • Data governance: A mature data governance framework blends the existing requirements of the GDPR with AI-specific data requirements, particularly around datasets, machine learning and AI training. Article 10 of the AI Act requires high-quality training, validation, and testing data sets for AI systems, in addition to data protection and minimisation principles. Robust data governance practices help ensure the security, privacy, and quality of data used in AI systems.
  • Human oversight: Designing AI systems to align with and respect human values, ensuring that they augment human capabilities and serve societal needs, is a vital aspect of Responsible AI. Article 14 specifies that high-risk AI systems must have human oversight mechanisms in place, allowing humans to intervene, halt, or override AI-driven decisions when necessary. This helps prevent misuse and unintended consequences (see the illustrative sketch after this list).
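
To make the transparency and human oversight principles more concrete, the minimal Python sketch below shows one way a team might begin to operationalise an Article 52-style disclosure notice and an Article 14-style human review hook. It is an illustrative assumption only: the names `AIDecision` and `require_human_review` and the confidence threshold are invented for this example and are not terminology drawn from the AI Act or from any particular library.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Illustrative sketch only: the names and thresholds below are assumptions,
# not terms or requirements taken from the AI Act itself.

AI_DISCLOSURE = "You are interacting with an AI system."  # Article 52-style notice


@dataclass
class AIDecision:
    """A single AI-driven recommendation, stored with the context a reviewer needs."""
    subject_id: str
    recommendation: str
    confidence: float  # the model's own confidence estimate, 0.0 to 1.0
    rationale: str     # plain-language explanation shown to the human reviewer


def require_human_review(decision: AIDecision,
                         reviewer: Callable[[AIDecision], Optional[str]],
                         confidence_threshold: float = 0.9) -> str:
    """Article 14-style oversight hook: low-confidence decisions are routed to a
    human reviewer, who may accept, override, or halt them before they take effect."""
    if decision.confidence >= confidence_threshold:
        return decision.recommendation
    human_outcome = reviewer(decision)  # None means the reviewer halts the decision
    if human_outcome is None:
        raise RuntimeError(f"Decision for {decision.subject_id} halted by human reviewer")
    return human_outcome


if __name__ == "__main__":
    print(AI_DISCLOSURE)  # disclose AI involvement up front

    decision = AIDecision(
        subject_id="applicant-042",
        recommendation="reject",
        confidence=0.62,
        rationale="Income below modelled affordability threshold.",
    )

    # A trivial stand-in reviewer that overrides the low-confidence rejection.
    def reviewer(d: AIDecision) -> Optional[str]:
        print(f"Review needed for {d.subject_id}: {d.rationale}")
        return "refer to manual underwriting"

    print(require_human_review(decision, reviewer))
```

In a production setting the reviewer callback would typically be replaced by a case-management queue, and each decision and override would be logged to support the documentation and traceability obligations discussed above.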

Conclusion

The AI Act, when implemented, will offer a comprehensive regulatory framework that will guide organisations toward Responsible AI practices. By adhering to the key principles and implementing practical steps, companies can establish a robust Responsible AI framework that ensures ethical AI system development and deployment. This, in turn, fosters trust, regulatory compliance, and risk mitigation, positioning your organisation for success in an increasingly AI-driven world.

For more information and assistance on implementing AI and AI governance programmes, please feel free to contact us.