Cavalry Call: Can the New Standard Published on AI Management Systems Lead the Charge for AI Act Readiness?

On 18 December 2023, the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) published ISO/IEC 42001:2023, their new standard for AI management systems (the Standard).

The Standard is a voluntary set of guidelines which:

  • provides a framework for establishing, implementing, maintaining, and improving an artificial intelligence management system within organisational contexts;
  • is designed to ensure responsible development, deployment, and use of AI systems, by considering ethical implications, data quality, and risk management; and
  • offers high-level principles and objectives for stakeholders to establish policies and procedures for AI systems.

The Standard appears closely aligned with many of the regulatory requirements under the EU’s Artificial Intelligence Act (AI Act), and adherence to it could assist organisations with their AI Act compliance.

Integration and Risk Management

The Standard emphasises integrating AI management into an organisation’s overall management system, aligning it with organisational processes and objectives; it is designed to be adaptable and scalable for any organisation involved with AI. It underscores the importance of risk management specific to AI use cases, advocating a risk-based approach within the organisation’s operations, and its annexes provide detailed controls and implementation guidance.

Performance, Effectiveness, and Conformity

The Standard highlights performance measurement based on both quantitative and qualitative findings, with continual monitoring required to enhance performance. It details the need for AI systems to be effective, meaning they should achieve their intended results as planned, and it covers the necessity of conformity to requirements, with systematic audits to evaluate whether AI systems meet set criteria.

Impact Assessment

The Standard emphasises formal assessment of AI’s impact on individuals and society, calling on organisations to evaluate and mitigate those impacts thoroughly. It also outlines the importance of data quality, requiring that the data used in AI systems meet organisational needs for specific contexts.

Documentation and Governance

Organisations are required to document all necessary controls for AI systems and to justify the inclusion or exclusion of particular controls. The Standard also addresses the role of a governing body responsible for the organisation’s performance and conformance with respect to AI systems. Information security is a further critical aspect, referring to the preservation of the confidentiality, integrity, and availability of information.

Adaptation and Accountability

Organisations must adapt management systems to incorporate AI-specific considerations such as ethical use, transparency, and accountability. A risk-based approach is critical for the identification, assessment, and mitigation of AI-associated risks. Continuous performance evaluation and improvement are necessary to ensure AI systems are beneficial and not harmful. The Standard mandates clear documentation and justification for AI-related processes and decisions to support traceability and accountability.

Transformational Impact of AI

Given the transformative nature of AI in various sectors, this Standard serves as a comprehensive guideline for organisations to harness AI’s potential responsibly while addressing its challenges. It is crucial for organisations to conform to such standards to maintain trust, comply with regulations, and ensure sustainable and ethical use of AI technologies.

Alignment with the AI Act

Incorporating the Standard’s requirements could significantly aid organisations in preparing for the EU’s forthcoming AI Act, which will establish a legal framework for the development, deployment, and use of AI. The AI Act requires that AI systems be safe, ethical, and respectful of fundamental rights and data protection.

The Standard’s emphasis on responsible AI implementation, risk management, and continuous improvement is closely aligned with the AI Act’s objectives. Organisations following this Standard may be able to prepare for compliance with the AI Act by:

  • demonstrating accountability and governance over AI systems;
  • conducting rigorous impact assessments and audits;
  • ensuring data quality and security;
  • maintaining transparency and traceability in AI processes and decisions; and
  • establishing robust mechanisms for managing AI-related risks.

Proactive Compliance with AI Act

Adherence to the Standard could be a proactive step for organisations towards meeting the AI Act’s regulatory requirements, fostering trust and supporting a responsible AI ecosystem within the EU. It offers a structured approach to mitigating the potential legal and ethical issues associated with AI, and may provide a pathway towards AI Act compliance.

The AI Act categorises AI systems by risk, including prohibited and high-risk categories, each with distinct compliance obligations. The Standard’s emphasis on responsible AI implementation, risk management, data quality, and transparency closely aligns with these obligations, potentially providing a structured pathway for organisations to meet the AI Act’s requirements.

Compliance with Prohibited AI Systems

The AI Act lists specific AI practices that will be prohibited, such as biometric categorisation systems based on sensitive characteristics and untargeted scraping of facial images to build facial recognition databases. The Standard, with its focus on ethical AI management and data governance, could help guide organisations in identifying and discontinuing such prohibited AI applications.

Alignment with High-Risk AI Systems Requirements

For high-risk AI systems, the AI Act mandates rigorous risk management, registration, data governance, transparency, human oversight, accuracy, robustness, cybersecurity, and record-keeping. The Standard aligns with these requirements by providing a framework for risk assessment, data quality management, and transparency in AI operations, which could be adapted to meet the specific obligations for high-risk AI systems under the AI Act.

Supporting High-Risk Systems User Obligations

Organisations using high-risk AI systems must adhere to specific obligations under the AI Act, such as ensuring human oversight and implementing cybersecurity measures. The Standard’s guidance on management systems, data governance, and human oversight could support users in fulfilling these obligations.

Foundation Models and General Purpose AI (GPAI)

The AI Act will require foundation models and GPAI to adhere to rigorous standards. The Standard can provide a foundational approach to managing these systems, particularly in terms of transparency, risk assessment, and data governance.

Conclusion

The Standard provides organisations with a comprehensive approach to managing AI systems that in many ways appears compatible with the AI Act’s requirements. By adopting the Standard, organisations may be able to proactively prepare for the AI Act’s implementation, supporting compliance with its provisions and contributing to the development and use of AI that respects fundamental rights and ethical standards.