This article presents checklists for the EU AI Act. The section can be accessed via the main navigation and contains checklists of varying depth on key aspects of the EU AI Act, each accompanied by a target-group recommendation.
The role of checklists
Without checklists, regulatory requirements are difficult to handle in day-to-day AI practice. For this reason, the “Checklists” section has been introduced on CAIR4, which is dedicated exclusively to this topic.
With regard to checklists, a distinction must be made between two variants:
- the new CAIR4 checklists for the EU AI Act
- the adaptation of established checklists in the context of the EU AI Act
1. New CAIR4 checklists
The EU AI Act has raised many new questions for which no checklists existed before. These include checklists on the role of open-source AI systems, on the distinction between provider and deployer (operator), and on the respective high-risk classification within the meaning of the EU AI Act.
The new CAIR4 checklists will be expanded gradually.
1.1 Target group
Implementing the EU AI Act is demanding not least because several departments have to be involved in one and the same topic, including:
- Specialist areas
- Legal, Ethics, and Compliance
- Product or project management
- Software Development
- Privacy
- IT Security
- Marketing and communication
- Management (where necessary)
However, not every AI topic requires the participation of all departments. The relevant content therefore depends on the target group, as the following exemplary overview illustrates:
The different target groups are indicated on the overview pages of the respective checklist. The target group information is to be understood as a recommendation.
1.2 Ongoing updating
Since the EU AI Act is still new and there is hardly any practical experience, the first versions of the new CAIR4 checklists are primarily based on the analysis of the EU AI Act and the evaluation of corresponding technical contributions. The checklists are versioned and adapted to new findings on the respective topic.
1.3 Link to the category “Checklists for the EU AI Act”
The section “Checklists for the EU AI Act” can be accessed via this link:
2. Adaptation of established checklists
The EU AI Act sets out specific requirements for the development, deployment and monitoring of AI systems, in particular with regard to transparency, security, ethics and human rights. Some established AI checklists, which address already familiar topics, will need to be adapted to meet the specific requirements of the EU AI Act.
Here’s a recommendation on which AI checklists would need to be adjusted and why:
2.1 Ethical AI and Responsible AI Checklists
- Why is adaptation necessary?
- The EU AI Act contains clear provisions on ethics that are specifically aimed at upholding European values such as fundamental rights, democracy and data protection. These ethical requirements are more comprehensive and specific than many global or national guidelines. Therefore, existing ethical checklists may need to be updated to ensure that they are in line with the explicit requirements of the EU AI Act, especially in relation to human-centered AI and the protection of fundamental rights.
- Adaptations:
- Integration of the specific requirements of the EU AI Act regarding non-discrimination and transparency.
- Consideration of the requirements for human-centered approaches as set out in Article 1 and Article 3 of the EU AI Act.
An exemplary checklist for ethical AI from USAID (the United States Agency for International Development) can be found here (PDF).
2.2 Bias and Fairness in AI (e.g. AI Fairness Checklist)
- Why is adaptation necessary?
- The EU AI Act emphasizes the need to avoid systemic discrimination and requires AI systems, especially high-risk AI, to be developed and monitored in a fairness-oriented manner. Existing checklists on fairness and bias must take these requirements more into account.
- Adaptations:
- Add specific methodologies and criteria to identify and mitigate discrimination risks in accordance with the requirements of the EU AI Act.
- Implementation of audits and reporting requirements specifically for high-risk AI systems.
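The audit step above can be sketched as a simple demographic-parity check. This is a minimal illustration, not a method prescribed by the EU AI Act: the group labels, decision data, and the 0.8 threshold (the “four-fifths rule” familiar from fairness practice) are all assumptions for the example.

```python
# Illustrative sketch of one item a fairness audit checklist might require:
# comparing positive-decision rates across demographic groups.
# Threshold and data are invented for illustration.

def selection_rates(outcomes: dict[str, list[int]]) -> dict[str, float]:
    """Positive-outcome rate per group (1 = positive decision, 0 = negative)."""
    return {group: sum(ys) / len(ys) for group, ys in outcomes.items()}

def demographic_parity_ratio(outcomes: dict[str, list[int]]) -> float:
    """Ratio of the lowest to the highest group selection rate."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

def passes_parity_check(outcomes: dict[str, list[int]], threshold: float = 0.8) -> bool:
    """Flag the system if the parity ratio falls below the chosen threshold."""
    return demographic_parity_ratio(outcomes) >= threshold

outcomes = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # 5/8 positive decisions
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 positive decisions
}
```

With these invented figures the parity ratio is 0.6, so the check would flag the system for closer review; a real audit would of course use the metrics and thresholds defined in the organization’s own risk-management process.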
A version of the AI Fairness Checklist can be found on the Microsoft website (PDF as of 2020).
2.3 AI Model Development (e.g. Google Responsible AI Practices)
- Why is adaptation necessary?
- The EU AI Act requires detailed technical documentation, risk management and conformity assessments for AI models, especially when used in the context of high-risk AI. Many existing development practices are focused on general principles and do not sufficiently address specific regulatory requirements.
- Adaptations:
- Integration of requirements of the EU AI Act, such as technical documentation and risk management systems for AI models within high-risk AI and GPAI models.
- Ensuring compliance with EU requirements for continuous monitoring and adaptation of AI models throughout their lifecycle.
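The lifecycle-monitoring point above can be illustrated with a minimal documentation record that flags a model for reassessment when its observed performance drifts from the documented baseline. The record fields, metric, and 0.05 tolerance are assumptions for the sketch, not values taken from the EU AI Act.

```python
# Illustrative sketch: a minimal technical-documentation record with a
# drift check, as a development checklist adapted to the EU AI Act's
# continuous-monitoring requirement might demand. All values are invented.

from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    """One model version with its documented baseline and observed metrics."""
    model_id: str
    version: str
    baseline_accuracy: float
    observations: list[float] = field(default_factory=list)

    def log_observation(self, accuracy: float) -> None:
        """Append a monitoring measurement taken in production."""
        self.observations.append(accuracy)

    def needs_reassessment(self, tolerance: float = 0.05) -> bool:
        """Flag the model if the latest accuracy drifts below the baseline."""
        if not self.observations:
            return False
        return self.baseline_accuracy - self.observations[-1] > tolerance

record = ModelRecord("credit-scoring", "1.2.0", baseline_accuracy=0.91)
record.log_observation(0.90)  # within tolerance
record.log_observation(0.83)  # drift of 0.08 exceeds the 0.05 tolerance
```

In practice the record would also carry the documentation items the Act actually lists (intended purpose, training data provenance, risk measures); the sketch only shows the monitoring mechanism.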
The Google Responsible AI Framework is not an AI checklist in the narrower sense, but a more comprehensive toolkit and can be accessed here.
2.4 Data Privacy and GDPR Compliance (e.g. ICO GDPR Checklists)
- Why is adaptation necessary?
- While the EU AI Act is compatible with the GDPR in many areas, there are specific requirements, especially regarding the use of data for training AI systems, that go beyond the GDPR’s data protection requirements. This requires an adjustment of the data protection checklists to also cover the specific requirements of the EU AI Act.
- Adaptations:
- Incorporating the specific requirements for data processing and data protection under the EU AI Act, including transparency obligations and requirements for the traceability of decisions.
- Expand data protection assessments to integrate the requirements of the EU AI Act for high-risk AI systems.
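The traceability requirement above can be sketched as a structured decision log: each AI-assisted decision is serialized as one auditable entry. The field names and the example values are assumptions for illustration, not a format defined by the EU AI Act or the GDPR.

```python
# Illustrative sketch: recording AI-assisted decisions so they remain
# traceable, as a data-protection checklist extended for the EU AI Act
# might require. Field names and values are invented for illustration.

import json
from datetime import datetime, timezone

def decision_record(system_id: str, input_summary: str,
                    decision: str, legal_basis: str) -> str:
    """Serialize one traceable decision entry as a JSON line."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "input_summary": input_summary,
        "decision": decision,
        "legal_basis": legal_basis,
    }
    return json.dumps(entry, sort_keys=True)

line = decision_record(
    system_id="loan-screening",
    input_summary="summary of applicant features (no raw personal data)",
    decision="declined",
    legal_basis="illustrative reference only",
)
```

Keeping the entries append-only and free of raw personal data is a design choice that keeps the audit trail itself compatible with data-minimization principles.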
The ICO checklist including explanations can be found here (website).
2.5 Explainable AI (XAI) Checklists (e.g. DARPA XAI Program)
- Why is adaptation necessary?
- The EU AI Act emphasizes the need for transparency and thus also explainability for AI systems, especially those used in sensitive areas. Many existing XAI frameworks focus on the technical explanation of models, but do not fully address the regulatory requirements of the EU AI Act.
- Adaptations:
- Adaptation of XAI checklists to ensure that explanations are not only technically sound but also understandable to end users and regulators.
- Integration of the specific reporting requirements and transparency requirements of the EU AI Act.
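The end-user-facing explanation point above can be sketched as a small translation layer: raw feature contributions (as a SHAP-style tool might produce them) are turned into a short plain-language summary. The feature names and contribution values are invented for the example.

```python
# Illustrative sketch: converting raw feature contributions into a
# plain-language explanation for end users, as an XAI checklist adapted
# to the EU AI Act's transparency requirements might demand.
# Feature names and values are invented for illustration.

def explain_decision(contributions: dict[str, float], top_n: int = 2) -> str:
    """Summarize the most influential features in non-technical language."""
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    parts = []
    for name, value in ranked[:top_n]:
        direction = "supported" if value > 0 else "weighed against"
        parts.append(f"'{name}' {direction} the decision")
    return "Main factors: " + "; ".join(parts) + "."

contributions = {"income": 0.42, "existing_debt": -0.31, "age": 0.05}
summary = explain_decision(contributions)
```

Here `summary` reads: "Main factors: 'income' supported the decision; 'existing_debt' weighed against the decision." The same underlying contributions could feed a separate, fully technical report for regulators.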
2.6 AI Safety and Security Checklists (e.g. NIST AI Risk Management Framework)
- Why is adaptation necessary?
- The EU AI Act contains specific requirements to ensure the security and robustness of AI systems that go beyond general safety principles. These must be reflected in existing safety checklists.
- In this regard, the adoption of the Cyber Resilience Act (CRA), which is directly linked to the EU AI Act with regard to high-risk AI, is also imminent.
- Adaptations:
- Add the specific safety requirements and measures prescribed by the EU AI Act for high-risk AI systems.
- Implementing regular reviews and audits to ensure compliance with security requirements throughout the lifecycle of an AI system.
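The regular-review point above can be sketched as a simple due-date check for periodic security audits. The 180-day interval is an assumption chosen for the example; the EU AI Act does not prescribe a fixed review period.

```python
# Illustrative sketch: checking whether the periodic security review
# required by a safety checklist is due. The 180-day interval is an
# assumption for illustration, not a period prescribed by the EU AI Act.

from datetime import date, timedelta

def next_review_due(last_review: date, interval_days: int = 180) -> date:
    """Date by which the next lifecycle security review should happen."""
    return last_review + timedelta(days=interval_days)

def review_overdue(last_review: date, today: date,
                   interval_days: int = 180) -> bool:
    """True if the review window has been missed."""
    return today > next_review_due(last_review, interval_days)
```

A real compliance calendar would track many review types (penetration tests, robustness re-tests, documentation updates) per system; the sketch shows only the scheduling core.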
The NIST AI RMF, too, is more a framework than a checklist, but adjustments would be useful. It can be accessed here (PDF as of 2023).
3. Conclusion and outlook
The new CAIR4 checklists will primarily be dedicated to topics raised by the enactment of the EU AI Act that have hardly been covered by checklists so far. The offering will be expanded successively.
Regardless of this, it is important to adapt the many established AI checklists to the EU AI Act. Although they already touch on aspects covered by the Act, they must be aligned with its specific and often stricter requirements. These adjustments mainly concern ethics, fairness, data protection, security and the explainability of AI, so that the checklists comply with the legal and regulatory framework set out in the EU AI Act. For now, this challenge can only be pointed out here.