A practical, five-phase approach to EU AI Act compliance. From discovering your AI systems to ongoing monitoring — every step your business needs.
The EU AI Act takes effect in phases, with the most significant obligations, those covering high-risk AI systems, applying from August 2026. This checklist breaks compliance into five manageable phases that any organization can follow, regardless of size or technical expertise.
Understand what AI you have and your role under the regulation
Designate a person or team responsible for AI Act compliance. This does not have to be a new hire — it can be your DPO, CTO, or compliance officer with expanded responsibilities.
Catalogue every AI tool, model, and automated decision-making system in your organization. Include third-party SaaS tools, embedded AI features, and internally developed models.
Determine whether you are a provider (developer/creator), deployer (user/operator), importer, or distributor for each AI system. Your obligations differ significantly based on your role.
For each AI system, document what data it processes, where data flows, and which individuals are affected by its outputs or decisions.
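The inventory steps above can be sketched as a simple structured record. This is a hypothetical schema for illustration only; the field names are not prescribed by the AI Act, and the example entry is fictional.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an AI inventory (illustrative fields)."""
    name: str                  # what the system is called internally
    vendor: str                # third-party supplier, or "internal"
    role: str                  # "provider", "deployer", "importer", or "distributor"
    purpose: str               # what the system is used for
    data_categories: list[str] = field(default_factory=list)   # data it processes
    affected_persons: list[str] = field(default_factory=list)  # who its outputs affect
    owner: str = ""            # person or team accountable for this system

# Fictional example entry
inventory = [
    AISystemRecord(
        name="CV screening tool",
        vendor="ExampleVendor",
        role="deployer",
        purpose="Shortlisting job applicants",
        data_categories=["CVs", "assessment scores"],
        affected_persons=["job applicants"],
        owner="HR compliance lead",
    ),
]
```

Keeping the inventory in a structured form like this makes it straightforward to filter by role or owner when obligations differ per system.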
Classify your AI systems and assess their risk levels
Review all AI systems against Article 5 prohibited practices. Social scoring, manipulative AI, and untargeted biometric scraping have been banned since February 2025.
Determine whether each system is high-risk (Annex III), limited-risk, or minimal-risk. High-risk classification triggers the most extensive obligations.
For systems that appear high-risk, check whether the Article 6(3) exception applies. A system that does not pose a significant risk of harm, for example because it does not materially influence the outcome of decision-making, may be exempt.
For high-risk systems, perform a formal risk assessment per Article 9. Identify hazards, evaluate likelihood and severity, and document mitigation measures.
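A common way to structure the likelihood-and-severity step is a simple scoring matrix. The scales and thresholds below are illustrative example values, not requirements from Article 9; your methodology should be defined and documented in your own risk management process.

```python
# Likelihood and severity each scored 1 (low) to 5 (high).
# Thresholds are example values for illustration only.
def risk_level(likelihood: int, severity: int) -> str:
    """Map a likelihood x severity score to an example action category."""
    score = likelihood * severity
    if score >= 15:
        return "unacceptable - redesign or add mitigations"
    if score >= 8:
        return "high - mitigate and document residual risk"
    return "acceptable - document and monitor"

print(risk_level(4, 5))  # -> unacceptable - redesign or add mitigations
print(risk_level(2, 3))  # -> acceptable - document and monitor
```

Whatever scoring scheme you choose, record the rationale for each score so the assessment can be defended during an audit.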
Build your compliance documentation package
For high-risk systems, prepare technical documentation covering system design, development methodology, testing procedures, and performance metrics (Article 11).
Prepare user-facing transparency notices for all AI systems. For chatbots and limited-risk systems, ensure users know they are interacting with AI (Article 50).
Define how humans will oversee high-risk AI system operation, including who can intervene, escalation paths, and override procedures (Article 14).
Implement automatic logging for high-risk AI systems. Logs must be retained for at least six months (longer where other EU or national law requires) and be available for regulatory inspection (Article 12).
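In practice, automatic logging often means an append-only, timestamped event log. This is a minimal sketch under assumed requirements; the record fields and file path are illustrative, since Article 12 requires traceability of the system's operation rather than any specific schema.

```python
import json
from datetime import datetime, timezone

def log_event(path: str, system_id: str, event: str, **details) -> dict:
    """Append one timestamped, structured event to a JSON-lines audit log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "event": event,
        "details": details,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Fictional example: log one automated decision for later inspection
rec = log_event("/tmp/ai_audit.log", "cv-screener-v2", "decision",
                input_ref="application-1042", output="shortlisted")
```

JSON lines with UTC timestamps keep the log machine-searchable, which matters when an authority asks you to reconstruct a specific decision months later.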
Ensure your team has the required AI literacy
All staff who interact with AI systems must have sufficient AI literacy (Article 4). Training should be proportionate to their role and the risk level of systems they use.
Provide system-specific training for deployers and operators of high-risk AI. Include instructions for use, known limitations, and human oversight procedures.
Record who received training, when, on what topics, and assessment results. This documentation serves as evidence of Article 4 compliance.
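A training register can be as simple as a CSV with one row per completed session. The columns below are illustrative evidence fields for Article 4, not a prescribed format, and the entry is fictional.

```python
import csv
from io import StringIO

# Fictional training records; columns are example evidence fields
rows = [
    {"employee": "A. Example", "date": "2026-01-15",
     "topic": "AI literacy basics", "assessment": "passed"},
    {"employee": "B. Example", "date": "2026-02-02",
     "topic": "High-risk system operation", "assessment": "passed"},
]

buf = StringIO()
writer = csv.DictWriter(buf, fieldnames=["employee", "date", "topic", "assessment"])
writer.writeheader()
writer.writerows(rows)
register_csv = buf.getvalue()
print(register_csv)
```

The point is durability and auditability: a dated register you can export beats ad-hoc notes in a chat channel.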
Maintain compliance as systems and regulations evolve
For high-risk systems, establish ongoing monitoring of system performance, accuracy, and bias. Define metrics, thresholds, and review cadences (Article 72).
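Defining metrics and thresholds can be automated with a small check that raises alerts when performance drifts. The metric names and threshold values here are example assumptions, not Article 72 requirements; choose thresholds appropriate to your system and document them.

```python
# Example thresholds (illustrative values, not regulatory requirements)
THRESHOLDS = {"accuracy": 0.90, "false_positive_rate": 0.05}

def check_metrics(metrics: dict) -> list[str]:
    """Return alerts for any monitored metric outside its threshold."""
    alerts = []
    if metrics.get("accuracy", 1.0) < THRESHOLDS["accuracy"]:
        alerts.append("accuracy below threshold - trigger review")
    if metrics.get("false_positive_rate", 0.0) > THRESHOLDS["false_positive_rate"]:
        alerts.append("false positive rate above threshold - trigger review")
    return alerts

print(check_metrics({"accuracy": 0.87, "false_positive_rate": 0.03}))
# -> ['accuracy below threshold - trigger review']
```

Run a check like this on each review cadence and log the results, so you can show a regulator that monitoring actually happened.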
Create procedures for reporting serious incidents to market surveillance authorities. Know your reporting timelines and what constitutes a reportable incident (Article 73).
Conduct quarterly or semi-annual reviews of your AI inventory, risk assessments, and documentation. The regulatory landscape will continue evolving through 2027.
Deepen your understanding with these guides and tools
AktAI automates every phase of this checklist — from AI inventory to ongoing monitoring. Start free and get compliant before August 2026.
No credit card required. Free tier available.