EU AI Act Conformity Assessment: A Practical Guide for High-Risk AI Providers
What the EU AI Act conformity assessment actually requires from high-risk AI providers before August 2026 — the documentation, the process, and the pitfalls most businesses miss.
The Obligation Nobody Talks About Enough
Ask most businesses about EU AI Act compliance and they mention risk classification, technical documentation, and transparency requirements. Far fewer mention the step that legally gates everything else: the conformity assessment.
Under Article 43, providers of high-risk AI systems cannot legally place their systems on the EU market — or put them into service — without completing a conformity assessment first. It is not something you do after launch. It is not a checkbox you add to your release checklist. It is a prerequisite, and as of August 2, 2026, it is enforceable.
This guide explains what a conformity assessment actually involves, what it produces, when you need a third party and when you can self-assess, and where most organizations are getting it wrong.
First: Are You a Provider?
Before worrying about conformity assessment procedures, confirm that you are actually a provider in the legal sense.
The EU AI Act defines providers as entities that develop an AI system and place it on the market or put it into service under their own name or trademark. If you build an AI-powered product and sell it, license it, or deploy it commercially, you are almost certainly a provider. The same applies if you substantially modify an AI system from another source — you may become the provider of the modified system and inherit the corresponding obligations.
Deployers — organizations that use AI systems built by others — have a different, narrower set of obligations. They must perform fundamental rights impact assessments for certain use cases, maintain human oversight, register in some circumstances, and cooperate with providers. But the heavy conformity machinery falls primarily on the provider.
If you are a business deploying a third-party AI product (say, an HR vendor's recruitment screening tool), make sure your vendor has completed the conformity assessment. Request a copy of their EU declaration of conformity. If they cannot provide one, treat that as a significant red flag — they are placing a non-compliant system in front of you.
What the Conformity Assessment Is Actually For
The conformity assessment is the formal mechanism by which a provider demonstrates that their high-risk AI system meets all the requirements of Chapter III, Section 2 of the AI Act. This includes:
- Article 9: A risk management system covering the full AI lifecycle
- Article 10: Data governance requirements for training, validation, and testing datasets
- Article 11: Technical documentation that would allow an authority to reconstruct and assess the system
- Article 12: Logging and record-keeping capabilities built into the system
- Article 13: Transparency and information provision to deployers
- Article 14: Human oversight mechanisms designed into the system
- Article 15: Accuracy, robustness, and cybersecurity requirements
Meeting these requirements is one thing. The conformity assessment is how you prove you meet them — with documentation, testing records, and a formal declaration.
Two Paths: Self-Assessment vs. Third-Party
The AI Act provides two routes to conformity assessment, and which one you must take depends on what type of high-risk AI system you have.
Internal Control (Annex VI) — Self-Assessment
For most high-risk AI systems, providers can conduct an internal conformity assessment following the procedure in Annex VI. This is self-assessment — no notified body involvement required. You build and document the evidence yourself, then sign a declaration of conformity.
Do not mistake "self-assessment" for "informal assessment." Annex VI requires you to:
- Document that your quality management system meets Article 17 requirements
- Apply relevant harmonized standards (or demonstrate equivalent compliance without them)
- Review and update technical documentation in line with Article 11
- Maintain internal audit procedures and records
- Sign and store the EU declaration of conformity
The self-assessment path is rigorous. Regulators can request access to everything you produced during the assessment process. If you signed a declaration of conformity without the substance to back it up, that is the kind of thing that triggers the heaviest enforcement attention.
Third-Party Assessment (Annex VII) — Notified Body Involvement
For a narrower category of high-risk AI systems, a third-party assessment by a notified body is required. This applies chiefly to biometric systems where harmonized standards have not been applied (or do not yet fully exist), and to AI systems that are safety components of products already subject to third-party conformity assessment under other EU harmonized legislation (like medical devices or machinery).
Notified bodies are accredited organizations designated by member states to assess conformity. They charge fees (often substantial), operate on their own timelines, and are in limited supply. If you fall into this category and have not already engaged a notified body, your August 2026 window is tight.
Check the NANDO database (the European Commission's official notified body database) to find accredited bodies for your product category. Do this now — lead times can be months.
The Conformity Assessment Process in Practice
Regardless of which route applies, the practical work looks similar. Here is a realistic breakdown of the steps involved.
Step 1: Determine High-Risk Classification (If Not Already Done)
Before you can assess conformity, you need to confirm your system is actually high-risk. Article 6 and Annex III define the categories. Common high-risk domains include:
- Biometric identification
- Critical infrastructure management
- Education and vocational training
- Employment and worker management
- Access to essential private and public services
- Law enforcement
- Migration and asylum
- Administration of justice
If your system falls into one of these categories and makes decisions, assists in decisions, or influences decisions affecting individuals, you are very likely in high-risk territory. The EU AI Act compliance checker can help you work through the classification logic.
Do not assume you are not high-risk just because you are a small company or because your system is not the "main" decision-maker. The Act explicitly covers systems that assist or influence decisions, not just fully autonomous ones.
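The two-part screening logic described above — Annex III domain plus influence on decisions about individuals — can be sketched as a simple helper. This is a deliberately simplified illustration, not a substitute for legal analysis: the domain labels are informal shorthand, not official Annex III wording, and real classification involves exemptions and edge cases this ignores.

```python
# Illustrative screening helper. Domain keys are informal labels,
# not the official Annex III category wording.
ANNEX_III_DOMAINS = frozenset({
    "biometric_identification",
    "critical_infrastructure",
    "education",
    "employment",
    "essential_services",
    "law_enforcement",
    "migration_asylum",
    "administration_of_justice",
})

def high_risk_screen(domain: str, affects_individual_decisions: bool) -> bool:
    """Return True if the system warrants a full high-risk classification review.

    Mirrors the two-part test: (1) the system operates in an Annex III
    domain, and (2) it makes, assists in, or influences decisions
    affecting individuals.
    """
    return domain in ANNEX_III_DOMAINS and affects_individual_decisions
```

A positive screen here means "escalate for proper legal classification," not "definitively high-risk" — and a negative screen does not excuse skipping the Article 6 analysis.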
Step 2: Establish (or Audit) Your Quality Management System
Article 17 requires providers to implement a quality management system (QMS) that covers the entire AI system lifecycle. This is not an optional governance nicety — it is a formal requirement that feeds directly into the conformity assessment.
Your QMS must cover at minimum:
- A strategy for regulatory compliance, including how you identify and apply harmonized standards
- Procedures for design, development, and design change review
- Procedures for technical documentation management
- Procedures for post-market monitoring (Article 72)
- Procedures for handling serious incidents and reporting to authorities
- Data management procedures (feeding into Article 10 compliance)
- How you ensure human oversight is implemented (feeding into Article 14)
- How you manage third-party AI component suppliers
If you already have an ISO 9001-certified QMS, that is a starting point, not a finish line. The AI Act QMS requirements go further into AI-specific obligations that most existing QMS frameworks do not cover.
Step 3: Build the Technical Documentation
Article 11 and Annex IV define what technical documentation must contain. This is the most documentation-intensive part of the conformity assessment, and the most commonly underestimated.
Required documentation includes:
- General description: What the system does, its intended purpose, deployment context, and the persons or groups it is intended to be used for or by
- System design and architecture: Functional description, key design choices, system components including software and hardware, the general logic of the algorithms, and input/output specifications
- Training methodology: Training data, preprocessing steps, labeling approach, data validation, test and validation procedures
- Performance metrics: Accuracy, robustness, and any output limitations — measured across relevant subpopulations and deployment conditions
- Risk management documentation: The full record of your Article 9 risk management process — identified risks, risk estimates, mitigations adopted, residual risks accepted
- Changes log: A complete record of changes made to the system during development and after deployment
- Standards applied: List of harmonized standards or common specifications applied, and how compliance was demonstrated
- Human oversight description: How the system provides for operator oversight and intervention, including what override mechanisms exist and how operators are trained
This documentation must be kept at the disposal of authorities for ten years after the system is placed on the market or put into service. Build systems for maintaining it from the start — retrofitting documentation years later is painful and produces worse outputs.
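One practical way to keep this documentation maintainable is a machine-readable index, versioned alongside each release, that points to the underlying artifacts. The sketch below shows one possible shape; the field names loosely follow the Annex IV headings above but are illustrative, not a regulatory template, and every path is a made-up example.

```python
from dataclasses import dataclass, field, asdict
from datetime import date

@dataclass
class TechDocRecord:
    """One entry in a version-controlled technical documentation index.

    Hypothetical structure for illustration only: each *_ref field points
    to a document stored in the same repository as the release.
    """
    system_version: str
    intended_purpose: str
    training_data_ref: str       # pointer to dataset card / data sheet
    performance_report_ref: str  # pointer to measured metrics report
    risk_file_ref: str           # pointer to the Article 9 risk record
    standards_applied: list
    recorded_on: date = field(default_factory=date.today)

# Example entry for a hypothetical release
record = TechDocRecord(
    system_version="2.3.0",
    intended_purpose="CV screening assistance for recruiters",
    training_data_ref="data/cards/cv-corpus-2025-09.md",
    performance_report_ref="reports/perf-2.3.0.json",
    risk_file_ref="risk/register-2.3.0.md",
    standards_applied=["(harmonized standard reference here)"],
)
```

The point of the structure is that a missing field fails loudly at release time, instead of surfacing as a documentation gap during an investigation years later.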
Step 4: Conduct and Document Testing
Your conformity assessment is not credible without empirical evidence from testing. The AI Act requires testing against intended purpose and foreseeable use — including foreseeable misuse — across the relevant population groups.
Critical testing requirements:
- Dataset validation: You must be able to demonstrate that training, validation, and test datasets are relevant, representative, free of errors to the extent possible, and have appropriate statistical properties for the intended purpose
- Bias and fairness testing: For systems that affect individuals, you must test for bias across protected characteristics and document results — including where bias remains after mitigation
- Robustness testing: Testing for behavior under edge cases, adversarial inputs, and operational stresses
- Human oversight validation: Demonstrating that human override mechanisms work as intended under realistic conditions
Keep raw test results, methodology documentation, and test dataset descriptions. These are exactly what a market surveillance authority will request if they investigate.
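As a minimal illustration of the bias-testing point above, the sketch below computes per-group selection rates and the ratio between the lowest and highest rate — a common screening heuristic (the "four-fifths rule") that flags ratios below 0.8 for investigation. The function names and the 0.8 threshold are illustrative conventions, not AI Act requirements; real fairness testing uses multiple metrics chosen for the deployment context.

```python
from collections import defaultdict

def selection_rates(outcomes):
    """Compute the positive-outcome rate per group.

    outcomes: iterable of (group, selected) pairs, where selected is a
    bool indicating the system recommended a positive outcome.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for group, selected in outcomes:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate.

    Values below ~0.8 are commonly treated as a signal to investigate
    further, not as automatic proof of unlawful bias.
    """
    return min(rates.values()) / max(rates.values())

# Toy example: group A selected 2 of 4 times, group B 1 of 4 times
rates = selection_rates([
    ("A", True), ("A", True), ("A", False), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
])
```

Whatever metrics you choose, the regulatory point is the same: record the methodology, the raw results, and the residual disparities you accepted, so the evidence exists when an authority asks for it.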
Step 5: Sign the EU Declaration of Conformity
Once the assessment is complete, the provider must draw up an EU declaration of conformity in accordance with Article 47. This is a legally binding document declaring that the system meets all applicable requirements of the AI Act.
The declaration must be kept for ten years and must be updated when the system is substantially modified (which may trigger a new conformity assessment).
Do not sign this document until the underlying assessment work is actually complete. Signing a declaration of conformity for a system that has not undergone proper assessment is a straightforward path to enforcement action.
Step 6: Affix CE Marking and Register in the EU Database
Article 48 requires high-risk AI systems to bear CE marking (confirming conformity with the Act and any other applicable EU legislation). Article 49 requires providers to register their systems in the EU database before placing them on the market.
The EU database is maintained by the Commission and will be publicly accessible. Registration requires providing summary information about the system, its purpose, risk category, and the provider. This creates an accountability trail — registration means you are publicly on record as a provider of a high-risk AI system.
The Substantial Modification Problem
One area where organizations frequently get caught out is the substantial modification trigger. If you make significant changes to a high-risk AI system after placing it on the market, the AI Act may require you to repeat the conformity assessment for the modified system.
The Act does not define "substantial modification" with mathematical precision — it is a judgment call based on whether the change affects the system's compliance with Chapter III requirements. Relevant factors include:
- Changes to the intended purpose
- Changes to training data or training methodology
- Significant changes to model architecture or components
- Changes that affect the system's accuracy, robustness, or the way human oversight functions
- Changes that affect the risk profile identified during the original assessment
The practical implication: if you have a high-risk AI system in production and you deploy a significant model update, you need a process to evaluate whether that update triggers a new conformity assessment. Build this into your change management and release processes, not as an afterthought.
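A release-gate check like the one the paragraph above describes can be as simple as a structured questionnaire that must be answered for every deployment. The sketch below is a hypothetical example — the question keys mirror the factors listed above, and any "yes" answer escalates the release for legal review; none of this is an official AI Act checklist.

```python
# Hypothetical change-management gate. Each key mirrors one of the
# substantial-modification factors listed above.
SUBSTANTIAL_MODIFICATION_QUESTIONS = {
    "intended_purpose_changed": "Does the update change the intended purpose?",
    "training_changed": "Was the model retrained on new data or a new methodology?",
    "architecture_changed": "Were model architecture or components significantly changed?",
    "performance_profile_changed": "Do accuracy or robustness metrics shift beyond agreed thresholds?",
    "oversight_changed": "Does the update alter how human oversight functions?",
}

def requires_reassessment_review(answers: dict) -> bool:
    """Return True if any substantial-modification factor applies.

    Refuses to pass the gate if any question is left unanswered, so a
    release cannot slip through on an incomplete questionnaire.
    """
    missing = set(SUBSTANTIAL_MODIFICATION_QUESTIONS) - set(answers)
    if missing:
        raise ValueError(f"Unanswered questions: {sorted(missing)}")
    return any(answers[key] for key in SUBSTANTIAL_MODIFICATION_QUESTIONS)
```

The value of forcing an explicit answer to every question is the audit trail: for each release you can show when the evaluation happened, who answered, and why reassessment was or was not triggered.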
What Good Looks Like vs. What We Actually See
Based on what practitioners are encountering across organizations preparing for August 2026, here is an honest picture of where businesses stand.
What good looks like:
- Technical documentation is living documentation, maintained as part of the development workflow, not generated retroactively
- Risk management is integrated into sprint cycles and release gates, not a separate annual exercise
- Testing evidence is version-controlled alongside the model itself
- Post-market monitoring is automated where possible, with clear escalation paths for anomalies
- The declaration of conformity is reviewed by both legal and technical staff before signing
What we actually see in most organizations:
- Technical documentation attempted retroactively, with significant gaps where development decisions were not recorded at the time
- Risk management treated as a document rather than a process — a risk register that was filled in once and never updated
- Testing limited to standard ML metrics (accuracy, F1 score) without the regulatory-specific testing for bias, robustness, and foreseeable misuse
- No process for evaluating whether model updates trigger a new conformity assessment
- Legal staff drafting declarations of conformity without full visibility into the underlying technical evidence
Where to Start If You Are Behind
If your conformity assessment work has not started or is materially incomplete, here is the prioritized sequence:
1. Confirm your risk classification — You cannot plan a conformity assessment without knowing which procedure applies and what Annex III categories are in scope.
2. Inventory your AI systems — Most organizations do not have a complete picture of all AI systems they operate. The AI systems inventory guide covers how to build one.
3. Gap-assess your technical documentation — What do you have versus what Article 11 requires? Be ruthless. "We could reconstruct this from our commit history" is not the same as documentation.
4. Establish your QMS structure — Even if you do not have a formal QMS, document your current process and map it against Article 17. Identify the gaps.
5. Plan and execute testing — Prioritize bias and robustness testing if you have not done it. These are the areas most likely to require rework if gaps are found.
6. Engage legal counsel to review draft declarations — Do not treat the declaration of conformity as a boilerplate document. Have qualified legal counsel familiar with the AI Act review it.
If you want a structured way to track progress across all these dimensions, AktAI's compliance dashboard maps your documentation and assessment status against the Article 11 and Article 9 requirements, so you can see gaps without manually cross-referencing regulatory text.
The Timeline Pressure Is Real
August 2, 2026 is approximately five months away. A conformity assessment for a complex high-risk AI system — done properly — takes two to four months of sustained effort for an organization that is reasonably well-prepared. Organizations that are starting from scratch are looking at a compressed, intensive process.
The risk of cutting corners is not abstract. Market surveillance authorities have been explicit that they will request technical documentation and assessment records as part of enforcement investigations. A declaration of conformity unsupported by real assessment work is a liability, not a shield.
Start the assessment process now. Document everything as you go. And if you identify gaps in your high-risk AI system that cannot be remediated in time, have an honest conversation about whether you can continue operating that system after August — or whether you need to temporarily suspend it until compliance is achieved. That is a difficult business decision, but it is a better outcome than enforcement action.
Further reading: The EU AI Act deadlines timeline shows all key compliance milestones. The AI risk assessment guide covers the Article 9 risk management system in depth. The compliance documentation best practices guide addresses technical documentation structure and maintenance.