How to Conduct an AI Risk Assessment Under the EU AI Act
Step-by-step guide to conducting an AI risk assessment that satisfies Article 9 of the EU AI Act. Covers methodology, ISO alignment, hazard identification, and practical documentation templates.
Risk Assessment Is Not the Same as Risk Classification
Before diving into methodology, a critical distinction needs to be made. One of the most common mistakes organizations make when approaching the EU AI Act is conflating risk classification with risk assessment. They are related but fundamentally different activities.
Risk classification answers the question: "What risk category does this AI system fall into under the EU AI Act?" This is a legal determination based on the system's purpose and domain. A credit scoring system is high-risk because Annex III says so. A spam filter is minimal risk. Classification is binary for each category — you either fall within a high-risk use case or you do not.
Risk assessment answers the question: "What specific risks does this particular AI system pose, how likely are they, and how severe would the consequences be?" This is an analytical process that examines your actual system, its actual deployment context, and its actual potential for harm.
Classification tells you what rules apply. Risk assessment tells you what you need to do about the risks. You need both, but they happen at different stages and serve different purposes.
Use the classification tool first to determine your system's risk category. Then come back here for the risk assessment methodology that Article 9 requires for high-risk systems.
What Article 9 Actually Requires
Article 9 of the EU AI Act mandates that providers of high-risk AI systems establish, implement, document, and maintain a risk management system. The regulation is specific about what this system must include:
- Identification and analysis of known and reasonably foreseeable risks that the high-risk AI system can pose to health, safety, or fundamental rights when used in accordance with its intended purpose.
- Estimation and evaluation of the risks that may emerge when the system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse.
- Evaluation of risks based on data gathered from the post-market monitoring system (Article 72).
- Adoption of appropriate and targeted risk management measures to address identified risks.
The risk management system must be a continuous, iterative process that runs throughout the entire lifecycle of the AI system — from design through development, deployment, and ongoing operation. It is not a one-time exercise.
Article 9 also specifies that residual risks (those that remain after mitigation) must be judged acceptable when weighed against the benefits of the system. When residual risks remain, deployers must be informed about them and given guidance on how to manage them in their specific context.
Aligning with ISO 31000 and ISO 42001
You do not need to invent a risk assessment framework from scratch. Two international standards provide well-established methodologies that align with the AI Act's requirements.
ISO 31000: Risk Management
ISO 31000 is the foundational risk management standard. It provides a generic framework applicable to any type of risk in any organization. The EU AI Act's risk management requirements map directly onto ISO 31000's structure:
- Scope, context, and criteria → Defining the AI system's intended purpose, deployment context, and affected stakeholders
- Risk identification → Identifying hazards the AI system could cause or contribute to
- Risk analysis → Assessing the likelihood and severity of identified risks
- Risk evaluation → Deciding which risks need treatment based on your risk criteria
- Risk treatment → Implementing mitigation measures
- Monitoring and review → Ongoing post-deployment monitoring
- Communication and consultation → Keeping stakeholders informed
If your organization already uses ISO 31000, extending it to cover AI systems is straightforward. The categories of risk are different (bias, opacity, reliability failures, fundamental rights impacts), but the process structure is the same.
ISO/IEC 42001: AI Management Systems
ISO/IEC 42001 is more specific. Published in 2023, it provides requirements for establishing, implementing, maintaining, and continually improving an AI management system within an organization. It directly addresses many AI Act requirements including risk assessment, impact assessment, and ongoing monitoring.
ISO/IEC 42001 includes an AI risk assessment process in Clause 6.1.2 that requires organizations to identify risks associated with the development, provision, or use of AI systems, analyze the likelihood and consequences, and determine the risk levels. It also requires an AI impact assessment (Clause 6.1.4) to evaluate the potential consequences of AI system use on individuals, groups, and society.
For organizations pursuing ISO/IEC 42001 certification, the evidence assembled for certification will go a long way toward demonstrating AI Act compliance. The standards are not identical in scope, but there is significant overlap in their requirements for risk management.
The Step-by-Step Risk Assessment Methodology
Here is a practical, eight-step methodology that satisfies Article 9 and aligns with both ISO standards. This is designed for real-world application, not academic theory.
Step 1: Define the Scope and Context
Before you can assess risks, you must precisely define what you are assessing and the context in which it operates.
Document the following:
- The AI system's intended purpose — what it does, what decisions it supports or makes, and what outcomes it is designed to produce
- The deployment context — where and how the system is used, by whom, and under what conditions
- The affected stakeholders — who is impacted by the system's outputs (directly and indirectly)
- The regulatory context — which regulations apply beyond the AI Act (GDPR, sector-specific rules)
- The system boundaries — what is part of the AI system and what is part of the broader operational process
Common mistake: Defining the scope too broadly or too narrowly. Too broad and the assessment becomes unmanageable. Too narrow and you miss risks at the interfaces between the AI system and its operational context. Focus on the AI system and its immediate interaction with users and affected persons.
Use the AI systems inventory to ensure you have a complete and accurate record of the system you are assessing. Incomplete system documentation is a common cause of inadequate risk assessments.
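The scope items above can be captured as a simple structured record. This is an illustrative sketch (the field names are assumptions, not prescribed by the AI Act) with a check that catches an incomplete scope definition before the assessment starts:

```python
from dataclasses import dataclass

@dataclass
class SystemScope:
    """Scope and context record for Step 1. Field names are illustrative."""
    intended_purpose: str        # what the system does and which decisions it supports
    deployment_context: str      # where, how, by whom, under what conditions
    affected_stakeholders: list  # who is impacted, directly and indirectly
    regulatory_context: list     # e.g. GDPR, sector-specific rules
    system_boundaries: str       # what is inside the AI system vs. the wider process

    def missing_fields(self):
        """List any fields left empty, so an incomplete scope
        definition is flagged before risk identification begins."""
        return [name for name, value in vars(self).items() if not value]
```

A scope record with an empty `deployment_context` would report exactly that field as missing, prompting the team to fill the gap before moving to Step 2.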
Step 2: Identify Stakeholders and Rights at Risk
Map every group of people who could be affected by the AI system. For each group, identify the fundamental rights that could be impacted.
The EU AI Act is particularly concerned with impacts on:
- Non-discrimination and equality (Charter of Fundamental Rights, Article 21) — Could the system treat people differently based on protected characteristics?
- Human dignity (Article 1) — Could the system undermine human dignity through dehumanizing treatment?
- Privacy and data protection (Articles 7 and 8) — Does the system process personal data, and could it compromise privacy?
- Freedom of expression (Article 11) — Could the system restrict or chill free expression?
- Right to an effective remedy (Article 47) — Can affected persons challenge the system's decisions?
- Rights of the child (Article 24) — Could children be disproportionately affected?
For a financial services AI system, you might identify: loan applicants (non-discrimination, access to financial services), employees using the system (working conditions), and third parties whose data might be processed (privacy).
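The financial services example can be expressed as a stakeholder-to-rights map, which makes it easy to spot groups with no rights analysis yet. The group labels and rights assignments below are illustrative, drawn from the example above; article numbers refer to the EU Charter of Fundamental Rights:

```python
# Stakeholder-to-rights map for the financial services example.
rights_at_risk = {
    "loan applicants": [
        "non-discrimination (Art. 21)",
        "effective remedy (Art. 47)",
    ],
    "employees using the system": [
        "fair and just working conditions (Art. 31)",
    ],
    "third parties whose data is processed": [
        "privacy (Art. 7)",
        "data protection (Art. 8)",
    ],
}

def uncovered_groups(stakeholders, mapping):
    """Flag stakeholder groups that have no rights analysis yet."""
    return [group for group in stakeholders if not mapping.get(group)]
```

Running `uncovered_groups` against the full stakeholder list from Step 1 gives a quick completeness check before moving on to hazard identification.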
Step 3: Identify Hazards
A hazard is anything about the AI system that could cause harm. This is the most creative and critical step in the process — you must think broadly about what could go wrong.
Categories of AI hazards to consider:
Accuracy and reliability failures:
- Incorrect predictions or classifications
- Performance degradation over time (data drift, concept drift)
- Failure to perform under unusual or edge-case conditions
- Inconsistent outputs for similar inputs
Bias and discrimination:
- Training data bias reflecting historical discrimination
- Proxy discrimination through seemingly neutral features
- Differential performance across demographic groups
- Feedback loops that amplify existing inequalities
Transparency and explainability failures:
- Inability to explain individual decisions
- Users not understanding the system's limitations
- Affected persons not knowing AI is involved in decisions about them
- Overreliance on AI outputs due to perceived objectivity
Security and robustness:
- Vulnerability to adversarial attacks
- Data poisoning in training or operational data
- Unauthorized access to the AI system or its outputs
- Failure modes when the system encounters unexpected inputs
Autonomy and oversight failures:
- Automation bias (humans rubber-stamping AI outputs)
- Insufficient human authority to override the system
- System operating outside its intended scope
- Missing fallback procedures when the system fails
Techniques for hazard identification:
- Brainstorming sessions with cross-functional teams (developers, domain experts, compliance, affected stakeholders)
- Review of incident databases and known failures of similar AI systems
- Red-teaming exercises where testers deliberately try to cause failures
- Analysis of the training data for gaps, biases, and quality issues
- Scenario analysis exploring "what if" situations including edge cases and adversarial conditions
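A hazard register row can be kept as a small structured record. This sketch mirrors the five hazard categories above (the short labels and field names are assumptions for illustration):

```python
from dataclasses import dataclass

# Short labels mirroring the five hazard categories above; extend as needed.
HAZARD_CATEGORIES = {"accuracy", "bias", "transparency", "security", "oversight"}

@dataclass
class Hazard:
    """One row in a hazard register (illustrative structure)."""
    hazard_id: str
    category: str       # must be one of HAZARD_CATEGORIES
    description: str
    identified_by: str  # technique: brainstorming, red-teaming, scenario analysis, ...

    def __post_init__(self):
        # Reject entries outside the agreed taxonomy so the register stays consistent.
        if self.category not in HAZARD_CATEGORIES:
            raise ValueError(f"unknown hazard category: {self.category!r}")
```

Recording the identification technique alongside each hazard helps demonstrate, later in Step 7, that multiple identification methods were actually used.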
Step 4: Analyze Likelihood and Severity
For each identified hazard, assess two dimensions: how likely is it to occur, and how severe would the consequences be if it did?
Likelihood scale (example):
| Level | Description | Indicative Frequency |
| -------------- | ----------------------------------------- | ----------------------- |
| Rare | Could happen but would be exceptional | Less than once per year |
| Unlikely | Not expected but possible | Once per year |
| Possible       | Expected to occur occasionally            | Monthly                 |
| Likely | Expected to occur multiple times | Weekly |
| Almost certain | Will occur regularly | Daily or more |
Severity scale (example):
| Level | Description | Impact on Individuals |
| ------------ | ---------------------------------------- | ---------------------------------------------------------------------- |
| Negligible | Minor inconvenience, easily remedied | Slight delay, minor frustration |
| Minor | Noticeable impact, remediable | Temporary service disruption, minor financial impact |
| Moderate | Significant impact, partially remediable | Material financial loss, extended service denial |
| Major | Serious harm, difficult to remedy | Significant financial loss, rights violation, discriminatory treatment |
| Catastrophic | Severe harm, potentially irreversible | Fundamental rights violation, life-altering consequences |
Important: Calibrate these scales to your context. A "moderate" severity for a spam filter is different from a "moderate" severity for a medical diagnostic system. The EU AI Act's focus on fundamental rights means that any risk of discrimination, privacy violation, or denial of essential services should be rated at least "major" in severity.
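The two example scales, and the calibration rule for fundamental rights, can be encoded directly. This is a minimal sketch; the enum names follow the tables above, and the floor rule implements the "at least major" guidance:

```python
from enum import IntEnum

class Likelihood(IntEnum):
    RARE = 1
    UNLIKELY = 2
    POSSIBLE = 3
    LIKELY = 4
    ALMOST_CERTAIN = 5

class Severity(IntEnum):
    NEGLIGIBLE = 1
    MINOR = 2
    MODERATE = 3
    MAJOR = 4
    CATASTROPHIC = 5

def calibrated_severity(raw, touches_fundamental_rights):
    """Apply the calibration rule above: any risk of discrimination,
    privacy violation, or denial of essential services is rated at
    least MAJOR, whatever the raw technical assessment says."""
    if touches_fundamental_rights:
        return max(raw, Severity.MAJOR)
    return raw
```

Encoding the floor as code rather than guidance means a reviewer cannot accidentally under-rate a discrimination risk as "minor" in the register.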
Step 5: Evaluate and Prioritize Risks
Combine likelihood and severity into an overall risk level. A risk matrix is the standard tool.
Risks in the "high" or "critical" zone require mandatory treatment. Risks in the "medium" zone should be treated where reasonably practicable. Risks in the "low" zone should be monitored but may be accepted.
Critical point from Article 9: The regulation requires that residual risks (after mitigation) be judged acceptable when considered individually and in combination. You cannot look at each risk in isolation — you must also consider whether the cumulative effect of multiple medium-level risks creates an unacceptable overall risk profile.
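A risk matrix can be encoded as a small function mapping the two 1-5 scales onto the four bands described above. The band thresholds here are assumptions for illustration, not taken from the regulation; calibrate them to your own risk criteria and record the rationale:

```python
def risk_level(likelihood, severity):
    """Map a 1-5 likelihood and a 1-5 severity onto a four-band risk
    level. Thresholds are illustrative, not regulatory."""
    score = likelihood * severity
    if score >= 17:
        return "critical"  # mandatory treatment
    if score >= 10:
        return "high"      # mandatory treatment
    if score >= 5:
        return "medium"    # treat where reasonably practicable
    return "low"           # monitor; may be accepted
```

For example, a hazard rated "unlikely" (2) but "catastrophic" (5) scores 10 and lands in the "high" band, forcing treatment even though it is rare.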
Step 6: Design and Implement Mitigation Measures
For each risk that requires treatment, identify one or more mitigation measures. The AI Act favors measures that eliminate or reduce risks at their source (design-level changes) over measures that manage risks after they materialize (operational controls).
Hierarchy of risk mitigation for AI systems:
- Elimination: Remove the hazard entirely. If a feature creates unacceptable bias risk, remove the feature.
- Reduction: Reduce likelihood or severity through design changes. Use debiasing techniques, add confidence thresholds, implement graceful degradation.
- Human oversight: Add human review, intervention capabilities, and override authority. Ensure the human has the tools and information to exercise effective oversight.
- Procedural controls: Implement operational procedures that manage residual risks. Training, escalation processes, monitoring dashboards.
- Information: Inform deployers and affected persons about residual risks so they can take appropriate precautions.
For each mitigation measure, document:
- What risk it addresses
- How it works
- Who is responsible for implementation and maintenance
- How its effectiveness will be measured
- When it will be reviewed
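The documentation checklist above maps naturally onto a structured record. Field names here are illustrative, not prescribed by the regulation:

```python
from dataclasses import dataclass

@dataclass
class MitigationMeasure:
    """Documentation fields for one mitigation measure, matching the
    checklist above. Field names are illustrative."""
    addresses_risk: str        # what risk it addresses
    mechanism: str             # how it works
    owner: str                 # who is responsible for implementation and maintenance
    effectiveness_metric: str  # how effectiveness will be measured
    review_date: str           # when it will be reviewed (ISO 8601 date)
```

Keeping measures in a uniform structure like this makes the Step 7 documentation and the scheduled reviews in Step 8 straightforward to generate.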
Step 7: Document Everything
Article 9(1) requires that the risk management system be documented. This is not optional, and the documentation must be sufficient for competent authorities to understand your risk management approach and verify its adequacy.
Your risk assessment documentation should include:
- Scope and context definition (Step 1)
- Stakeholder and rights mapping (Step 2)
- Complete hazard register (Step 3)
- Likelihood and severity assessments with justification (Step 4)
- Risk evaluation results and prioritization rationale (Step 5)
- Mitigation measures with implementation status and effectiveness evidence (Step 6)
- Residual risk assessment with acceptability justification
- Names and roles of individuals who conducted and approved the assessment
- Date of the assessment and scheduled review date
The compliance documentation tools can help structure this documentation in a format that satisfies regulatory expectations.
Step 8: Monitor, Review, and Iterate
Risk assessment is not a one-time event. Article 9(2) specifies that the risk management system must be "a continuous iterative process planned and run throughout the entire lifecycle." This means:
- Regular scheduled reviews — At minimum, conduct a full review annually and whenever significant changes are made to the AI system (new training data, model updates, deployment context changes).
- Trigger-based reviews — Conduct an immediate review when incidents occur, when new risks are identified through monitoring, when the regulatory landscape changes, or when stakeholder concerns are raised.
- Post-market monitoring data — Article 9(2)(c) requires that your risk management system evaluate risks based on data gathered from the post-market monitoring system. As your AI system operates in the real world, the data it generates must feed back into your risk assessment.
- Update and re-assess — When new risks are identified or existing risks change, go back through the relevant steps of the methodology and update your documentation.
Use the gap analysis tool periodically to check whether your risk management documentation remains complete and current.
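The review cadence above can be sketched as a simple due-date check. The trigger labels and the annual minimum are assumptions drawn from the guidance in this section, not a regulatory formula:

```python
from datetime import date

# Trigger events named in the review policy above (illustrative labels).
REVIEW_TRIGGERS = {
    "incident_occurred",
    "new_risk_identified",
    "regulatory_change",
    "stakeholder_concern",
    "significant_system_change",  # new training data, model update, context change
}

def review_due(last_full_review, today, events):
    """A review is due on any trigger event, or when the annual
    scheduled review (the suggested minimum) has lapsed."""
    if set(events) & REVIEW_TRIGGERS:
        return True
    return (today - last_full_review).days >= 365
```

Wiring a check like this into your compliance tooling turns "continuous iterative process" from a principle into an enforced schedule.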
Common Mistakes in AI Risk Assessment
Mistake 1: Treating It as a Checkbox Exercise
Some organizations produce a risk assessment document to satisfy the regulation, then file it away. This defeats the purpose. The risk management system must be a living process that actively influences how the AI system is designed, deployed, and maintained. If your risk assessment has never led to a change in how your AI system operates, something is wrong.
Mistake 2: Only Assessing Technical Risks
AI risk assessment under the EU AI Act is not a technical exercise alone. The regulation cares about impacts on fundamental rights, which requires input from legal, ethical, and domain experts — not just data scientists and engineers. A technically excellent model can still pose unacceptable risks if it is deployed in a context that affects vulnerable populations or restricts access to essential services.
Mistake 3: Ignoring Reasonably Foreseeable Misuse
Article 9 explicitly requires assessing risks under conditions of "reasonably foreseeable misuse," not just intended use. If your chatbot is designed for customer service but could be used by employees for personal advice, that is foreseeable misuse. If your classification system is designed for one demographic but gets applied to a different population, that is foreseeable misuse. You must identify and assess these scenarios.
Mistake 4: Assessing the Algorithm Without the Context
The risk profile of an AI system depends enormously on its deployment context. The same facial recognition algorithm poses very different risks when used to unlock a phone (convenience feature, limited impact) versus when used for law enforcement identification (fundamental rights, potential for false arrest). Always assess the complete system-in-context, not just the algorithm.
Mistake 5: Skipping the Cumulative Risk Assessment
Multiple individually acceptable risks can combine to create an unacceptable overall risk profile. If your credit scoring system has a minor bias risk, a minor transparency risk, a minor reliability risk, and a minor privacy risk, the combination may pose a major risk to the individuals it assesses. Article 9 requires considering risks "individually and in combination."
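One way to operationalize the cumulative check is an escalation rule over the per-risk bands. Article 9 states the principle ("individually and in combination") but no formula, so the threshold below is purely an assumption for illustration:

```python
def cumulative_level(individual_levels, medium_threshold=3):
    """Illustrative cumulative rule: several co-occurring 'medium' risks
    affecting the same individuals escalate the overall profile one band.
    The threshold is an assumption, not a regulatory formula."""
    order = ["low", "medium", "high", "critical"]
    worst = max(individual_levels, key=order.index)
    mediums = individual_levels.count("medium")
    if worst == "medium" and mediums >= medium_threshold:
        return "high"
    return worst
```

Under this rule, the credit scoring example with four individually "medium" risks (bias, transparency, reliability, privacy) would be escalated to an overall "high" profile requiring treatment.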
Proportionality: Scaling for Your Organization
The EU AI Act applies the principle of proportionality. A multinational bank deploying AI to millions of customers needs a more sophisticated risk assessment than a 20-person company using a single AI tool for internal process optimization.
However, proportionality does not mean cutting corners. It means scaling the depth and formality of your process to match the scale and impact of your AI use. A smaller organization can use simpler tools and shorter documents, but must still cover all the substantive requirements.
For SMBs with limited resources, focus on:
- Using existing risk management processes as a starting point
- Prioritizing the highest-risk AI systems first
- Leveraging templates and structured tools rather than building from scratch
- Seeking external expertise for the initial assessment, then maintaining it internally
Start with a free assessment to understand the scope of your AI risk management obligations. For organizations with AI systems that might be high-risk, our classification guide will help you determine which systems need the full Article 9 treatment.
From Assessment to Action
A completed risk assessment is the foundation for all other AI Act compliance activities. It informs your technical documentation (Article 11), your transparency measures (Article 13), your human oversight design (Article 14), and your post-market monitoring plan (Article 72).
The organizations that get risk assessment right will find the rest of the compliance journey manageable. The methodology above gives you a clear, repeatable process that satisfies Article 9, aligns with international standards, and produces documentation that regulators will expect to see.
Start today. The August 2026 deadline does not care whether you are ready.