# Perform AI Risk Assessments

## Plain English Translation
Clause 6.1.2 of ISO/IEC 42001 requires organizations to establish and document a formalized AI risk assessment process. This methodology must produce consistent, valid, and comparable results, identifying risks that could prevent the organization from achieving its AI objectives. By analyzing the likelihood and potential consequences of these risks, organizations can evaluate them against predefined criteria and prioritize them for treatment.
## Technical Implementation
Required actions are grouped below by organization size.
### Required Actions (startup)
- Create an AI model risk assessment checklist and a basic spreadsheet-based risk register.
- Assess key AI systems manually to determine the likelihood and impact of common risks like bias, data poisoning, and system failure.
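A spreadsheet-based register boils down to a handful of fields plus a likelihood-times-impact score. The sketch below shows one way to model that in Python; the field names, the 1–5 scales, and the sample entries are illustrative assumptions, not part of the standard.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIRisk:
    """One row of a spreadsheet-style AI risk register (illustrative fields)."""
    risk_id: str
    ai_system: str
    description: str
    likelihood: int   # 1 (rare) .. 5 (almost certain) -- assumed scale
    impact: int       # 1 (negligible) .. 5 (severe)   -- assumed scale
    owner: str
    identified_on: date = field(default_factory=date.today)

    @property
    def score(self) -> int:
        # Simple likelihood x impact product, mirroring a basic spreadsheet formula
        return self.likelihood * self.impact

# Hypothetical entries covering common risk sources named above
register = [
    AIRisk("AIR-001", "support-chatbot", "Training data bias skews responses", 3, 4, "ML Lead"),
    AIRisk("AIR-002", "support-chatbot", "Data poisoning via feedback loop", 2, 5, "Security"),
]

# Review the highest-scoring risks first
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.risk_id}  {risk.description:40}  score={risk.score}")
```

Keeping the score as a derived property (rather than a hand-typed column) is one way to avoid the inconsistent scoring that Clause 6.1.2 b) is meant to prevent.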
### Required Actions (scaleup)
An AI risk assessment in ISO/IEC 42001:2023 is a formalized process used to identify, analyze, and evaluate uncertainties that could prevent an organization from achieving its intended AI objectives, ensuring safe and responsible AI development or use.
You perform it by applying a documented, repeatable methodology aligned with your AI policy. This involves identifying risks, analyzing their potential consequences and realistic likelihood, determining the risk levels, and comparing them against your established risk criteria to prioritize them for treatment.
The methodology must define steps for risk identification, risk analysis (including assessment of consequences and likelihood), and risk evaluation. Crucially, it must be designed so that repeated assessments produce consistent, valid, and comparable results.
The standard does not mandate a specific scoring model. Organizations should establish risk criteria that accurately distinguish acceptable from non-acceptable risks based on their context, often utilizing a quantitative or qualitative matrix weighing likelihood against severity of consequences.
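Since the standard leaves the scoring model open, a minimal qualitative matrix might look like the sketch below. The 1–5 scales, the score bands, and the acceptance threshold are illustrative assumptions an organization would replace with its own documented criteria.

```python
# Illustrative qualitative matrix; ISO/IEC 42001 does not prescribe these bands.
def risk_level(likelihood: int, impact: int) -> str:
    """Map a 1-5 likelihood and a 1-5 impact to a qualitative level."""
    score = likelihood * impact
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# Example risk criteria: only 'low' risks are acceptable without treatment.
ACCEPTABLE_LEVELS = {"low"}

def requires_treatment(likelihood: int, impact: int) -> bool:
    """Compare the analyzed level against the acceptance criteria."""
    return risk_level(likelihood, impact) not in ACCEPTABLE_LEVELS

print(risk_level(4, 4))          # high
print(requires_treatment(2, 3))  # False: 'low' is within acceptance criteria
```

Encoding the criteria once, in one place, is what makes the "consistent, valid and comparable results" requirement of Clause 6.1.2 b) achievable in practice.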
AI risk assessments should be performed at planned intervals or whenever significant changes are proposed or occur to the AI system, its operational context, or organizational objectives, as required by Clause 8.2.
Organizations must retain documented information about the AI risk assessment process itself (e.g., a methodology document or SOP) and the results of the assessments (e.g., an AI risk register and individual risk assessment reports). Tools like WatchDog Security's Compliance Center can help organize evidence by requirement and reduce gaps when preparing for audits.
An AI risk assessment evaluates a broad spectrum of risks that could prevent the achievement of AI objectives, such as risks to model performance, fairness, and safety. A DPIA, by contrast, focuses specifically on risks to the rights and freedoms of individuals arising from the processing of personal data.
Third-party and vendor risks are assessed by applying the organization's standard AI risk assessment methodology to external supply chain components, evaluating the likelihood and consequences of vendor failure, data breaches, or non-compliant AI model behavior.
AI-specific phenomena such as model drift and hallucination are identified as risk sources during the assessment process. They are analyzed for their likelihood and impact on operations or users, and then prioritized for risk treatment, such as implementing continuous monitoring, human oversight, or robust evaluation metrics.
Organizations often utilize custom AI risk assessment templates or an AI model risk assessment checklist tailored to their operational context. Mapping established frameworks like the NIST AI RMF to the ISO 42001 risk assessment process is a common approach to building these templates.
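One way to scaffold such a template is a mapping from the NIST AI RMF functions (GOVERN, MAP, MEASURE, MANAGE) to ISO/IEC 42001 clause references. The specific pairings below are illustrative assumptions for the sketch, not an authoritative crosswalk.

```python
# Illustrative template scaffold; the function-to-clause pairings are assumptions.
TEMPLATE_SECTIONS = {
    "GOVERN":  ["ISO 42001 5.2 (AI policy)", "ISO 42001 6.2 (AI objectives)"],
    "MAP":     ["ISO 42001 6.1.2 c) (risk identification)"],
    "MEASURE": ["ISO 42001 6.1.2 d) (risk analysis)"],
    "MANAGE":  ["ISO 42001 6.1.2 e) (risk evaluation)", "ISO 42001 8.2 (reassessment)"],
}

def checklist_rows(sections: dict[str, list[str]]) -> list[str]:
    """Flatten the mapping into checklist rows for a spreadsheet template."""
    return [f"{fn}: {clause}" for fn, clauses in sections.items() for clause in clauses]

for row in checklist_rows(TEMPLATE_SECTIONS):
    print(row)
```

Keeping the mapping as data rather than prose makes it easy to regenerate the checklist whenever either framework reference is revised.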
An AI risk assessment process often fails when scoring is inconsistent, ownership is unclear, or evidence is scattered across documents. Tools like WatchDog Security's Risk Register can centralize AI risks, standardize likelihood/impact scoring, track treatment plans, and produce board-ready reporting that supports repeatable assessments.
Audit readiness typically breaks down when risk decisions, approvals, and updates are not traceable to specific AI systems and changes. Tools like WatchDog Security's Compliance Center can help by mapping assessments to ISO/IEC 42001 requirements, highlighting gaps, and streamlining evidence collection so results remain consistent and reviewable.
"The organization shall define and establish an AI risk assessment process that: a) is informed by and aligned with the AI policy (see 5.2) and AI objectives (see 6.2)... b) is designed such that repeated AI risk assessments can produce consistent, valid and comparable results; c) identifies risks that aid or prevent achieving its AI objectives; d) analyses the AI risks to: 1) assess the potential consequences... 2) assess, where applicable, the realistic likelihood... 3) determine the levels of risk; e) evaluates the AI risks to: 1) compare the results of the risk analysis with the risk criteria... 2) prioritize the assessed risks for risk treatment."
| Version | Date | Author | Description |
|---|---|---|---|
| 1.0.0 | 2026-02-23 | WatchDog Security GRC Team | Initial publication |