# Plan Actions to Address Risks and Opportunities

## Plain English Translation
Clause 6.1.1 of ISO/IEC 42001:2023 requires organizations to systematically identify and plan actions to address risks and opportunities related to their AI management system (AIMS). This process establishes AI risk criteria to distinguish acceptable from non-acceptable risks, ensuring the AIMS achieves its intended outcomes, prevents undesired effects, and drives continual improvement.
## Technical Implementation
Required actions depend on organization size:

**Required Actions (startup)**

- Formalize an AIMS risks and opportunities register aligned with internal context and interested party requirements.

**Required Actions (scaleup)**

- Integrate AI risk treatment actions directly into standard AI development and operational workflows.
ISO/IEC 42001:2023 clause 6.1.1 requires organizations to consider their context and stakeholder needs when determining risks and opportunities. They must establish AI risk criteria, plan actions to address those risks and opportunities, integrate the actions into AIMS processes, and evaluate their effectiveness.
Risks and opportunities are identified by analyzing the organization's internal and external context, stakeholder expectations, and the specific domain and intended use of the AI system. Document these findings in an AIMS risks and opportunities register or a centralized risk-tracking system.
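As one possible starting point, a register entry can be modeled as a simple record. All field names and values below are illustrative assumptions, not a format required by the standard:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical structure for one entry in an AIMS risks and
# opportunities register; field names are illustrative only.
@dataclass
class RegisterEntry:
    entry_id: str
    description: str
    entry_type: str            # "risk" or "opportunity"
    source_context: str        # internal/external context or stakeholder need
    owner: str
    raised_on: date
    planned_actions: list[str] = field(default_factory=list)

entry = RegisterEntry(
    entry_id="R-001",
    description="Training data may encode demographic bias",
    entry_type="risk",
    source_context="Intended use: credit scoring (external context)",
    owner="AI Governance Lead",
    raised_on=date(2026, 2, 23),
    planned_actions=["Bias audit before each release"],
)
print(entry.entry_id, entry.entry_type)
```

Keeping entries in a structured form like this makes it straightforward to export the register as audit evidence or load it into a tracking tool.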
AI risks relate specifically to the development, deployment, or operational use of the AI system itself, such as bias, safety, or data poisoning. AIMS risks encompass management system challenges, such as failing to secure leadership buy-in, lacking resources, or wider organizational compliance failures.
To perform an AI risk assessment for ISO/IEC 42001:2023, you define AI risk criteria to distinguish acceptable from non-acceptable risks, evaluate potential threats against these thresholds, and document the findings alongside mitigations in an AI risk register.
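A minimal sketch of that classification step, assuming a 5x5 likelihood/impact scale and an illustrative acceptance threshold (neither is prescribed by the standard):

```python
# Hypothetical scoring against defined AI risk criteria. The 5x5
# likelihood/impact scale and the threshold are assumptions, not
# values mandated by ISO/IEC 42001.
ACCEPTANCE_THRESHOLD = 9  # scores above this are non-acceptable

def assess(likelihood: int, impact: int) -> tuple[int, str]:
    """Score a risk (1-5 scales) and classify it against the criteria."""
    score = likelihood * impact
    status = "acceptable" if score <= ACCEPTANCE_THRESHOLD else "non-acceptable"
    return score, status

print(assess(4, 4))   # (16, 'non-acceptable')
print(assess(2, 3))   # (6, 'acceptable')
```

Whatever scale is chosen, the key audit point is that the criteria are defined before assessment and applied consistently.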
Acceptable planning actions include applying appropriate controls from Annex A, integrating risk treatments directly into AIMS processes, avoiding the risk, or formally accepting it against defined criteria. All actions must be documented and evaluated for effectiveness.
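These options can be represented explicitly so every risk record carries its chosen treatment. The enum values mirror the examples above, but the selection rule is a hypothetical illustration:

```python
from enum import Enum

# Treatment options drawn from the clause's examples; the selection
# logic below is an assumption, not mandated by the standard.
class Treatment(Enum):
    APPLY_CONTROL = "apply Annex A control"
    INTEGRATE = "integrate treatment into AIMS processes"
    AVOID = "avoid the risk"
    ACCEPT = "formally accept the risk"

def choose_treatment(score: int, threshold: int = 9) -> Treatment:
    # Hypothetical rule: accept risks within criteria; treat the rest.
    return Treatment.ACCEPT if score <= threshold else Treatment.APPLY_CONTROL

print(choose_treatment(4).value)    # formally accept the risk
print(choose_treatment(16).value)   # apply Annex A control
```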
Organizations prioritize AI risks by comparing the results of their risk analysis against predefined AI risk criteria to determine which risks exceed the acceptable threshold. An ISO 42001 AI risk treatment plan is then formulated to apply appropriate controls, ensuring the AIMS achieves its intended results.
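The prioritization step can be sketched as follows; the scores, threshold, and placeholder action text are assumptions for illustration only:

```python
# Hypothetical prioritization: rank assessed risks and generate
# treatment-plan entries for those above the acceptance threshold.
ACCEPTANCE_THRESHOLD = 9

risks = [
    {"id": "R-001", "name": "Training data bias", "score": 16},
    {"id": "R-002", "name": "Model drift in production", "score": 12},
    {"id": "R-003", "name": "Minor logging gap", "score": 4},
]

# Sort highest score first, then keep only non-acceptable risks.
treatment_plan = [
    {"risk_id": r["id"], "action": "Apply selected Annex A control", "status": "planned"}
    for r in sorted(risks, key=lambda r: r["score"], reverse=True)
    if r["score"] > ACCEPTANCE_THRESHOLD
]

for item in treatment_plan:
    print(item["risk_id"], item["status"])
# R-001 planned
# R-002 planned
```

The resulting plan entries, with owners and due dates added, become the documented treatment actions that clause 6.1.1 expects to see integrated into AIMS processes.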
ISO 42001 audit evidence for clause 6.1.1 includes a documented risk register, formally established AI risk criteria, documentation of planned actions, and proof that these actions are integrated into business processes and regularly evaluated for effectiveness.
Yes, an existing ERM process can satisfy this clause, provided it is adapted to formally establish criteria for AI-specific risks, explicitly cover the AI management system's scope, and systematically plan actions to address AI-related opportunities.
> "When planning for the AI management system, the organization shall consider the issues referred to in 4.1 and the requirements referred to in 4.2 and determine the risks and opportunities that need to be addressed to: — give assurance that the AI management system can achieve its intended result(s); — prevent or reduce undesired effects; — achieve continual improvement."

> "The organization shall establish and maintain AI risk criteria that support: — distinguishing acceptable from non-acceptable risks; — performing AI risk assessments; — conducting AI risk treatment; — assessing AI risk impacts."
| Version | Date | Author | Description |
|---|---|---|---|
| 1.0.0 | 2026-02-23 | WatchDog Security GRC Team | Initial publication |