
Plan Actions to Address Risks and Opportunities

Updated: 2026-02-23

Plain English Translation

Clause 6.1.1 of ISO/IEC 42001:2023 requires organizations to systematically identify and plan actions to address risks and opportunities related to their AI management system (AIMS). This process establishes AI risk criteria to distinguish acceptable from non-acceptable risks, ensuring the AIMS achieves its intended outcomes, prevents undesired effects, and drives continual improvement.

Executive Takeaway

Organizations must formally plan how to address both AI-specific risks and broader management system opportunities to ensure reliable and compliant AI governance.

Impact: High
Complexity: Medium

Why This Matters

  • Establishes clear criteria for acceptable versus non-acceptable AI risks, protecting the organization from unintended ethical, legal, or operational consequences.
  • Ensures ISO 42001 risk management strategies are integrated directly into the organization's overarching business processes.

What “Good” Looks Like

  • Maintaining a comprehensive Record of Processing Activities (RoPA) that explicitly maps every data processing activity to its corresponding lawful basis, with periodic reviews to keep it aligned with system and vendor changes (tools like WatchDog Security's Compliance Center can help track mappings and evidence gaps).
  • Conducting and formally documenting a Legitimate Interests Assessment (LIA) whenever relying on legitimate interests as the primary lawful basis, including ownership, approvals, and review cadence (tools like WatchDog Security's Risk Register can help manage LIA records and decision evidence).

Clause 6.1.1 of ISO/IEC 42001:2023 requires organizations to consider their context and stakeholder needs when determining risks and opportunities. They must establish AI risk criteria, plan actions to address those risks and opportunities, integrate the actions into AIMS processes, and evaluate their effectiveness.

Risks and opportunities are identified by analyzing the organization's internal and external context, stakeholder expectations, and the specific domain and intended use of the AI system. Document the findings in an AIMS risks and opportunities register or another centralized risk tracking system.
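The register described above can be sketched as a minimal data structure. The `RegisterEntry` class, its field names, and the sample values are all illustrative; ISO/IEC 42001 does not prescribe any particular schema:

```python
from dataclasses import dataclass, field

@dataclass
class RegisterEntry:
    """One row in an AIMS risks-and-opportunities register (illustrative)."""
    ref: str                              # e.g. "AIMS-R-001"
    kind: str                             # "risk" or "opportunity"
    source: str                           # context/stakeholder analysis that surfaced it
    description: str
    owner: str
    planned_actions: list = field(default_factory=list)
    effectiveness_review: str = ""        # how and when effectiveness is evaluated

# Hypothetical entry showing how a clause 4.1 finding might be recorded.
entry = RegisterEntry(
    ref="AIMS-R-001",
    kind="risk",
    source="Clause 4.1 context analysis",
    description="Training data drift degrades model fairness",
    owner="AI Governance Lead",
    planned_actions=["Quarterly bias audit", "Data provenance controls"],
)
```

Keeping source, owner, planned actions, and the effectiveness-review mechanism in one record mirrors the clause's requirement that actions be both integrated and evaluated.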

AI risks relate specifically to the development, deployment, or operational use of the AI system itself, such as bias, safety, or data poisoning. AIMS risks encompass broader management system challenges, such as failing to secure leadership buy-in, lacking resources, or broader organizational compliance failures.

To perform an AI risk assessment for ISO/IEC 42001:2023, you define AI risk criteria to distinguish acceptable from non-acceptable risks, evaluate potential threats against these thresholds, and document the findings alongside mitigations in an AI risk register.
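As a minimal sketch of the evaluation step, assuming a simple likelihood-times-impact scoring model and an organization-defined acceptance threshold; both the scoring model and the threshold value are assumptions, not requirements of the standard:

```python
# Illustrative acceptance criterion: likelihood and impact on a 1-5 scale;
# scores above the threshold are non-acceptable and require treatment.
ACCEPTANCE_THRESHOLD = 8  # set by the organization, value here is invented

def assess(likelihood: int, impact: int) -> dict:
    """Score a risk and test it against the acceptance criterion."""
    score = likelihood * impact
    return {"score": score, "acceptable": score <= ACCEPTANCE_THRESHOLD}

print(assess(2, 3))  # {'score': 6, 'acceptable': True}
print(assess(4, 4))  # {'score': 16, 'acceptable': False}
```

The point is not the arithmetic but the pattern: criteria are defined once, and every risk is evaluated against the same documented thresholds.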

Examples of acceptable planning actions include applying appropriate controls from Annex A, integrating risk treatments directly into AIMS processes, avoiding the risk, or formally accepting it against defined criteria. All actions must be documented and evaluated for effectiveness.

Organizations prioritize AI risks by comparing the results of their risk analysis against predefined AI risk criteria to identify which risks are non-acceptable. An ISO 42001 AI risk treatment plan is then formulated to apply appropriate controls, ensuring the AIMS achieves its intended results.
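A toy prioritization pass under the same assumptions as above: risks whose scores exceed an illustrative acceptance threshold are sorted into a treatment queue, highest score first. The risk names, references, and scores are invented for the example:

```python
# Assessed risks (hypothetical data); "score" is likelihood x impact.
risks = [
    {"ref": "AI-R-01", "name": "Bias in scoring model", "score": 16},
    {"ref": "AI-R-02", "name": "Data poisoning", "score": 20},
    {"ref": "AI-R-03", "name": "Model card drift", "score": 6},
]
THRESHOLD = 8  # illustrative acceptance threshold

# Only non-acceptable risks enter the treatment plan, worst first.
treatment_plan = sorted(
    (r for r in risks if r["score"] > THRESHOLD),
    key=lambda r: r["score"],
    reverse=True,
)

for r in treatment_plan:
    print(r["ref"], r["name"], r["score"])
```

Acceptable risks (here, AI-R-03) stay in the register with their acceptance rationale rather than entering the treatment plan.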

ISO 42001 audit evidence for clause 6.1.1 includes a documented risk register, formally established AI risk criteria, documentation of planned actions, and proof that these actions are integrated into business processes and regularly evaluated for effectiveness.

To document lawful basis in both the RoPA and the privacy notice, organizations must maintain an up-to-date Record of Processing Activities (RoPA) that maps each processing activity to its lawful basis, alongside documented LIAs where applicable. Tools like WatchDog Security's Compliance Center can help maintain this mapping as structured evidence and highlight gaps during periodic reviews.
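A minimal gap check over a RoPA-like structure, assuming each row records an activity, its lawful basis, and an LIA reference; all field names and sample rows are hypothetical:

```python
# Hypothetical RoPA rows; lia_ref should point at a documented LIA
# whenever the lawful basis is legitimate interests.
ropa = [
    {"activity": "CV screening", "lawful_basis": "legitimate interests", "lia_ref": None},
    {"activity": "Payroll", "lawful_basis": "legal obligation", "lia_ref": None},
    {"activity": "Chat analytics", "lawful_basis": None, "lia_ref": None},
]

gaps = []
for row in ropa:
    if row["lawful_basis"] is None:
        gaps.append((row["activity"], "missing lawful basis"))
    elif row["lawful_basis"] == "legitimate interests" and not row["lia_ref"]:
        gaps.append((row["activity"], "missing LIA"))

print(gaps)  # [('CV screening', 'missing LIA'), ('Chat analytics', 'missing lawful basis')]
```

This is the kind of consistency rule a compliance tool would run on every review cycle rather than a one-off check.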

Both ISO/IEC 42001 and ISO/IEC 27001 follow the harmonized structure for management systems. To align ISO 42001 with ISO 27001 risk management, organizations can extend their existing ISMS risk frameworks to cover AI-specific risk criteria, such as fairness, human oversight, and AI transparency.

An existing enterprise risk management (ERM) process can satisfy this clause, provided it is adapted to formally establish criteria for AI-specific risks, explicitly cover the AI management system's scope, and systematically plan actions to address AI-related opportunities.

GDPR Article 6 compliance often fails when lawful-basis decisions live in emails or spreadsheets and drift away from actual processing. Tools like WatchDog Security's Compliance Center can centralize lawful-basis mappings as control evidence, flag missing documentation (e.g., no LIA when relying on legitimate interests), and support ongoing reviews through structured workflows.

LIAs require consistent documentation of purpose, necessity, and balancing tests, plus a clear approval trail for audit readiness. Tools like WatchDog Security's Risk Register can track each LIA as a risk decision with owners, review dates, and linked mitigations, while WatchDog Security's Policy Management can manage the underlying templates and capture approvals and attestations.

ISO-42001 Clause 6.1.1

"When planning for the AI management system, the organization shall consider the issues referred to in 4.1 and the requirements referred to in 4.2 and determine the risks and opportunities that need to be addressed to: — give assurance that the AI management system can achieve its intended result(s); — prevent or reduce undesired effects; — achieve continual improvement."

ISO-42001 Clause 6.1.1

"The organization shall establish and maintain AI risk criteria that support: — distinguishing acceptable from non-acceptable risks; — performing AI risk assessments; — conducting AI risk treatment; — assessing AI risk impacts."

Version | Date | Author | Description
1.0.0 | 2026-02-23 | WatchDog Security GRC Team | Initial publication