
Manage Nonconformities and Corrective Actions

Updated: 2026-02-23

Plain English Translation

ISO/IEC 42001 Clause 10.2 requires organizations to systematically manage nonconformities within their AI management system. When an issue is identified, organizations must react to control it, evaluate root causes to prevent recurrence, and implement a formal corrective action plan for AI governance. Additionally, the standard mandates reviewing the effectiveness of these actions and retaining documented information as evidence for audits.

Executive Takeaway

Organizations must establish a formal process to investigate, resolve, and prevent the recurrence of nonconformities within the AI management system.

Impact: High
Complexity: Medium

Why This Matters

  • Prevents recurring AI failures and compliance breaches by addressing root causes rather than just symptoms.
  • Provides a structured mechanism for continuous improvement, essential for maintaining ISO/IEC 42001 certification.

What “Good” Looks Like

  • A centralized nonconformity and corrective action tracker that documents root cause analyses, assigned owners, and resolution deadlines (tools like WatchDog Security's Compliance Center can help standardize tracking and evidence collection).
  • A formal review process to verify and measure the effectiveness of implemented corrective actions.
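A centralized tracker like the one described above can be sketched as a simple record type. The field names and status values below are illustrative assumptions, not terms mandated by ISO/IEC 42001:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class Nonconformity:
    """Hypothetical shape of one tracker entry; fields are illustrative."""
    identifier: str
    description: str
    root_cause: Optional[str] = None      # filled in after root cause analysis
    owner: Optional[str] = None           # person accountable for the fix
    due_date: Optional[date] = None       # resolution deadline
    evidence: list[str] = field(default_factory=list)  # audit evidence references
    status: str = "open"                  # open -> in_progress -> verified -> closed

    def is_audit_ready(self) -> bool:
        """A record marked closed needs a root cause, an owner, and evidence."""
        return self.status != "closed" or (
            self.root_cause is not None
            and self.owner is not None
            and len(self.evidence) > 0
        )
```

The point of the `is_audit_ready` check is that a record cannot be closed without the documented information auditors expect; a GRC platform typically enforces the same rule.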

ISO/IEC 42001 Clause 10.2 dictates that when a nonconformity occurs, the organization must react to control and correct it, deal with the consequences, and evaluate the need for action to eliminate root causes. It also requires implementing any necessary actions, reviewing their effectiveness, making changes to the AI management system if needed, and retaining documented information as evidence.

A nonconformity is defined as the non-fulfilment of a requirement within the AI management system. This can include failing to follow established AI policies, missing regulatory obligations, or when AI models perform outside of acceptable risk thresholds established in the AI risk treatment plan.

Organizations should use established methodologies like the 5 Whys or Ishikawa diagrams to perform root cause analysis for AI nonconformities. The goal is to evaluate the nonconformity, determine its underlying causes, and identify if similar nonconformities exist or could potentially occur elsewhere in the system.
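A 5 Whys analysis is just an iterative chain of cause statements. A minimal sketch, with an invented example chain for a biased-model nonconformity:

```python
def five_whys(symptom: str, answers: list[str]) -> str:
    """Walk a chain of 'why' answers and return the deepest cause reached.
    Conventionally the analysis stops after five iterations."""
    cause = symptom
    for answer in answers[:5]:
        cause = answer
    return cause

# Hypothetical chain: each answer responds to "why?" about the line above it.
root = five_whys(
    "Model served biased predictions",
    [
        "Training data over-represented one demographic",
        "Data sourcing had no representativeness check",
        "No data quality gate existed in the pipeline",
        "Data governance policy omitted AI training data",
    ],
)
# root is the final answer (the policy gap), not the surface symptom
```

The corrective action then targets the returned root cause (here, the policy gap) rather than the symptom, which is the distinction Clause 10.2 is driving at.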

To document corrective actions for ISO/IEC 42001 audits, organizations must provide documented information showing the nature of the nonconformities, the subsequent actions taken, and the results of any corrective action. Auditors typically expect a robust nonconformity and corrective action tracker. Tools like WatchDog Security's Compliance Center can help keep evidence attached to each nonconformity record and maintain a consistent audit trail across corrective action steps.

The standard does not specify an exact timeframe, but requires that corrective actions be appropriate to the effects of the nonconformities encountered. Severe issues posing high risks to safety, privacy, or fundamental rights require immediate action, while administrative nonconformities might have a longer resolution window.
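One way to make "appropriate to the effects" operational is a severity-to-deadline mapping. The tiers and day counts below are assumptions for illustration; the standard mandates proportionality, not these specific windows:

```python
from datetime import date, timedelta

# Illustrative severity tiers; an organization would calibrate its own.
RESOLUTION_WINDOWS = {
    "critical": timedelta(days=1),   # risk to safety, privacy, fundamental rights
    "major": timedelta(days=14),     # significant control or policy failure
    "minor": timedelta(days=60),     # administrative nonconformity
}

def corrective_action_due(severity: str, detected: date) -> date:
    """Derive a corrective-action deadline proportional to severity."""
    return detected + RESOLUTION_WINDOWS[severity]
```

Encoding the windows in one place also gives auditors a single artifact showing that resolution timelines are risk-ranked rather than ad hoc.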

To evaluate the effectiveness of corrective actions, organizations must review the implemented changes after an appropriate period to confirm that the root cause was eliminated and the issue has not recurred. This often involves re-auditing the affected process, reviewing updated AI performance metrics, or inspecting newly generated logs.
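The recurrence check at the heart of an effectiveness review can be sketched as a query over later nonconformity records. The record schema (`root_cause` and `detected` keys) is a hypothetical simplification:

```python
from datetime import date

def action_effective(root_cause: str,
                     closed_on: date,
                     review_on: date,
                     later_records: list[dict]) -> bool:
    """True if no nonconformity with the same root cause was detected
    between closure and the effectiveness review."""
    if review_on <= closed_on:
        raise ValueError("review must happen after closure")
    return not any(
        r["root_cause"] == root_cause and closed_on < r["detected"] <= review_on
        for r in later_records
    )
```

A failed check would reopen the corrective action, which is exactly the Clause 10.2 loop: implement, review effectiveness, and change the AI management system if needed.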

A correction is an immediate action taken to control and fix a detected nonconformity. A corrective action goes deeper to eliminate the root cause to prevent recurrence. Preventive action refers to proactive measures taken to eliminate the causes of potential nonconformities before they occur, which is now generally covered under the risk management processes in ISO/IEC 42001 rather than a dedicated clause.

Corrective actions for AI incidents or model drift should follow a structured CAPA process for AI management systems. Once the immediate incident is contained, teams must investigate why the monitoring failure or drift occurred, update training data or algorithms, revise the AI system impact assessment, and monitor the retrained model to verify the effectiveness of the fix.
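The CAPA sequence described above can be modeled as an ordered set of stages. The stage names paraphrase the steps in the text and are not standard terminology:

```python
# Ordered CAPA stages for an AI drift incident, as a simple state machine.
CAPA_STAGES = [
    "contain_incident",        # correction: stop or roll back the drifting model
    "investigate_root_cause",  # why did monitoring miss the drift?
    "update_data_or_model",    # fix training data, retrain, or patch the pipeline
    "revise_impact_assessment",
    "verify_effectiveness",    # monitor the retrained model over time
]

def advance(stage: str) -> str:
    """Return the next CAPA stage; verification is the terminal stage."""
    idx = CAPA_STAGES.index(stage)
    return CAPA_STAGES[min(idx + 1, len(CAPA_STAGES) - 1)]
```

Keeping verification as the terminal stage reflects that a CAPA record should not close until the fix is shown to work, not merely implemented.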

Organizations should maintain a centralized nonconformity management process utilizing shared tracking tools or GRC platforms. This ensures visibility across data science, engineering, and compliance teams, and allows for the assignment of specific corrective action responsibilities to relevant third-party suppliers when their components cause nonconformities.

Examples of AI nonconformities and corrective actions include discovering unaddressed bias in training data, requiring automated data quality screening as a corrective action, or failing to conduct a required AI system impact assessment, which necessitates retraining staff and updating mandatory approval workflows to prevent recurrence.

Managing nonconformities at scale often breaks down when ownership, due dates, and evidence are scattered across tools. Tools like WatchDog Security's Compliance Center can centralize nonconformity records, map them to ISO/IEC 42001 Clause 10.2 expectations, and streamline evidence collection so audit-ready documentation is consistent and easy to retrieve.

Corrective actions frequently fail when action items are not risk-ranked, assigned, or verified for effectiveness after implementation. Tools like WatchDog Security's Risk Register can help track corrective actions with owners and target dates, link each action to the underlying risk and root cause, and support follow-up reviews that document whether the issue recurred or was effectively eliminated.

ISO/IEC 42001 Clause 10.2

"When a nonconformity occurs, the organization shall: a) react to the nonconformity and as applicable: 1) take action to control and correct it; 2) deal with the consequences; b) evaluate the need for action to eliminate the cause(s) of the nonconformity, so that it does not recur or occur elsewhere, by: 1) reviewing the nonconformity; 2) determining the causes of the nonconformity; 3) determining if similar nonconformities exist or can potentially occur; c) implement any action needed; d) review the effectiveness of any corrective action taken; e) make changes to the AI management system, if necessary. Corrective actions shall be appropriate to the effects of the nonconformities encountered. Documented information shall be available as evidence of: — the nature of the nonconformities and any subsequent actions taken; — the results of any corrective action."

Version | Date | Author | Description
1.0.0 | 2026-02-23 | WatchDog Security GRC Team | Initial publication