# Execute AI Risk Assessments
## Plain English Translation
Clause 8.2 of ISO/IEC 42001 requires organizations to actively perform AI risk assessments based on the methodology established during the system's planning phase. To maintain compliance, an AI risk assessment must be conducted at planned intervals or whenever significant changes to the AI system or its environment occur. Organizations must also retain documented information of these assessments to serve as audit evidence and to inform ongoing AI risk management activities.
## Technical Implementation
The required actions below are grouped by organization size.
### Required Actions (Startup)
- Perform basic risk assessments prior to launching new AI models or features.
- Maintain a simple spreadsheet or document tracking identified AI risks and mitigation plans.
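Even at startup scale, a structured record beats a free-form spreadsheet because every risk carries the same fields. A minimal sketch in Python; the field names and example risks are illustrative assumptions, not anything prescribed by the standard:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIRisk:
    """One row of a lightweight AI risk log."""
    risk_id: str
    description: str
    mitigation: str
    owner: str
    status: str = "open"  # open | mitigated | accepted
    identified_on: date = field(default_factory=date.today)

# Example entries for a pre-launch review of a new model
risk_log = [
    AIRisk("R-001", "Training data may under-represent non-English users",
           "Add language-coverage check to the data pipeline", "data-lead"),
    AIRisk("R-002", "Prompt injection via user-supplied text",
           "Input sanitization and output filtering", "security-lead"),
]

open_risks = [r.risk_id for r in risk_log if r.status == "open"]
print(open_risks)  # ['R-001', 'R-002']
```

A log like this can later grow into the formal risk register described below without re-entering data.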
### Required Actions (Scaleup)
- Standardize the AI risk assessment methodology with defined scoring for impact and likelihood.
- Establish formal triggers for reassessment, such as changes in model architecture, training data shifts, or new deployment contexts.
Clause 8.2 requires organizations to perform AI risk assessments using the process defined in Clause 6.1.2. These assessments must occur at planned intervals or when significant changes happen, and the results must be retained as documented information.
Assessments must be performed at planned intervals that the organization defines, as well as on an ad-hoc basis whenever significant changes are proposed or occur in the AI system or its operating environment.
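The planned-interval side of this requirement reduces to a due-date check that can be automated. A small sketch, assuming an annual cadence; the interval is an illustrative choice, since the standard leaves the frequency to the organization:

```python
from datetime import date, timedelta

# Assumed annual cadence; the organization defines its own interval.
ASSESSMENT_INTERVAL = timedelta(days=365)

def next_due(last_assessed: date) -> date:
    """Planned-interval due date for the next AI risk assessment."""
    return last_assessed + ASSESSMENT_INTERVAL

def is_overdue(last_assessed: date, today: date) -> bool:
    """True when the planned interval has elapsed without a reassessment."""
    return today > next_due(last_assessed)

print(is_overdue(date(2025, 1, 10), date(2026, 3, 1)))  # True
```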
Significant changes include modifications to the AI system's intended purpose, updates to the core algorithm, the introduction of substantially different training data, or changes in the deployment environment that could introduce new safety, privacy, or security risks.
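The change categories above can be encoded as an explicit trigger list so the ad-hoc reassessment decision is not left to individual judgment. A sketch; the category labels and the change-record shape are assumptions for illustration:

```python
# Change categories (from the definition above) that trigger an
# ad-hoc AI risk reassessment.
SIGNIFICANT_CHANGE_TYPES = {
    "intended_purpose",        # modified system purpose
    "core_algorithm",          # core algorithm update
    "training_data",           # substantially different training data
    "deployment_environment",  # new safety/privacy/security exposure
}

def requires_reassessment(change: dict) -> bool:
    """Return True if a proposed change must trigger an AI risk assessment."""
    return change.get("type") in SIGNIFICANT_CHANGE_TYPES

print(requires_reassessment({"type": "training_data"}))   # True
print(requires_reassessment({"type": "ui_copy_update"}))  # False
```

Wiring a check like this into a change-management workflow makes the "when significant changes are proposed or occur" condition enforceable rather than aspirational.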
The assessment should evaluate risks aligned with organizational objectives, which typically include security vulnerabilities (like data poisoning), privacy impacts, fairness (unwanted bias), safety, transparency, and operational risks like model drift or performance degradation.
Organizations must retain documented information of the results, typically in the form of risk assessment reports, an updated risk register, and meeting minutes from risk review boards showing how risks were analyzed and prioritized.
You can align and integrate AI risk assessments with existing enterprise or ISO 27001 risk management processes, provided the integrated methodology explicitly accounts for AI-specific risk sources, such as machine learning data quality and autonomous decision-making.
Effective AI risk assessments require cross-functional collaboration. Participants should include data scientists to understand model mechanics, security teams (CISO) for threat modeling, compliance/privacy officers for legal constraints, and domain experts to evaluate societal or business impacts.
Organizations must evaluate the risks of external dependencies by assessing the vendor's data practices, model robustness, and transparency. This often involves reviewing vendor security assessments, SOC 2 reports, or requiring specific contractual guarantees regarding AI performance and bias.
The standard does not prescribe a specific scoring framework. Organizations are free to use quantitative or qualitative matrices, as long as the methodology produces consistent, valid, and comparable results aligned with the risk criteria defined by top management.
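Because the standard leaves the framework open, a simple likelihood × impact matrix is a common choice; what matters for comparability is that the same inputs always produce the same rating. A sketch, where the 1-5 scales and banding thresholds are assumed values, not anything the standard mandates:

```python
def risk_score(likelihood: int, impact: int) -> int:
    """Multiply 1-5 likelihood and impact ratings into a 1-25 risk score."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be on a 1-5 scale")
    return likelihood * impact

def risk_level(score: int) -> str:
    """Map a score onto fixed bands so results are comparable across assessments."""
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

print(risk_level(risk_score(4, 4)))  # high
print(risk_level(risk_score(2, 3)))  # low
```

Input validation matters here: silently accepting out-of-range scores is exactly the kind of inconsistency that undermines comparability across assessors.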
Assessments are kept current by implementing continuous monitoring processes that track AI performance and system health. When monitoring thresholds are breached, or during scheduled periodic reviews, the findings feed back into a new iteration of the AI risk assessment.
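The feedback loop from monitoring to reassessment can be made concrete with a threshold check over tracked metrics. A sketch, where the metric names and threshold values are illustrative assumptions:

```python
# Illustrative monitoring thresholds; breaching any one of them triggers a
# new iteration of the AI risk assessment.
THRESHOLDS = {
    "accuracy_drop_pct": 5.0,  # max tolerated drop vs. baseline
    "drift_score": 0.3,        # max tolerated distribution-drift score
    "error_rate_pct": 2.0,     # max tolerated serving error rate
}

def breached_thresholds(metrics: dict) -> list[str]:
    """Return the names of all metrics that exceed their threshold."""
    return [name for name, limit in THRESHOLDS.items()
            if metrics.get(name, 0.0) > limit]

current = {"accuracy_drop_pct": 6.2, "drift_score": 0.1, "error_rate_pct": 1.0}
triggers = breached_thresholds(current)
if triggers:
    print(f"Reassessment triggered by: {triggers}")  # ['accuracy_drop_pct']
```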
Clause 8.2 expects risk assessments to happen at planned intervals and when significant changes occur, which can be hard to coordinate across teams and models. Tools like WatchDog Security's Compliance Center can help by mapping the assessment workflow to ISO/IEC 42001 requirements, tracking due dates, and organizing audit-ready evidence (e.g., risk assessment reports and risk register updates) in one place.
Auditors typically look for consistent scoring, clear ownership, and traceable treatment decisions across all identified AI risks. Tools like WatchDog Security's Risk Register can support this by standardizing likelihood/impact scoring, assigning owners, tracking mitigation status over time, and producing board-level views that show how AI risks are assessed and treated at planned intervals.
> "The organization shall perform AI risk assessments in accordance with 6.1.2 at planned intervals or when significant changes are proposed or occur. The organization shall retain documented information of the results of all AI risk assessments."
| Version | Date | Author | Description |
|---|---|---|---|
| 1.0.0 | 2026-02-23 | WatchDog Security GRC Team | Initial publication |