System documentation and information for users
Plain English Translation
ISO 42001 Annex A.8.2 requires organizations to determine and provide necessary information to users of an AI system. This means offering clear AI system documentation and user guidance that explains the system's intended purpose, how it works, its accuracy, and its limitations. By transparently documenting this information, organizations ensure users can safely operate the system, perform necessary human oversight, and avoid foreseeable misuse.
Technical Implementation
The required actions below are grouped by organization size (startup, scaleup, enterprise).
Required Actions (startup)
- Draft basic user manuals outlining the intended purpose, scope, and limitations of the AI system.
- Include clear notifications to users when they are interacting with an AI system.
Required Actions (scaleup)
- Implement standardized AI model cards detailing performance metrics, accuracy, and uncertainty.
- Develop user-facing documentation for human oversight, including how and when to override the system.
Required Actions (enterprise)
- Maintain role-specific documentation portals for operators, administrators, and end-users.
- Automate the communication of model updates, version changes, and release notes to all relevant stakeholders.
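The startup-tier action of notifying users that they are interacting with an AI system can be sketched as a simple disclosure wrapper. This is a minimal illustration, assuming a chat-style interface; the notice wording, constant, and function name are examples, not prescribed text.

```python
# Illustrative AI-interaction disclosure for a chat-style interface.
# The wording is an example; tailor it to your system and audience.
AI_DISCLOSURE = "You are interacting with an AI system; outputs may be inaccurate."

def open_conversation(greeting: str) -> str:
    """Prepend a clear AI-interaction notice to the first message a user sees."""
    return f"{AI_DISCLOSURE}\n\n{greeting}"

print(open_conversation("Hello! How can I help you today?"))
```

In practice the notice would be rendered persistently in the UI rather than inlined into one message, but the principle is the same: disclosure happens before the first AI output, not after.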
ISO/IEC 42001 Annex A.8.2 requires organizations to determine and provide the necessary information to users of the AI system. This includes ensuring users understand they are interacting with AI, its intended purpose, and how to use it safely.
User documentation should cover the system's intended purpose, how to interact with it, technical requirements, performance and accuracy, and known limitations. It should also cover potential benefits and harms, particularly those identified during the AI impact assessment.
The level of detail should match the complexity of the AI system, the expertise of its users, and the system's specific impacts. Documentation must clearly explain how to override the system, what human oversight is required, and which foreseeable misuses to avoid.
While ISO 42001 does not explicitly mandate a specific format like a model card, adopting an AI model card template for compliance is a widely accepted best practice. It effectively consolidates system performance, limitations, and intended use into an accessible format for users.
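A model card can be represented as a small structured record that renders to user-facing documentation. The sketch below is illustrative only: the field names, example model, and metrics are assumptions, not a format mandated by ISO/IEC 42001.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Illustrative model card structure; fields are examples, not a standard."""
    model_name: str
    version: str
    intended_use: str
    performance: dict[str, float] = field(default_factory=dict)
    known_limitations: list[str] = field(default_factory=list)

    def to_markdown(self) -> str:
        """Render the card as user-facing markdown documentation."""
        lines = [
            f"# Model Card: {self.model_name} (v{self.version})",
            f"**Intended use:** {self.intended_use}",
        ]
        if self.performance:
            metrics = ", ".join(f"{k} = {v:.2%}" for k, v in self.performance.items())
            lines.append(f"**Performance:** {metrics}")
        if self.known_limitations:
            lines.append("**Known limitations:**")
            lines.extend(f"- {item}" for item in self.known_limitations)
        return "\n".join(lines)

# Hypothetical example system and figures, for illustration only.
card = ModelCard(
    model_name="support-triage",
    version="1.4.0",
    intended_use="Route inbound support tickets to the correct queue.",
    performance={"accuracy": 0.91},
    known_limitations=["Not validated for non-English tickets."],
)
print(card.to_markdown())
```

Keeping the card as structured data (rather than a hand-edited document) makes it easier to regenerate documentation whenever metrics or limitations change.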
Organizations should provide clear metrics on accuracy, acceptable error rates, and performance limitations within the user documentation. This transparency helps users understand the reliability of the system's outputs and when to apply human judgment.
Documentation must specify the required human oversight capabilities, processes, and tools. It should clearly state how and when humans can intervene or override AI decisions to prevent unintended harms or misuse.
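One common oversight pattern worth documenting is a confidence-based escalation gate, where low-confidence outputs are routed to a human instead of being applied automatically. The sketch below is a hypothetical example of such a gate; the function name, threshold, and return structure are assumptions, not part of the standard.

```python
def route_decision(prediction: str, confidence: float, threshold: float = 0.8) -> dict:
    """Hypothetical human-oversight gate: outputs below the confidence
    threshold are escalated for human review rather than auto-applied."""
    if confidence < threshold:
        return {
            "action": "escalate_to_human",
            "prediction": prediction,
            "reason": f"confidence {confidence:.2f} below threshold {threshold:.2f}",
        }
    return {"action": "auto_apply", "prediction": prediction}
```

If documentation states that decisions below a given confidence are always human-reviewed, a gate like this is the mechanism that claim refers to, and the threshold itself belongs in the user-facing limitations section.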
Organizations should maintain procedures for communicating updates and changes in how the system works. This includes providing release notes, updated educational materials, and revised claims about the system's benefits or performance to all affected users.
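One way to keep such communications honest is to diff the documented claims between versions and notify users of every field that changed. This is a minimal sketch under that assumption; the function name and the flat-dictionary representation of documented claims are illustrative.

```python
def change_notice(old: dict, new: dict) -> list[str]:
    """Illustrative sketch: compare two versions of documented claims
    (e.g. accuracy, scope, limitations) and list every changed field
    that affected users should be re-notified about."""
    notes = []
    for key in sorted(set(old) | set(new)):
        if old.get(key) != new.get(key):
            notes.append(f"{key}: {old.get(key)!r} -> {new.get(key)!r}")
    return notes
```

Feeding the output into release notes ensures that a silently revised accuracy figure or narrowed scope cannot ship without a corresponding user notice.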
Auditors will look for published user manuals, model cards, release notes, and integrated UI warnings as evidence. They will check that the criteria for determining what information is provided are documented and consider the expertise of the user and potential system impacts. Tools like WatchDog Security's Compliance Center can help map Annex A.8.2 to evidence requests and keep supporting documentation organized for review.
AI system transparency documentation must be accessible and appropriate for its target audience. Technical details may be reserved for system administrators, while general users receive simplified interaction instructions, key limitations, and accessibility information.
Annex A.8.2 directly supports the transparency requirements found in modern AI and privacy regulations by ensuring users are informed when interacting with AI and understand its logic and limits. Fulfilling this control helps demonstrate compliance with broader legal obligations regarding automated decision-making.
User documentation often spans model cards, user manuals, release notes, and acceptable-use guidance, and it can drift out of sync as systems change. Tools like WatchDog Security's Policy Management can centralize these documents with version control and acceptance tracking so updates are reviewed, approved, and acknowledged by relevant user groups.
Beyond the documents themselves, auditors typically expect a controlled update process and proof that the right audiences received key notices (e.g., limitations, warnings, changes), so retain distribution and acknowledgment records alongside the artifacts.
| Version | Date | Author | Description |
|---|---|---|---|
| 1.0.0 | 2026-02-23 | WatchDog Security GRC Team | Initial publication |