AI system requirements and specification
Plain English Translation
Organizations must clearly define and document the requirements for any new artificial intelligence system or when making significant enhancements to an existing one. This AI requirements specification acts as a foundational blueprint, detailing the business rationale, operational goals, data needs, and system boundaries. Maintaining robust AI system requirements documentation ensures all development aligns with strategic objectives and provides a benchmark to test the system against before deployment.
Technical Implementation
The required actions below are grouped by organization size.
Required Actions (startup)
- Create a basic specification document detailing the system's purpose, intended users, and primary data sources.
- Ensure the rationale for building the AI is documented and approved before coding begins.
Required Actions (scaleup)
- Adopt a formal AI system specification document template incorporating functional, security, and performance metrics.
- Implement a change request process to document material enhancements to existing models.
Required Actions (enterprise)
- Integrate an AI requirements traceability matrix within enterprise product management tools to link specs to CI/CD tests.
- Mandate thorough documentation of data constraints, bias thresholds, and hardware requirements across the entire AI lifecycle.
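The traceability action above can be sketched as a simple CI gate. This is a minimal illustration, not a prescribed implementation: the requirement IDs, test names, and the mapping format are all invented for the example.

```python
# Map of spec requirement IDs to the CI test cases that verify them.
# All IDs and test names below are hypothetical examples.
requirement_tests = {
    "REQ-001": ["test_input_validation"],
    "REQ-002": ["test_latency_p95", "test_latency_p99"],
    "REQ-003": [],  # no linked test yet: should fail the gate
}

def untested_requirements(mapping: dict[str, list[str]]) -> list[str]:
    """Return requirement IDs that have no linked CI test."""
    return [req for req, tests in mapping.items() if not tests]

def ci_gate(mapping: dict[str, list[str]]) -> bool:
    """Pass only when every requirement has at least one linked test."""
    return not untested_requirements(mapping)

print(untested_requirements(requirement_tests))  # lists untested requirements
```

In practice the mapping would be generated from the product management tool's export rather than hand-maintained, but the gate logic stays the same.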
ISO/IEC 42001:2023 A.6.2.2 AI system requirements and specification requires organizations to explicitly define and document the requirements for developing new AI systems or materially enhancing existing ones. This includes documenting the business case, training methodology, data requirements, and life cycle boundaries.
When documenting AI system requirements, start with the rationale (e.g., a business case or customer request), describe the intended use, and detail how the model will be trained. The specification should cover the entire life cycle and include specific parameters for data acquisition and operational constraints.
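As a minimal sketch of the elements listed above, the specification can be captured as a structured record with an automated completeness check. The field names and example values here are illustrative only; the standard does not mandate a particular schema.

```python
from dataclasses import dataclass, fields

@dataclass
class AISystemSpec:
    """Minimal AI system specification record (illustrative fields only)."""
    rationale: str                # business case or customer request
    intended_use: str             # approved operational context and users
    training_methodology: str     # how the model will be trained
    data_acquisition: str         # data sources and conditioning parameters
    operational_constraints: str  # e.g. latency, hardware, region limits
    lifecycle_coverage: str       # life cycle stages the spec applies to

def missing_fields(spec: AISystemSpec) -> list[str]:
    """Return names of empty fields, usable as a pre-approval gate."""
    return [f.name for f in fields(spec) if not getattr(spec, f.name).strip()]

draft = AISystemSpec(
    rationale="Customer request: automate claims triage",
    intended_use="Internal claims handlers, EU region only",
    training_methodology="",  # left blank: the check should flag it
    data_acquisition="Historical claims 2019-2024, anonymized",
    operational_constraints="p95 latency under 500 ms",
    lifecycle_coverage="Design through decommissioning",
)
print(missing_fields(draft))  # flags the unfinished section
```

A check like this can run before the approval step in L8-style workflows, so a specification cannot be signed off with sections left blank.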
Acceptable ISO 42001 evidence for AI system lifecycle requirements includes approved AI system specification documents, business cases, change request logs for system enhancements, and documentation showing clear requirements for data, functionality, and risk mitigation.
Functional and non-functional requirements for AI systems define what the system does and how well it performs. Functional requirements cover inputs, outputs, and specific tasks, while non-functional requirements dictate scalability, security, latency, and requirements for AI model monitoring and performance thresholds.
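To make the distinction concrete, a non-functional requirement can be expressed as an automated threshold check against observed behavior. The latency threshold and sample values below are invented for illustration.

```python
def p95_latency_ms(samples_ms: list[float]) -> float:
    """95th-percentile latency from a window of observed request samples."""
    ordered = sorted(samples_ms)
    index = max(0, int(round(0.95 * len(ordered))) - 1)
    return ordered[index]

# Non-functional requirement (illustrative): p95 latency must stay below 500 ms.
LATENCY_THRESHOLD_MS = 500.0

def meets_latency_requirement(samples_ms: list[float]) -> bool:
    """True when the observed p95 latency satisfies the stated threshold."""
    return p95_latency_ms(samples_ms) < LATENCY_THRESHOLD_MS

observed = [120.0, 180.0, 210.0, 450.0, 760.0] * 4  # one monitoring window
print(p95_latency_ms(observed), meets_latency_requirement(observed))
```

The same pattern extends to other non-functional requirements named above, such as scalability or model performance thresholds: state the requirement as a number, then monitor against it.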
To establish how to define intended use and limitations for an AI system, clearly articulate the specific operational context and target demographic the system is approved for. Explicitly document known edge cases, hardware constraints, and scenarios where the AI should not be applied.
ISO 42001 Annex A.6.2.2 requirements apply equally to material enhancements. Any significant update to an existing AI system must undergo the same specification process as a new build, ensuring that the updated requirements and rationale are documented and formally approved.
Organizations should use an AI requirements traceability matrix to connect each stated requirement to its corresponding risk assessment entry, implemented security control, and final validation test. This ensures all specifications are verified and no compliance gaps exist. Tools like WatchDog Security's Risk Register can help maintain requirement-to-risk linkages and track treatment decisions, while WatchDog Security's Compliance Center can help organize supporting evidence for audits.
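As a minimal sketch of the linkage described above, a traceability matrix can be scanned for compliance gaps. The row fields and IDs are hypothetical and do not represent any particular tool's export format.

```python
# Each row links a requirement to its risk entry, control, and validation
# test. IDs are illustrative; None marks a missing linkage.
matrix = [
    {"req": "REQ-001", "risk": "RISK-010", "control": "CTL-004", "test": "TST-101"},
    {"req": "REQ-002", "risk": "RISK-011", "control": None,      "test": "TST-102"},
    {"req": "REQ-003", "risk": None,       "control": "CTL-007", "test": None},
]

def traceability_gaps(rows: list[dict]) -> dict[str, list[str]]:
    """Map each requirement ID to the linkage fields it is missing."""
    gaps = {}
    for row in rows:
        missing = [field for field in ("risk", "control", "test")
                   if row[field] is None]
        if missing:
            gaps[row["req"]] = missing
    return gaps

print(traceability_gaps(matrix))  # each entry is a gap to close before sign-off
```

Run on a real export, an empty result is the evidence an auditor is looking for: every stated requirement traces to a risk entry, an implemented control, and a validation test.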
Security and privacy requirements for AI systems must outline secure data handling practices, role-based access control, privacy-by-design elements, and specific data governance rules detailing how training data is acquired, conditioned, and eventually disposed of.
AI system requirements should span the entire AI system life cycle. They must be revisited and updated whenever the deployed AI system fails to operate as intended, or when new information, technologies, or regulations emerge that require the system's specifications to be improved.
Common nonconformities include failing to document the initial business case, lacking an AI system specification document template leading to inconsistent documentation, and failing to update the system requirements when material enhancements are made in production.
Teams often struggle to keep AI requirements consistent across product, engineering, security, and compliance. Tools like WatchDog Security's Policy Management can standardize an AI system specification template, track approvals/version history, and capture attestations so the requirements stay controlled and audit-ready as the system evolves.
Auditors typically look for clear linkage between stated requirements, identified risks, and implemented controls or tests. Tools like WatchDog Security's Compliance Center and WatchDog Security's Risk Register can help map requirements to control evidence and risk treatments, keeping a single place to show how specifications were reviewed, validated, and updated after material changes.
| Version | Date | Author | Description |
|---|---|---|---|
| 1.0.0 | 2026-02-23 | WatchDog Security GRC Team | Initial publication |