# Automated Decision-Making and Profiling Handling
## Plain English Translation
GDPR Article 22 protects individuals from being subject to decisions based solely on automated processing, including profiling, if those decisions produce legal or similarly significant effects. Organizations can only use automated decision-making when it is necessary for a contract, authorized by law, or based on explicit consent. When using these systems, organizations must implement safeguards, including the right for data subjects to obtain human intervention, express their point of view, and contest the automated decision.
## Technical Implementation
The required actions below are grouped by organization size: startup, scaleup, and enterprise.
### Required Actions (startup)
- Map existing automated decision processes to understand where user data is used.
- Provide simple manual fallback options for contested decisions.
### Required Actions (scaleup)
- Implement systematic flagging of automated decision-making processes.
- Build self-serve portals allowing users to easily request human review and express their views.
### Required Actions (enterprise)
- Integrate automated decision tracking with comprehensive DPIAs.
- Regularly audit algorithmic decision logic for fairness and legal compliance.
- Maintain detailed logs of contested decisions and human intervention outcomes.
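The enterprise logging requirement above can be sketched as a structured audit record. This is a minimal illustration, not a prescribed format: the class and field names (`ContestedDecisionRecord`, `subject_id`, `outcome`, and so on) are assumptions chosen for this example, since GDPR Article 22 mandates the safeguards but not any particular schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ContestedDecisionRecord:
    """One audit-log entry for a contested automated decision.

    Field names are illustrative, not mandated by GDPR Article 22.
    """
    decision_id: str                     # identifier of the original automated decision
    subject_id: str                      # pseudonymous reference to the data subject
    contested_at: datetime               # when the data subject contested the decision
    subject_statement: str               # the data subject's expressed point of view
    reviewer_id: Optional[str] = None    # human reviewer assigned to the case
    reviewed_at: Optional[datetime] = None
    outcome: Optional[str] = None        # e.g. "upheld", "overturned", "escalated"

    def is_resolved(self) -> bool:
        """A case is resolved only once a human outcome has been recorded."""
        return self.outcome is not None

# Example usage: a new contest starts unresolved until a human logs an outcome.
record = ContestedDecisionRecord(
    decision_id="dec-001",
    subject_id="subj-42",
    contested_at=datetime.now(timezone.utc),
    subject_statement="My income data was out of date.",
)
```

Keeping the reviewer identity and timestamps on the same record makes it straightforward to later demonstrate, per case, that human intervention actually occurred.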
GDPR Article 22 restricts automated decision-making and profiling. It applies when an organization makes a decision based solely on automated processing that produces legal or similarly significant effects on the data subject.
A decision is solely automated if there is no meaningful human involvement in the decision-making process. If a human merely rubber-stamps an algorithmic output without independent assessment, it still counts as a solely automated decision under GDPR rules.
A legal effect impacts a person's legal status or rights, such as cancellation of a contract. Similarly significant effects include actions that significantly influence a person's circumstances, behavior, or opportunities, such as automatic refusal of an online credit application or e-recruiting practices.
Automated decisions are permitted if they are necessary for entering into or performing a contract, explicitly authorized by EU or Member State law, or based on the data subject's explicit consent.
Where automated decision-making is based on a contract or on explicit consent, GDPR Article 22(3) requires the organization to implement safeguards that include, at a minimum, the right to obtain human intervention, the right to express one's point of view, and the right to contest the decision.
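The lawful bases and the safeguard trigger described above can be summarized in a small helper. This is a sketch: the enum labels and function names are assumptions for illustration, but the logic mirrors Article 22(2) and 22(3) as quoted later in this document.

```python
from enum import Enum
from typing import Optional

class LawfulBasis(Enum):
    """The three exceptions in GDPR Article 22(2); labels are illustrative."""
    CONTRACT_NECESSITY = "necessary for entering into or performing a contract"
    AUTHORISED_BY_LAW = "explicitly authorised by EU or Member State law"
    EXPLICIT_CONSENT = "based on the data subject's explicit consent"

def may_automate(basis: Optional[LawfulBasis]) -> bool:
    """A solely automated decision with legal or similarly significant
    effects is permitted only when one of the Article 22(2) bases applies."""
    return basis is not None

def requires_article_22_3_safeguards(basis: LawfulBasis) -> bool:
    """Article 22(3): the contract and explicit-consent bases require
    suitable safeguards, including the right to human intervention."""
    return basis in (LawfulBasis.CONTRACT_NECESSITY, LawfulBasis.EXPLICIT_CONSENT)
```

Note the asymmetry: when the basis is authorization by Union or Member State law, the safeguards are set by that law rather than by Article 22(3) directly.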
Meaningful human review requires the human reviewer to have the authority and capability to change the decision. The reviewer must carefully consider all relevant data, including the data subject's provided context, rather than blindly applying the automated recommendation.
Organizations must provide a clear and accessible mechanism for users to contest an automated decision. The process must trigger a prompt review by an authorized human who evaluates the original logic and any new information provided by the data subject. Tools like WatchDog Security's Compliance Center can help teams track these procedural requirements and collect supporting evidence (e.g., timestamps, reviewer assignment, and outcomes) for audit readiness.
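The contest-and-review workflow described above can be sketched as a minimal queue. All names here (`ReviewQueue`, `contest`, `assign`, `resolve`) are hypothetical and chosen for this illustration; the point is the invariant the text demands: a contested decision cannot be resolved until a human reviewer has been assigned, and every step leaves a timestamp for audit evidence.

```python
from datetime import datetime, timezone

class ReviewQueue:
    """Minimal sketch of a contest-handling workflow for automated decisions."""

    def __init__(self):
        self.cases = {}

    def contest(self, decision_id, subject_statement):
        """The data subject contests a decision and expresses their point of view."""
        self.cases[decision_id] = {
            "statement": subject_statement,
            "contested_at": datetime.now(timezone.utc),
            "reviewer": None,
            "outcome": None,
        }

    def assign(self, decision_id, reviewer_id):
        """An authorized human reviewer is assigned to the case."""
        self.cases[decision_id]["reviewer"] = reviewer_id
        self.cases[decision_id]["assigned_at"] = datetime.now(timezone.utc)

    def resolve(self, decision_id, outcome):
        """Record the human reviewer's outcome; refuse resolution without a reviewer."""
        case = self.cases[decision_id]
        if case["reviewer"] is None:
            raise RuntimeError("a human reviewer must be assigned before resolution")
        case["outcome"] = outcome
        case["resolved_at"] = datetime.now(timezone.utc)
```

Enforcing the reviewer precondition in code, rather than in policy alone, is one way to generate the procedural evidence (timestamps, reviewer assignment, outcomes) the text mentions.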
Organizations must provide meaningful information about the logic involved in the automated decision-making process. They must also explain the significance and the envisaged consequences of such processing for the data subject.
Special category data can only be used if the data subject has given explicit consent or if processing is necessary for substantial public interest. Additional safeguards must be strictly enforced to protect the data subject's rights and freedoms.
To document compliance, organizations should maintain a detailed Data Protection Impact Assessment (DPIA) for AI-driven decisions and a comprehensive data subject request log. Audit evidence includes demonstrating that clear explicit consent was obtained and that human review processes are active. Tools like WatchDog Security's Compliance Center can help centralize DPIA records and evidence collection, and tools like WatchDog Security's Risk Register can help document decision-related risks, assigned owners, and treatment plans over time.
GDPR Article 22 requires a clear path for individuals to request human intervention and contest automated decisions. Tools like WatchDog Security's Compliance Center can help track control requirements, centralize evidence (e.g., DPIAs and decision rationale), and flag gaps where a human-in-the-loop process or user-facing mechanism is missing.
Automated decision-making often triggers DPIA expectations and ongoing risk management for fairness, explainability, and user rights. Tools like WatchDog Security's Risk Register can capture risks tied to profiling systems, document treatments and owners, and support board-level reporting while linking the risks back to the GDPR Article 22 control and supporting artifacts.
"1. The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her. 2. Paragraph 1 shall not apply if the decision: (a) is necessary for entering into, or performance of, a contract between the data subject and a data controller; (b) is authorised by Union or Member State law... or (c) is based on the data subject's explicit consent. 3. In the cases referred to in points (a) and (c) of paragraph 2, the data controller shall implement suitable measures to safeguard the data subject's rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision."
| Version | Date | Author | Description |
|---|---|---|---|
| 1.0.0 | 2026-02-23 | WatchDog Security GRC Team | Initial publication |