Article: 14.4(b) Official Requirement
Human oversight measures shall enable the individuals to whom human oversight is assigned to override, or not to use, the output produced by the high-risk AI system.

How Katyar Addresses This Requirement
Katyar delivers a traceable, auditable override mechanism through its Human-in-the-Loop (HITL) approval workflow. The control is considered satisfied when real override behavior has been demonstrated in production or testing data.

Evaluation Criteria
Katyar considers the control satisfied when:
- At least one human-in-the-loop approval request has been explicitly denied (overridden) in the last 30 days.
Evidence captured for each denial includes:
- Number of HITL events with status = 'denied'
- Associated policy ID(s) that triggered the approval request
- Reason/comment provided by the human approver (mandatory field)
- Approver identity and exact timestamp of the denial
- Original AI-suggested action vs. final overridden outcome
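The evidence fields above can be sketched as a single record type. This is an illustrative shape only, not Katyar's actual schema; all field and function names here are assumptions:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical shape of one HITL denial record; field names are
# illustrative, not Katyar's documented export format.
@dataclass
class DenialRecord:
    event_id: str
    policy_ids: list       # policy ID(s) that triggered the approval request
    status: str            # 'denied' for an override
    reason: str            # mandatory justification from the human approver
    approver: str          # approver identity
    timestamp: str         # ISO-8601, millisecond precision
    suggested_action: dict # original AI-suggested action
    final_outcome: str     # e.g. 'blocked' -- the tool call never executed

def make_denial(event_id, policy_ids, reason, approver, suggested_action):
    # The justification comment is a mandatory field for every denial.
    if not reason:
        raise ValueError("justification comment is mandatory for every denial")
    ts = datetime.now(timezone.utc).isoformat(timespec="milliseconds")
    return DenialRecord(event_id, policy_ids, "denied", reason,
                        approver, ts, suggested_action, "blocked")
```

Enforcing the justification at record-creation time mirrors the "mandatory field" requirement: a denial without a reason is rejected rather than logged incomplete.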
The override mechanism provides:
- Clear Deny Action — Prominent “Deny” button in Slack, Teams, dashboard, or custom approval UI
- Mandatory Justification — Approvers must provide a reason/comment for every denial (stored permanently in audit trail)
- Immediate Interruption — Denial instantly stops the agent workflow and prevents tool execution
- Custom Error Feedback — Agent receives a structured error message explaining the denial (configurable)
- Audit-Ready Logging — Every override is cryptographically signed with:
- Original prompt/tool call
- AI output that was overridden
- Human decision (deny + comment)
- Approver identity
- Timestamp (millisecond precision)
- Visibility in Dashboard — Denials appear in real-time event stream, Approvals tab, and Compliance activity feed
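The "cryptographically signed" audit entry described above can be illustrated with an HMAC-SHA256 sketch over a canonical JSON serialization. Katyar's actual signing scheme is not documented here; this only shows the tamper-evidence property such logging provides:

```python
import hashlib
import hmac
import json

def sign_override(record: dict, key: bytes) -> dict:
    """Attach an HMAC-SHA256 signature over the canonical JSON record.

    A minimal sketch of tamper-evident audit logging, not Katyar's
    actual implementation.
    """
    payload = json.dumps(record, sort_keys=True, separators=(",", ":")).encode()
    signature = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {**record, "signature": signature}

def verify_override(signed: dict, key: bytes) -> bool:
    # Recompute the signature over everything except the signature itself.
    body = {k: v for k, v in signed.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True, separators=(",", ":")).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["signature"])
```

Sorting keys and fixing separators makes the serialization canonical, so the same record always produces the same signature; any later edit to the logged fields invalidates it.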
To generate the required evidence:
- Ensure at least one approval policy is active that can realistically be denied (e.g., high-value refund, destructive database query, bulk sensitive action).
- Run test scenarios or live agent usage that triggers the approval flow.
- Have a human reviewer explicitly deny at least one request (do not auto-approve everything).
- Verify the denial appears in:
- Approvals tab (with comment)
- Audit logs (search for status='denied')
- Compliance dashboard → EU-14.2 card
- Optionally run 3–5 additional denial scenarios to build stronger evidence for auditors.
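Verifying denials in the audit logs can be done programmatically against an exported JSON-lines log. The `status` and `timestamp` field names below are assumptions about the export format, not a documented Katyar schema:

```python
import json
from datetime import datetime, timedelta, timezone

def recent_denials(log_lines, days=30):
    """Filter an exported JSON-lines audit log for denial events
    inside the 30-day evaluation window.

    A sketch: field names ('status', 'timestamp') are assumed,
    and timestamps are assumed to be ISO-8601 with a UTC offset.
    """
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    denials = []
    for line in log_lines:
        event = json.loads(line)
        if event.get("status") != "denied":
            continue
        ts = datetime.fromisoformat(event["timestamp"])
        if ts >= cutoff:
            denials.append(event)
    return denials
```

A non-empty result for the last 30 days is exactly the condition the evaluation criteria above require.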
Regulators will look for:
- Proof that overrides have actually occurred (not just theoretical capability)
- Clear documentation of why each override was made (justification comments)
- Evidence that the override was effective (AI action was stopped, no tool call executed)
- Traceability linking the denied AI output to the human decision
- Reasonable frequency of overrides in high-risk scenarios (demonstrates the mechanism is used in practice)
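The regulator expectations above can be turned into a simple evidence summary: counts per triggering policy, total overrides, and any denials missing a justification. This is an illustrative aggregation, not an official Katyar report format:

```python
from collections import Counter

def override_summary(denials):
    """Aggregate denial events into the figures a reviewer would ask for.

    A sketch over hypothetical record fields ('policy_ids', 'reason');
    not an official Katyar report.
    """
    per_policy = Counter()
    missing_justification = 0
    for event in denials:
        for policy_id in event.get("policy_ids", []):
            per_policy[policy_id] += 1
        if not event.get("reason"):
            missing_justification += 1
    return {
        "per_policy": dict(per_policy),          # override frequency per policy
        "total": len(denials),                   # proof overrides actually occurred
        "missing_justification": missing_justification,  # should be zero
    }
```

A non-zero `missing_justification` count flags records that would fail the "clear documentation of why each override was made" expectation.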
Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)
→ Article 14 – Human oversight
