EU AI Act Reference
Article 9 Official Requirement
High-risk AI systems shall be designed and developed with a risk management system that operates continuously throughout the entire lifecycle of the system. The risk management system must consist of a planned, iterative process that is regularly and systematically updated. It shall include the following steps:
(a) identification and analysis of known and foreseeable risks
(b) estimation and evaluation of those risks
(c) evaluation of other risks that may emerge
(d) adoption of appropriate and targeted risk management measures
(e) testing of the high-risk AI system
(f) monitoring, reporting and documentation of serious incidents and malfunctions

Purpose of the Requirement
This requirement forms the foundational safety and accountability mechanism for high-risk AI systems. It compels organisations to proactively and continuously identify, assess, mitigate and monitor risks across the full lifecycle — preventing foreseeable harm, minimising residual risk to an acceptable level, and maintaining safety throughout deployment and operation. Without an effective risk management system, it is impossible to demonstrate due diligence, traceability or compliance when incidents occur or regulatory authorities request evidence.

How Katyar Addresses This Requirement

Katyar implements a living, intent-aware risk management system through its granular policy engine, MCP tool hub, semantic firewall, and continuous monitoring loop — creating a dynamic, evolvable set of targeted mitigation measures that adapt to real system usage and emerging risks.

Evaluation Criteria
Katyar considers the risk management system requirement satisfied when:
- At least 3 distinct, enabled policies exist that address different tools or action types (e.g., one policy covering database writes, one covering payment refunds, one covering external API calls).
The following metrics provide supporting evidence:
- Total number of active/enabled policies
- Number of unique tools targeted by policies
- Diversity of rule types (e.g., require approval, deny, mask PII, require context)
- Coverage across high-risk categories (financial transactions, data access, external communication, infrastructure changes, etc.)
- Policy creation / revision timestamps (demonstrating ongoing iteration and systematic updating)
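The metrics above can be computed directly from a policy inventory. The sketch below is illustrative only — the field names and policy structure are assumptions, not Katyar's actual schema:

```python
from datetime import datetime, timezone

# Hypothetical policy records; field names are illustrative, not Katyar's schema.
policies = [
    {"enabled": True, "tool": "database.write", "rule": "deny",
     "updated": datetime(2024, 11, 2, tzinfo=timezone.utc)},
    {"enabled": True, "tool": "payments.refund", "rule": "require_approval",
     "updated": datetime(2024, 12, 15, tzinfo=timezone.utc)},
    {"enabled": True, "tool": "database.read", "rule": "mask_pii",
     "updated": datetime(2025, 1, 9, tzinfo=timezone.utc)},
    {"enabled": False, "tool": "email.send", "rule": "deny",
     "updated": datetime(2024, 10, 1, tzinfo=timezone.utc)},
]

active = [p for p in policies if p["enabled"]]
metrics = {
    "active_policies": len(active),                      # total enabled policies
    "unique_tools": len({p["tool"] for p in active}),    # distinct tools targeted
    "rule_types": len({p["rule"] for p in active}),      # diversity of rule types
    "last_revision": max(p["updated"] for p in active),  # evidence of iteration
}
print(metrics)
```

With three enabled policies spanning three tools and three rule types, this inventory would satisfy the "at least 3 distinct policies" criterion above.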
- Granular Policy Engine: Policies can target specific tools, methods, parameters, amounts, users, time windows or agent risk levels.
- MCP Tool Hub Integration: Tools are automatically discovered and schema-enforced — preventing invalid or high-risk parameter combinations before execution.
- Semantic Firewall & Guardrails: Real-time detection and mitigation of prompt injection, jailbreak attempts, PII leakage and secrets exfiltration.
- Versioned & Iterative Controls: Policies can be created, edited, versioned, tested and rolled back in real time via dashboard or CLI.
- Audit & Continuous Feedback Loop: Every policy decision, guardrail detection and human-in-the-loop action is cryptographically signed and logged — providing direct input for ongoing risk assessment.
- Proactive Recommendations: The dashboard automatically suggests new or updated policies based on observed gaps (e.g. “No policy covers bulk email → elevated risk of spam/phishing”).
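To make the mask-PII rule type concrete, a guardrail can redact matching patterns before results leave the tool boundary. The patterns and function below are a simplified sketch, not Katyar's actual implementation — a production guardrail would use far more robust detection:

```python
import re

# Illustrative PII patterns; real detection covers many more categories.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_pii(text: str) -> str:
    """Replace detected PII with a typed placeholder before output is returned."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

print(mask_pii("Contact alice@example.com, SSN 123-45-6789"))
# → Contact [EMAIL REDACTED], SSN [SSN REDACTED]
```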
- Identify your high-risk tools, actions and scenarios (e.g. database operations, payment processing, email sending, code deployment, external API calls).
- Create at least 3 separate policies, each targeting a different tool or risk vector:
- Example 1: Require approval for payment refunds > $500
- Example 2: Deny destructive SQL commands (DROP, DELETE without WHERE)
- Example 3: Mask PII before returning database query results
- Enable the policies in the dashboard.
- Execute agent scenarios that trigger each policy (generate real events).
- Verify in the Compliance dashboard that ≥ 3 distinct policies are recognised and active.
- Periodically review and extend the policy set as new tools, risks or use cases emerge.
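The three example policies in the steps above can be sketched as a minimal evaluator. The tool names and decision values here are hypothetical — Katyar's actual policy engine and rule format may differ:

```python
def evaluate(tool: str, params: dict) -> str:
    """Return 'allow', 'deny', 'require_approval' or 'mask_pii' for a proposed action.
    Decision names are illustrative, mirroring the three example policies."""
    # Example 1: payment refunds over $500 need human approval
    if tool == "payments.refund" and params.get("amount", 0) > 500:
        return "require_approval"
    # Example 2: block destructive SQL (DROP, or DELETE without a WHERE clause)
    if tool == "database.execute":
        sql = params.get("sql", "").upper()
        if "DROP " in sql or ("DELETE " in sql and "WHERE" not in sql):
            return "deny"
    # Example 3: mask PII in database query results before returning them
    if tool == "database.query":
        return "mask_pii"
    return "allow"

print(evaluate("payments.refund", {"amount": 750}))               # require_approval
print(evaluate("database.execute", {"sql": "DELETE FROM users"})) # deny
print(evaluate("database.query", {}))                             # mask_pii
```

Each branch maps one identified risk to one targeted mitigation, which is exactly the traceability auditors expect to see.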
- Evidence of multiple, targeted mitigation controls — not a single generic policy
- Coverage across different tools, actions and risk categories
- Ongoing iteration — visible through policy creation and revision timestamps
- Traceability — clear mapping from identified risks to specific mitigation measures
- Demonstration that risks are actively reduced — blocked events, masked outputs, escalated approvals
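Cryptographically signed audit entries can be illustrated with an HMAC over a canonical serialisation. This is an assumed scheme for illustration only — the document does not specify Katyar's actual signing mechanism:

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # in practice a managed secret, never hard-coded

def sign_entry(entry: dict) -> dict:
    """Attach an HMAC-SHA256 signature over a canonical JSON serialisation."""
    payload = json.dumps(entry, sort_keys=True, separators=(",", ":")).encode()
    return {**entry, "sig": hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()}

def verify_entry(signed: dict) -> bool:
    """Recompute the HMAC and compare in constant time; detects tampering."""
    entry = {k: v for k, v in signed.items() if k != "sig"}
    payload = json.dumps(entry, sort_keys=True, separators=(",", ":")).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signed["sig"], expected)

log = sign_entry({"policy": "refund-approval", "decision": "require_approval", "amount": 750})
print(verify_entry(log))   # True
log["amount"] = 75         # tamper with the record
print(verify_entry(log))   # False
```

Tamper-evident logs of this kind let blocked events, masked outputs and escalated approvals serve as verifiable evidence that risks were actively reduced.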
Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)
→ Article 9 – Risk management system
