Article 13: Clause Description
High-risk AI systems shall be designed and developed in such a way as to ensure sufficient transparency to enable deployers to interpret the system’s output and use it appropriately.
High-risk AI systems shall be accompanied by instructions for use in an appropriate digital format or otherwise that include concise, complete and clear information that is relevant, accessible and comprehensible to deployers.

Why This Control Exists
Transparency is essential for responsible deployment of high-risk AI. Without clear insight into how outputs are generated, deployers may misinterpret results, over-rely on incorrect decisions, or fail to detect limitations, biases or errors. This control protects end-users, supports contestability of decisions, and enables safe and ethical use, especially when outputs affect health, safety, fundamental rights or legal status.

How Katyar Helps Achieve Compliance

Katyar automatically captures and exposes full metadata and context for every agent interaction, making outputs interpretable and decisions traceable.

Evaluation Criteria
The control is considered satisfied when:
- More than 90% of logged events in the last 30 days contain complete metadata (input prompt, context, tool details, output, latency, risk score, policy decision).

Supporting evidence includes:
- Percentage of events with full metadata completeness
- Required fields present: timestamp, agent_id, session_id, user prompt/input, tool/method, arguments, response/output, context metadata, outcome, latency_ms
- Sample event payloads demonstrating transparency
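As an illustration, a fully populated event payload might look like the sketch below. The field names follow the required-fields list above; the values, and the `is_complete` helper, are invented for this example and are not part of the Katyar API.

```python
# Hypothetical example of a fully populated event payload.
# Field names follow the required-fields list above; all values are invented.
event = {
    "timestamp": "2025-06-01T12:34:56.789Z",
    "agent_id": "agent-42",
    "session_id": "sess-9f1c",
    "input": "Summarise the customer's refund request",
    "tool": "refund_lookup",
    "arguments": {"order_id": "A-1001"},
    "output": {"status": "eligible", "amount": 49.99},
    "context": {"user_id": "u-77", "channel": "email", "confidence": 0.92},
    "outcome": "allowed",
    "latency_ms": 312,
}

REQUIRED_FIELDS = (
    "timestamp", "agent_id", "session_id", "input", "tool",
    "arguments", "output", "context", "outcome", "latency_ms",
)

def is_complete(evt: dict) -> bool:
    """An event counts as complete only if every required field
    is present and non-empty."""
    return all(evt.get(f) not in (None, "", {}) for f in REQUIRED_FIELDS)

print(is_complete(event))  # → True
```

An event missing any of these fields (for example, one logged without `latency_ms`) would fail this check and lower the completeness percentage used in the criterion above.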
- Complete Event & Trace Logging: Every decision includes the original prompt, full context dictionary, tool call details, raw and processed output, and metadata.
- Structured & Searchable Audit Logs: Events are stored with millisecond precision; the dashboard provides instant search, filtering and deep inspection.
- Input → Output Traceability: Clear chain from user input → agent reasoning (if available) → final output and any policy/HITL modification.
- Export-Ready Transparency: One-click CSV/JSON export of full event logs, suitable for deployer review or regulatory submission.
- Deployer-Friendly Dashboard: Real-time event stream with expandable details for non-technical users to understand system behavior.
- Ensure agents are onboarded via the SDK (`katyar.init()` or session creation); un-onboarded agents do not contribute to the transparency score.
- Include meaningful context in tool calls (e.g., user_id, session_id, channel, confidence score).
- Use structured tool responses rather than plain strings when possible.
- Generate regular activity to maintain recent events.
- Check the Compliance dashboard → EU-13.1 card to verify >90% completeness.
- Export recent logs periodically and review for any missing fields.
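The periodic export review above can be scripted. The sketch below assumes the JSON export yields a list of event objects using the field names listed under Evaluation Criteria; the sample data and function name are invented for illustration.

```python
# Sketch of a completeness review over exported event logs, assuming
# each event is a JSON object with the required field names.
REQUIRED_FIELDS = (
    "timestamp", "agent_id", "session_id", "input", "tool",
    "arguments", "output", "context", "outcome", "latency_ms",
)

def completeness_report(events):
    """Return (percent_complete, missing), where `missing` maps each
    incomplete event's index to the list of fields it lacks."""
    missing = {}
    for i, evt in enumerate(events):
        absent = [f for f in REQUIRED_FIELDS if f not in evt]
        if absent:
            missing[i] = absent
    pct = 100.0 * (len(events) - len(missing)) / len(events) if events else 0.0
    return pct, missing

# Invented sample: nine complete events plus one with almost everything missing.
sample = [{f: "x" for f in REQUIRED_FIELDS} for _ in range(9)]
sample.append({"timestamp": "2025-06-01T12:00:00Z"})

pct, missing = completeness_report(sample)
print(f"Metadata completeness: {pct:.1f}% (target: >90%)")
for idx, fields in missing.items():
    print(f"  event {idx}: missing {', '.join(fields)}")
```

Running this against a real export (e.g., after loading it with `json.load`) gives the same completeness figure the EU-13.1 dashboard card reports, so the two can be cross-checked.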
What reviewers assess:
- High metadata completeness across recent events (>90% preferred)
- Traceability from input → output
- Accessibility of logs for deployers (searchable, exportable)
- Relevance of captured data for explaining decisions
- Consistency even under error or high-load conditions
Read the full text of Article 13 in the consolidated EU AI Act:
Article 13 - Transparency to deployers and users
