Framework: EU AI Act
Article: 13
Clause Description
High-risk AI systems shall be designed and developed in such a way as to ensure sufficient transparency to enable deployers to interpret the system’s output and use it appropriately.
High-risk AI systems shall be accompanied by instructions for use in an appropriate digital format or otherwise that include concise, complete and clear information that is relevant, accessible and comprehensible to deployers.
Why This Control Exists
Transparency is essential for responsible deployment of high-risk AI. Without clear insight into how outputs are generated, deployers may misinterpret results, over-rely on incorrect decisions, or fail to detect limitations, biases or errors. This control protects end-users, supports contestability of decisions, and enables safe and ethical use — especially when outputs affect health, safety, fundamental rights or legal status.
How Katyar Helps Achieve Compliance
Katyar automatically captures and exposes full metadata and context for every agent interaction, making outputs interpretable and decisions traceable.
Evaluation Criteria
The control is considered satisfied when:
  • More than 90% of logged events in the last 30 days contain complete metadata (input prompt, context, tool details, output, latency, risk score, policy decision).
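The evaluation above can be sketched as a simple completeness check. The field names follow the Evidence list in this document, but the event shape and function names are illustrative assumptions, not the actual Katyar log schema or scoring code:

```python
# Sketch of the EU-13.1 evaluation: what fraction of recent events
# carry every required metadata field. Field names follow this
# document's Evidence list; the event dict shape is an assumption.
REQUIRED_FIELDS = {
    "timestamp", "agent_id", "session_id", "input", "tool",
    "arguments", "output", "context", "outcome", "latency_ms",
}

def metadata_completeness(events):
    """Return the percentage of events containing all required fields."""
    if not events:
        return 0.0
    complete = sum(
        1 for e in events
        if REQUIRED_FIELDS <= e.keys()
        and all(e[f] is not None for f in REQUIRED_FIELDS)
    )
    return 100.0 * complete / len(events)

def satisfies_eu_13_1(events, threshold=90.0):
    """The control is met when completeness strictly exceeds the threshold."""
    return metadata_completeness(events) > threshold
```

Note the strict inequality: a window where exactly 90% of events are complete would not satisfy the criterion as worded.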
Evidence Captured
  • Percentage of events with full metadata completeness
  • Required fields present: timestamp, agent_id, session_id, user prompt/input, tool/method, arguments, response/output, context metadata, outcome, latency_ms
  • Sample event payloads demonstrating transparency
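A sample payload of the kind described above might look as follows. Every field name and value here is a made-up illustration of the Evidence list, not the real Katyar log format:

```python
import json

# Illustrative event payload containing each field named in the
# Evidence list. All identifiers and values are hypothetical.
sample_event = {
    "timestamp": "2025-01-15T10:42:07.123Z",
    "agent_id": "support-agent-01",
    "session_id": "sess-9f2c",
    "input": "Cancel my order 1234",
    "tool": "orders.cancel",
    "arguments": {"order_id": "1234"},
    "output": {"status": "cancelled"},
    "context": {"user_id": "u-77", "channel": "web"},
    "outcome": "allowed",
    "latency_ms": 184,
}

print(json.dumps(sample_event, indent=2))
```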
Key Katyar Capabilities Supporting This Control
  • Complete Event & Trace Logging
    Every decision includes original prompt, full context dictionary, tool call details, raw & processed output, and metadata.
  • Structured & Searchable Audit Logs
    Events are stored with millisecond precision; dashboard provides instant search, filtering and deep inspection.
  • Input → Output Traceability
    Clear chain from user input → agent reasoning (if available) → final output and any policy/HITL modification.
  • Export-Ready Transparency
    One-click CSV/JSON export of full event logs — suitable for deployer review or regulatory submission.
  • Deployer-Friendly Dashboard
    Real-time event stream with expandable details for non-technical users to understand system behavior.
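The export capability described above can be approximated with the Python standard library. This is a minimal sketch, assuming events are plain dictionaries; it is not the Katyar export implementation:

```python
import csv
import json

def export_events(events, json_path, csv_path):
    """Write event logs as JSON (full fidelity) and CSV (flat review copy)."""
    with open(json_path, "w") as f:
        json.dump(events, f, indent=2, default=str)

    # Union of all keys across events, so sparse fields still get columns.
    fieldnames = sorted({k for e in events for k in e})
    with open(csv_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        for e in events:
            # Serialize nested values so they survive the flat CSV format.
            writer.writerow({
                k: json.dumps(v) if isinstance(v, (dict, list)) else v
                for k, v in e.items()
            })
```

The JSON copy preserves nested context for regulatory submission, while the CSV copy suits spreadsheet review by non-technical deployers.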
Recommended Actions to Strengthen Compliance
  1. Ensure agents are onboarded via the SDK (katyar.init() or session creation) — un-onboarded agents do not contribute to transparency score.
  2. Include meaningful context in tool calls (e.g., user_id, session_id, channel, confidence score).
  3. Use structured tool responses rather than plain strings when possible.
  4. Generate regular activity to maintain recent events.
  5. Check the Compliance dashboard → EU-13.1 card to verify >90% completeness.
  6. Export recent logs periodically and review for any missing fields.
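Action 3 above can be illustrated by contrasting a plain-string tool response with a structured one. Both functions and the response shape are hypothetical examples, not a Katyar requirement:

```python
# A plain-string response is opaque: nothing inside it can be
# individually logged, filtered, or scored.
def lookup_order_unstructured(order_id):
    return f"Order {order_id} shipped on 2025-01-10"

# A structured response makes every field individually loggable and
# searchable, including scores a deployer may want to inspect.
def lookup_order_structured(order_id):
    return {
        "order_id": order_id,
        "status": "shipped",
        "shipped_at": "2025-01-10",
        "confidence": 0.98,
    }
```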
What Auditors Typically Look For
  • High metadata completeness across recent events (>90% preferred)
  • Traceability from input → output
  • Accessibility of logs for deployers (searchable, exportable)
  • Relevance of captured data for explaining decisions
  • Consistency even under error or high-load conditions
Katyar exceeds the Article 13 minimum by providing structured, real-time, export-ready transparency, turning a regulatory obligation into a practical tool for debugging, trust-building, and responsible AI operations.
Official Reference
Read the full text of Article 13 in the consolidated EU AI Act:
Article 13 - Transparency and provision of information to deployers