
EU AI Act: governance support for AI in production
Practical guidance on roles, checkpoints, traceability and monitoring, so that operations scale safely. Not legal advice.
- Governance in operations: approvals, change logs, audit trail
- Traceability from conversations to actions (as part of the integrations)
- Human-in-the-loop as the standard principle for edge cases
- Particularly relevant for public/municipal areas of application (with clear boundaries)

What we deliver (and what we don't)
- Governance support in operations (process, controls, traceability)
- No legal advice and no formal classification of your system
- Goal: pragmatic implementation that enables operations instead of blocking them
Where governance is particularly important
- Service/support via chat, telephone, messenger, email
- Public-sector environments with heightened requirements for transparency and legal permissibility
- Processes with system actions (ticketing, appointments, routing) because "completion" requires governance
Legal restrictions apply (e.g. no voice recognition, no identification after the session ends). We explain the permissible framework during setup.

Governance in the company: the building blocks
1) Roles & responsibilities
Currently, the admin can manage, train, control and intervene, with support for teams. More fine-grained roles are planned.
- Admin/team operation: training, control, intervention
- Responsibilities are defined in operation and release processes
2) Control points (approvals & workflows)
- Release workflows for changes to knowledge/rules
- Checkpoints before going live (standard testing, reviews)
3) Traceability
Traceability is possible up to a point, depending on which integrations and target systems (e.g. tickets/CRM) are involved.
- Conversation/transcript → Decision/rule → Action → (if integrated) Entry in target system
- Change logs for knowledge changes
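As an illustration only (the actual log format depends on your integrations, and all names here are hypothetical), the chain above can be recorded as linked audit entries that all reference the same conversation ID, so the full path from transcript to target-system entry is reconstructable:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AuditEntry:
    """One link in the chain: conversation -> decision -> action -> target system."""
    step: str     # e.g. "transcript", "decision", "action", "target_entry"
    detail: str
    ref: str      # conversation ID linking every step back to its source
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def build_chain(conversation_id: str, steps: list[tuple[str, str]]) -> list[dict]:
    """Record each step against the same conversation ID."""
    return [asdict(AuditEntry(step=s, detail=d, ref=conversation_id)) for s, d in steps]

chain = build_chain("conv-4711", [
    ("transcript", "Customer asks to reschedule an appointment"),
    ("decision", "Rule 'reschedule-allowed' matched"),
    ("action", "Appointment moved to next free slot"),
    ("target_entry", "Ticket updated in CRM"),  # only present if a target system is integrated
])
```

The last link only exists where a target system is connected, which is exactly why the degree of traceability depends on the integrations.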
4) Monitoring & reporting
- Basic reporting: conversations, times, days, channels, performance (use-case-dependent and growing)
- Unknowns and uncertainties ("I don't know" cases) can be reported separately to the company (e.g. by email)
- Internal alerting/thresholds for quality assurance
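A minimal sketch of internal threshold alerting, assuming hypothetical metric names and limits (the actual metrics and reporting channels depend on the use case):

```python
def check_quality(metrics: dict[str, float], thresholds: dict[str, float]) -> list[str]:
    """Return an alert for every metric that exceeds its configured threshold."""
    alerts = []
    for name, limit in thresholds.items():
        value = metrics.get(name)
        if value is not None and value > limit:
            alerts.append(f"ALERT: {name} = {value:.2f} exceeds threshold {limit:.2f}")
    return alerts

# Example: daily share of "I don't know" answers and handovers (hypothetical values)
alerts = check_quality(
    metrics={"unknown_answer_rate": 0.12, "handover_rate": 0.04},
    thresholds={"unknown_answer_rate": 0.10, "handover_rate": 0.25},
)
```

Here only the unknown-answer rate crosses its limit, so exactly one alert would be raised and could be forwarded, for example, by email.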
When things get critical: Handover instead of risk
- Handover via ticket or telephone forwarding (depending on setup)
- Transfer with context/summary for quick processing
- Actions can be marked as "approval required" where the connected target systems support it
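The "approval required" gate can be sketched as a small state machine (names are hypothetical; whether this is available depends on the connected target system): critical actions wait for a human decision instead of executing directly.

```python
from enum import Enum

class ActionState(Enum):
    PENDING_APPROVAL = "pending_approval"  # waiting for a human reviewer
    APPROVED = "approved"
    EXECUTED = "executed"

def submit_action(action: dict, requires_approval: bool) -> ActionState:
    """Route critical actions to a human instead of executing them immediately."""
    if requires_approval:
        return ActionState.PENDING_APPROVAL  # human-in-the-loop checkpoint
    return ActionState.EXECUTED

def approve(state: ActionState) -> ActionState:
    """Only a pending action can be approved by a reviewer."""
    if state is not ActionState.PENDING_APPROVAL:
        raise ValueError("only pending actions can be approved")
    return ActionState.APPROVED

# Hypothetical example: a refund is flagged as critical and held for review
state = submit_action({"type": "refund", "amount": 50}, requires_approval=True)
```

The design choice is that the default for critical actions is to pause, not to act: risk is handed over to a person rather than taken by the system.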
What we clarify in the governance check
- Use case & purpose (what is the specialist allowed to do, what not?)
- Permitted data types / data minimization (public-sector framework)
- Roles/responsibilities & approvals
- Control points (human-in-the-loop, escalation rules)
- Reporting/KPIs & review process
- Traceability via integrations (how far the chain can be mapped)
Frequently asked questions
Do you provide EU AI Act legal advice?
No. We provide governance support in operations (controls, traceability, monitoring) and help with pragmatic implementation on a day-to-day basis.
Can we understand why the AI did something?
Via transcripts, change logs and - depending on the integration - the chain up to the action in the target system. We clarify the degree of traceability in the demo.
How do you reduce risk in the public environment?
Through clear limits per use case, data minimization, human-in-the-loop and verifiable operating processes.
Ready to test your first digital specialist?
30-minute demo → delivery usually within 48 hours → 14-day free trial (cancel anytime).