What DQI Enforce is being built to do
DQI Enforce is being designed as a reverse proxy positioned between users and applications on one side and the AI services they call on the other. The intended pattern is that interactions pass through the governance layer and are evaluated against policies the organisation has defined: prompt content, sensitivity classification, source application, user role, output content and other dimensions policy authors choose to encode.
When a policy rule is matched, DQI Enforce is intended to apply one of several controls: allow with logging, redact and pass through, require human review, escalate to a named approver, or block outright. Each decision is designed to be recorded with the policy that was triggered, the reason, the user, the application, the timestamp and the final outcome.
Why prompt-level guardrails are not enough
Most AI guardrails today live inside the prompt or inside the model. That has two structural problems. First, prompt-level controls are bypassable: a determined or careless user can work around them, and there is no independent record of what happened. Second, model-level controls are owned by the model vendor, not by the organisation using the model, which means they cannot be tied to the organisation's own policy, risk register or audit trail.
DQI Enforce is designed to sit outside the model. It is intended to be owned by the organisation, configured to the organisation's policies and able to produce evidence the organisation can act on. The model can change; the governance layer should remain stable.
Policy as code
Policies in DQI Enforce are planned as structured rules, not free text. A policy specifies what to detect, the conditions under which it applies, and the control to enforce. Policies can be aligned to internal standards, regulatory obligations such as the EU AI Act, sector-specific rules, and organisation-defined risk thresholds. The target design is for policies to be versioned, reviewable and exportable.
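A structured rule of this kind might look like the following. The field names here are hypothetical, chosen to illustrate the shape, not the product's actual schema; the point is that a policy as data can be versioned, diffed in review and exported, where a free-text policy cannot.

```python
import json

# Hypothetical policy record: what to detect, when it applies, what to enforce,
# and which obligations it maps to. All identifiers are illustrative.
policy = {
    "id": "dqi-pol-017",
    "version": 3,
    "detect": {"category": "personal_data", "patterns": ["passport", "NI number"]},
    "applies_when": {"applications": ["helpdesk-bot"], "roles": ["agent"]},
    "control": "redact",
    "references": ["EU AI Act Art. 9", "internal-standard-DP-4"],
}

# Versioned, reviewable, exportable: serialising to JSON and back loses nothing.
exported = json.dumps(policy, indent=2)
restored = json.loads(exported)
```

Because the rule round-trips through a standard format, it can live in version control alongside other configuration and be reviewed with the same tooling.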
Logging, escalation and human review
The intended operating model is that every interaction produces a log entry, whether or not a policy was triggered. When a policy is triggered, the system records the matched rule, the action taken and the resolution. Where human review is required, DQI Enforce is designed to route the interaction to a named reviewer or queue, capture the reviewer's decision, and tie that decision back to the original log entry.
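The logging and review tie-back described above can be sketched as an append-only log where each entry has an identifier and the reviewer's decision is written back against that identifier. The function names and entry fields are assumptions for illustration only.

```python
import uuid

log: list[dict] = []

def record_interaction(user, application, policy_id, action) -> str:
    """Every interaction gets a log entry, whether or not a policy was
    triggered (policy_id may be None). Returns the entry's id."""
    entry_id = str(uuid.uuid4())
    log.append({"id": entry_id, "user": user, "application": application,
                "policy_id": policy_id, "action": action, "resolution": None})
    return entry_id

def record_review(entry_id, reviewer, decision) -> dict:
    """Tie the human reviewer's decision back to the original log entry."""
    for entry in log:
        if entry["id"] == entry_id:
            entry["resolution"] = {"reviewer": reviewer, "decision": decision}
            return entry
    raise KeyError(entry_id)

# An interaction routed to human review, then resolved by a named reviewer.
eid = record_interaction("alice", "helpdesk-bot", "dqi-pol-017", "human_review")
entry = record_review(eid, "bob", "approved_with_redaction")
```

The tie-back matters for audit: the reviewer's decision is not a separate record but a resolution attached to the interaction that triggered it.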
Compliance reporting and audit evidence
DQI Enforce is being developed to produce structured exports for internal assurance, audit and regulatory reporting. Reports are intended to show policy coverage, exceptions raised, interventions made, escalation outcomes and trends over time. This is the evidence layer that a static AI policy document cannot produce on its own.
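As a sketch of how such an export could be derived from the decision log, the snippet below rolls raw entries up into intervention counts by policy and by month. The entry shape and figures are invented for illustration; a real export would draw on the logging layer described above.

```python
from collections import Counter

# Hypothetical decision-log entries (policy_id None = no policy triggered).
entries = [
    {"policy_id": "dqi-pol-017", "action": "redact", "month": "2024-05"},
    {"policy_id": "dqi-pol-017", "action": "redact", "month": "2024-06"},
    {"policy_id": "dqi-pol-021", "action": "block",  "month": "2024-06"},
    {"policy_id": None,          "action": "allow",  "month": "2024-06"},
]

def summarise(entries):
    """Aggregate the log into the kinds of figures an audit export would carry:
    totals, interventions per policy, and interventions over time."""
    interventions = [e for e in entries if e["policy_id"] is not None]
    return {
        "total_interactions": len(entries),
        "interventions_by_policy": dict(Counter(e["policy_id"] for e in interventions)),
        "interventions_by_month": dict(Counter(e["month"] for e in interventions)),
    }

report = summarise(entries)
# report["interventions_by_policy"] → {"dqi-pol-017": 2, "dqi-pol-021": 1}
```

Because every interaction is logged, not just the flagged ones, the same data also supports coverage figures: how much traffic passed through governance at all, not only how often it intervened.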
How it fits with the rest of the platform
DQI Assess identifies where governance gaps exist. DQI Enforce is being built to close those gaps in operation. DQI Integrate is being built to help ensure the data flowing into AI is itself governed and trusted. Together they cover the chain from data to outcome.