Governance frameworks aren’t enough
The global AI governance conversation is producing good frameworks — principles, standards, commitments. What it isn’t producing is the machinery that makes those frameworks work inside actual institutions. That machinery is the hard part.
Layer 1: Principles and standards
This is where the global conversation is. Summits, working groups, multi-stakeholder consultations. “AI should be safe, fair, transparent, accountable.” The EU AI Act, the OECD AI Principles, ISO 42001, NIST AI RMF.
This work matters. Institutions need shared language and agreed principles. Without them, governance is arbitrary.
But principles on paper don’t stop an AI agent from sending an unauthorised email at 2am.
Layer 2: Compliance and audit
This is where most governance tools live. SOC 2, ISO compliance, audit trails, evidence collection. “Did the organisation follow its controls?” Reviewed quarterly. Evidence reconstructed after the fact.
This work also matters. Organisations need to demonstrate compliance to regulators and stakeholders.
But checking what happened three months ago doesn’t prevent the mistake that’s happening right now.
The gap between layers
Layer 1 says “we should be accountable.” Layer 2 checks whether we were. Neither operates at the moment of action — when the AI agent is about to commit, the employee is about to send, the team is about to publish. That’s the gap where governance failures live.
Layer 3: Governance infrastructure
This is what’s missing. Infrastructure that makes governance happen at the moment of action. Not principles (though it needs them). Not compliance (though it produces the evidence). Something between — the machinery that connects what the institution says it will do to what it actually does.
This machinery needs to:
Enforce at the moment of action
Check constraints before the action happens, not three months later (see the sketch after this list).
Work for both humans and AI
The same rules apply whether the action is taken by an agent, a person, or a person using AI tools.
Handle disagreement
Rules without contestation are just autocracy with better documentation. People need a legitimate way to challenge constraints.
Detect conflicts
When two rules contradict, surface the conflict before it causes a failure.
Build institutional memory
When a dispute is resolved, the resolution becomes precedent that informs future cases.
Measure continuously
Governance health isn’t a quarterly survey. It’s a signal computed from every check, escalation, and decision.
Adapt without losing integrity
Rules can be versioned, exceptions granted, and constraints amended, all with a full audit trail.
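Here is a minimal sketch of what the first few requirements could look like in code, under stated assumptions: the names Constraint, ConstraintEngine, Decision, and the comms-007 rule are illustrative placeholders, not Constellation's actual API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable, Literal

Verdict = Literal["allow", "deny", "escalate"]

@dataclass
class Constraint:
    rule_id: str
    applies_to: Callable[[dict], bool]   # does this rule cover the action?
    evaluate: Callable[[dict], Verdict]  # runs BEFORE the action executes

@dataclass
class Decision:
    rule_id: str
    verdict: Verdict
    action: dict
    at: datetime

@dataclass
class ConstraintEngine:
    constraints: list[Constraint]
    log: list[Decision] = field(default_factory=list)  # every check is recorded

    def check(self, action: dict) -> Verdict:
        """Evaluate all applicable constraints at the moment of action.

        One rulebook for humans and agents alike. A "deny" blocks the
        action outright; an "escalate" routes it to a human authority
        instead of letting it proceed or fail silently.
        """
        verdicts: list[Verdict] = []
        for c in self.constraints:
            if not c.applies_to(action):
                continue
            v = c.evaluate(action)
            self.log.append(Decision(c.rule_id, v, action,
                                     datetime.now(timezone.utc)))
            verdicts.append(v)
        if "deny" in verdicts:
            return "deny"
        if "escalate" in verdicts:
            return "escalate"
        return "allow"

# Example rule: unattended email between midnight and 07:00 needs sign-off.
night_email = Constraint(
    rule_id="comms-007",
    applies_to=lambda a: a["type"] == "send_email",
    evaluate=lambda a: "escalate" if a["hour"] < 7 else "allow",
)

engine = ConstraintEngine([night_email])
# Same rule, same verdict, whether the actor is an agent or a person:
assert engine.check({"type": "send_email", "actor": "agent", "hour": 2}) == "escalate"
assert engine.check({"type": "send_email", "actor": "human", "hour": 2}) == "escalate"
```

The design choice worth noting: the log is written during the check, so the audit trail that layer 2 needs falls out of enforcement rather than being reconstructed after the fact.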
What we built
Constellation is this infrastructure. Institutional governance that operates at the moment of action — checking constraints before they’re violated, routing escalations to the right authority, recording every decision, and providing a formal process for contestation.
It doesn’t replace frameworks or compliance. It’s the layer between them — the machinery that connects what the institution says to what actually happens.
7 constraint types
3 dispute resolution layers
5 GCI dimensions
Constraint checks in under 200ms
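To make "measure continuously" concrete, here is a toy continuation of the sketch above (it assumes the hypothetical ConstraintEngine from that sketch). The GCI's five dimensions aren't defined in this piece, so simple verdict rates stand in for them; this is not the real metric.

```python
from collections import Counter

def governance_signal(engine: ConstraintEngine) -> dict[str, float]:
    """A live health signal computed from every recorded decision.

    Stand-in metric: verdict rates over the audit log, recomputed on
    demand rather than collected in a quarterly survey.
    """
    n = len(engine.log) or 1  # avoid division by zero before any checks
    counts = Counter(d.verdict for d in engine.log)
    return {v: counts[v] / n for v in ("allow", "deny", "escalate")}
```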
Bridging the gap
If you’re working on AI governance frameworks, standards, or policy — that work is essential and we support it. Constellation is the implementation layer those frameworks need.
If you’re running an institution that needs to comply with those frameworks — Constellation turns framework requirements into enforceable constraints that operate at the speed of your AI systems.
If you’re building or deploying AI agents — Constellation gives your agents the institutional awareness they need to act within boundaries, without slowing them down.
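In code terms, that integration is a pre-action guard rather than a post-hoc audit. Another hypothetical sketch built on the toy engine above; guarded and its callbacks are illustrative, not Constellation's client library.

```python
def guarded(engine: ConstraintEngine, action: dict, execute, escalate_to_human):
    """Run one fast constraint check in the action path, then act on the verdict."""
    verdict = engine.check(action)        # pre-action check, not a quarterly review
    if verdict == "allow":
        return execute(action)
    if verdict == "escalate":
        return escalate_to_human(action)  # route to the right authority
    raise PermissionError(f"blocked by constraint: {action['type']}")
```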
See what governance infrastructure looks like
Start with the health check to see where your governance stands. Or explore how the architecture works.