How institutions use Constellation to move faster with more confidence.
Research institute · Solo operator · AI-assisted governance research
How IRSA Institute governs research integrity with AI agents and a team of one
Real usage. IRSA is the first institution running on Constellation.
The challenge
IRSA Institute is a governance research organisation that produces working papers, policy explainers, and diagnostic instruments on institutional coordination. It publishes original theory — the Governance Coordination Cost framework, Constitutional Settlement model, and R-Index instrument — each with different publication standards, peer review requirements, and citation obligations.
With a single operator using AI agents extensively for research, writing, and operational decisions, the risk was clear: an AI-drafted paper could make claims that exceed the evidence base, a public communication could misrepresent preliminary findings as conclusions, or a partnership commitment could conflict with the institute’s independence mandate.
Without encoded governance, the operator was the single point of failure for every institutional boundary. That doesn’t scale — and it doesn’t survive a bad day.
1 operator running multiple AI agents
100% of governance held in one person’s head
What they built
IRSA deployed Constellation as its own governance infrastructure. The institute has decision records, commitment registers, a knowledge graph, and constraint sets — all encoded and enforced. The MCP server runs in every Claude Code session, checking constraints at the moment of action — before an email is sent, before a commitment is made, before a publication is finalised.
Key constraints encoded: communications require board approval for public statements, expenditures over $10,000 need director sign-off, no data sharing without consent, partnership commitments over $50k require executive approval, and a 48-hour cooling period after board decisions before announcements.
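To make the pattern concrete, here is a minimal sketch of how constraints like these might be declared. The type and field names are illustrative assumptions, not Constellation’s actual schema:

```typescript
// Hypothetical declarations for the constraints above.
// Field names and shapes are illustrative, not Constellation's schema.
type Constraint =
  | { type: "AUTHORITY"; action: string; approver: string }
  | { type: "THRESHOLD"; action: string; limitUSD: number; approver: string }
  | { type: "PROHIBITION"; action: string; unless?: string }
  | { type: "TIMING"; action: string; afterEvent: string; coolingHours: number };

const constraints: Constraint[] = [
  { type: "AUTHORITY", action: "public_statement", approver: "board" },
  { type: "THRESHOLD", action: "expenditure", limitUSD: 10_000, approver: "director" },
  { type: "PROHIBITION", action: "data_sharing", unless: "consent_on_file" },
  { type: "THRESHOLD", action: "partnership_commitment", limitUSD: 50_000, approver: "executive" },
  { type: "TIMING", action: "announcement", afterEvent: "board_decision", coolingHours: 48 },
];
```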
The GCI diagnostic runs weekly, tracking governance health across five dimensions. Research publications are checked against existing commitments and the institute’s stated methodology standards before release.
Results
7 constraint types enforced
Real-time constraint checking via MCP
Every consequential action — publishing, spending, committing, communicating — is now checked against institutional constraints before execution. The constraint engine has caught boundary crossings that would have gone unnoticed: an AI drafting a partnership email that exceeded the institute’s authority, a research publication timeline that conflicted with an existing commitment.
The Forum layer provides formal contestation — any constraint can be challenged, evidence submitted, rulings issued, and precedents set. This means governance isn’t just top-down rules; it’s a living system that can be questioned and refined.
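As an illustration, a contestation could be represented as a record along these lines — an assumed shape, not the Forum layer’s actual data model:

```typescript
// Assumed shape of a Forum contestation record, for illustration only.
interface Contestation {
  constraintId: string;                      // the constraint being challenged
  challenger: string;                        // who raised the challenge
  evidence: string[];                        // submitted evidence references
  ruling?: "upheld" | "amended" | "retired"; // outcome, once issued
  precedent?: string;                        // rationale future rulings can cite
}
```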
“Constellation is the governance infrastructure I was writing papers about. Now I run my own institution on it. The constraint engine doesn’t slow me down — it means I can move faster because the boundaries are structural, not remembered.”
— Roshan Ghadamian, Founder, IRSA Institute
The key insight
A single-operator institution is the hardest governance case. There’s no committee to catch mistakes, no second pair of eyes. Constellation makes governance structural rather than personal — boundaries hold even when the human is tired, distracted, or moving fast with AI agents across multiple research streams simultaneously.
Software platform · Self-governance · Dogfooding
How Constellation governs its own development using Constellation
Real usage. The governance platform that governs itself.
The challenge
Building a governance platform with AI assistance creates a recursive problem: the AI agents writing the code need the same governance constraints the platform is designed to enforce. Without them, an AI-assisted coding session could introduce a feature that violates the platform’s own architectural commitments, make a public communication that misrepresents capabilities, or commit to a partnership that exceeds operational authority.
Every Claude Code session building Constellation runs the Constellation MCP server. The question was: does the governance layer actually work when the stakes are real and the pace is fast?
How it works
The MCP server connects to the Constellation API, loading constraints from the database with a 5-minute cache and circuit breaker for resilience. Every time an AI agent attempts a consequential action — publishing content, making a commitment, sharing data, spending money — the constraint engine evaluates it against 7 constraint types: authority, threshold, prohibition, timing, domain-topic, audience, and sequence.
Actions that pass proceed silently. Actions that trigger constraints surface an explanation — which constraint, why it applies, who to escalate to. Actions that violate prohibitions are flagged immediately. Every evaluation is traced, creating an automatic audit log.
The system never blocks. It informs. The human always decides. But the decision is now made with full institutional context, not from memory.
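A minimal sketch of that check-explain-trace flow, assuming simplified types; the names below are illustrative, not the engine’s real API:

```typescript
// Illustrative check-explain-trace flow. Names are assumptions,
// not Constellation's real API.
type Verdict = "pass" | "flagged" | "prohibited";

interface Action { kind: string; amountUSD?: number }

interface Constraint {
  id: string;
  type: "PROHIBITION" | "THRESHOLD";
  actionKind: string;
  limitUSD?: number;
  approver?: string;
}

interface Evaluation {
  verdict: Verdict;
  constraint?: string; // which constraint applied
  reason?: string;     // why it applies
  escalateTo?: string; // who can authorize the action
}

const auditLog: Evaluation[] = [];

function trace(e: Evaluation): Evaluation {
  auditLog.push(e); // every evaluation is recorded, pass or fail
  return e;
}

function evaluate(action: Action, constraints: Constraint[]): Evaluation {
  for (const c of constraints) {
    if (c.actionKind !== action.kind) continue;
    if (c.type === "PROHIBITION") {
      return trace({ verdict: "prohibited", constraint: c.id, reason: "prohibited action" });
    }
    if ((action.amountUSD ?? 0) > (c.limitUSD ?? Infinity)) {
      return trace({
        verdict: "flagged",
        constraint: c.id,
        reason: `exceeds the $${c.limitUSD} threshold`,
        escalateTo: c.approver,
      });
    }
  }
  // The engine informs; it never blocks. The human decides what happens next.
  return trace({ verdict: "pass" }); // passes proceed silently but are still traced
}
```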
What we’ve learned
<200ms constraint check latency
The most important finding: governance at the moment of action is invisible when it’s working. You don’t notice the constraint checks passing. You only notice when something is flagged — and by then, the flag has saved you from a mistake you wouldn’t have caught otherwise.
The circuit breaker pattern means the system degrades gracefully. If the API is down, hardcoded constraints still apply. If the cache is stale, it serves stale data while refreshing in the background. Governance never stops because infrastructure fails.
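A sketch of that degradation path, assuming a simple in-memory cache; every name here is an assumption, not the server’s actual implementation:

```typescript
// Illustrative graceful degradation: fresh cache, then stale-while-refresh,
// then hardcoded fallback. Names and shapes are assumptions.
type Constraint = { id: string; rule: string };

const CACHE_TTL_MS = 5 * 60 * 1000; // the 5-minute cache

const HARDCODED_FALLBACK: Constraint[] = [
  { id: "fallback-1", rule: "No data sharing without consent" },
];

let cache: { constraints: Constraint[]; fetchedAt: number } | null = null;

async function loadConstraints(fetchFromApi: () => Promise<Constraint[]>): Promise<Constraint[]> {
  if (cache && Date.now() - cache.fetchedAt < CACHE_TTL_MS) {
    return cache.constraints; // fresh cache: the normal path
  }
  if (cache) {
    const stale = cache.constraints;
    // Stale cache: serve stale data now, refresh in the background.
    void fetchFromApi()
      .then((cs) => { cache = { constraints: cs, fetchedAt: Date.now() }; })
      .catch(() => { /* API unreachable: keep serving stale data */ });
    return stale;
  }
  try {
    const cs = await fetchFromApi();
    cache = { constraints: cs, fetchedAt: Date.now() };
    return cs;
  } catch {
    return HARDCODED_FALLBACK; // API down, no cache: governance still applies
  }
}
```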
The key insight
Dogfooding a governance platform is the ultimate test. If the constraint engine adds friction, you feel it immediately — because you’re the one being governed. The fact that we run it in every session and don’t turn it off is the strongest evidence that it works.
Donation platform · AI-built product · Financial governance
How Elevate Gift governs financial operations and AI-assisted development across a 200+ API platform
Real usage. Elevate Gift runs on Constellation to govern donations, grants, and platform operations.
The challenge
Elevate Gift is a donation platform with a dual-invoice advertising model — brands fund campaigns that split between media spend and tax-deductible donations, creating perpetual endowments for causes. The platform handles real money: Stripe payments, grant disbursements, recoverable grant agreements, and a Karma social currency system.
The entire platform is built and operated with AI agents. With 200+ API routes, 79 database models, and financial operations running through every layer, the governance challenge was acute: an AI agent could deploy a pricing change that misrepresents tax deductibility, approve a grant disbursement without trustee sign-off, or modify the Karma economy in ways that break audit compliance.
For a not-for-profit-adjacent platform handling charitable donations, the compliance obligations are real. ACNC reporting, grant lifecycle management, and donor trust all depend on governance that can’t rely on a single operator remembering every rule.
200+ API routes handling money
79 database models, one operator
Real money: donations, grants, payments
What they built
Elevate deployed Constellation to govern financial operations, development decisions, and compliance boundaries. The MCP server runs in every coding session, checking constraints before any consequential action touches the platform.
Key constraints encoded:
THRESHOLD “Grant disbursements over $10,000 require trustee approval”
PROHIBITION “No direct mutation of Karma wallets — all operations through KarmaService”
DOMAIN_TOPIC “No communications implying tax advice”
SEQUENCE “Schema migrations require build verification before deployment”
AUTHORITY “Stripe webhook configuration changes require director sign-off”
The constraint engine understands the difference between development actions (safe to proceed) and financial operations (must check). AI agents can build features freely but can’t modify payment flows, grant approval logic, or compliance reporting without explicit governance checks.
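One plausible way to draw that line is a path-based classifier like the sketch below; the patterns and categories are illustrative assumptions, not Elevate’s actual configuration:

```typescript
// Illustrative development-vs-financial classifier. The path patterns
// are assumptions, not Elevate's actual configuration.
const FINANCIAL_PATTERNS = [/payment/i, /grant/i, /karma/i, /stripe/i, /compliance/i];

type ActionClass = "development" | "financial";

function classify(filesTouched: string[]): ActionClass {
  const touchesMoney = filesTouched.some((file) =>
    FINANCIAL_PATTERNS.some((pattern) => pattern.test(file)),
  );
  return touchesMoney ? "financial" : "development";
}

// A UI feature proceeds freely; anything near payment flows must check.
classify(["src/ui/DonateButton.tsx"]);       // "development"
classify(["src/services/grantApproval.ts"]); // "financial"
```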
Results
0 unauthorized financial changes
100% compliance audit trail coverage
Entire platform AI-built and governed
The platform processes real donations, manages grant lifecycles with trustee approvals, and runs a Karma economy — all with full audit trail coverage. The constraint engine has caught issues that manual review would have missed: a schema migration that would have broken grant reporting, a Karma operation that bypassed the rate limiting service, and an API change that would have exposed donor data.
Every financial decision, every compliance-sensitive change, and every data-handling operation is traced. When ACNC audit time comes, the export is one API call — not a scramble through git logs.
“We build the entire platform with AI agents. Constellation means we can move at AI speed without worrying that a coding session will accidentally modify grant approval logic or break compliance reporting. The governance is invisible until you need it.”
— Elevate Gift engineering team
The key insight
When AI agents build and operate a financial platform, governance isn’t optional — it’s infrastructure. The constraint engine doesn’t slow development down. It means you can deploy faster because the guardrails are structural. You don’t need to manually review every commit that touches payments. You encode the rules once and they hold across every session, every agent, every deployment.
B2B startup · 20+ AI agents · Go-to-market
How a GTM team stopped their AI SDR from making promises the company couldn’t keep
Composite scenario based on patterns reported by companies deploying 10-20+ AI agents across sales, marketing, and operations.
The pattern
A B2B startup deploys 20+ AI agents across their go-to-market: AI SDRs for outbound, conversational agents for inbound, an AI marketing coordinator, and various automation agents for enrichment, scheduling, and follow-up. Results are immediate — pipeline doubles, deal volume increases, 60,000+ personalised emails sent.
Then the problems start. The AI SDR offers a prospect a speaking slot at the annual conference — without authority. An agent sends a pricing discount that exceeds the approved range. A marketing agent publishes content touching on a topic the legal team has flagged as off-limits. A partnership email commits to deliverables that engineering hasn’t scoped.
The team is spending 30-40 hours per week manually reviewing agent output. Every morning starts with “what did the agents do overnight?” They’re using Zapier, Salesforce, and copy-paste to stitch context between agents. There is no formal authority architecture.
30-40 hrs per week of agent babysitting
20+ agents with no shared constraints
3 unauthorized commitments per month
The Constellation approach
Instead of reviewing every agent output manually, the team encodes their existing rules as constraints:
PROHIBITION “No commitments to conference speaking slots without events team approval”
THRESHOLD “Discounts over 15% require VP Sales approval”
DOMAIN_TOPIC “No content touching: litigation, M&A, political”
SEQUENCE “Partnership commitments require completed legal review within last 7 days”
AUDIENCE “Unaudited financial projections: board-only”
The MCP server plugs into each agent’s tool chain. Before any agent takes a consequential action, the constraint engine evaluates it. Passes are silent. Violations surface immediately with context. Escalations route to the right authority level automatically.
Setup takes hours, not weeks. The agents keep working at scale. The humans stop babysitting and start governing.
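A sketch of what that integration could look like, with each tool in the agent’s chain wrapped so consequential calls are checked first; all names here are illustrative assumptions, not the actual MCP integration:

```typescript
// Illustrative tool-chain wrapper. Names and shapes are assumptions,
// not the actual MCP integration.
type Tool = (args: Record<string, unknown>) => Promise<string>;

interface CheckResult {
  verdict: "pass" | "flagged";
  reason?: string;
  escalateTo?: string;
}

function governed(
  tool: Tool,
  check: (args: Record<string, unknown>) => Promise<CheckResult>,
): Tool {
  return async (args) => {
    const result = await check(args);
    if (result.verdict === "flagged") {
      // Surface the violation with context and route the escalation.
      return `Held for ${result.escalateTo ?? "review"}: ${result.reason ?? "constraint triggered"}`;
    }
    return tool(args); // passes are silent
  };
}
```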
Expected outcomes
~70% less manual review time
0 unauthorized commitments
Minutes to deploy per agent
The agents don’t get worse. They get governed. The pipeline keeps growing. The unauthorized promises stop. And the team goes from 30-plus hours of babysitting to 5 hours of governance review — focused on the genuinely hard judgement calls that constraints can’t automate.
“We were spending more time managing agents than managing revenue. Now the constraint layer handles the boundaries and we handle the strategy. The agents still work 24/7 — they just can’t promise things we can’t deliver.”
— Composite quote from GTM leaders deploying AI agent fleets
The key insight
Multi-agent systems don’t need more human reviewers. They need architectural governance — encoded constraints that check every action at the moment it happens. The agents are faster than you. They work 24/7. You can’t keep up by watching. You keep up by encoding the rules.