Human-in-the-Loop Governance
A governance model where human approval is required for certain AI agent actions — appropriate for high-stakes decisions but unsustainable as the sole governance mechanism.
Human-in-the-loop (HITL) governance requires human approval before an AI agent can take certain actions. It is currently the most common approach to AI governance.
HITL is appropriate for (see the policy sketch after this list):

- High-stakes decisions (financial transactions above thresholds, external communications, irreversible actions)
- Novel situations the AI hasn't encountered before
- Actions that could affect many people
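These criteria can be written down declaratively rather than left to ad-hoc judgment. The following is a minimal sketch, not any particular product's schema; the class name, fields, and thresholds are illustrative assumptions:

```python
from dataclasses import dataclass


@dataclass
class HITLPolicy:
    """Hypothetical policy describing which agent actions require human approval."""
    # High-stakes threshold: financial transactions above this amount escalate.
    transaction_threshold_usd: float = 1_000.0
    # Action kinds that always escalate, regardless of other attributes.
    always_escalate: frozenset = frozenset({
        "external_communication",  # messages that leave the organization
        "irreversible",            # e.g. deleting data with no undo path
        "broad_impact",            # actions that could affect many people
    })
    # Whether action types the agent has never performed before escalate.
    escalate_novel_actions: bool = True
```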
HITL is inappropriate as the sole governance mechanism because:

- It doesn't scale (AI agents can generate requests faster than humans can review them)
- It creates bottlenecks (agents wait for approval while humans are unavailable)
- It provides false assurance (overwhelmed humans rubber-stamp approvals)
- It defeats the purpose of AI automation (if every action needs approval, why use an agent?)
The solution is not to eliminate HITL, but to combine it with structural enforcement. Low-risk actions that fall within established constraints proceed automatically. High-risk actions that cross boundaries escalate for human review. This is progressive trust.
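A minimal sketch of such a gate, assuming a hypothetical `Action` shape and illustrative thresholds (none of these names or limits come from a real library):

```python
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    ALLOW = "allow"        # within constraints: proceed automatically
    ESCALATE = "escalate"  # crosses a boundary: queue for human review
    DENY = "deny"          # outside hard limits: reject outright


@dataclass
class Action:
    kind: str                 # e.g. "payment", "external_communication"
    amount_usd: float = 0.0
    reversible: bool = True
    seen_before: bool = True  # has the agent taken this kind of action before?


def gate(action: Action) -> Decision:
    """Progressive trust: auto-approve routine actions, escalate boundary cases."""
    # Hard structural limit: never allowed, even with human sign-off.
    if action.kind == "modify_own_policy":
        return Decision.DENY
    # Boundary cases: high-stakes, irreversible, or novel actions go to a human.
    if action.amount_usd > 1_000 or not action.reversible or not action.seen_before:
        return Decision.ESCALATE
    # Routine action within established constraints: proceed automatically.
    return Decision.ALLOW


# Illustrative usage:
assert gate(Action(kind="payment", amount_usd=50)) is Decision.ALLOW
assert gate(Action(kind="payment", amount_usd=5_000)) is Decision.ESCALATE
assert gate(Action(kind="modify_own_policy")) is Decision.DENY
```

In this pattern the structural layer (the DENY branch) enforces invariants that no one can override, while ESCALATE routes only the genuinely ambiguous cases to humans, keeping review queues short enough that approvals stay meaningful.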
How Constellation handles this
Constellation combines structural enforcement with human-in-the-loop review to make governance sustainable: routine actions within constraints proceed automatically, while boundary cases and high-risk actions escalate for human review.