Human-in-the-Loop¶
Orion's LangGraph pipeline supports optional HITL (human-in-the-loop) interrupt gates after each processing node, allowing a human to review generated content and approve or reject it before the pipeline advances.
How HITL Works¶
When enable_hitl=True is passed to build_content_graph(), an interrupt gate node is inserted after each processing node:
graph TD
S["strategist_node"] --> SG["strategist_review\n(HITL Gate)"]
SG -->|approved| C["creator_node"]
SG -->|rejected| END1["END\n(pipeline fails)"]
C --> CG["creator_review\n(HITL Gate)"]
CG -->|approved| A["analyst_node"]
CG -->|rejected| END2["END"]
A --> AG["analyst_review\n(HITL Gate)"]
AG -->|"approved + iterations left"| S
AG -->|"rejected / max iterations"| END3["END"]
Each HITL gate uses LangGraph's interrupt() function to pause execution and present a review payload to the human reviewer.
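A gate node can be sketched as follows. This is illustrative, not Orion's actual code: the real interrupt() (from langgraph.types) pauses graph execution and surfaces the payload to the caller, but here it is stubbed with a canned decision so the example runs standalone.

```python
def interrupt(payload: dict) -> dict:
    """Stub for langgraph.types.interrupt - the real one pauses the graph
    and returns whatever value the pipeline is resumed with."""
    return {"approved": True, "feedback": None}


def strategist_review(state: dict) -> dict:
    """Sketch of the HITL gate inserted after strategist_node.
    All key names here are illustrative assumptions."""
    decision = interrupt({
        "stage": "strategist",
        "instruction": "Review the generated script and critique.",
        "script": state.get("script"),
        "critique": state.get("critique"),
    })
    # Record the decision; a conditional edge then routes on `approved`.
    return {
        "hitl_decisions": [{"stage": "strategist", **decision}],
        "strategist_approved": decision["approved"],
    }


result = strategist_review({"script": {"hook": "..."}, "critique": {"score": 0.85}})
```

On resume, the value passed back by the human (the approve/reject payload) becomes interrupt()'s return value, which is why the gate can route on it directly.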
Review Payloads¶
Strategist Review¶
Presented after script generation and self-critique:
{
"stage": "strategist",
"instruction": "Review the generated script and critique. Approve to proceed to visual prompt extraction, or reject with feedback.",
"script": {
"hook": "Did you know AI agents can now...",
"body": "The landscape of AI is changing...",
"cta": "Follow for more AI insights",
"visual_cues": ["futuristic cityscape", "robot hands typing"]
},
"critique": {
"score": 0.85,
"feedback": "Strong hook, body could be more specific..."
}
}
Creator Review¶
Presented after visual prompt extraction:
{
"stage": "creator",
"instruction": "Review the visual prompts. Approve to finalise content, or reject with feedback.",
"visual_prompts": {
"prompts": [
{ "scene": 1, "prompt": "Cinematic shot of futuristic cityscape..." },
{ "scene": 2, "prompt": "Close-up of robot hands typing..." }
]
},
"script_summary": {
"hook": "Did you know AI agents can now...",
"cta": "Follow for more AI insights"
}
}
Analyst Review¶
Presented after performance analysis:
{
"stage": "analyst",
"instruction": "Review the performance analysis and improvement suggestions. Approve to cycle back for improvements, or reject to finalise as-is.",
"performance_summary": "Content scores well on engagement...",
"improvement_suggestions": [
{ "type": "hook", "suggestion": "Make the opening more provocative" }
],
"analyst_score": 0.78,
"iteration_count": 0,
"max_iterations": 3
}
Resuming a Paused Pipeline¶
When a pipeline is paused at an HITL gate, resume it via the Director API:
# Approve
curl -X POST http://localhost:8000/api/v1/director/api/v1/content/resume \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{
"thread_id": "thread-uuid",
"approved": true
}'
# Reject with feedback
curl -X POST http://localhost:8000/api/v1/director/api/v1/content/resume \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{
"thread_id": "thread-uuid",
"approved": false,
"feedback": "Hook needs to be more engaging"
}'
# Approve
resp = httpx.post(
"http://localhost:8000/api/v1/director/api/v1/content/resume",
headers={"Authorization": f"Bearer {token}"},
json={"thread_id": "thread-uuid", "approved": True},
)
# Reject with feedback
resp = httpx.post(
"http://localhost:8000/api/v1/director/api/v1/content/resume",
headers={"Authorization": f"Bearer {token}"},
json={
"thread_id": "thread-uuid",
"approved": False,
"feedback": "Hook needs to be more engaging",
},
)
Decision Tracking¶
HITL decisions are accumulated in the hitl_decisions state key using LangGraph's Annotated[list, operator.add] reducer pattern:
# After multiple gates, state contains:
state["hitl_decisions"] = [
{"stage": "strategist", "approved": True, "feedback": None},
{"stage": "creator", "approved": True, "feedback": None},
{"stage": "analyst", "approved": False, "feedback": "Finalise as-is"},
]
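The reducer pattern can be demonstrated in isolation: LangGraph applies the reducer (here operator.add, which concatenates lists) to merge each node's partial update into the existing state, so each gate's single-item list is appended rather than overwriting earlier decisions. A minimal sketch:

```python
import operator
from typing import Annotated, TypedDict


class ContentState(TypedDict):
    # Annotated attaches the reducer; LangGraph calls it on every node return.
    hitl_decisions: Annotated[list, operator.add]


existing = [{"stage": "strategist", "approved": True, "feedback": None}]
update = [{"stage": "creator", "approved": True, "feedback": None}]

# What LangGraph does when a gate node returns {"hitl_decisions": update}:
merged = operator.add(existing, update)
```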
Feedback Loop¶
When the analyst HITL gate is approved:
- The graph checks iteration_count against max_iterations (default: 3)
- If iterations remain, the graph routes back to strategist_node
- The strategist receives improvement_suggestions from the analyst
- iteration_count is incremented
- A new script is generated incorporating the feedback
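The routing step above can be sketched as a conditional-edge function. Names and state keys are illustrative assumptions; "__end__" is the value of LangGraph's END sentinel:

```python
END = "__end__"  # value of langgraph.graph.END


def route_after_analyst_review(state: dict) -> str:
    """Sketch of the conditional edge after the analyst HITL gate."""
    approved = state.get("analyst_approved", False)
    iterations_left = state.get("iteration_count", 0) < state.get("max_iterations", 3)
    if approved and iterations_left:
        return "strategist_node"  # cycle back, carrying improvement_suggestions
    return END  # rejected, or max iterations reached: finalise as-is


route = route_after_analyst_review(
    {"analyst_approved": True, "iteration_count": 0, "max_iterations": 3}
)
```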