kaxil opened a new issue, #62872:
URL: https://github.com/apache/airflow/issues/62872
## Human-in-the-loop approval
Pause execution for human review of LLM-generated output before proceeding.
### What
This issue proposes two complementary patterns for human oversight of
AI-generated content (SQL, decisions, actions):
### Pattern A: Embedded HITL
Approval support built into any LLM operator via `require_approval=True`. The
task uses Airflow's deferral mechanism to pause until a human approves, so no
worker slot is held while waiting.
```python
quality_check = LLMDataQualityOperator(
    task_id="quality_analysis",
    prompt="Generate quality validation queries",
    require_approval=True,
    approval_timeout=timedelta(hours=2),
)
```
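Conceptually, the deferral-based pause is a poll loop run by the triggerer:
the task defers, and an async trigger periodically checks for a recorded
decision until one arrives or the timeout expires. A minimal stdlib sketch of
that loop, with an in-memory dict standing in for the metadata-DB approval
record (the store and function names are illustrative, not the proposed API):

```python
import asyncio
from datetime import datetime, timedelta, timezone

# Hypothetical stand-in for wherever approvals get recorded; in Airflow this
# would live in the metadata DB and be polled by the triggerer.
approvals: dict[str, str] = {}

async def wait_for_approval(
    task_key: str, timeout: timedelta, poll_interval: float = 0.01
) -> str:
    """Poll until a human records a decision, or the timeout expires."""
    deadline = datetime.now(timezone.utc) + timeout
    while datetime.now(timezone.utc) < deadline:
        decision = approvals.get(task_key)
        if decision is not None:
            return decision  # "approved" or "rejected"
        await asyncio.sleep(poll_interval)
    return "timed_out"

async def demo() -> str:
    waiter = asyncio.create_task(
        wait_for_approval("quality_analysis", timedelta(seconds=1))
    )
    await asyncio.sleep(0.05)
    approvals["quality_analysis"] = "approved"  # human clicks Approve in the UI
    return await waiter

# asyncio.run(demo()) -> "approved"
```

In the real implementation this loop would live in a `BaseTrigger.run()`
coroutine yielding a `TriggerEvent`, which is what lets thousands of paused
tasks wait without consuming worker slots.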
### Pattern B: Separate ApprovalOperator
A standalone operator placed between the generate and execute steps. Supports
`allow_modifications=True` so humans can edit the LLM output before approving.
```python
generate = LLMSQLQueryOperator(
    task_id="generate",
    prompt="...",
    llm_conn_id="...",
)
approve = ApprovalOperator(
    task_id="approve",
    body="{{ ti.xcom_pull(task_ids='generate') }}",
    allow_modifications=True,
    timeout=timedelta(hours=2),
)
execute = SQLExecuteQueryOperator(
    task_id="execute",
    sql="{{ ti.xcom_pull(task_ids='approve') }}",
)

generate >> approve >> execute
```
### Design Considerations
- Configurable timeout and escalation policies
- Integrates with Airflow's deferral mechanism (task defers, triggerer
monitors for approval)
- Approval UI in Airflow webserver (approve/reject/edit)
- Audit trail: who approved, when, what modifications were made
- AgentOperator can use `require_approval_for` to gate specific tool calls
(e.g., writes, sends)
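The `require_approval_for` idea amounts to a predicate check in the agent's
tool-dispatch path: gated tools block on a human decision, everything else
passes through. A hedged sketch, where the dispatch function, tool names, and
`request_approval` callback are all illustrative assumptions rather than the
proposed AgentOperator API:

```python
from typing import Any, Callable

def dispatch_tool_call(
    name: str,
    args: dict,
    tools: dict[str, Callable[..., Any]],
    require_approval_for: set[str],
    request_approval: Callable[[str, dict], bool],
) -> Any:
    """Run a tool call, pausing for human sign-off on gated tools."""
    if name in require_approval_for and not request_approval(name, args):
        raise PermissionError(f"tool call {name!r} rejected by reviewer")
    return tools[name](**args)

tools = {
    "query_table": lambda table: f"rows from {table}",        # read-only
    "send_email": lambda to, body: f"email queued for {to}",  # side effect
}

# Reads pass through unreviewed; writes/sends block on request_approval.
result = dispatch_tool_call(
    "query_table",
    {"table": "orders"},
    tools,
    require_approval_for={"send_email"},
    request_approval=lambda name, args: False,  # reviewer rejects everything
)
# result == "rows from orders" (the ungated call never reaches the reviewer)
```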
### Phase
Phase 4 (Governance)