terrynie123 opened a new issue #18621:
URL: https://github.com/apache/airflow/issues/18621
### Describe the issue with documentation
I have a DAG file like this:
```python
from airflow import DAG
from datetime import datetime, timedelta
import amqp
import json
from airflow.operators.bash import BashOperator
from airflow.utils.dates import days_ago
import time

args = {
    'owner': 'airflow',
    'depends_on_past': False,
    'start_date': datetime(2020, 2, 24),
    'email': ['[email protected]'],
    'email_on_failure': False,
    'email_on_retry': False,
    'retries': 1,
    'retry_delay': timedelta(minutes=5),
}

# Connect to the broker and declare the queue that carries DAG definitions.
conn = amqp.Connection(host="localhost:5672", userid="xxxx",
                       password="xxxx", virtual_host="/")
conn.connect()
channelx = conn.channel()
channelx.queue_declare(queue="airflow", durable=True)


def create_dag(dag_id, schedule_interval, command):
    with DAG(
        dag_id=dag_id,
        default_args=args,
        start_date=days_ago(2),
        schedule_interval=schedule_interval,
    ) as dag:
        task = BashOperator(
            task_id="sdynamic.test." + dag_id,
            bash_command=command,
            default_args=args,
            dag=dag,
        )
    return dag


def consumer(message):
    # Build a DAG from the message payload and register it in globals()
    # so the DAG file processor can discover it.
    _json = json.loads(message.body.decode('utf-8'))
    _dag = create_dag(_json["dag_id"], _json["schedule_interval"],
                      _json["command"])
    globals()[_json["dag_id"]] = _dag
    channelx.basic_ack(message.delivery_tag)


# Consume messages for up to 20 seconds on every parse of this file.
start_time = time.time()
channelx.basic_consume("airflow", callback=consumer)
while time.time() - start_time < 20:
    conn.drain_events()
conn.close()
```
This code creates the DAG successfully, but when the DAG is triggered it throws an exception:
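
For context (not the reporter's code): a DAG registered this way only exists in the interpreter that happened to consume the queue message, while the scheduler and workers re-parse the file independently. The usual parse-stable variant of dynamic DAG generation rebuilds the same DAGs on every parse from something all Airflow components can re-read, such as JSON files. A minimal sketch, assuming a hypothetical `CONFIG_DIR` of JSON files carrying the same `dag_id`/`schedule_interval`/`command` keys as the queue messages:

```python
# Sketch only: parse-stable dynamic DAGs from JSON files (hypothetical
# CONFIG_DIR). Every scheduler/worker parse rebuilds the same dag_ids,
# so a triggered DAG can always be resolved.
import glob
import json
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

CONFIG_DIR = "/opt/airflow/dag_configs"  # assumed path visible to all components

for path in glob.glob(f"{CONFIG_DIR}/*.json"):
    with open(path) as f:
        cfg = json.load(f)  # expects dag_id, schedule_interval, command keys

    with DAG(
        dag_id=cfg["dag_id"],
        schedule_interval=cfg["schedule_interval"],
        start_date=datetime(2020, 2, 24),
        catchup=False,
    ) as dag:
        BashOperator(
            task_id="sdynamic.test." + cfg["dag_id"],
            bash_command=cfg["command"],
        )

    globals()[cfg["dag_id"]] = dag
```

A small companion process outside the DAG file could then consume the AMQP queue and write these JSON files.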

### How to solve the problem
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of
Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)