xiaolonghuster commented on issue #51569:
URL: https://github.com/apache/airflow/issues/51569#issuecomment-3266123156

   > Hi,
   > 
   > I'm facing the same issue with Airflow 3.0.1 — tasks randomly stay in the 
queued state forever, even though the system has available resources and 
workers.
   > 
   > Environment:
   > 
   > * EXECUTOR: LocalExecutor
   > * AIRFLOW__CORE__EXECUTOR=LocalExecutor
   > * AIRFLOW__CORE__PARALLELISM=64
   > * AIRFLOW__CORE__DAG_CONCURRENCY=32
   > * AIRFLOW__CORE__MAX_ACTIVE_TASKS_PER_DAG=32
   > * AIRFLOW__DATABASE__SQL_ALCHEMY_CONN=...
   > * AIRFLOW__DATABASE__SQL_ALCHEMY_POOL_SIZE=20
   > * AIRFLOW__DATABASE__SQL_ALCHEMY_MAX_OVERFLOW=30
   > * AIRFLOW__SCHEDULER__SCHEDULER_HEARTBEAT_SEC=10
   > * AIRFLOW__SCHEDULER__TASK_INSTANCE_HEARTBEAT_SEC=5
   > * AIRFLOW__DAG_PROCESSOR__DAG_FILE_PROCESSOR_TIMEOUT=120
   > * AIRFLOW__LOGGING__LOGGING_LEVEL=DEBUG
   > 
   > Behavior:
   > 
   > Even with only 2–3 simple DAGs active (some running, others queued), tasks 
get stuck indefinitely in the queued state.
   > 
   > There’s no indication in the logs of why they’re not picked up.
   > 
   > I often need to manually kill or clear jobs for queued tasks to proceed.
   > 
   > There’s no CPU/memory pressure, and parallelism limits aren’t hit.
   
   @LeCastorFou I encountered the same problem. Did you manage to solve it?
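
   One thing that stands out in the posted settings (just a hypothesis, not a confirmed cause): `PARALLELISM=64` exceeds the scheduler's SQLAlchemy connection budget of `SQL_ALCHEMY_POOL_SIZE + SQL_ALCHEMY_MAX_OVERFLOW = 50`. Whether that headroom actually matters for stuck-queued tasks under LocalExecutor is an assumption on my part, but the arithmetic is easy to sanity-check. The `connection_headroom` helper below is purely illustrative, not part of Airflow:

   ```python
   # Hypothetical sanity check: compare the configured task parallelism
   # against the scheduler's maximum SQLAlchemy connections
   # (pool_size + max_overflow). A negative headroom means more worker
   # slots than DB connections, which is worth ruling out as a bottleneck.
   env = {
       "AIRFLOW__CORE__PARALLELISM": "64",
       "AIRFLOW__DATABASE__SQL_ALCHEMY_POOL_SIZE": "20",
       "AIRFLOW__DATABASE__SQL_ALCHEMY_MAX_OVERFLOW": "30",
   }

   def connection_headroom(cfg):
       parallelism = int(cfg["AIRFLOW__CORE__PARALLELISM"])
       pool = int(cfg["AIRFLOW__DATABASE__SQL_ALCHEMY_POOL_SIZE"])
       overflow = int(cfg["AIRFLOW__DATABASE__SQL_ALCHEMY_MAX_OVERFLOW"])
       return (pool + overflow) - parallelism

   print(connection_headroom(env))  # prints -14
   ```

   With the values above the headroom is -14, so bumping the pool settings (or lowering parallelism) and re-testing would at least eliminate one variable.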

