larsjeh opened a new issue #14744:
URL: https://github.com/apache/airflow/issues/14744


   
   **Apache Airflow version**: 2.0.1
   **Kubernetes version (if you are using kubernetes)** (use `kubectl version`):
   **Environment**:
   
   - **Cloud provider or hardware configuration**: AWS ECS
   - **OS** (e.g. from /etc/os-release): Debian GNU/Linux 9 (stretch)
   - **Kernel** (e.g. `uname -a`):  Linux 4d41276694bd 
4.14.62-65.117.amzn1.x86_64 #1 SMP Fri Aug 10 20:03:52 UTC 2018 x86_64 GNU/Linux
   - **Install tools**:
   - **Others**: Python 3.7
   
   **What happened**:
   This bug is comparable to #13151, but applies to the `shutdown` state instead of the `failed` state.
   
   The scheduler kept logging the following and refused to queue any tasks:
   
   `DAG <dag_name> already has 1 active runs, not queuing any tasks for run 2020-12-17 08:05:00+00:00`
   
   A bit of digging revealed that this DAG had task instances associated with it that were in the `shutdown` state; the DAG run containing them apparently never finished, so it kept counting as an active run. As soon as I forced those task instances into the `failed` state, the tasks were scheduled again.
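   
   Something along these lines (a rough sketch against the Airflow 2.0.x ORM; `<dag_name>` is a placeholder) shows the task instances stuck in `shutdown`:
   
   ```python
   # Rough sketch (Airflow 2.0.x): list task instances stuck in the "shutdown"
   # state for a given DAG. "<dag_name>" is a placeholder.
   from airflow.models import TaskInstance
   from airflow.utils.session import create_session
   from airflow.utils.state import State

   with create_session() as session:
       stuck = (
           session.query(TaskInstance)
           .filter(
               TaskInstance.dag_id == "<dag_name>",
               TaskInstance.state == State.SHUTDOWN,
           )
           .all()
       )
       for ti in stuck:
           print(ti.dag_id, ti.task_id, ti.execution_date, ti.state)
   ```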
   
   **What you expected to happen**:
   I expected the tasks for the new DAG run to be scheduled, because the DAG did not actually exceed `max_active_runs`.
   
   **How to reproduce it**:
   I think the simplest way to reproduce it is as follows:
   
   - Create a DAG and set `max_active_runs` to 1.
   - Ensure the DAG has run successfully a number of times, so that it has some history associated with it.
   - Set one historical task instance to the `shutdown` state by updating it directly in the DB (see the sketch after this list).
   - Trigger a new DAG run and observe that the scheduler refuses to queue its tasks.
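   
   For the third step, something like this should work (a sketch using Airflow's own session helpers rather than a raw SQL update; `<dag_name>` and `<task_name>` are placeholders):
   
   ```python
   # Rough sketch (Airflow 2.0.x): force one historical task instance into the
   # "shutdown" state; equivalent to updating the row directly in the DB.
   # "<dag_name>" and "<task_name>" are placeholders.
   from airflow.models import TaskInstance
   from airflow.utils.session import create_session
   from airflow.utils.state import State

   with create_session() as session:
       ti = (
           session.query(TaskInstance)
           .filter(
               TaskInstance.dag_id == "<dag_name>",
               TaskInstance.task_id == "<task_name>",
           )
           .order_by(TaskInstance.execution_date.desc())
           .first()
       )
       ti.state = State.SHUTDOWN  # session commits on context exit
   ```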
   
   
   **Anything else we need to know**:
   A workaround is to set the affected task instances to `failed`, which allows the scheduler to proceed.
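   
   A minimal sketch of that workaround (same assumptions as the snippets above; `<dag_name>` is a placeholder):
   
   ```python
   # Rough sketch (Airflow 2.0.x): flip all "shutdown" task instances of a DAG
   # to "failed" so the stale run stops counting towards max_active_runs.
   # "<dag_name>" is a placeholder.
   from airflow.models import TaskInstance
   from airflow.utils.session import create_session
   from airflow.utils.state import State

   with create_session() as session:
       session.query(TaskInstance).filter(
           TaskInstance.dag_id == "<dag_name>",
           TaskInstance.state == State.SHUTDOWN,
       ).update({TaskInstance.state: State.FAILED}, synchronize_session=False)
   ```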
   

