dennyac commented on issue #9610:
URL: https://github.com/apache/airflow/issues/9610#issuecomment-706394744


   We're experiencing the same issue across Operators/Sensors (Airflow 1.10.11, 
CeleryExecutor with a Redis backend).
   
   When two jobs for the same task get enqueued an hour apart, the first job 
continues to run but its logs don't appear in the UI. The second job completes 
immediately because its dependencies are not met (the first job, which is still 
running, is itself a dependency), and you see "Task is not able to be run" in 
the UI logs.
   
   If the first job completes (success or failure), the task status gets 
updated accordingly and the logs then appear in the Airflow UI.
   
   If the first job never completes (we've noticed cases where the job just 
hangs, and we're not sure how worker restarts affect this), the task attempt 
remains in the running state. In this scenario, **the task execution timeout 
isn't honored**, so the task can run for a very long time.
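   For reference, this is roughly how the timeout is configured on the task (a 
minimal sketch against the Airflow 1.10 API; the DAG and task names are 
illustrative, not our actual pipeline):

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python_operator import PythonOperator  # 1.10.x import path

dag = DAG(
    dag_id="example_dag",            # illustrative name
    start_date=datetime(2020, 1, 1),
    schedule_interval="@hourly",
)

task = PythonOperator(
    task_id="example_task",          # illustrative name
    python_callable=lambda: None,
    # Should fail the task after 30 minutes; per the behavior described
    # above, this is not enforced while the hung attempt sits in the
    # running state.
    execution_timeout=timedelta(minutes=30),
    dag=dag,
)
```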
   
   Unknowns - 
   - Why is the job being enqueued twice?
   - Why isn't Airflow honoring the task execution timeout in this scenario?
   
   The latter unknown is causing issues for us, as tasks end up running for 
hours and we have to intervene manually and restart the task.
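   The duplicate-enqueue behavior described above can be sketched as a toy 
simulation (this is not Airflow's actual scheduler code; all class and method 
names here are illustrative):

```python
from enum import Enum


class State(Enum):
    RUNNING = "running"
    SUCCESS = "success"


class TaskInstance:
    """Toy model of a task attempt; names are illustrative, not Airflow's."""

    def __init__(self):
        self.state = None
        self.log = []

    def run(self, already_running: bool) -> None:
        # Mirrors the reported behavior: a second copy of the same task
        # sees the first copy still running, so its dependency check fails
        # and it finishes immediately with "Task is not able to be run".
        if already_running:
            self.log.append("Task is not able to be run")
            self.state = State.SUCCESS  # completes immediately, does no work
        else:
            # First copy keeps running; its logs are not visible in the UI
            # until it reaches a terminal state.
            self.state = State.RUNNING


first = TaskInstance()
first.run(already_running=False)

second = TaskInstance()
second.run(already_running=True)

print(first.state)    # the first attempt is still running
print(second.log[0])  # the message seen in the UI logs
```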
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]
