vivek-zeta opened a new issue, #24731:
URL: https://github.com/apache/airflow/issues/24731

   ### Apache Airflow version
   
   2.2.2
   
   ### What happened
   
   We are using the Celery executor with Redis as the broker, with default Celery settings.
   We are testing the following failure scenario:
   - What happens when the Redis pod or a worker pod goes down?
   
   Observations:
   - We killed the Redis pod while one task was in the `queued` state.
   - The task that was in the `queued` state stayed `queued` even after the Redis pod came back up.
   - From the Airflow UI we tried clearing the task and running it again, but it still got stuck in the `queued` state.
   - The task message was received by the Celery worker, but the worker did not start executing the task.
   - Please let us know if there is any Celery or Airflow configuration we can change to avoid this issue.
   
   
   - Please help here to avoid this case, as it would be very critical if it happened in production.
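   As a possible mitigation to experiment with (an assumption on our side, not a confirmed fix), the settings most often tuned for stuck-queued tasks on a Redis broker are the broker `visibility_timeout` and the scheduler's `task_adoption_timeout`. A minimal sketch via environment variables, with example values:

   ```shell
   # Sketch only: candidate settings to experiment with. Values are examples,
   # not verified defaults for this failure mode.

   # How long a delivered-but-unacknowledged message stays invisible in Redis
   # before Celery re-queues it ([celery_broker_transport_options] visibility_timeout).
   export AIRFLOW__CELERY_BROKER_TRANSPORT_OPTIONS__VISIBILITY_TIMEOUT=21600

   # How long (seconds) the scheduler waits before resetting tasks it adopted
   # from a dead executor/worker ([celery] task_adoption_timeout).
   export AIRFLOW__CELERY__TASK_ADOPTION_TIMEOUT=600
   ```

   In a Helm-based deployment these would go into the chart's `env`/extra-environment values rather than a shell session.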
   
   ### What you think should happen instead
   
   The task must not get stuck in the `queued` state; it should start executing.
   
   ### How to reproduce
   
   While a task is in the `queued` state, kill the Redis pod.
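   On Kubernetes the reproduction step can be sketched as below; the namespace and pod label selector are assumptions that depend on the Helm chart in use:

   ```shell
   # Hypothetical names -- adjust to your Helm chart's namespace and labels.
   NS=airflow
   REDIS_SELECTOR="app=redis"

   # With a DAG triggered and a task sitting in the `queued` state,
   # print the command that kills the Redis broker pod.
   # Drop the leading `echo` to actually run it against your cluster.
   echo kubectl -n "$NS" delete pod -l "$REDIS_SELECTOR"
   ```

   After the Redis pod is recreated, the task remains `queued` indefinitely.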
   
   ### Operating System
   
   k8s
   
   ### Versions of Apache Airflow Providers
   
   _No response_
   
   ### Deployment
   
   Other 3rd-party Helm chart
   
   ### Deployment details
   
   _No response_
   
   ### Anything else
   
   _No response_
   
   ### Are you willing to submit PR?
   
   - [ ] Yes I am willing to submit a PR!
   
   ### Code of Conduct
   
   - [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
