Is there any indication of which Celery version this is fixed in?

I've just started using Airflow, and after setting a task's start date to the
start of the year I have 7 tasks that are stuck in the queued state and don't
seem to be running anywhere. (I suspect that if I restarted the airflow worker
process on the only node, these might come back?)
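
In case it's useful, here is a rough sketch of how the stuck task instances
could be inspected through the metadata database (using the Airflow 1.8 models;
resetting the state so the scheduler re-queues them is only a guess at a
workaround, not something confirmed in this thread):

    from airflow import settings
    from airflow.models import TaskInstance
    from airflow.utils.state import State

    # List task instances currently sitting in the "queued" state.
    session = settings.Session()
    stuck = (
        session.query(TaskInstance)
        .filter(TaskInstance.state == State.QUEUED)
        .all()
    )
    for ti in stuck:
        print(ti.dag_id, ti.task_id, ti.execution_date)

    # Guess at a workaround: clear the state so the scheduler can pick
    # the tasks up again (uncomment to actually apply it).
    # for ti in stuck:
    #     ti.state = State.NONE
    #     session.merge(ti)
    # session.commit()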

(Using Celery 4.0.2, redis 2.10.5 and Airflow 1.8.1)
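
For anyone following along, the Celery 4 settings that govern how aggressively
workers prefetch messages (which seems related to the bug Dan mentions below)
look like this. This is only a sketch with a placeholder broker URL, not a
confirmed fix for that bug:

    from celery import Celery

    # Placeholder broker URL for illustration.
    app = Celery("tasks", broker="redis://localhost:6379/0")

    app.conf.update(
        # Fetch at most one message per worker process instead of the
        # default multiplier of 4.
        worker_prefetch_multiplier=1,
        # Acknowledge only after the task completes, so unacked messages
        # are redelivered if a worker dies.
        task_acks_late=True,
    )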

> On 13 Jul 2017, at 21:58, Dan Davydov <[email protected]> wrote:
> 
> Airbnb switched to running the scheduler without restarts about a month
> ago, and it is working well. The one issue you might hit is that if you
> are using a version of Celery with a bug (where workers will pull tasks
> off the queue even if they don't have space to run them), tasks can
> potentially be lost once they are sent to your Celery backend. With
> restarts these tasks are automatically resent.
> 
> On Thu, Jul 13, 2017 at 1:38 PM, manish ranjan <[email protected]>
> wrote:
> 
>> Hi All,
>> 
>> We have recently started to use airflow to make our data engineering more
>> robust.
>> 1. I wanted to know whether there are good practices around scheduling a
>> daily/bi-weekly/weekly restart of the airflow server and scheduler? We are
>> using just a single instance, not a cluster.
>> 2. Also wanted to know what troubles I may run into, if someone is willing
>> to share based on experience.
>> 
>> ~Manish
>> 
