In my testing of 1.10.4rc3, I discovered that we were getting hit by a
process leak bug (which Ash has since fixed in 1.10.4rc4). This process
leak had minimal impact for most users, but was exacerbated in our case by
using "run_duration" to restart the scheduler every 10 minutes.

To mitigate that issue while remaining on the RC, we removed the use of
"run_duration", since it has been removed on master anyway:
https://github.com/apache/airflow/blob/master/UPDATING.md#remove-run_duration

Unfortunately, testing on our cluster (1.10.4rc3 with "run_duration"
removed) has revealed that while the process leak issue was mitigated,
we're now facing issues with orphaned tasks. These tasks are marked as
"scheduled" by the scheduler, but _not_ successfully queued in Celery,
even after multiple scheduler loops. Around 24h after the last restart,
we've accumulated enough stuck tasks that the system starts paging and
requires a manual restart.
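
In case it helps, here is a rough sketch of how such tasks can be
surfaced from the metadata DB with the 1.10 ORM (illustrative only, not
the exact check we run):

    from airflow.models import TaskInstance
    from airflow.utils.db import create_session
    from airflow.utils.state import State

    # List task instances still sitting in SCHEDULED; on a healthy
    # scheduler these should be picked up and queued within a loop or two.
    with create_session() as session:
        stuck = (
            session.query(TaskInstance)
            .filter(TaskInstance.state == State.SCHEDULED)
            .all()
        )
        for ti in stuck:
            print(ti.dag_id, ti.task_id, ti.execution_date)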

This specific issue, rather than generic "scheduler instability", was one
of the reasons we'd originally added the scheduler restart. But it appears
that on master, the orphaned task detection code still only runs on
scheduler start, despite "run_duration" having been removed:
https://github.com/apache/airflow/blob/master/airflow/jobs/scheduler_job.py#L1328
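
To make the PR idea concrete, here is a rough sketch (hypothetical code,
not a patch against the actual scheduler; the loop helpers are
placeholders): run the existing orphan-reset call on a configurable
interval inside the scheduler loop rather than only at startup.

    import time

    ORPHAN_CHECK_INTERVAL = 300.0  # hypothetical tunable, in seconds

    def scheduler_loop(scheduler_job, keep_running, do_scheduling_pass):
        # keep_running / do_scheduling_pass are placeholders for the real
        # loop condition and per-loop scheduling work in SchedulerJob.
        last_check = time.time()
        while keep_running():
            do_scheduling_pass()
            if time.time() - last_check >= ORPHAN_CHECK_INTERVAL:
                # Today this call happens only once, on scheduler start.
                scheduler_job.reset_state_for_orphaned_tasks()
                last_check = time.time()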

Rather than immediately filing an issue, I wanted to inquire a bit more
about why this orphan detection code is only run on scheduler start,
whether it would be safe to send a PR that runs it more often (maybe
behind a tunable parameter?), and whether there's some other configuration
issue with Celery (in our case, backed by AWS ElastiCache) that could
cause us to see orphaned tasks this frequently.
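
For completeness, these are the kinds of Celery settings we've been
double-checking (an airflow.cfg sketch with illustrative values, and I may
be off on the exact section names). In particular, with a Redis-style
broker such as ElastiCache, my understanding is that unacknowledged tasks
are redelivered after the broker's visibility_timeout, so we want that
comfortably longer than our longest-running task:

    [celery]
    broker_url = redis://<our-elasticache-endpoint>:6379/0

    [celery_broker_transport_options]
    # Redis broker visibility timeout, in seconds.
    visibility_timeout = 21600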
