[
https://issues.apache.org/jira/browse/AIRFLOW-1249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Dan Davydov updated AIRFLOW-1249:
---------------------------------
Description:
Task instances launched by a backfill whose conditions are unmet get stuck in the
running or queued state. For example: create a DAG and pause it, mark the dagrun
for a specific date as failed in the UI, and then start a backfill for that date.
The resulting task instance stays in the running or queued state with no
start/end date set. I believe this occurs with the Celery executor, but it may
affect other or all executors.
Among other problems, this can cause pools to fill up with slots held by task
instances that never complete.
The following query (MySQL) cleans up the stuck task instances as a workaround:
{code}
-- MySQL: remove task instances with no start date that are stuck in the
-- queued/running state for failed dag runs of paused DAGs.
DELETE ti FROM task_instance ti
JOIN dag_run dr
  ON ti.execution_date = dr.execution_date
  AND ti.dag_id = dr.dag_id
JOIN dag dg
  ON dr.dag_id = dg.dag_id
WHERE ti.start_date IS NULL
  AND (ti.state = 'queued' OR ti.state = 'running')
  AND dr.state = 'failed'
  AND dg.is_paused = 1;
{code}
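Before running the DELETE, the affected rows can be previewed with a SELECT that
uses the same joins and filters (a sketch assuming the same MySQL metadata
tables as above):
{code}
-- Preview the task instances the DELETE above would remove.
SELECT ti.dag_id, ti.task_id, ti.execution_date, ti.state
FROM task_instance ti
JOIN dag_run dr
  ON ti.execution_date = dr.execution_date
  AND ti.dag_id = dr.dag_id
JOIN dag dg
  ON dr.dag_id = dg.dag_id
WHERE ti.start_date IS NULL
  AND (ti.state = 'queued' OR ti.state = 'running')
  AND dr.state = 'failed'
  AND dg.is_paused = 1;
{code}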
was:
Task instances launched by a backfill whose conditions are unmet get stuck in the
running or queued state. For example: create a DAG and pause it, mark the dagrun
for a specific date as failed in the UI, and then start a backfill for that date.
The resulting task instance stays in the running or queued state with no
start/end date set. I believe this occurs with the Celery executor, but it may
affect other or all executors.
Among other problems, this can cause pools to fill up with slots held by task
instances that never complete.
> Running tasks from backfills with unmet conditions are stuck running
> --------------------------------------------------------------------
>
> Key: AIRFLOW-1249
> URL: https://issues.apache.org/jira/browse/AIRFLOW-1249
> Project: Apache Airflow
> Issue Type: Bug
> Reporter: Dan Davydov
>
> Task instances launched by a backfill whose conditions are unmet get stuck in
> the running or queued state. For example: create a DAG and pause it, mark the
> dagrun for a specific date as failed in the UI, and then start a backfill for
> that date. The resulting task instance stays in the running or queued state
> with no start/end date set. I believe this occurs with the Celery executor,
> but it may affect other or all executors.
> Among other problems, this can cause pools to fill up with slots held by task
> instances that never complete.
> The following query (MySQL) cleans up the stuck task instances as a workaround:
> {code}
> -- MySQL: remove task instances with no start date that are stuck in the
> -- queued/running state for failed dag runs of paused DAGs.
> DELETE ti FROM task_instance ti
> JOIN dag_run dr
>   ON ti.execution_date = dr.execution_date
>   AND ti.dag_id = dr.dag_id
> JOIN dag dg
>   ON dr.dag_id = dg.dag_id
> WHERE ti.start_date IS NULL
>   AND (ti.state = 'queued' OR ti.state = 'running')
>   AND dr.state = 'failed'
>   AND dg.is_paused = 1;
> {code}
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)