We literally have a cron job that restarts the scheduler every 30 minutes. The
num_runs setting didn't work consistently in rc4: sometimes the scheduler would
restart itself, and sometimes we'd end up with a few zombie scheduler processes
and things would get stuck. We're also running locally, without Celery.
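A minimal sketch of the kind of crontab entry described above — the paths, log location, and use of pkill are illustrative assumptions, not the actual setup:

```shell
# Illustrative only: every 30 minutes, kill any running scheduler and start a
# fresh one. The paths and log location are placeholders.
*/30 * * * * pkill -f "airflow scheduler"; sleep 5; nohup airflow scheduler >> /var/log/airflow/scheduler.log 2>&1 &
```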
On Mar 24, 2017 16:02
dag (with the little
tool-tip saying it's not in the DagBag but is in the metadata). Looking at
the airflow DB, it indeed shows is_active=t.
On Tue, Mar 21, 2017 at 4:28 PM, Vijay Ramesh wrote:
> Not sure if this is actually worth filing a bug report about, but we were
> running the 1.8.0-rc4 release.
Not sure if this is actually worth filing a bug report about, but we were
running the 1.8.0-rc4 release. This past Friday I upgraded to the -rc5
release (after the -rc4 package got pulled and was breaking our Chef
cookbooks). Everything went smoothly, but a handful of DAGs reappeared in
the web-server UI.
Commented on your Jira ticket, but the instances do exist in the
task_instance table; they just have a nested dag_id.
Main dag_id (where the subdag operator task_instance shows up):
airflow=> \x
airflow=> select * from task_instance where dag_id =
'signatures_by_country' order by execution_date desc limit 1;
yesterday's was still technically "running").
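For reference, assuming the subdag follows Airflow's dotted parent.child naming convention for dag_ids, a query along these lines should surface the nested instances (the subdag task name here is a placeholder, not the real task_id):

```sql
-- Subdag task instances are stored under a dotted dag_id
-- ("parent_dag.subdag_task_id"); 'some_subdag_task' is a placeholder.
select dag_id, task_id, execution_date, state
from task_instance
where dag_id = 'signatures_by_country.some_subdag_task'
order by execution_date desc;
```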
Any thoughts/advice? (I also added
https://github.com/apache/incubator-airflow/pull/2109 to fix the formatting
of that error message.)
Thanks,
- Vijay Ramesh
I can figure, with tasks
backing up on pools, it never even gets to queue them but marks the dag run
itself as failed? Has anybody experienced this, or does anyone have any ideas
about what I might be doing wrong?
Thanks!
- Vijay Ramesh
have no task instances
created, and show the "Dag runs are deadlocked" message in the scheduler
log?
And second, is it correct that backfilling a DAG with a subdag won't
actually create task instances for the subdag?
Thanks!
- Vijay Ramesh