trlopes1974 commented on issue #39717: URL: https://github.com/apache/airflow/issues/39717#issuecomment-2270628845
just got one...

| Dagrun Running | Task instance's dagrun was not in the 'running' state but in the state 'failed'. |
| -- | -- |
| Task Instance State | Task is in the 'failed' state. |

The external_executor_id `a01b358f-ad34-4b16-81b9-fd69218f613e` does not exist in Flower / Celery.

Look at the timestamps in the logs: there is a gap of 10 minutes between the start (dummy task) and the `dispatch_restores` task. This behaviour is recurrent (the 10-minute gap).

And in the task log, `attempt=1.log.SchedulerJob.log`:

```
(tkapp) ttauto@slautop02$ cat attempt\=1.log.SchedulerJob.log
[2024-08-05T21:02:15.585+0100] {event_scheduler.py:40} WARNING - Marking task instance <TaskInstance: CSDISPATCHER_OTHERS.dispatch_restores scheduled__2024-08-05T19:47:00+00:00 [queued]> stuck in queued as failed. If the task instance has available retries, it will be retried.
[2024-08-05T21:02:16.750+0100] {scheduler_job_runner.py:843} ERROR - Executor reports task instance <TaskInstance: CSDISPATCHER_OTHERS.dispatch_restores scheduled__2024-08-05T19:47:00+00:00 [queued]> finished (failed) although the task says it's queued. (Info: None) Was the task killed externally?
```
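To measure the recurring gap precisely, one option is to parse the leading `[ISO-8601]` timestamps out of the Airflow log lines and subtract them. A minimal sketch with the standard library (the "start" log line below is a hypothetical example for illustration; only the second line comes from the log above):

```python
from datetime import datetime

def parse_log_ts(line: str) -> datetime:
    """Extract the leading [2024-08-05T21:02:15.585+0100]-style timestamp
    from an Airflow log line and parse it as a timezone-aware datetime."""
    raw = line.split("]", 1)[0].lstrip("[")
    return datetime.strptime(raw, "%Y-%m-%dT%H:%M:%S.%f%z")

# Hypothetical "start" line, placed 10 minutes before the real WARNING line:
start_line = "[2024-08-05T20:52:15.585+0100] {taskinstance.py:1} INFO - start (dummy task)"
warn_line = ("[2024-08-05T21:02:15.585+0100] {event_scheduler.py:40} WARNING - "
             "Marking task instance ... stuck in queued as failed.")

gap = parse_log_ts(warn_line) - parse_log_ts(start_line)
print(gap.total_seconds() / 60)  # gap in minutes between the two events
```

Running this over the actual `start` and `dispatch_restores` lines would confirm whether the gap is consistently near a configured timeout (which would point at the scheduler's stuck-in-queued handling rather than random worker loss).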
