GitHub user rcrchawla added a comment to the discussion: Airflow task failed 
but spark kube app is running

Hi @shaealh,
This is the only error I see in the API server logs; there is no such operational error.
One thing I did notice is in the worker logs. This is the task:
2026-03-10 02:14:46.785312 [info     ] [0e6ea763-20c8-458c-bfe0-5660de056a59] 
Executing workload in Celery: token='eyJ***' 
ti=TaskInstance(id=UUID('019cd54c-28b0-7e18-9a7b-71ba469bf545'), 
task_id='stg_intl_user_useroptins.get_count', dag_id='staging-userservice', 
run_id='manual__2026-03-10T00:00:00+00:00', try_number=1, map_index=-1, 
pool_slots=1, queue='default', priority_weight=96, executor_config=None, 
parent_context_carrier={}, context_carrier={}, queued_dttm=None) 
dag_rel_path=PurePosixPath('staging_userservice.py') 
bundle_info=BundleInfo(name='dags-folder', version=None) 
log_path='dag_id=staging-userservice/run_id=manual__2026-03-10T00:00:00+00:00/task_id=stg_intl_user_useroptins.get_count/attempt=1.log'
 type='ExecuteTask' [airflow.providers.celery.executors.celery_executor_utils]

2026-03-10 02:44:02.200313 [warning  ] Failed to send heartbeat. Will be 
retried [supervisor] failed_heartbeats=1 max_retries=3 
ti_id=UUID('019cd54c-28b0-7e18-9a7b-71ba469bf545')
╭─────────────────── Traceback (most recent call last) ───────────────────╮
│ /home/airflow/.local/lib/python3.12/site-packages/httpx/_transports/default.py:101 │
│ in map_httpcore_exceptions                                               │

These are the worker logs, where we see the error at exactly 02:44 AM.
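
The truncated traceback originates in httpx's transport layer, which points at worker-to-API-server connectivity rather than the task itself. As a sanity check (my own sketch, not from the logs above; the service URL and the /health path are assumptions for this deployment), a plain httpx request run from inside the worker pod should reproduce the same class of error if that path is broken:

import httpx

# Placeholder address; substitute the real api-server service for this deployment.
API_SERVER_URL = "http://airflow-api-server.de-services.svc.cluster.local:8080"

try:
    # "/health" is an assumption; adjust to whatever endpoint the deployment exposes.
    resp = httpx.get(f"{API_SERVER_URL}/health", timeout=10.0)
    print(resp.status_code, resp.text[:200])
except httpx.TransportError as exc:
    # The heartbeat failure above surfaced in httpx's transport layer, so a
    # broken worker -> api-server path should raise the same error class here.
    print(f"transport error: {exc!r}")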

API server logs at 02:45 AM UTC:
2026-03-10 02:45:23 [debug    ] Processing heartbeat           
hostname=airflow-worker-1.airflow-worker.de-services.svc.cluster.local 
pid=155023 ti_id=019cd518-d7c9-7e7e-bde2-efc6322e36a3
[2026-03-10T02:45:23.575+0000] {exceptions.py:77} ERROR - Error with id 9zBmdizJ
This is the same error I shared in the earlier thread.
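Since the API server prints only the opaque "Error with id 9zBmdizJ" reference, it may be worth searching every API server replica's logs around that timestamp for the ID. A minimal sketch with the official kubernetes client (the namespace and label selector are assumptions for this deployment, not confirmed values):

from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when run in-cluster
v1 = client.CoreV1Api()

# Assumed namespace and label selector; adjust to the actual deployment labels.
pods = v1.list_namespaced_pod(namespace="de-services",
                              label_selector="component=api-server")
for pod in pods.items:
    log = v1.read_namespaced_pod_log(
        name=pod.metadata.name, namespace="de-services", since_seconds=3600
    )
    for line in log.splitlines():
        if "9zBmdizJ" in line:
            print(pod.metadata.name, line)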

The Airflow task logs show only the following; there is no error:
[2026-03-08, 08:16:29] INFO - Waiting for container 'spark-kubernetes-driver' state to be completed: source="airflow.providers.cncf.kubernetes.utils.pod_manager.PodManager"
[2026-03-08, 08:16:30] INFO - Waiting for container 'spark-kubernetes-driver' state to be completed: source="airflow.providers.cncf.kubernetes.utils.pod_manager.PodManager"
[... the same message repeats once per second ...]
[2026-03-08, 08:16:46] INFO - Waiting for container 'spark-kubernetes-driver' state to be completed: source="airflow.providers.cncf.kubernetes.utils.pod_manager.PodManager"
After that, the task failed with no error recorded in the logs.
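
For reference, that "Waiting for container ... state to be completed" loop in PodManager is just polling the driver pod's container status until it reports a terminated state. A rough equivalent check with the kubernetes client (the pod name below is a placeholder; the namespace is assumed from the worker hostname) can confirm whether the driver container kept running after Airflow marked the task failed:

from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when run in-cluster
v1 = client.CoreV1Api()

# Placeholder pod name; use the actual Spark driver pod for the failed task.
pod = v1.read_namespaced_pod(name="your-spark-driver-pod", namespace="de-services")
for status in pod.status.container_statuses or []:
    if status.name == "spark-kubernetes-driver":
        # PodManager keeps waiting until this reports 'terminated';
        # 'running' or 'waiting' keeps the loop above alive.
        print(status.name, status.state)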

GitHub link: 
https://github.com/apache/airflow/discussions/63298#discussioncomment-16092145
