GitHub user BCantos17 edited a discussion: [CNCF_Kubernetes] Does restarting an Airflow task on SparkKubernetesOperator kill the currently running Spark task?

I'm using the CNCF Kubernetes provider with the [SparkKubernetesOperator](https://github.com/apache/airflow/blob/main/providers/cncf/kubernetes/src/airflow/providers/cncf/kubernetes/operators/spark_kubernetes.py#L46), and I've noticed that whenever I restart a task, the current Spark job keeps running. This isn't ideal, but I'm not entirely sure what the issue is. I see the Spark operator does have a [delete job function](https://github.com/apache/airflow/blob/main/providers/cncf/kubernetes/src/airflow/providers/cncf/kubernetes/operators/custom_object_launcher.py#L350), but I'm assuming this needs to be called by the Airflow task, so how can I get the task to call it when I restart the task? Any insights would be greatly appreciated.
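
For reference, here is a minimal sketch of what deleting the SparkApplication by hand looks like with the Kubernetes Python client, which is the same kind of call the launcher's delete function ultimately makes. The namespace and application name are placeholders for my setup, and `sparkoperator.k8s.io`/`v1beta2` is the usual Spark operator API group/version, so these would need checking against your cluster:

```python
# Sketch: delete a leftover SparkApplication custom object by hand.
# "spark-jobs" and "spark-pi" are hypothetical names for my setup.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in-cluster
api = client.CustomObjectsApi()
api.delete_namespaced_custom_object(
    group="sparkoperator.k8s.io",  # Spark operator CRD group (verify on your install)
    version="v1beta2",
    namespace="spark-jobs",
    plural="sparkapplications",
    name="spark-pi",
)
```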

Edit: looking into the Airflow logs, I see this:

> failed to create containerd task: failed to create shim task: OCI runtime 
> create failed: runc create failed: unable to start container process: unable 
> to apply cgroup configuration: unable to start unit "cri-containerd-<some 
> hash>.scope

Seems like a Kubernetes issue, but I'm still not clear on what's going on.
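
To check whether clearing the task actually leaves the custom object behind, I'm listing the SparkApplication objects directly (again, the namespace is a placeholder for mine):

```python
# Sketch: list SparkApplication objects to check for orphans after a restart.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()
apps = api.list_namespaced_custom_object(
    group="sparkoperator.k8s.io",
    version="v1beta2",
    namespace="spark-jobs",  # hypothetical namespace
    plural="sparkapplications",
)
for item in apps.get("items", []):
    state = item.get("status", {}).get("applicationState", {}).get("state")
    print(item["metadata"]["name"], state)
```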



GitHub link: https://github.com/apache/airflow/discussions/47916
