auyer commented on issue #37681:
URL: https://github.com/apache/airflow/issues/37681#issuecomment-1984210977
I had a different take.
In my case, I only allow one task run at a time. Sometimes after a failure, even
when Airflow considered the task to be in an error state, the Spark application
would still live on.
One thing I did to deal with this is to force-delete the old Spark
application with the same name. Airflow always generates the same
SparkApplication name, so I run this code before every execution (with a
PythonOperator).
```python
from kubernetes import client, config
from kubernetes.client.rest import ApiException


def delete_spark_application_by_name(name, **context):
    # Prefer in-cluster config; fall back to a local kubeconfig.
    try:
        config.load_incluster_config()
    except config.ConfigException:
        config.load_kube_config()

    coa = client.CustomObjectsApi()
    group = "sparkoperator.k8s.io"
    version = "v1beta2"
    namespace = "spark-operator"
    plural = "sparkapplications"

    try:
        # Check whether a SparkApplication with this name is still around.
        coa.get_namespaced_custom_object(group, version, namespace, plural, name)
    except ApiException as e:
        if e.status == 404:
            return  # Nothing left over from a previous run.
        raise

    # Delete the leftover SparkApplication so the new run can start cleanly.
    coa.delete_namespaced_custom_object(group, version, namespace, plural, name)
```
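
For context, a minimal sketch of how such a cleanup task could be wired in front of the Spark submission. The DAG id, task ids, schedule, and application name here are hypothetical, not from the original setup:

```python
# Hypothetical DAG wiring: run the cleanup callable before submitting Spark.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

with DAG(
    dag_id="spark_job",  # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule=None,
    catchup=False,
) as dag:
    cleanup = PythonOperator(
        task_id="delete_stale_spark_application",
        python_callable=delete_spark_application_by_name,
        op_kwargs={"name": "my-spark-app"},  # hypothetical application name
    )

    # cleanup >> submit_spark_application  # e.g. a SparkKubernetesOperator task
```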