SameerMesiah97 commented on code in PR #61110:
URL: https://github.com/apache/airflow/pull/61110#discussion_r2755860968
##########
providers/cncf/kubernetes/src/airflow/providers/cncf/kubernetes/operators/spark_kubernetes.py:
##########
@@ -248,23 +248,33 @@ def find_spark_job(self, context, exclude_checked: bool = True):
             self._build_find_pod_label_selector(context, exclude_checked=exclude_checked)
+            + ",spark-role=driver"
         )
-        pod_list = self.client.list_namespaced_pod(self.namespace, label_selector=label_selector).items
+        # since we did not specify a resource version, we make sure to get the latest data
+        # we make sure we get only running or pending pods.
+        field_selector = self._get_field_selector()
+        pod_list = self.client.list_namespaced_pod(
+            self.namespace, label_selector=label_selector, field_selector=field_selector
+        ).items
         pod = None
         if len(pod_list) > 1:
             # When multiple pods match the same labels, select one deterministically,
-            # preferring a Running pod, then creation time, with name as a tie-breaker.
+            # preferring Succeeded, then Running (while not in terminating) then Pending pod
+            # as if another pod was created, it will be in either the terminating status or a terminal phase,
+            # if it is in terminating, it will have a deletion_timestamp on the pod.
+            # pending pods need to also be selected, as what if a driver pod just failed and a new pod is
+            # created, we do not want the task to fail.
Review Comment:
I would tighten this comment like this:
```
# When multiple pods match the same labels, select one deterministically.
# Prefer Succeeded, then Running (excluding terminating), then Pending.
# Terminating pods can be identified via deletion_timestamp.
# Pending pods are included to handle recent driver restarts without failing the task.
```
Right now, it sounds a bit unclear and rambly.
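For anyone skimming the thread, a rough sketch of the ordering being described might look like the following. This is illustrative only, not the PR's actual code: the helper name is made up, and the creation-time/name tie-break is taken from the old comment being replaced.
```python
from __future__ import annotations

from kubernetes.client import V1Pod


def _select_driver_pod(pod_list: list[V1Pod]) -> V1Pod | None:
    """Illustrative only: pick one driver pod deterministically from label matches."""

    def rank(pod: V1Pod) -> tuple:
        phase = pod.status.phase
        # A pod being deleted carries a deletion_timestamp, so treat it as terminating.
        terminating = pod.metadata.deletion_timestamp is not None
        if phase == "Succeeded":
            preference = 0
        elif phase == "Running" and not terminating:
            preference = 1
        elif phase == "Pending":
            preference = 2
        else:
            preference = 3  # terminating pods and other phases rank last
        # Tie-break by creation time, then name, so the choice is deterministic.
        return (preference, pod.metadata.creation_timestamp, pod.metadata.name)

    return min(pod_list, key=rank, default=None)
```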