holdenk commented on a change in pull request #27568: [SPARK-30821][K8S] Handle container failure in executor pods with multiple containers
URL: https://github.com/apache/spark/pull/27568#discussion_r389207127
##########
File path:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/ExecutorPodsSnapshot.scala
##########
@@ -59,6 +60,14 @@ object ExecutorPodsSnapshot extends Logging {
case "pending" =>
PodPending(pod)
case "running" =>
+ // A pod will be considered running as long as there is at least one running container,
+ // so we need to check if there are any failed containers.
+ if (pod.getSpec.getRestartPolicy.equals("Never") &&
+ pod.getStatus.getContainerStatuses.stream
+ .map[ContainerStateTerminated](cs => cs.getState.getTerminated)
+ .anyMatch(t => t != null && t.getExitCode != 0)) {
+ PodFailed(pod)
+ }
PodRunning(pod)
Review comment:
So there is no return of PodFailed; it will just get computed, thrown away, and replaced with PodRunning. Does the unit test fail without this change?
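A minimal standalone illustration of the point (hand-written for this comment, not the Spark code path): a Scala `if` with no `else` is evaluated only for its side effects, so its value is discarded and the enclosing block yields its last expression, which in the quoted branch is always `PodRunning(pod)`. The names below (`Status`, `classify`) are made up for the sketch.

```scala
object IfWithoutElseDemo {
  case class Status(name: String)

  def classify(exitCode: Int): Status = {
    // Mirrors the quoted branch: the `if` has no `else`, so this Status is
    // constructed and then immediately discarded.
    if (exitCode != 0) {
      Status("failed")
    }
    // The last expression of the block is what the method actually returns.
    Status("running")
  }

  def main(args: Array[String]): Unit = {
    println(classify(1)) // prints Status(running), even for a non-zero exit code
  }
}
```

Folding both outcomes into a single expression, e.g. `if (...) PodFailed(pod) else PodRunning(pod)`, would make the failed status the value of the `case "running"` arm, which is presumably what the change intends.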