Marcelo Masiero Vanzin created SPARK-29950:
----------------------------------------------

             Summary: Deleted excess executors can connect back to driver in 
K8S with dyn alloc on
                 Key: SPARK-29950
                 URL: https://issues.apache.org/jira/browse/SPARK-29950
             Project: Spark
          Issue Type: Improvement
          Components: Kubernetes
    Affects Versions: 3.0.0
            Reporter: Marcelo Masiero Vanzin


{{ExecutorPodsAllocator}} currently has code to delete excess pods that the K8S 
server hasn't started yet and that aren't needed anymore because of downscaling.
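
For illustration, here is a minimal, self-contained sketch of that downscaling 
behavior; {{PendingPodTracker}}, {{downscaleTo}} and {{deletePod}} are hypothetical 
names used only for this example, not the actual {{ExecutorPodsAllocator}} API:

{code:scala}
import scala.collection.mutable

// Hypothetical, simplified stand-in for the pending-pod bookkeeping described above.
class PendingPodTracker(deletePod: String => Unit) {
  // Executor IDs whose pods were requested but not yet seen running by the driver.
  private val pendingExecutors = mutable.LinkedHashSet[String]()

  def podRequested(execId: String): Unit = { pendingExecutors += execId }

  def podRunning(execId: String): Unit = { pendingExecutors -= execId }

  // On downscaling, delete the newest pending pods first. K8S may still start a
  // pod before the delete request lands, which is the race described below.
  def downscaleTo(target: Int, running: Int): Unit = {
    val excess = (running + pendingExecutors.size) - target
    if (excess > 0) {
      pendingExecutors.toSeq.takeRight(excess).foreach { id =>
        pendingExecutors -= id
        deletePod(id)
      }
    }
  }
}
{code}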

The problem is that there is a race between K8S starting the pod and the Spark 
code deleting it. This may cause the executor to connect back to the driver and do 
a lot of initialization, sometimes even being considered for task allocation, only 
to be killed almost immediately.

This didn't cause any problems that I could detect in my tests, but it wastes 
resources and causes the logs to contain misleading messages about the executor 
being killed. It would be nice to avoid that.
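
A minimal sketch of one possible mitigation, assuming the driver keeps track of 
executor IDs whose pending pods it already asked K8S to delete; {{RegistrationGate}} 
and its methods are hypothetical names for illustration, not an existing Spark class:

{code:scala}
import java.util.concurrent.ConcurrentHashMap

// Hypothetical gate the driver could consult when an executor tries to register.
class RegistrationGate {
  // Executor IDs that were deleted while their pods were still pending.
  private val deletedWhilePending = ConcurrentHashMap.newKeySet[String]()

  // Record that this executor's pod was deleted before it was seen running.
  def markDeleted(execId: String): Unit = { deletedWhilePending.add(execId) }

  // If the executor connects back anyway, reject its registration so it can be
  // told to shut down before doing any real work.
  def shouldReject(execId: String): Boolean = deletedWhilePending.remove(execId)
}
{code}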


