Hi Shivang,
This sounds like classic Spark-on-Kubernetes behavior:
- Executors do not necessarily shut down immediately after finishing their
  tasks, unless:
  - Dynamic resource allocation is enabled.
  - Spark knows it can safely scale down.
- The Driver pod manages the whole lifecycle. If you
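If dynamic allocation is what you want, a minimal configuration sketch for Spark on Kubernetes might look like the following (the timeout and executor bounds are illustrative values, not recommendations; note that on Kubernetes there is no external shuffle service, so shuffle tracking is the usual way to let executors be released safely):

```properties
# Enable dynamic allocation so idle executors can be decommissioned.
spark.dynamicAllocation.enabled                  true
# On Kubernetes, shuffle tracking stands in for the external shuffle
# service when deciding which executors are safe to remove.
spark.dynamicAllocation.shuffleTracking.enabled  true
# Release an executor after it has been idle this long (illustrative).
spark.dynamicAllocation.executorIdleTimeout      60s
# Bounds on the executor pool (illustrative, match your cluster).
spark.dynamicAllocation.minExecutors             0
spark.dynamicAllocation.maxExecutors             640
```

Without something like this, executors are expected to stay up until the driver (the application) finishes.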
Hi Team,
We are using Spark 3.5.3 (Java) and have a requirement to run batches of
millions of transactions, i.e. I am running a batch of 1M with 640
executors, each executor having 8 cores and 16 GB of memory, running on a
Kubernetes cluster.
But our observation is that one executor is done