We are running Spark 2.3 on a Kubernetes cluster. We have set the following Spark configuration options:
"spark.executor.memory": "7g", "spark.driver.memory": "2g", "spark.memory.fraction": "0.75" WHat we see is a) In the SPark UI, 5G has been allocated to each executor, which makes sense because we set spark.memory.fraction=0.75 b) Kubernetes reports the pod memory usage as 7.6G WHen we run a lot of jobs on the Kubernetes cluster, Kubernetes starts killing the executor pods, because it thinks that the pod is misbehaving. We logged into a running pod, and ran the top command, and most of the 7.6G is being allocated to the executor's java process Why is Spark taking 7.6G instead of 7 G? Where is the 600MB being allocated to? Is there some configuration that controls how much of the executor memory gets allocated to Permgen vs the memory that gets allocated to the heap? ________________________________________________________ The information contained in this e-mail is confidential and/or proprietary to Capital One and/or its affiliates and may only be used solely in performance of work or services for Capital One. The information transmitted herewith is intended only for use by the individual or entity to which it is addressed. If the reader of this message is not the intended recipient, you are hereby notified that any review, retransmission, dissemination, distribution, copying or other use of, or taking of any action in reliance upon this information is strictly prohibited. If you have received this communication in error, please contact the sender and delete the material from your computer.