by off-heap storage from Spark
that won't be accounted for in the heap size alone.
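As a concrete sketch (not a definitive fix): in Spark 2.3 on Kubernetes, the executor pod's memory request is roughly the heap plus the memory overhead, which defaults to max(384m, 0.10 * executor memory), i.e. about 7g + 717m ≈ 7.7g with your settings. If off-heap usage grows past that overhead, Kubernetes OOM-kills the container even though the heap itself is healthy. Raising the overhead explicitly gives more headroom, e.g.

"spark.executor.memory": "7g",
"spark.kubernetes.executor.memoryOverhead": "2g"

which asks Kubernetes for roughly a 9g pod. (In 2.3 the Kubernetes-specific property was spark.kubernetes.executor.memoryOverhead; later releases use the unified spark.executor.memoryOverhead, so it's worth double-checking the property name and value units against the docs for your exact version.)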
Hope this helps,
-Matt Cheah
From: Jayesh Lalwani
Date: Thursday, August 2, 2018 at 12:35 PM
To: "user@spark.apache.org"
Subject: Spark on Kubernetes: Kubernetes killing executors
We are running Spark 2.3 on a Kubernetes cluster. We have set the following
Spark configuration options:
"spark.executor.memory": "7g",
"spark.driver.memory": "2g",
"spark.memory.fraction": "0.75"
What we see is:
a) In the Spark UI, 5G is shown as allocated to each executor, which makes
sense
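(A quick back-of-the-envelope check of that 5G figure, assuming the standard unified-memory formula with its 300MB reserved region: (7168m - 300m) * 0.75 ≈ 5151m, i.e. roughly 5G, which matches what the executors page reports.)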