We have been measuring JVM heap memory usage in our Spark app by taking
periodic samples of heap usage and saving them in our metrics DB. We do this
by spawning a thread inside the Spark app that measures JVM heap usage once
every minute, roughly as sketched below.
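
For reference, a minimal sketch of what the sampler thread looks like
(recordMetric is a placeholder for our metrics-DB client, not a real API):

import java.lang.management.ManagementFactory

val sampler = new Thread(new Runnable {
  override def run(): Unit = {
    val heapBean = ManagementFactory.getMemoryMXBean
    while (true) {
      // bytes of heap currently in use in this JVM
      val usedBytes = heapBean.getHeapMemoryUsage.getUsed
      // recordMetric("jvm.heap.used", usedBytes)  // placeholder: push to metrics DB
      Thread.sleep(60 * 1000)                      // sample once per minute
    }
  }
})
sampler.setDaemon(true)
sampler.start()
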
Is it a fair assumption that if the maximum heap usage found by this method
is less than the executor memory allotted, then we can safely tune the
executor memory down to approximately that maximum heap usage?

Related to this: when we specify executor-memory as X GB, is all of that
X GB allotted from the JVM heap? Are there any other parameters we need to
take into account before concluding that the maximum JVM heap usage is the
maximum memory requirement of the executor across all jobs in the Spark app?
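
To make the question concrete, this is the kind of setting we mean, whether
passed as --executor-memory to spark-submit or via SparkConf as below (the
app name and the 8g value are just illustrative):

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("our-spark-app")          // placeholder app name
  .set("spark.executor.memory", "8g")   // the 'X' GB in question; value illustrative
val sc = new SparkContext(conf)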





