Hi,

Just to throw a few zlotys into the conversation: I believe Spark
Standalone does not enforce any memory checks to limit or kill
executors that go beyond the requested memory (the way YARN does). I
also found that memory is not really taken into account when
scheduling tasks; only CPU cores matter.
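
In case it helps to see this, here is a minimal spark-submit sketch
(the master URL and jar name are placeholders I made up, not from this
thread):

  # --executor-memory only becomes -Xmx on the executor JVM; the standalone
  # Worker does not monitor or kill the process if it grows past that value.
  # Task scheduling inside the executors is driven by cores, not memory.
  spark-submit \
    --master spark://master:7077 \
    --total-executor-cores 36 \
    --executor-memory 35G \
    your-app.jar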

My understanding is that `spark.memory.offHeap.enabled` being `false`
does not disable the off-heap memory used by Java NIO for buffers in
shuffle, RPC, etc., so the total memory footprint is always (?) more
than what you request for the heap (-Xmx) via --executor-memory.
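
If you want to bound, or at least inspect, that native memory, one
option is a sketch like the one below. The 4g cap and <executor-pid>
are placeholders I made up, and -XX:MaxDirectMemorySize only limits
NIO direct buffers, not every native allocation:

  # Cap NIO direct buffers and enable JVM native memory tracking on executors
  # (sizes are illustrative only).
  spark-submit \
    --executor-memory 35G \
    --conf "spark.executor.extraJavaOptions=-XX:MaxDirectMemorySize=4g -XX:NativeMemoryTracking=summary" \
    your-app.jar

  # Then, on the worker node, ask the running executor JVM where the
  # non-heap memory went (replace <executor-pid> with the real PID):
  jcmd <executor-pid> VM.native_memory summary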

Best regards,
Jacek Laskowski
----
https://medium.com/@jaceklaskowski/
Mastering Apache Spark 2.0 https://bit.ly/mastering-apache-spark
Follow me at https://twitter.com/jaceklaskowski


On Sun, Jan 22, 2017 at 9:57 AM, StanZhai <m...@zhaishidan.cn> wrote:
> Hi all,
>
> We just upgraded our Spark from 1.6.2 to 2.1.0.
>
> Our Spark application is started by spark-submit with
> `--executor-memory 35G` in standalone mode, but the actual memory usage
> grows up to 65G even after a full GC (jmap -histo:live $pid), as follows:
>
> test@c6 ~ $ ps aux | grep CoarseGrainedExecutorBackend
> test      181941 181 34.7 94665384 68836752 ?   Sl   09:25 711:21
> /home/test/service/jdk/bin/java -cp
> /home/test/service/hadoop/share/hadoop/common/hadoop-lzo-0.4.20-SNAPSHOT.jar:/home/test/service/hadoop/share/hadoop/common/hadoop-lzo-0.4.20-SNAPSHOT.jar:/home/test/service/spark/conf/:/home/test/service/spark/jars/*:/home/test/service/hadoop/etc/hadoop/
> -Xmx35840M -Dspark.driver.port=47781 -XX:+PrintGCDetails
> -XX:+PrintGCDateStamps -Xloggc:./gc.log -verbose:gc
> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url
> spark://coarsegrainedschedu...@xxx.xxx.xxx.xxx:47781 --executor-id 1
> --hostname test-192 --cores 36 --app-id app-20170122092509-0017 --worker-url
> spark://Worker@test-192:33890
>
> Our Spark jobs are all SQL.
>
> The excess memory looks like off-heap memory, but the default value of
> `spark.memory.offHeap.enabled` is `false`.
>
> We didn't see this problem in Spark 1.6.x; what causes it in Spark 2.1.0?
>
> Any help is greatly appreciated!
>
> Best,
> Stan
>
>
>
>

---------------------------------------------------------------------
To unsubscribe e-mail: dev-unsubscr...@spark.apache.org
