CentOS 7.1,
Linux version 3.10.0-229.el7.x86_64 (buil...@kbuilder.dev.centos.org) (gcc version 4.8.2 20140120 (Red Hat 4.8.2-16) (GCC)) #1 SMP Fri Mar 6 11:36:42 UTC 2015
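
(For reference, a quick way to confirm which collector the executor JVM actually picked; this is just a sketch using standard HotSpot JDK 8 tools, with 181941 standing in for the executor pid from the ps output quoted below:)

    # prints the collector in use, e.g. "Parallel GC with 36 thread(s)"
    jmap -heap 181941 | head -n 20
    # shows which GC flags this JDK would enable by default (ergonomics)
    java -XX:+PrintFlagsFinal -version | grep -E 'Use(Parallel|G1)GC'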


Michael Allman-2 wrote
> Hi Stan,
> 
> What OS/version are you using?
> 
> Michael
> 
>> On Jan 22, 2017, at 11:36 PM, StanZhai <mail@...> wrote:
>> 
>> I'm using Parallel GC.
>> rxin wrote
>>> Are you using G1 GC? G1 sometimes uses a lot more memory than the size
>>> allocated.
>>> 
>>> 
>>> On Sun, Jan 22, 2017 at 12:58 AM StanZhai <mail@...> wrote:
>>> 
>>>> Hi all,
>>>> 
>>>> 
>>>> 
>>>> We just upgraded our Spark from 1.6.2 to 2.1.0.
>>>> 
>>>> 
>>>> 
>>>> Our Spark application is started by spark-submit with
>>>> `--executor-memory 35G` in standalone mode, but the actual memory usage
>>>> grows to 65G even after a full GC (triggered with `jmap -histo:live $pid`),
>>>> as follows:
>>>> 
>>>> 
>>>> 
>>>> test@c6 ~ $ ps aux | grep CoarseGrainedExecutorBackend
>>>> test      181941  181 34.7 94665384 68836752 ?   Sl   09:25 711:21
>>>> /home/test/service/jdk/bin/java -cp
>>>> /home/test/service/hadoop/share/hadoop/common/hadoop-lzo-0.4.20-SNAPSHOT.jar:/home/test/service/hadoop/share/hadoop/common/hadoop-lzo-0.4.20-SNAPSHOT.jar:/home/test/service/spark/conf/:/home/test/service/spark/jars/*:/home/test/service/hadoop/etc/hadoop/
>>>> -Xmx35840M -Dspark.driver.port=47781 -XX:+PrintGCDetails
>>>> -XX:+PrintGCDateStamps -Xloggc:./gc.log -verbose:gc
>>>> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url
>>>> spark://CoarseGrainedScheduler@.xxx:47781 --executor-id 1
>>>> --hostname test-192 --cores 36 --app-id app-20170122092509-0017
>>>> --worker-url spark://Worker@test-192:33890
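>>>> 
>>>> (A sketch of how the gap between the RSS and -Xmx can be broken down,
>>>> assuming standard HotSpot JDK 8 tools on the box and using the pid above;
>>>> the last step only works if the executor is restarted with
>>>> -XX:NativeMemoryTracking=summary added to spark.executor.extraJavaOptions:)
>>>> 
>>>>     # resident set size as the OS sees it (the ~65G figure)
>>>>     grep VmRSS /proc/181941/status
>>>>     # heap usage/capacity as the JVM sees it (should stay under -Xmx35840M)
>>>>     jstat -gc 181941
>>>>     # largest native mappings outside the Java heap
>>>>     pmap -x 181941 | sort -k2 -nr | head -n 20
>>>>     # JVM-side breakdown of native allocations (needs NMT enabled)
>>>>     jcmd 181941 VM.native_memory summary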
>>>> 
>>>> 
>>>> 
>>>> Our Spark jobs are all SQL.
>>>> 
>>>> 
>>>> 
>>>> The excess memory looks like off-heap memory, but the default value of
>>>> `spark.memory.offHeap.enabled` is `false`.
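>>>> 
>>>> (If the extra memory does turn out to be direct/native allocations, a
>>>> sketch of how it could be bounded explicitly; the 8g value is only a
>>>> placeholder, and any existing extraJavaOptions such as the GC logging
>>>> flags would need to be kept alongside it:)
>>>> 
>>>>     spark-submit \
>>>>       --executor-memory 35G \
>>>>       --conf spark.executor.extraJavaOptions="-XX:MaxDirectMemorySize=8g -XX:NativeMemoryTracking=summary" \
>>>>       ...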
>>>> 
>>>> 
>>>> 
>>>> We didn't see this problem in Spark 1.6.x; what causes it in Spark
>>>> 2.1.0?
>>>> 
>>>> 
>>>> 
>>>> Any help is greatly appreciated!
>>>> 
>>>> 
>>>> 
>>>> Best,
>>>> 
>>>> Stan





--
View this message in context: 
http://apache-spark-developers-list.1001551.n3.nabble.com/Executors-exceed-maximum-memory-defined-with-executor-memory-in-Spark-2-1-0-tp20697p20833.html
Sent from the Apache Spark Developers List mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe e-mail: dev-unsubscr...@spark.apache.org
