[
https://issues.apache.org/jira/browse/SPARK-16083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15341832#comment-15341832
]
Sudhakar Thota commented on SPARK-16083:
----------------------------------------
1. The JVM is not allowed to use more than 1g of heap (-Xms1g -Xmx1g). Please take a look.
From the content in the file:
spark-spark-org.apache.spark.deploy.history.HistoryServer-1-testbic1on5l.out.4
------------------
Spark Command:
/usr/jdk64/java-1.8.0-openjdk-1.8.0.45-28.b13.el6_6.x86_64/bin/java
-Diop.version=4.1.0.0 -cp
/usr/iop/current/hadoop-client/lib/*:/usr/iop/current/hadoop-client/*:/usr/iop/current/hive-client/lib/*:/usr/iop/4.1.0.0/spark/sbin/../conf/:/usr/iop/4.1.0.0/spark/lib/spark-assembly-1.5.1_IBM_1-hadoop2.7.1-IBM-11.jar:/usr/iop/4.1.0.0/spark/lib/datanucleus-rdbms-3.2.9.jar:/usr/iop/4.1.0.0/spark/lib/datanucleus-api-jdo-3.2.6.jar:/usr/iop/4.1.0.0/spark/lib/datanucleus-core-3.2.10.jar:/usr/iop/current/hadoop-client/conf/
-Dspark.history.ui.port=18080
-Dspark.history.fs.logDirectory=hdfs://localhost:8020/iop/apps/4.1.0.0/spark/logs/history-server
-Xms1g -Xmx1g org.apache.spark.deploy.history.HistoryServer
------
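For reference, one way to double-check the heap cap on a live HistoryServer is to read the JVM arguments straight from /proc. This is only a sketch; the pgrep pattern assumes the process was launched with the class name shown above:

```shell
# Sketch: confirm the -Xms/-Xmx flags on the running HistoryServer JVM.
pid=$(pgrep -f org.apache.spark.deploy.history.HistoryServer | head -1)

# /proc/<pid>/cmdline is NUL-separated; split it and keep only the heap flags.
tr '\0' '\n' < /proc/"$pid"/cmdline | grep -E '^-Xm[sx]'
```

With the command line above this would print `-Xms1g` and `-Xmx1g`, confirming the 1g cap.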
2. The memory for that process was collected using the top command and is
captured in the file: 27814.004.000.sparkhistoryservermemory.log.1
-------
Fri Jun 17 12:05:01 CDT 2016
top - 12:05:02 up 124 days, 20:44, 4 users, load average: 0.02, 0.05, 0.01
Tasks: 1 total, 0 running, 1 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.5%us, 0.3%sy, 0.0%ni, 99.0%id, 0.2%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 47.137G total, 35.886G used, 11.251G free, 510.980M buffers
Swap: 4063.996M total, 647.672M used, 3416.324M free, 4338.496M cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2758416 spark 20 0 8095m 1.8g 34m S 0.0 3.9 3:05.89 /usr/jdk64/java-1
------
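A capture like the one above can be scripted with top in batch mode. This is a hedged sketch, not the exact script used here; the PID is the one from the output above, and the 5-minute interval and log name are placeholders:

```shell
# Sketch: append a timestamped top snapshot for one PID every 5 minutes.
# 2758416 is the HistoryServer PID from the output above; adjust as needed.
pid=2758416
while true; do
    date
    top -b -n 1 -p "$pid" | tail -n 2   # column header plus the process row
    sleep 300
done >> sparkhistoryservermemory.log
```

Tracking the RES column over days is what shows the process growing well past the 1g heap cap.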
3. The dump I could upload shows only a 55M heap.
Sorry, I could not upload the other dumps due to upload size limitations. They
show the heap growing up to 850M, which is well within the 1g limit. But the
memory is being lost somewhere other than the heap.
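For anyone reproducing this, a sketch of capturing a compressed heap dump with jmap (the file name pattern mirrors the attachment on this issue), plus a native-memory check with jcmd. The jcmd step is an assumption about where to look next, since the growth appears to be outside the Java heap, and it only works if the JVM was started with -XX:NativeMemoryTracking=summary:

```shell
# Sketch: capture a heap dump of the HistoryServer for offline analysis
# (e.g. with IBM HeapAnalyzer, as used on this issue).
pid=$(pgrep -f org.apache.spark.deploy.history.HistoryServer | head -1)

# Write a binary heap dump of live objects, then compress before uploading.
jmap -dump:live,format=b,file=spark_heap_$(date +%F_%H-%M).hprof "$pid"
gzip spark_heap_*.hprof

# Assumption: if the JVM was launched with -XX:NativeMemoryTracking=summary,
# this shows memory outside the Java heap (threads, metaspace, direct buffers).
jcmd "$pid" VM.native_memory summary
```

Comparing the jmap total against the RES value from top is what separates a heap leak from a native-memory leak.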
> spark HistoryServer memory increases until gets killed by OS.
> -------------------------------------------------------------
>
> Key: SPARK-16083
> URL: https://issues.apache.org/jira/browse/SPARK-16083
> Project: Spark
> Issue Type: Bug
> Components: Spark Submit
> Affects Versions: 1.5.1
> Environment: RHEL-6, IOP-4.1.0.0, 10 node cluster
> Reporter: Sudhakar Thota
> Attachments: 27814.004.000.sparkhistoryservermemory.log.1,
> 27814004000_spark_heap_2016-06-17_12-05.hprof.gz,
> spark-spark-org.apache.spark.deploy.history.HistoryServer-1-testbic1on5l.out.4
>
>
> The Spark HistoryServer process consumes memory over a few days until it is
> finally killed by the operating system. Heap-dump analysis of jmap dumps with
> IBM HeapAnalyzer found that the total heap size is 800M while the process
> size is 11G. The HistoryServer is started with 1G. ("history-server -Xms1g
> -Xmx1g org.apache.spark.deploy.history.HistoryServer")
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)