[
https://issues.apache.org/jira/browse/SPARK-10132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sean Owen resolved SPARK-10132.
-------------------------------
Resolution: Invalid
I don't think any of this suggests a problem in Spark though, right? You just
ran out of memory.
I'm provisionally closing this since there is no Spark-specific detail here, no
reproduction, and no argument that there actually is a memory leak. It can be
reopened if that detail is provided.
> daemon crash caused by memory leak
> ----------------------------------
>
> Key: SPARK-10132
> URL: https://issues.apache.org/jira/browse/SPARK-10132
> Project: Spark
> Issue Type: Bug
> Components: Spark Core
> Affects Versions: 1.3.1, 1.4.1
> Environment: 1. Cluster: 7-node Red Hat cluster, each node with 32 cores
> 2. OS type: Red Hat Enterprise Linux Server release 7.1 (Maipo)
> 3. Java version: tried both Oracle jdk 1.6 and 1.7
> java version "1.6.0_13"
> Java(TM) SE Runtime Environment (build 1.6.0_13-b03)
> Java HotSpot(TM) 64-Bit Server VM (build 11.3-b02, mixed mode)
> java version "1.7.0"
> Java(TM) SE Runtime Environment (build 1.7.0-b147)
> Java HotSpot(TM) 64-Bit Server VM (build 21.0-b17, mixed mode)
> 4. JVM options in spark-env.sh
> Note: SPARK_DAEMON_MEMORY was set to 300m so that the crash reproduces faster
> SPARK_DAEMON_JAVA_OPTS="-Xloggc:/root/spark/oracle_gclog"
> SPARK_DAEMON_MEMORY=300m
> Reporter: ZemingZhao
> Priority: Critical
> Attachments: oracle_gclog, xqjmap.all, xqjmap.live
>
>
> Constantly submitting short batch workloads to Spark causes the Spark master
> and worker daemons to crash from what appears to be a memory leak.
> According to the GC log and jmap info, the leak seems to be related to Akka,
> but the root cause has not been found yet (see the reproduction sketch below).
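For anyone trying to reproduce this, here is a minimal sketch of the kind of loop
described above. It assumes a standalone master at spark://master:7077 and uses the
bundled SparkPi example as the short batch job; the report does not say which
workload, master URL, or jar path was actually used, so treat those as placeholders.

    # Keep submitting a short batch job; master URL and examples jar path are assumptions.
    while true; do
      ./bin/spark-submit \
        --master spark://master:7077 \
        --class org.apache.spark.examples.SparkPi \
        lib/spark-examples-1.4.1-hadoop2.6.0.jar 10
    done

    # In another shell, snapshot the standalone Master daemon's heap; presumably this
    # is how the attached xqjmap.all / xqjmap.live histograms were produced.
    MASTER_PID=$(jps | awk '/^[0-9]+ Master$/ {print $1}')
    jmap -histo      "$MASTER_PID" > xqjmap.all    # all objects on the heap
    jmap -histo:live "$MASTER_PID" > xqjmap.live   # live objects (forces a full GC first)

With SPARK_DAEMON_MEMORY=300m as in the environment above, comparing histograms taken
some runs apart should show which objects are accumulating in the daemon heap.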
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]