[ https://issues.apache.org/jira/browse/SPARK-16667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen updated SPARK-16667:
------------------------------
    Target Version/s:   (was: 1.6.0)

> Spark driver and executors don't release unused memory
> ------------------------------------------------------
>
>                 Key: SPARK-16667
>                 URL: https://issues.apache.org/jira/browse/SPARK-16667
>             Project: Spark
>          Issue Type: Bug
>          Components: GraphX, Spark Core
>    Affects Versions: 1.6.0
>         Environment: Ubuntu Wily 64-bit
> Java 1.8
> 3 slave (4 GB) and 1 master (2 GB) virtual machines in VMware on a
> 4th-generation i7 with 16 GB RAM
>            Reporter: Luis Angel Hernández Acosta
>
> I'm running a Spark app in a standalone cluster. The app creates a
> SparkContext and performs many GraphX calculations over time. Each
> calculation runs in a new Java thread, and the app waits for that thread's
> completion signal. Between calculations, memory grows by 50-100 MB. I use a
> dedicated thread to make sure every object created for a calculation is
> destroyed when the calculation ends, but memory keeps growing. If I stop the
> SparkContext, all executor memory allocated by the app is freed, but the
> driver's memory still grows by the same 50-100 MB.
> Each graph calculation includes serializing RDDs to HDFS and loading the
> graph back from HDFS.
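>
> A minimal sketch of that per-calculation loop with explicit cleanup added
> (the edge-list path and the PageRank call are placeholders for the real
> calculation; the unpersist calls are the point of the example):
>
>     import org.apache.spark.SparkContext
>     import org.apache.spark.graphx.GraphLoader
>
>     def runOneCalculation(sc: SparkContext): Unit = {
>       // Hypothetical input path; the real app loads its graph from HDFS.
>       val graph = GraphLoader.edgeListFile(sc, "hdfs:///data/edges.txt").cache()
>       val ranks = graph.pageRank(0.001).vertices  // stand-in for the real job
>       ranks.count()                               // force evaluation
>       // Drop the cached vertex/edge RDDs explicitly rather than waiting
>       // for a driver GC to let the ContextCleaner find them.
>       ranks.unpersist(blocking = true)
>       graph.unpersist(blocking = true)
>     }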
> Spark env:
> export SPARK_MASTER_IP=master
> export SPARK_WORKER_CORES=4
> export SPARK_WORKER_MEMORY=2919m
> export SPARK_WORKER_INSTANCES=1
> export SPARK_DAEMON_MEMORY=256m
> export SPARK_WORKER_OPTS="-Dspark.worker.cleanup.enabled=true 
> -Dspark.worker.cleanup.interval=10"
> Those are my only configuration settings.
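>
> One more knob worth knowing about: the driver-side ContextCleaner only
> reclaims RDDs, broadcasts, and shuffle files once the driver JVM
> garbage-collects the references, so slow heap growth between jobs can just
> be uncollected garbage. A sketch of forcing a more frequent driver GC via
> spark.cleaner.periodicGC.interval (added around 1.6 by SPARK-8414; the
> 10min value is only an example):
>
>     import org.apache.spark.{SparkConf, SparkContext}
>
>     val conf = new SparkConf()
>       .setAppName("graphx-calculations")  // illustrative app name
>       // Default is 30min; shorter intervals trade extra GC pauses for
>       // faster cleanup of unreferenced cached RDDs and shuffle files.
>       .set("spark.cleaner.periodicGC.interval", "10min")
>     val sc = new SparkContext(conf)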



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
