Hi,

I have a three-node cluster with 30 GB of memory. I am trying to analyze
200 MB of data and run out of memory every time. This is the command I am
using:

Driver memory = 14g (per the sparkConfig below)
Executor memory = 10g

sc <- sparkR.session(
  master = "spark://ip-172-31-6-116:7077",
  sparkConfig = list(
    spark.executor.memory = "10g",
    spark.app.name = "Testing",
    spark.driver.memory = "14g",
    spark.executor.extraJavaOptions = "-Xms2g -Xmx5g -XX:MaxPermSize=1024M",
    spark.driver.extraJavaOptions = "-Xms2g -Xmx5g -XX:MaxPermSize=1024M",
    spark.cores.max = "2"
  )
)
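
(Worth noting: the Spark configuration docs say heap-size flags such as
-Xms/-Xmx must not be set through spark.executor.extraJavaOptions; heap is
meant to be controlled only via spark.driver.memory / spark.executor.memory,
and -XX:MaxPermSize is ignored on Java 8. A minimal sketch of the same
session without those JVM flags, assuming Spark 2.x SparkR; the memory
values just repeat the ones above:)

library(SparkR)

# Same session, with the heap set only through Spark's memory properties
sc <- sparkR.session(
  master = "spark://ip-172-31-6-116:7077",
  sparkConfig = list(
    spark.app.name = "Testing",
    spark.driver.memory = "14g",     # heap is set here, not via -Xmx
    spark.executor.memory = "10g",
    spark.cores.max = "2"
  )
)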


[D 16:43:51.437 NotebookApp] 200 GET /api/contents?type=directory&_=1477289197671 (123.176.38.226) 7.96ms
Exception in thread "broadcast-exchange-0" java.lang.OutOfMemoryError: Java heap space
        at org.apache.spark.sql.execution.joins.LongToUnsafeRowMap.append(HashedRelation.scala:539)
        at org.apache.spark.sql.execution.joins.LongHashedRelation$.apply(HashedRelation.scala:803)
        at org.apache.spark.sql.execution.joins.HashedRelation$.apply(HashedRelation.scala:105)
        at org.apache.spark.sql.execution.joins.HashedRelationBroadcastMode.transform(HashedRelation.scala:816)
        at org.apache.spark.sql.execution.joins.HashedRelationBroadcastMode.transform(HashedRelation.scala:812)
        at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec$$anonfun$relationFuture$1$$anonfun$apply$1.apply(BroadcastExchangeExec.scala:90)
        at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec$$anonfun$relationFuture$1$$anonfun$apply$1.apply(BroadcastExchangeExec.scala:72)
        at org.apache.spark.sql.execution.SQLExecution$.withExecutionId(SQLExecution.scala:94)
        at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec$$anonfun$relationFuture$1.apply(BroadcastExchangeExec.scala:72)
        at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec$$anonfun$relationFuture$1.apply(BroadcastExchangeExec.scala:72)
        at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
        at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
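
(The trace points at BroadcastExchangeExec/HashedRelation, i.e. the driver
running out of heap while building the in-memory hash table for a broadcast
join. One common workaround is to disable automatic broadcast joins so the
join runs as a shuffle join instead; a sketch, assuming Spark 2.x SparkR,
using the standard spark.sql.autoBroadcastJoinThreshold setting:)

library(SparkR)

# Disable automatic broadcast joins at session creation ...
sparkR.session(
  master = "spark://ip-172-31-6-116:7077",
  sparkConfig = list(spark.sql.autoBroadcastJoinThreshold = "-1")
)

# ... or on an already-running session
sql("SET spark.sql.autoBroadcastJoinThreshold=-1")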