[jira] [Commented] (SPARK-26405) OOM

2018-12-23 Thread Hyukjin Kwon (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-26405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16728122#comment-16728122
 ] 

Hyukjin Kwon commented on SPARK-26405:
--

BTW, please don't report a JIRA with a title like "OOM". I had no idea what 
the JIRA was about.

> OOM
> ---
>
> Key: SPARK-26405
> URL: https://issues.apache.org/jira/browse/SPARK-26405
> Project: Spark
>  Issue Type: Bug
>  Components: Java API, Scheduler, Shuffle, Spark Core, Spark Submit
>Affects Versions: 2.2.0
>Reporter: lu
>Priority: Major
>
> Heap memory overflow occurred during user portrait analysis; the data 
> volume analyzed was about 10 million records.
> spark worker memory: 4G
> the job is submitted using RestSubmissionClient
> both driver memory and executor memory: 4g
> total executor cores: 6
> spark cores: 2
> cluster size: 3
>  
> INFO worker.WorkerWatcher: Connecting to worker 
> spark://Worker@192.168.44.181:45315
> Exception in thread "broadcast-exchange-3" java.lang.OutOfMemoryError: Not 
> enough memory to build and broadcast the table to all worker nodes. As a 
> workaround, you can either disable broadcast by setting 
> spark.sql.autoBroadcastJoinThreshold to -1 or increase the spark driver 
> memory by setting spark.driver.memory to a higher value
>  at 
> org.apache.spark.sql.execution.exchange.BroadcastExchangeExec$$anonfun$relationFuture$1$$anonfun$apply$1.apply(BroadcastExchangeExec.scala:102)
>  at 
> org.apache.spark.sql.execution.exchange.BroadcastExchangeExec$$anonfun$relationFuture$1$$anonfun$apply$1.apply(BroadcastExchangeExec.scala:73)
>  at 
> org.apache.spark.sql.execution.SQLExecution$.withExecutionId(SQLExecution.scala:103)
>  at 
> org.apache.spark.sql.execution.exchange.BroadcastExchangeExec$$anonfun$relationFuture$1.apply(BroadcastExchangeExec.scala:72)
>  at 
> org.apache.spark.sql.execution.exchange.BroadcastExchangeExec$$anonfun$relationFuture$1.apply(BroadcastExchangeExec.scala:72)
>  at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
>  at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  at java.lang.Thread.run(Thread.java:745)
> Exception in thread "main" java.lang.reflect.InvocationTargetException
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at org.apache.spark.deploy.worker.DriverWrapper$.main(DriverWrapper.scala:58)
>  at org.apache.spark.deploy.worker.DriverWrapper.main(DriverWrapper.scala)
> Caused by: java.util.concurrent.TimeoutException: Futures timed out after 
> [300 seconds]
>  at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
>  at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
>  at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:201)
>  at 
> org.apache.spark.sql.execution.exchange.BroadcastExchangeExec.doExecuteBroadcast(BroadcastExchangeExec.scala:123)
>  at 
> org.apache.spark.sql.execution.InputAdapter.doExecuteBroadcast(WholeStageCodegenExec.scala:248)
>  at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$executeBroadcast$1.apply(SparkPlan.scala:127)
>  at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$executeBroadcast$1.apply(SparkPlan.scala:127)
>  at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:138)
>  at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>  at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:135)
>  at 
> org.apache.spark.sql.execution.SparkPlan.executeBroadcast(SparkPlan.scala:126)
>  at 
> org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.prepareBroadcast(BroadcastHashJoinExec.scala:98)
>  at 
> org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.codegenInner(BroadcastHashJoinExec.scala:197)
>  at 
> org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.doConsume(BroadcastHashJoinExec.scala:82)
>  at 
> org.apache.spark.sql.execution.CodegenSupport$class.consume(WholeStageCodegenExec.scala:155)
>  at 
> org.apache.spark.sql.execution.ProjectExec.consume(basicPhysicalOperators.scala:36)
>  at 
> org.apache.spark.sql.execution.ProjectExec.doConsume(basicPhysicalOperators.scala:68)
>  at 
> org.apache.spark.sql.execution.CodegenSupport$class.consume(WholeStageCodegenExec.scala:155)
>  at 
> org.apache.spark.sql.execution.FilterExec.consume(basicPhysicalOperators.scala:88)
>  at 
> 

[jira] [Commented] (SPARK-26405) OOM

2018-12-20 Thread Liang-Chi Hsieh (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-26405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16725691#comment-16725691
 ] 

Liang-Chi Hsieh commented on SPARK-26405:
-

I think the exception message is fairly clear:

{code}
java.lang.OutOfMemoryError: Not enough memory to build and broadcast the table 
to all worker nodes. As a workaround, you can either disable broadcast by 
setting spark.sql.autoBroadcastJoinThreshold to -1 or increase the spark driver 
memory by setting spark.driver.memory to a higher value
{code}

Spark is unable to build the broadcast data due to the memory limit. This 
should not be a bug.
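The two workarounds named in the message can be sketched as spark-submit flags (an illustrative sketch only; the application class {{com.example.UserPortraitJob}}, the jar name, and the 8g value are hypothetical placeholders, not taken from the report):

```shell
# Workaround 1: disable automatic broadcast joins so Spark falls back to a
# sort-merge join instead of building the whole table on the driver.
spark-submit \
  --conf spark.sql.autoBroadcastJoinThreshold=-1 \
  --class com.example.UserPortraitJob \
  app.jar

# Workaround 2: give the driver more heap so the broadcast table fits.
# spark.driver.memory must be set before the driver JVM starts, so pass it
# on the command line (or in spark-defaults.conf), not in application code.
spark-submit \
  --conf spark.driver.memory=8g \
  --class com.example.UserPortraitJob \
  app.jar
```

Disabling broadcast avoids the driver-side OOM at the cost of a slower shuffle-based join; raising driver memory keeps the broadcast plan but only helps if the table actually fits.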


