Can you add more details: are you using RDDs/Datasets/SQL? Are you doing group-bys/joins? Is your input splittable?
Btw, you can pass the config the same way you are passing memoryOverhead, e.g.
--conf spark.default.parallelism=1000
or through the SparkContext in code.
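For illustration, a minimal sketch of setting it in code via SparkConf before the context is created (the app name and the value 1000 are just placeholders echoing the example above, not recommendations):

    import org.apache.spark.{SparkConf, SparkContext}

    // Set the default shuffle/RDD parallelism before creating the context.
    val conf = new SparkConf()
      .setAppName("my-job")                      // placeholder app name
      .set("spark.default.parallelism", "1000")  // same value as the --conf example
    val sc = new SparkContext(conf)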
Regards,
Sushrut Ikhar
Hi All,
Any updates on this?
On Wednesday 28 September 2016 12:22 PM, Sushrut Ikhar wrote:
Try increasing the parallelism by repartitioning; you may also increase
spark.default.parallelism.
You can also try decreasing the number of executor cores.
Basically, this happens when the executor ...
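A rough sketch of the repartitioning suggestion (inputRdd and the partition count are placeholders, not values from this thread):

    // Spread the data over more, smaller partitions before the shuffle stage,
    // so each task processes less data at a time.
    val repartitioned = inputRdd.repartition(1000)
    // wide operations downstream (joins, groupBy, reduceByKey) then run as more, smaller tasks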
Thanks Sushrut for the reply.
Currently I have not defined the spark.default.parallelism property.
Can you let me know how much I should set it to?
Regards,
Aditya Calangutkar
I have a spark job which runs fine for small data, but when the data increases it gives an executor lost error. My executor and driver memory are set at their highest point. I have also tried increasing --conf spark.yarn.executor.memoryOverhead=600 but am still not able to fix the problem. Is there any other way to fix this?
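For reference, combining the two settings already mentioned in this thread on the spark-submit command line would look roughly like this (the values are the ones quoted above, and the class name and jar path are placeholders, not tuned recommendations):

    # class name and jar path are placeholders
    spark-submit \
      --conf spark.yarn.executor.memoryOverhead=600 \
      --conf spark.default.parallelism=1000 \
      --class com.example.MyJob \
      /path/to/spark-job.jar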
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:70)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:70)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
...tions=-XX:MaxPermSize=512M" --driver-java-options -XX:MaxPermSize=512m \
  --driver-memory 4g --master yarn-client --executor-memory 25G \
  --executor-cores 8 --num-executors 5 --jars /path/to/spark-job.jar
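For what it's worth, the Spark tuning guide's usual starting point is 2-3 tasks per CPU core; with the 5 executors x 8 cores shown here (40 cores in total), that rule of thumb would put spark.default.parallelism somewhere around 80-120, though the right value depends heavily on the data volume and skew.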
nId " to get the logs from the data nodes.
-----Original Message-----
From: S. Zhou [myx...@yahoo.com.INVALID]
Sent: Wednesday, December 03, 2014 06:30 PM Eastern Standard Time
To: user@spark.apache.org
Subject: Spark executor lost
We are using Spark job server to submit spark jobs (our spark version is 0.91).
After running the spark job server for a while, we often see the following
errors (executor lost) in the spark job server log. As a consequence, the spark
driver (allocated inside spark job server) gradually loses executors ...