As long as I set "spark.local.dir" to multiple disks, the job
fails with the error below
(if I set spark.local.dir to only one directory, the job succeeds...):

Exception in thread "main" org.apache.spark.SparkException: Job cancelled
because SparkContext was shut down
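For reference, spark.local.dir accepts a comma-separated list of directories, so a multi-disk setup like the one failing here would look roughly like this in spark-defaults.conf (the /data* paths are placeholders, not taken from the original report):

```
# Spread shuffle/spill files across several disks
# (comma-separated list; paths below are illustrative)
spark.local.dir  /data1/spark/tmp,/data2/spark/tmp,/data3/spark/tmp
```

Each path must exist and be writable on every node; on YARN, the node manager's local dirs typically take precedence over this setting.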
Hi,

Trying to run a query in spark-sql, but it keeps failing with this error on
the CLI (we are running spark-sql on a YARN cluster):

org.apache.spark.SparkException: Job cancelled because SparkContext was
shut down
        at
org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop
It seems YARN kills some of the executors because they request more memory
than expected.
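If YARN is indeed killing executors for exceeding their container allocation, the usual knobs on Spark builds of this era are the executor heap size and the off-heap overhead YARN adds on top of it. A sketch in spark-defaults.conf, with illustrative values not taken from this thread:

```
# Executor JVM heap
spark.executor.memory                4g
# Extra off-heap headroom (MB) added to the YARN container request;
# raise this if YARN kills containers for exceeding physical memory limits
spark.yarn.executor.memoryOverhead   768
```

The YARN container size is roughly the sum of the two, so raising only spark.executor.memory while leaving the overhead at its default can still trip the container limit.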
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Job-cancelled-because-SparkContext-was-shut-down-tp15189p15216.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
executor to shut down
[E 140926 01:00:13 base:56] Request failed
14/09/26 01:00:13 INFO YarnClientSchedulerBackend: Stopped
[E 140926 01:00:13 base:57] {'error_msg': ", org.apache.spark.SparkException: Job cancelled because SparkContext was shut down, "}

any idea what'