e experience with it.
>
> But just out of curiosity, what classes would Spark need to load at this
> time? From my understanding, the executor has already scanned the first
> block of data from HDFS, and hangs while starting the 2nd block. All the
> classes should already be loaded in the JVM in this case.
But just out of curiosity, what classes would Spark need to load at this
time? From my understanding, the executor has already scanned the first
block of data from HDFS, and hangs while starting the 2nd block. All the
classes should already be loaded in the JVM in this case.
Thanks
Yong
From: iras...@cloudera.com
Date: Tue, 18 Aug 2015 12:17:56 -0500
Subject: Re: Spark Job Hangs on our production cluster
Just looking at the thread dump from your original email, the 3 executor
threads are all trying to load classes. (One thread is actually loading
some class, and the others are blocked waiting to load a class, most likely
trying to load the same thing.) That is really weird, definitely not
something that should normally happen.
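For anyone chasing the same symptom, here is a minimal sketch (hypothetical
diagnostic code, not from this thread) of spotting such threads from inside
the JVM; it scans live stacks for ClassLoader.loadClass frames, which is the
same thing you would grep for in a jstack dump:

import scala.collection.JavaConverters._

// Hypothetical diagnostic: list every live thread whose stack is currently
// inside ClassLoader.loadClass, i.e. the threads a jstack dump would show
// as loading, or blocked waiting to load, a class.
object FindClassLoadingThreads {
  def main(args: Array[String]): Unit = {
    for ((thread, stack) <- Thread.getAllStackTraces.asScala) {
      val inLoadClass = stack.exists(f =>
        f.getClassName == "java.lang.ClassLoader" && f.getMethodName == "loadClass")
      if (inLoadClass) {
        println(s"${thread.getName} (state=${thread.getState}):")
        stack.take(10).foreach(f => println("    at " + f))
      }
    }
  }
}

Threads pile up like this because loadClass serializes on a lock (per
loader, or per class name for parallel-capable loaders), so one slow load
can block every other thread that needs a class from the same loader.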
I hope I can get some help from this list.
Thanks
Yong
From: java8...@hotmail.com
To: user@spark.apache.org
Subject: RE: Spark Job Hangs on our production cluster
Date: Fri, 14 Aug 2015 15:14:10 -0400
I still want to check if anyone can provide any help related to why Spark
1.2.2 will hang on our production cluster when reading this AVRO data. I
plan to generate the same data in another format, as I want to rule out any
problem that could be related to AVRO, but it will take a while for me to
generate that. But I am not sure if the AVRO format could be the cause.
Thanks for your help.
Yong
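In case it helps anyone reproduce this, a minimal sketch of isolating the
read path (the HDFS path is a placeholder, and it assumes avro-mapred is on
the classpath):

import org.apache.avro.generic.GenericRecord
import org.apache.avro.mapred.AvroKey
import org.apache.avro.mapreduce.AvroKeyInputFormat
import org.apache.hadoop.io.NullWritable
import org.apache.spark.{SparkConf, SparkContext}

object AvroScanTest {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("avro-scan-test"))
    // Placeholder path; records are read with the writer schema embedded
    // in the Avro files.
    val rdd = sc.newAPIHadoopFile[AvroKey[GenericRecord], NullWritable,
      AvroKeyInputFormat[GenericRecord]]("hdfs:///path/to/avro")
    // count() forces a full scan of every block; if the job hangs here,
    // the problem is in the scan itself rather than in later stages.
    println("records: " + rdd.count())
    sc.stop()
  }
}

Running the same count over a plain-text copy of the data
(sc.textFile(...).count()) would be the matching control for ruling Avro in
or out.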
From: java8...@hotmail.com
To: user@spark.apache.org
Subject: Spark Job Hangs on our production cluster
Date:
I can confirm from the Spark UI that the executor heap is set to 24G.
>
> Thanks
>
> Yong
>
> --
> From: igor.ber...@gmail.com
> Date: Tue, 11 Aug 2015 23:31:59 +0300
> Subject: Re: Spark Job Hangs on our production cluster
> To: java8...@hotmail.com
Subject: Re: Spark Job Hangs on our production cluster
To: java8...@hotmail.com
CC: user@spark.apache.org
How do you want to process 1T of data when you set your executor memory to
be 2g?
Look at the Spark UI, metrics of tasks... if any.
Look at the Spark logs on the executor machine under the work dir (unless
you configured log4j).
I think your executors are thrashing or spilling to disk. Check memory
metrics/swapping.
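For reference, executor memory is set per application, either with
--executor-memory on spark-submit or in code; a sketch (the 24g value just
mirrors the heap size quoted earlier in the thread, not a recommendation):

import org.apache.spark.{SparkConf, SparkContext}

object ExecutorMemoryExample {
  def main(args: Array[String]): Unit = {
    // spark.executor.memory must be set before the executors launch.
    val conf = new SparkConf()
      .setAppName("executor-memory-example")
      .set("spark.executor.memory", "24g")
    val sc = new SparkContext(conf)
    // ... job logic ...
    sc.stop()
  }
}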
Currently we have an IBM BigInsight cluster with 1 namenode + 1 JobTracker
+ 42 data/task nodes, which runs BigInsight V3.0.0.2, corresponding to
Hadoop 2.2.0 with MR1.
Since IBM BigInsight doesn't come with Spark, we built Spark 1.2.2 with
Hadoop 2.2.0 + Hive 0.12 ourselves, and deployed it on the cluster.
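One quick sanity check for a custom build like this is to ask the bundled
Hadoop client classes for their version from the spark-shell (a sketch
using Hadoop's standard VersionInfo utility):

// Should print 2.2.0 if the Spark assembly really bundles the same
// Hadoop version the cluster runs.
import org.apache.hadoop.util.VersionInfo
println(VersionInfo.getVersion)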