sorry, by "repl" I mean "spark-shell", I guess I'm used to them being used
interchangeably.  From that thread dump, the one thread that isn't stuck is
trying to get classes specifically related to the shell / repl:

>    java.lang.Thread.State: RUNNABLE
>         at java.net.SocketInputStream.socketRead0(Native Method)
>         at java.net.SocketInputStream.read(SocketInputStream.java:152)
>         at java.net.SocketInputStream.read(SocketInputStream.java:122)
>         at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
>         at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
>         at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
>         - locked <0x000000072477d530> (a java.io.BufferedInputStream)
>         at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:689)
>         at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:633)
>         at
> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1324)
>         - locked <0x0000000724772bf8> (a
> sun.net.www.protocol.http.HttpURLConnection)
>         at java.net.URL.openStream(URL.java:1037)
>         at
> org.apache.spark.repl.ExecutorClassLoader.findClassLocally(ExecutorClassLoader.scala:86)
>         at
> org.apache.spark.repl.ExecutorClassLoader.findClass(ExecutorClassLoader.scala:63)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:425)

...

That's because the repl has to package up the code for every single line you
type, and it serves those compiled classes to each executor over HTTP.  This
particular executor seems to be stuck pulling one of those repl-compiled
lines.  (This is all assuming that the thread dump stays the same over the
entire 30 minutes that Spark appears to be stuck.)
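
To make that mechanism concrete, here is a rough sketch of what an
executor-side repl class loader does (this is a simplified illustration, not
the actual Spark source; if I remember right, the class-server address is
passed to executors via spark.repl.class.uri):

import java.io.InputStream
import java.net.URL

// Simplified idea of ExecutorClassLoader: fetch the bytes of a
// repl-generated class (names like "$line3.$read$...") from the driver's
// HTTP class server and define the class in this JVM.
class ReplHttpClassLoader(classUri: String, parent: ClassLoader)
    extends ClassLoader(parent) {

  override def findClass(name: String): Class[_] = {
    val path = name.replace('.', '/') + ".class"
    var in: InputStream = null
    try {
      // This openStream() is roughly where the thread in your dump is
      // sitting: blocked waiting on the HTTP response from the driver.
      in = new URL(classUri + "/" + path).openStream()
      val bytes = Stream.continually(in.read()).takeWhile(_ != -1).map(_.toByte).toArray
      defineClass(name, bytes, 0, bytes.length)
    } finally {
      if (in != null) in.close()
    }
  }
}

So if that HTTP round trip to the driver never completes, the task thread
just sits in findClass indefinitely, which matches what you see.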

Yes, the classes should be loaded for the first partition that is
processed.  (There certainly could be cases where different classes are
needed for each partition, but it doesn't sound like you are doing anything
that would trigger that.)  To be clear, though: in repl mode there will be
additional classes sent with every single job.
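
If you want to confirm that class loading really is where the time goes, one
cheap check (just a suggestion; adjust it to your setup) is to add
class-loading verbosity to the executor JVM options you already pass, then
look at the executor stdout for the last class being loaded when things stall:

spark.executor.extraJavaOptions -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -verbose:class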

Hope that helps a little more.  Maybe there was some issue with 1.2.2,
though I didn't see anything in a quick search; hopefully you'll have
more luck with 1.3.1.
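
And if you want to rule the repl out entirely (per the earlier suggestion to
try running outside of spark-shell), a minimal sketch of the same count as a
standalone app would look something like the following; the object name, jar
names, and paths here are just placeholders:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext
import com.databricks.spark.avro._

// Packaged into a jar and launched with spark-submit, the executors load
// application classes from the jar on their classpath instead of fetching
// repl-generated classes over HTTP.
object BigDataCount {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("BigDataCount"))
    val sqlContext = new HiveContext(sc)
    val bigData = sqlContext.avroFile("hdfs://namenode:9000/bigData/")
    bigData.registerTempTable("bigData")
    println(bigData.count())
    sc.stop()
  }
}

submitted with something along the lines of
spark-submit --master spark://host1:7077 --class BigDataCount --jars <your spark-avro jar> bigdata-count.jar

If that runs cleanly against the 1T dataset, it points even more strongly at
the repl class server rather than HDFS.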

On Tue, Aug 18, 2015 at 2:23 PM, java8964 <java8...@hotmail.com> wrote:

> Hi, Imran:
>
> Thanks for your reply. I am not sure what you mean by "repl". Can you give
> more detail about that?
>
> This only happens when Spark 1.2.2 tries to scan the big dataset, and I
> cannot reproduce it when it scans the smaller dataset.
>
> FYI, I have built and deployed Spark 1.3.1 on our production cluster.
> Right now, I cannot reproduce this hang problem on the same cluster with
> the same big dataset. Given that, we will continue trying Spark 1.3.1 and
> hope we will have a more positive experience with it.
>
> But just out of curiosity, what classes does Spark need to load at this
> point? From my understanding, the executor has already scanned the first
> block of data from HDFS and hangs while starting the 2nd block. All the
> classes should already be loaded in the JVM in this case.
>
> Thanks
>
> Yong
>
> ------------------------------
> From: iras...@cloudera.com
> Date: Tue, 18 Aug 2015 12:17:56 -0500
> Subject: Re: Spark Job Hangs on our production cluster
> To: java8...@hotmail.com
> CC: user@spark.apache.org
>
>
> Just looking at the thread dump from your original email, the 3 executor
> threads are all trying to load classes.  (One thread is actually loading
> some class, and the others are blocked waiting to load a class, most likely
> trying to load the same thing.)  That is really weird, and definitely not
> something that should keep things blocked for 30 min.  It suggests
> something wrong with the JVM, or the classpath configuration, or a
> combination.  It looks like you are trying to run in the repl, and for
> whatever reason the HTTP server the repl uses to serve classes is not
> responsive.  I'd try running outside of the repl and see if that works.
>
> Sorry, not a full diagnosis, but maybe this'll help a bit.
>
> On Tue, Aug 11, 2015 at 3:19 PM, java8964 <java8...@hotmail.com> wrote:
>
> Currently we have an IBM BigInsight cluster with 1 namenode + 1 JobTracker
> + 42 data/task nodes, running BigInsight V3.0.0.2, which corresponds to
> Hadoop 2.2.0 with MR1.
>
> Since IBM BigInsight doesn't come with Spark, we built Spark 1.2.2 with
> Hadoop 2.2.0 + Hive 0.12 ourselves and deployed it on the same cluster.
>
> IBM BigInsight comes with IBM JDK 1.7, but in our experience on the staging
> environment, we found that Spark works better with the Oracle JVM. So we
> run Spark under Oracle JDK 1.7.0_79.
>
> Now in production, we are facing an issue we have never faced before and
> cannot reproduce on our staging cluster.
>
> We are using a Spark Standalone cluster, and here are the basic
> configurations:
>
> more spark-env.sh
> export JAVA_HOME=/opt/java
> export PATH=$JAVA_HOME/bin:$PATH
> export HADOOP_CONF_DIR=/opt/ibm/biginsights/hadoop-conf/
> export SPARK_CLASSPATH=/opt/ibm/biginsights/IHC/lib/ibm-compression.jar:/opt/ibm/biginsights/hive/lib/db2jcc4-10.6.jar
> export SPARK_LOCAL_DIRS=/data1/spark/local,/data2/spark/local,/data3/spark/local
> export SPARK_MASTER_WEBUI_PORT=8081
> export SPARK_MASTER_IP=host1
> export SPARK_MASTER_OPTS="-Dspark.deploy.defaultCores=42"
> export SPARK_WORKER_MEMORY=24g
> export SPARK_WORKER_CORES=6
> export SPARK_WORKER_DIR=/tmp/spark/work
> export SPARK_DRIVER_MEMORY=2g
> export SPARK_EXECUTOR_MEMORY=2g
>
> more spark-defaults.conf
> spark.master spark://host1:7077
> spark.eventLog.enabled true
> spark.eventLog.dir hdfs://host1:9000/spark/eventLog
> spark.serializer org.apache.spark.serializer.KryoSerializer
> spark.executor.extraJavaOptions -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps
>
> We use the AVRO file format a lot, and we have these 2 datasets: one is
> about 96G, and the other one is a little over 1T. Since we are using AVRO,
> we also built spark-avro at commit "
> a788c9fce51b0ec1bb4ce88dc65c1d55aaa675b8
> <https://github.com/databricks/spark-avro/tree/a788c9fce51b0ec1bb4ce88dc65c1d55aaa675b8>",
> which is the latest version supporting Spark 1.2.x.
>
> Here is the problem we are facing on our production cluster: even the
> following simple spark-shell commands will hang:
>
> import org.apache.spark.sql.SQLContext
> val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
> import com.databricks.spark.avro._
> val bigData = sqlContext.avroFile("hdfs://namenode:9000/bigData/")
> bigData.registerTempTable("bigData")
> bigData.count()
>
> From the console, we saw the following:
>
> [Stage 0:>                                            (44 + 42) / 7800]
>
> with no update for 30 minutes or longer.
>
> The big 1T dataset should generate 7800 HDFS blocks, but Spark's HDFS
> client seems to have problems reading them. Since we are running Spark on
> the data nodes, all the Spark tasks run at the "NODE_LOCAL" locality level.
>
> If I go to a data/task node on which the Spark tasks hang and use jstack
> to dump the threads, I get the following at the top:
>
> 2015-08-11 15:38:38
> Full thread dump Java HotSpot(TM) 64-Bit Server VM (24.79-b02 mixed mode):
>
> "Attach Listener" daemon prio=10 tid=0x00007f0660589000 nid=0x1584d
> waiting on condition [0x0000000000000000]
>    java.lang.Thread.State: RUNNABLE
>
> "org.apache.hadoop.hdfs.PeerCache@4a88ec00" daemon prio=10
> tid=0x00007f06508b7800 nid=0x13302 waiting on condition [0x00007f060be94000]
>    java.lang.Thread.State: TIMED_WAITING (sleeping)
>         at java.lang.Thread.sleep(Native Method)
>         at org.apache.hadoop.hdfs.PeerCache.run(PeerCache.java:252)
>         at org.apache.hadoop.hdfs.PeerCache.access$000(PeerCache.java:39)
>         at org.apache.hadoop.hdfs.PeerCache$1.run(PeerCache.java:135)
>         at java.lang.Thread.run(Thread.java:745)
>
> "shuffle-client-1" daemon prio=10 tid=0x00007f0650687000 nid=0x132fc
> runnable [0x00007f060d198000]
>    java.lang.Thread.State: RUNNABLE
>         at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
>         at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
>         at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
>         at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
>         - locked <0x000000067bf47710> (a
> io.netty.channel.nio.SelectedSelectionKeySet)
>         - locked <0x000000067bf374e8> (a
> java.util.Collections$UnmodifiableSet)
>         - locked <0x000000067bf373d0> (a sun.nio.ch.EPollSelectorImpl)
>         at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
>         at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:622)
>         at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:310)
>         at
> io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
>         at java.lang.Thread.run(Thread.java:745)
>
> Meanwhile, I can confirm our Hadoop/HDFS cluster works fine: MapReduce
> jobs run without any problem, and the "hadoop fs" command works fine in
> BigInsight.
>
> I attached the jstack output to this email, but I don't know what the root
> cause could be.
> The same spark-shell commands work fine if I point to the small dataset
> instead of the big one. The small dataset has around 800 HDFS blocks, and
> Spark finishes without any problem.
>
> Here are some facts I know:
>
> 1) Since BigInsight runs on the IBM JDK, I also ran Spark under the same
> JDK; same problem with the big dataset.
> 2) I even changed "--total-executor-cores" to 42, which makes each
> executor run with one core (as we have 42 Spark workers), to avoid any
> multithreading, but still no luck.
> 3) This hang when scanning the 1T data does NOT happen 100% of the time.
> Sometimes I don't see it, but I will see it more than 50% of the time if I
> try.
> 4) We never hit this issue on our staging cluster, but it has only 1
> namenode + 1 jobtracker + 3 data/task nodes, and the same dataset is only
> 160G there.
> 5) While the Spark Java process is hanging, I don't see any exceptions or
> issues in the HDFS data node logs.
>
> Does anyone have any clue about this?
>
> Thanks
>
> Yong
>
>
>
>
