I had a cluster running with a streaming driver deployed to it. I shut the cluster down with sbin/stop-all.sh. After restarting it (and restarting it again, and again), the master web UI can no longer serve requests, although the cluster otherwise appears to be functional. The master's log, including the resulting stack traces, is below.
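
For reference, the masters run with ZooKeeper-based recovery enabled, as the -Dspark.deploy.* flags in the launch command below show. A minimal sketch of that configuration, assuming the flags are injected through SPARK_DAEMON_JAVA_OPTS in conf/spark-env.sh (the ZooKeeper host is ours):

    # conf/spark-env.sh (sketch; mirrors the -Dspark.deploy.* flags in the log below)
    # Persist standalone-master recovery state to ZooKeeper so a restarted master
    # can recover previously registered workers, applications, and drivers.
    export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=pubsub01:2181"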


pmogren@streamproc01:~/streamproc/spark-0.9.1-bin-hadoop2$ cat /home/pmogren/streamproc/spark-0.9.1-bin-hadoop2/sbin/../logs/spark-pmogren-org.apache.spark.deploy.master.Master-1-streamproc01.out
Spark Command: /usr/lib/jvm/java-8-oracle-amd64/bin/java -cp :/home/pmogren/streamproc/spark-0.9.1-bin-hadoop2/conf:/home/pmogren/streamproc/spark-0.9.1-bin-hadoop2/assembly/target/scala-2.10/spark-assembly_2.10-0.9.1-hadoop2.2.0.jar -Dspark.akka.logLifecycleEvents=true -Djava.library.path= -Xms512m -Xmx512m -Dspark.streaming.unpersist=true -Djava.net.preferIPv4Stack=true -Dsun.io.serialization.extendedDebugInfo=true -Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=pubsub01:2181 org.apache.spark.deploy.master.Master --ip 10.10.41.19 --port 7077 --webui-port 8080
========================================

log4j:WARN No appenders could be found for logger (akka.event.slf4j.Slf4jLogger).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
14/04/11 16:07:55 INFO Master: Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
14/04/11 16:07:55 INFO Master: Starting Spark master at spark://10.10.41.19:7077
14/04/11 16:07:55 INFO MasterWebUI: Started Master web UI at http://10.10.41.19:8080
14/04/11 16:07:55 INFO Master: Persisting recovery state to ZooKeeper
14/04/11 16:07:55 INFO ZooKeeper: Client environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
14/04/11 16:07:55 INFO ZooKeeper: Client environment:host.name=streamproc01.nexus.commercehub.com
14/04/11 16:07:55 INFO ZooKeeper: Client environment:java.version=1.8.0
14/04/11 16:07:55 INFO ZooKeeper: Client environment:java.vendor=Oracle Corporation
14/04/11 16:07:55 INFO ZooKeeper: Client environment:java.home=/usr/lib/jvm/jdk1.8.0/jre
14/04/11 16:07:55 INFO ZooKeeper: Client environment:java.class.path=:/home/pmogren/streamproc/spark-0.9.1-bin-hadoop2/conf:/home/pmogren/streamproc/spark-0.9.1-bin-hadoop2/assembly/target/scala-2.10/spark-assembly_2.10-0.9.1-hadoop2.2.0.jar
14/04/11 16:07:55 INFO ZooKeeper: Client environment:java.library.path=
14/04/11 16:07:55 INFO ZooKeeper: Client environment:java.io.tmpdir=/tmp
14/04/11 16:07:55 INFO ZooKeeper: Client environment:java.compiler=<NA>
14/04/11 16:07:55 INFO ZooKeeper: Client environment:os.name=Linux
14/04/11 16:07:55 INFO ZooKeeper: Client environment:os.arch=amd64
14/04/11 16:07:55 INFO ZooKeeper: Client environment:os.version=3.5.0-23-generic
14/04/11 16:07:55 INFO ZooKeeper: Client environment:user.name=pmogren
14/04/11 16:07:55 INFO ZooKeeper: Client environment:user.home=/home/pmogren
14/04/11 16:07:55 INFO ZooKeeper: Client environment:user.dir=/home/pmogren/streamproc/spark-0.9.1-bin-hadoop2
14/04/11 16:07:55 INFO ZooKeeper: Initiating client connection, connectString=pubsub01:2181 sessionTimeout=30000 watcher=org.apache.spark.deploy.master.SparkZooKeeperSession$ZooKeeperWatcher@744bfbb6
14/04/11 16:07:55 INFO ZooKeeperLeaderElectionAgent: Starting ZooKeeper LeaderElection agent
14/04/11 16:07:55 INFO ZooKeeper: Initiating client connection, connectString=pubsub01:2181 sessionTimeout=30000 watcher=org.apache.spark.deploy.master.SparkZooKeeperSession$ZooKeeperWatcher@7f7e6043
14/04/11 16:07:55 INFO ClientCnxn: Opening socket connection to server pubsub01.nexus.commercehub.com/10.10.40.39:2181. Will not attempt to authenticate using SASL (unknown error)
14/04/11 16:07:55 INFO ClientCnxn: Socket connection established to pubsub01.nexus.commercehub.com/10.10.40.39:2181, initiating session
14/04/11 16:07:55 INFO ClientCnxn: Opening socket connection to server pubsub01.nexus.commercehub.com/10.10.40.39:2181. Will not attempt to authenticate using SASL (unknown error)
14/04/11 16:07:55 WARN ClientCnxnSocket: Connected to an old server; r-o mode will be unavailable
14/04/11 16:07:55 INFO ClientCnxn: Session establishment complete on server pubsub01.nexus.commercehub.com/10.10.40.39:2181, sessionid = 0x14515d9a11300ce, negotiated timeout = 30000
14/04/11 16:07:55 INFO ClientCnxn: Socket connection established to pubsub01.nexus.commercehub.com/10.10.40.39:2181, initiating session
14/04/11 16:07:55 WARN ClientCnxnSocket: Connected to an old server; r-o mode will be unavailable
14/04/11 16:07:55 INFO ClientCnxn: Session establishment complete on server pubsub01.nexus.commercehub.com/10.10.40.39:2181, sessionid = 0x14515d9a11300cf, negotiated timeout = 30000
14/04/11 16:07:55 WARN ZooKeeperLeaderElectionAgent: Cleaning up old ZK master election file that points to this master.
14/04/11 16:07:55 INFO ZooKeeperLeaderElectionAgent: Leader file disappeared, a master is down!
14/04/11 16:07:55 INFO Master: I have been elected leader! New state: RECOVERING
pmogren@streamproc01:~/streamproc/spark-0.9.1-bin-hadoop2$ tail -f /home/pmogren/streamproc/spark-0.9.1-bin-hadoop2/sbin/../logs/spark-pmogren-org.apache.spark.deploy.master.Master-1-streamproc01.out
14/04/11 16:07:55 INFO ClientCnxn: Socket connection established to pubsub01.nexus.commercehub.com/10.10.40.39:2181, initiating session
14/04/11 16:07:55 INFO ClientCnxn: Opening socket connection to server pubsub01.nexus.commercehub.com/10.10.40.39:2181. Will not attempt to authenticate using SASL (unknown error)
14/04/11 16:07:55 WARN ClientCnxnSocket: Connected to an old server; r-o mode will be unavailable
14/04/11 16:07:55 INFO ClientCnxn: Session establishment complete on server pubsub01.nexus.commercehub.com/10.10.40.39:2181, sessionid = 0x14515d9a11300ce, negotiated timeout = 30000
14/04/11 16:07:55 INFO ClientCnxn: Socket connection established to pubsub01.nexus.commercehub.com/10.10.40.39:2181, initiating session
14/04/11 16:07:55 WARN ClientCnxnSocket: Connected to an old server; r-o mode will be unavailable
14/04/11 16:07:55 INFO ClientCnxn: Session establishment complete on server pubsub01.nexus.commercehub.com/10.10.40.39:2181, sessionid = 0x14515d9a11300cf, negotiated timeout = 30000
14/04/11 16:07:55 WARN ZooKeeperLeaderElectionAgent: Cleaning up old ZK master election file that points to this master.
14/04/11 16:07:55 INFO ZooKeeperLeaderElectionAgent: Leader file disappeared, a master is down!
14/04/11 16:07:55 INFO Master: I have been elected leader! New state: RECOVERING
14/04/11 16:08:55 ERROR TaskInvocation:
java.lang.NullPointerException
        at org.apache.spark.deploy.master.Master$$anonfun$completeRecovery$5.apply(Master.scala:418)
        at org.apache.spark.deploy.master.Master$$anonfun$completeRecovery$5.apply(Master.scala:418)
        at scala.collection.TraversableLike$$anonfun$filter$1.apply(TraversableLike.scala:264)
        at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
        at scala.collection.TraversableLike$class.filter(TraversableLike.scala:263)
        at scala.collection.AbstractTraversable.filter(Traversable.scala:105)
        at org.apache.spark.deploy.master.Master.completeRecovery(Master.scala:418)
        at org.apache.spark.deploy.master.Master$$anonfun$receive$1$$anonfun$applyOrElse$1.apply$mcV$sp(Master.scala:160)
        at akka.actor.Scheduler$$anon$11.run(Scheduler.scala:118)
        at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:42)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
14/04/11 16:09:18 WARN AbstractHttpConnection: /
java.lang.NullPointerException
        at org.apache.spark.deploy.master.ui.IndexPage.driverRow(IndexPage.scala:178)
        at org.apache.spark.deploy.master.ui.IndexPage$$anonfun$8.apply(IndexPage.scala:62)
        at org.apache.spark.deploy.master.ui.IndexPage$$anonfun$8.apply(IndexPage.scala:62)
        at org.apache.spark.ui.UIUtils$$anonfun$listingTable$2.apply(UIUtils.scala:134)
        at org.apache.spark.ui.UIUtils$$anonfun$listingTable$2.apply(UIUtils.scala:134)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
        at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
        at scala.collection.AbstractTraversable.map(Traversable.scala:105)
        at org.apache.spark.ui.UIUtils$.listingTable(UIUtils.scala:134)
        at org.apache.spark.deploy.master.ui.IndexPage.render(IndexPage.scala:62)
        at org.apache.spark.deploy.master.ui.MasterWebUI$$anonfun$4.apply(MasterWebUI.scala:67)
        at org.apache.spark.deploy.master.ui.MasterWebUI$$anonfun$4.apply(MasterWebUI.scala:67)
        at org.apache.spark.ui.JettyUtils$$anon$1.handle(JettyUtils.scala:61)
        at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1040)
        at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:976)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
        at org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
        at org.eclipse.jetty.server.Server.handle(Server.java:363)
        at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:483)
        at org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:920)
        at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:982)
        at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:635)
        at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
        at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
        at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:628)
        at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
        at java.lang.Thread.run(Thread.java:744)
14/04/11 16:09:19 WARN AbstractHttpConnection: /favicon.ico
java.lang.NullPointerException
        at org.apache.spark.deploy.master.ui.IndexPage.driverRow(IndexPage.scala:178)
        at org.apache.spark.deploy.master.ui.IndexPage$$anonfun$8.apply(IndexPage.scala:62)
        at org.apache.spark.deploy.master.ui.IndexPage$$anonfun$8.apply(IndexPage.scala:62)
        at org.apache.spark.ui.UIUtils$$anonfun$listingTable$2.apply(UIUtils.scala:134)
        at org.apache.spark.ui.UIUtils$$anonfun$listingTable$2.apply(UIUtils.scala:134)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
        at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
        at scala.collection.AbstractTraversable.map(Traversable.scala:105)
        at org.apache.spark.ui.UIUtils$.listingTable(UIUtils.scala:134)
        at org.apache.spark.deploy.master.ui.IndexPage.render(IndexPage.scala:62)
        at org.apache.spark.deploy.master.ui.MasterWebUI$$anonfun$4.apply(MasterWebUI.scala:67)
        at org.apache.spark.deploy.master.ui.MasterWebUI$$anonfun$4.apply(MasterWebUI.scala:67)
        at org.apache.spark.ui.JettyUtils$$anon$1.handle(JettyUtils.scala:61)
        at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1040)
        at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:976)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
        at org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
        at org.eclipse.jetty.server.Server.handle(Server.java:363)
        at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:483)
        at org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:920)
        at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:982)
        at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:635)
        at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
        at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
        at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:628)
        at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
        at java.lang.Thread.run(Thread.java:744)
