darion yaphet created ZEPPELIN-1536:
---------------------------------------

             Summary: Spark 2 SparkListenerBus has already stopped
                 Key: ZEPPELIN-1536
                 URL: https://issues.apache.org/jira/browse/ZEPPELIN-1536
             Project: Zeppelin
          Issue Type: Bug
          Components: Core
    Affects Versions: 0.6.1
            Reporter: darion yaphet


When I use Zeppelin with Spark 2.0.1, starting a Spark job on YARN fails.

{noformat}

ERROR [2016-10-12 18:47:20,118] ({Yarn application state monitor} Logging.scala[logError]:70) - SparkListenerBus has already stopped! Dropping event SparkListenerStageCompleted(org.apache.spark.scheduler.StageInfo@7b1d3cb9)
ERROR [2016-10-12 18:47:20,119] ({pool-2-thread-2} Logging.scala[logError]:70) - SparkListenerBus has already stopped! Dropping event SparkListenerSQLExecutionEnd(0,1476269240118)
ERROR [2016-10-12 18:47:20,120] ({Yarn application state monitor} Logging.scala[logError]:70) - SparkListenerBus has already stopped! Dropping event SparkListenerJobEnd(0,1476269240119,JobFailed(org.apache.spark.SparkException: Job 0 cancelled because SparkContext was shut down))
ERROR [2016-10-12 18:47:20,123] ({pool-2-thread-2} Job.java[run]:189) - Job failed

org.apache.zeppelin.interpreter.InterpreterException: java.lang.reflect.InvocationTargetException
        at org.apache.zeppelin.spark.ZeppelinContext.showDF(ZeppelinContext.java:218)
        at org.apache.zeppelin.spark.SparkSqlInterpreter.interpret(SparkSqlInterpreter.java:129)
        at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:94)
        at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:341)
        at org.apache.zeppelin.scheduler.Job.run(Job.java:176)
        at org.apache.zeppelin.scheduler.ParallelScheduler$JobRunner.run(ParallelScheduler.java:162)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:483)
        at org.apache.zeppelin.spark.ZeppelinContext.showDF(ZeppelinContext.java:214)
        ... 12 more
Caused by: org.apache.spark.SparkException: Job 0 cancelled because SparkContext was shut down
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:818)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:816)
        at scala.collection.mutable.HashSet.foreach(HashSet.scala:78)
        at org.apache.spark.scheduler.DAGScheduler.cleanUpAfterSchedulerStop(DAGScheduler.scala:816)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onStop(DAGScheduler.scala:1685)
        at org.apache.spark.util.EventLoop.stop(EventLoop.scala:83)
        at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:1604)
        at org.apache.spark.SparkContext$$anonfun$stop$8.apply$mcV$sp(SparkContext.scala:1798)
        at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1287)
        at org.apache.spark.SparkContext.stop(SparkContext.scala:1797)
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend$MonitorThread.run(YarnClientSchedulerBackend.scala:108)
        at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:632)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1890)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1903)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1916)
        at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:347)
        at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:39)

{noformat}
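For context: the stack trace shows the YARN application state monitor thread ({{YarnClientSchedulerBackend$MonitorThread}}) calling {{SparkContext.stop()}} while the SQL job from {{showDF}} is still running, after which the listener bus drops any further events with the "SparkListenerBus has already stopped!" ERROR. The sketch below is a simplified, hypothetical model of that drop-after-stop behavior (it is not Spark's actual {{LiveListenerBus}} code; class and method names are illustrative only), just to show why the events are dropped rather than delivered once stop has run.

{noformat}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;

// Simplified, hypothetical model of a listener bus that drops events
// posted after stop() -- the pattern behind the ERROR lines above.
class ListenerBusSketch {
    private final AtomicBoolean stopped = new AtomicBoolean(false);
    final List<String> delivered = new ArrayList<>();
    final List<String> dropped = new ArrayList<>();

    void post(String event) {
        if (stopped.get()) {
            // Spark logs an ERROR and drops the event instead of throwing.
            dropped.add(event);
            System.err.println(
                "SparkListenerBus has already stopped! Dropping event " + event);
        } else {
            delivered.add(event);
        }
    }

    void stop() {
        stopped.set(true);
    }

    public static void main(String[] args) {
        ListenerBusSketch bus = new ListenerBusSketch();
        bus.post("SparkListenerJobStart");       // delivered normally
        bus.stop();  // e.g. the YARN monitor thread shutting the context down
        bus.post("SparkListenerStageCompleted"); // dropped, as in the log
        System.out.println(
            bus.delivered.size() + " delivered, " + bus.dropped.size() + " dropped");
    }
}
{noformat}

So the dropped-event ERRORs are a symptom, not the cause: the real problem is whatever made the monitor thread decide the YARN application had ended and shut the SparkContext down mid-job.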



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)