Re: Spark issue moving from local to yarn-client

2019-03-14 Thread Dave Boyd
Ok, this had more information:

> INFO [2019-03-15 02:00:46,364] ({pool-2-thread-3} 
> Logging.scala[logInfo]:54) - Logging events to 
> hdfs:///var/log/spark/applicationHistory/application_1551287663522_0145
> ERROR [2019-03-15 02:00:46,366] ({SparkListenerBus} 
> Logging.scala[logError]:91) - uncaught error in thread 
> SparkListenerBus, stopping SparkContext
> java.lang.NoSuchMethodError: 
> org.json4s.Formats.emptyValueStrategy()Lorg/json4s/prefs/EmptyValueStrategy;
>     at org.json4s.jackson.JsonMethods$class.render(JsonMethods.scala:32)
>     at org.json4s.jackson.JsonMethods$.render(JsonMethods.scala:50)
>     at 
> org.apache.spark.scheduler.EventLoggingListener$$anonfun$logEvent$1.apply(EventLoggingListener.scala:136)
>     at 
> org.apache.spark.scheduler.EventLoggingListener$$anonfun$logEvent$1.apply(EventLoggingListener.scala:136)
>     at scala.Option.foreach(Option.scala:257)
>     at 
> org.apache.spark.scheduler.EventLoggingListener.logEvent(EventLoggingListener.scala:136)
>     at 
> org.apache.spark.scheduler.EventLoggingListener.onBlockManagerAdded(EventLoggingListener.scala:168)
>     at 
> org.apache.spark.scheduler.SparkListenerBus$class.doPostEvent(SparkListenerBus.scala:49)
>     at 
> org.apache.spark.scheduler.LiveListenerBus.doPostEvent(LiveListenerBus.scala:36)
>     at 
> org.apache.spark.scheduler.LiveListenerBus.doPostEvent(LiveListenerBus.scala:36)
>     at 
> org.apache.spark.util.ListenerBus$class.postToAll(ListenerBus.scala:63)
>     at 
> org.apache.spark.scheduler.LiveListenerBus.postToAll(LiveListenerBus.scala:36)
>     at 
> org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(LiveListenerBus.scala:94)
>     at 
> org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(LiveListenerBus.scala:79)
>     at 
> org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(LiveListenerBus.scala:79)
>     at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
>     at 
> org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1.apply$mcV$sp(LiveListenerBus.scala:78)
>     at 
> org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1245)
>     at 
> org.apache.spark.scheduler.LiveListenerBus$$anon$1.run(LiveListenerBus.scala:77)
> ERROR [2019-03-15 02:00:46,367] ({SparkListenerBus} 
> Logging.scala[logError]:91) - throw uncaught fatal error in thread 
> SparkListenerBus
> java.lang.NoSuchMethodError: 
> org.json4s.Formats.emptyValueStrategy()Lorg/json4s/prefs/EmptyValueStrategy;
>     [stack trace identical to the one above]
>  INFO [2019-03-15 02:00:46,368] ({pool-2-thread-3} 
> Logging.scala[logInfo]:54) - SchedulerBackend is ready for scheduling 
> beginning after waiting maxRegisteredResourcesWaitingTime: 3(ms)
>  INFO [2019-03-15 02:00:46,375] ({stop-spark-context} 
> AbstractConnector.java[doStop]:306) - Stopped 
> ServerConnector@718326a2{HTTP/1.1}{0.0.0.0:55600}
>  INFO [2019-03-15 02:00:46,376] ({stop-spark-context} 
> ContextHandler.java[doStop]:865) - Stopped 
> 
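A `NoSuchMethodError` on `org.json4s.Formats.emptyValueStrategy()` usually means json4s-jackson and json4s-core come from different json4s releases on the same classpath (for instance, Spark's bundled json4s shadowed by a newer one pulled in by an application dependency). A hedged sketch for spotting mixed versions; the directory, jar names, and version numbers below are illustrative stand-ins, not the actual setup from this thread:

```shell
# Stand-in jars dir simulating a classpath with mismatched json4s artifacts
# (paths and versions here are illustrative assumptions, not your real install)
JARS=/tmp/demo_jars
mkdir -p "$JARS"
touch "$JARS/json4s-core_2.11-3.2.11.jar" "$JARS/json4s-jackson_2.11-3.5.3.jar"
# All json4s artifacts on one classpath should carry the same version suffix;
# seeing two different versions below is the conflict to hunt down
ls "$JARS" | grep '^json4s' | sort
```

In practice one would run the same listing against the Spark jars directory and against whatever the interpreter adds (e.g. via `--jars` or a fat jar) and compare the version suffixes.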

Re: Spark issue moving from local to yarn-client

2019-03-14 Thread Jeff Zhang
This is the Zeppelin server log; the root cause should be in the Spark
interpreter log. The file name is something like:
zeppelin-interpreter-spark*.log
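To actually find and scan that file, a hedged sketch; the `/tmp/demo_zeppelin` directory and the log contents are stand-ins created for the demo, not a real installation:

```shell
# Locate the Spark interpreter log under ZEPPELIN_HOME/logs and pull out
# ERROR lines. The demo directory and fake log line below are assumptions
# standing in for a real ZEPPELIN_HOME.
ZEPPELIN_HOME=/tmp/demo_zeppelin
mkdir -p "$ZEPPELIN_HOME/logs"
printf 'ERROR NoSuchMethodError: org.json4s.Formats.emptyValueStrategy\n' \
  > "$ZEPPELIN_HOME/logs/zeppelin-interpreter-spark-shared_process.log"
# The actual lookup: list matching logs, then extract their ERROR lines
ls "$ZEPPELIN_HOME"/logs/zeppelin-interpreter-spark*.log
grep -h ERROR "$ZEPPELIN_HOME"/logs/zeppelin-interpreter-spark*.log
```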

Dave Boyd wrote on Fri, Mar 15, 2019 at 9:31 AM:

> [quoted message trimmed; it appears in full in Dave Boyd's reply below]

Re: Spark issue moving from local to yarn-client

2019-03-14 Thread Dave Boyd
Jeff:

Running a simple spark.version paragraph I sometimes get this:
INFO [2019-03-15 01:12:18,720] ({pool-2-thread-49} 
RemoteInterpreter.java[call]:142) - Open RemoteInterpreter 
org.apache.zeppelin.spark.SparkInterpreter
 INFO [2019-03-15 01:12:18,721] ({pool-2-thread-49} 
RemoteInterpreter.java[pushAngularObjectRegistryToRemote]:436) - Push local 
angular object registry from ZeppelinServer to remote interpreter group 
spark:shared_process
 WARN [2019-03-15 01:13:30,593] ({pool-2-thread-49} 
NotebookServer.java[afterStatusChange]:2316) - Job 20190207-030535_192412278 is 
finished, status: ERROR, exception: null, result: %text 
java.lang.IllegalStateException: Spark context stopped while waiting for backend
at 
org.apache.spark.scheduler.TaskSchedulerImpl.waitBackendReady(TaskSchedulerImpl.scala:614)
at 
org.apache.spark.scheduler.TaskSchedulerImpl.postStartHook(TaskSchedulerImpl.scala:169)
at org.apache.spark.SparkContext.&lt;init&gt;(SparkContext.scala:567)
at org.apache.spark.SparkContext.&lt;init&gt;(SparkContext.scala:117)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2336)
at org.apache.spark.SparkContext.getOrCreate(SparkContext.scala)
at 
org.apache.zeppelin.spark.Spark2Shims.setupSparkListener(Spark2Shims.java:38)
at 
org.apache.zeppelin.spark.NewSparkInterpreter.open(NewSparkInterpreter.java:120)
at 
org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:62)
at 
org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:69)
at 
org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:616)
at org.apache.zeppelin.scheduler.Job.run(Job.java:188)
at 
org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:140)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

 INFO [2019-03-15 01:13:30,598] ({pool-2-thread-49} 
VFSNotebookRepo.java[save]:196) - Saving note:2E4D6HQ3F
 INFO [2019-03-15 01:13:30,600] ({pool-2-thread-49} 
SchedulerFactory.java[jobFinished]:120) - Job 20190207-030535_192412278 
finished by scheduler 
org.apache.zeppelin.interpreter.remote.RemoteInterpreter-spark:shared_process-shared_session
When I run this spark sql paragraph:

// DataStore params to a hypothetical GeoMesa Accumulo table
val dsParams = Map(
  "instanceId" -> "oedl",
  "zookeepers" -> "oedevnode00,oedevnode01,oedevnode02",
  "user"   -> "oe_user",
  "password"   -> "XXX",
  "tableName"  -> "CoalesceSearch")

// Create DataFrame using the "geomesa" format
val docdataFrame = 
spark.read.format("geomesa").options(dsParams).option("geomesa.feature", 
"oedocumentrecordset").load()
docdataFrame.createOrReplaceTempView("documentview")

Here is the complete stack trace:

INFO [2019-03-15 01:07:21,569] ({pool-2-thread-43} Paragraph.java[jobRun]:380) 
- Run paragraph [paragraph_id: 20190222-204451_856915056, interpreter: , 
note_id: 2E6X2CDWW, user: anonymous]
 WARN [2019-03-15 01:07:27,098] ({pool-2-thread-43} 
NotebookServer.java[afterStatusChange]:2316) - Job 20190222-204451_856915056 is 
finished, status: ERROR, exception: null, result: %text 
java.lang.IllegalStateException: Cannot call methods on a stopped SparkContext.
This stopped SparkContext was created at:

org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:860)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:498)
org.apache.zeppelin.spark.BaseSparkScalaInterpreter.spark2CreateContext(BaseSparkScalaInterpreter.scala:259)
org.apache.zeppelin.spark.BaseSparkScalaInterpreter.createSparkContext(BaseSparkScalaInterpreter.scala:178)
org.apache.zeppelin.spark.SparkScala211Interpreter.open(SparkScala211Interpreter.scala:89)
org.apache.zeppelin.spark.NewSparkInterpreter.open(NewSparkInterpreter.java:102)
org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:62)
org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:69)
org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:616)
org.apache.zeppelin.scheduler.Job.run(Job.java:188)

Re: Spark issue moving from local to yarn-client

2019-03-14 Thread Jeff Zhang
Hi Dave,

Could you paste the full stacktrace ? You can find it in the spark
interpreter log file which is located in ZEPPELIN_HOME/logs

Xun Liu wrote on Fri, Mar 15, 2019 at 8:21 AM:

> [quoted message trimmed; it appears in full in Xun Liu's message below]

-- 
Best Regards

Jeff Zhang


Re: Spark issue moving from local to yarn-client

2019-03-14 Thread Xun Liu
Hi

You can first run a simple statement in Spark, through Spark SQL, to see
whether it runs normally on YARN.
If Spark SQL runs without problems, then look into the Zeppelin and
Spark-on-YARN side.

Also, which version are you using: zeppelin-0.7.4 or zeppelin-0.8.2? Or is it
a branch that you maintain yourself?
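A concrete form of that suggestion: bypass Zeppelin entirely and submit Spark's bundled SparkPi example to YARN; if that also fails, the problem is in the cluster rather than in Zeppelin. The `SPARK_HOME` default and the examples jar name below are assumptions about a typical Spark 2.x install; the sketch echoes the command instead of executing it, since there is no cluster here:

```shell
# Build the smoke-test command; SPARK_HOME default and jar name are assumptions
SPARK_HOME=${SPARK_HOME:-/usr/lib/spark}
CMD="$SPARK_HOME/bin/spark-submit --master yarn --deploy-mode client \
--class org.apache.spark.examples.SparkPi \
$SPARK_HOME/examples/jars/spark-examples_2.11-2.3.0.jar 10"
# Echo rather than execute, and keep a copy for inspection
echo "$CMD" | tee /tmp/demo_smoke_cmd.txt
```

Running the real command on the cluster and checking the YARN application logs separates "Spark on YARN is broken" from "Zeppelin's interpreter setup is broken".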

> On Mar 15, 2019, at 6:31 AM, Dave Boyd wrote:
> 
> All:
> 
>    I have some code that worked fine in Zeppelin 0.7.4, but I am having issues 
> in 0.8.2 when going from a Spark master of local to yarn-client. yarn-client 
> worked in 0.7.4.   
> When my master is set to local[*], it runs just fine. However, as soon as I 
> switch to yarn-client, I get the "Cannot call methods on a stopped SparkContext" 
> error. Looking at my YARN logs, everything is created fine and the job 
> finishes without an error. The executors start just fine,
> from what I can see in the YARN logs.   
> Any suggestions on where to look? This happens with any note that tries to 
> run Spark.
> 
> If I try this very simple code:
> 
> // Spark Version
> spark.version
> I get this error:
> 
> 
>> 
>> java.lang.IllegalStateException: Spark context stopped while waiting for 
>> backend at 
>> org.apache.spark.scheduler.TaskSchedulerImpl.waitBackendReady(TaskSchedulerImpl.scala:614)
>>  at 
>> org.apache.spark.scheduler.TaskSchedulerImpl.postStartHook(TaskSchedulerImpl.scala:169)
>>  at org.apache.spark.SparkContext.&lt;init&gt;(SparkContext.scala:567) at 
>> org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2313) at 
>> org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:868)
>>  at 
>> org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:860)
>>  at scala.Option.getOrElse(Option.scala:121) at 
>> org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:860)
>>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>  at 
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>  at java.lang.reflect.Method.invoke(Method.java:498) at 
>> org.apache.zeppelin.spark.BaseSparkScalaInterpreter.spark2CreateContext(BaseSparkScalaInterpreter.scala:259)
>>  at
>>  
>> org.apache.zeppelin.spark.BaseSparkScalaInterpreter.createSparkContext(BaseSparkScalaInterpreter.scala:178)
>>  at 
>> org.apache.zeppelin.spark.SparkScala211Interpreter.open(SparkScala211Interpreter.scala:89)
>>  at 
>> org.apache.zeppelin.spark.NewSparkInterpreter.open(NewSparkInterpreter.java:102)
>>  at 
>> org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:62) at 
>> org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:69)
>>  at 
>> org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:616)
>>  at org.apache.zeppelin.scheduler.Job.run(Job.java:188) at 
>> org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:140) at 
>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at 
>> java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>  at 
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>>  at 
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>>  at 
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>>  at 
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>>  at java.lang.Thread.run(Thread.java:748)
>> 
> What am I missing?
> -- 
> = mailto:db...@incadencecorp.com  
> 
> David W. Boyd 
> VP,  Data Solutions   
> 10432 Balls Ford, Suite 240  
> Manassas, VA 20109 
> office:   +1-703-552-2862
> cell: +1-703-402-7908
> == http://www.incadencecorp.com/  
> 
> ISO/IEC JTC1 WG9, editor ISO/IEC 20547 Big Data Reference Architecture
> Chair ANSI/INCITS TC Big Data
> Co-chair NIST Big Data Public Working Group Reference Architecture
> First Robotic Mentor - FRC, FTC - www.iliterobotics.org 
> 
> Board Member- USSTEM Foundation - www.usstem.org 
> 
> The information contained in this message may be privileged 
> and/or confidential and protected from disclosure.  
> If the reader of this message is not the intended recipient 
> or an employee or agent responsible for delivering this message 
> to the intended recipient, you are hereby notified that any 
> dissemination, distribution or copying of this communication 
> is strictly prohibited.  If you have received this communication 
> in error, please notify the sender immediately by replying to 
> this message and deleting the material from any computer.
> 
>