To explain further: when our server starts, it also starts a Spark cluster. The
server is an OSGi environment, and we call SqlContext.SqlContext().stop() to
stop the master from the OSGi deactivate method.
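
For clarity, a minimal sketch of how that shutdown is wired on our side (a
Declarative Services component; the class and field names here are
illustrative, not our actual Carbon component):

import org.apache.spark.SparkContext;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Deactivate;

// Hypothetical component, for illustration only.
@Component(immediate = true)
public class SparkClusterComponent {

    // Assigned when the bundle is activated and the cluster is started (not shown).
    private SparkContext sparkContext;

    @Deactivate
    protected void deactivate() {
        // Stop the embedded Spark master/context when the bundle is deactivated.
        // On a forceful shutdown this may run while the JVM's shutdown hooks
        // are already executing.
        if (sparkContext != null) {
            sparkContext.stop();
        }
    }
}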

On Tue, Oct 11, 2016 at 3:27 PM, Gimantha Bandara <giman...@wso2.com> wrote:

>  Hi all,
>
>
> When we forcefully shut down the server, we intermittently observe the
> following exception. Note that this exception is printed in the logs multiple
> times and prevents the server from shutting down as soon as we trigger a
> forceful shutdown. What could be the root cause?
>
> TID: [-1] [] [2016-09-27 22:58:57,795] ERROR {org.apache.spark.deploy.worker.Worker} - Failed to launch executor app-20160927225850-0000/1 for CarbonAnalytics. {org.apache.spark.deploy.worker.Worker}
> java.lang.IllegalStateException: Shutdown hooks cannot be modified during shutdown.
>         at org.apache.spark.util.SparkShutdownHookManager.add(ShutdownHookManager.scala:246)
>         at org.apache.spark.util.ShutdownHookManager$.addShutdownHook(ShutdownHookManager.scala:191)
>         at org.apache.spark.util.ShutdownHookManager$.addShutdownHook(ShutdownHookManager.scala:180)
>         at org.apache.spark.deploy.worker.ExecutorRunner.start(ExecutorRunner.scala:75)
>         at org.apache.spark.deploy.worker.Worker$$anonfun$receive$1.applyOrElse(Worker.scala:472)
>         at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:116)
>         at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204)
>         at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
>         at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>         at java.lang.Thread.run(Thread.java:745)
> TID: [-1] [] [2016-09-27 22:58:57,798] WARN {org.apache.spark.deploy.master.Master} - Got status update for unknown executor app-20160927225850-0000/0 {org.apache.spark.deploy.master.Master}
> TID: [-1234] [] [2016-09-27 22:58:57,802] INFO {org.wso2.carbon.core.ServerManagement} - All requests have been served. {org.wso2.carbon.core.ServerManagement}
> TID: [-1234] [] [2016-09-27 22:58:57,802] INFO {org.wso2.carbon.core.ServerManagement} - Waiting for deployment completion... {org.wso2.carbon.core.ServerManagement}
> TID: [-1] [] [2016-09-27 22:58:57,803] ERROR {org.apache.spark.deploy.worker.Worker} - Failed to launch executor app-20160927225850-0000/2 for CarbonAnalytics. {org.apache.spark.deploy.worker.Worker}
> java.lang.IllegalStateException: Shutdown hooks cannot be modified during shutdown.
>         at org.apache.spark.util.SparkShutdownHookManager.add(ShutdownHookManager.scala:246)
>         at org.apache.spark.util.ShutdownHookManager$.addShutdownHook(ShutdownHookManager.scala:191)
>         at org.apache.spark.util.ShutdownHookManager$.addShutdownHook(ShutdownHookManager.scala:180)
>         at org.apache.spark.deploy.worker.ExecutorRunner.start(ExecutorRunner.scala:75)
>         at org.apache.spark.deploy.worker.Worker$$anonfun$receive$1.applyOrElse(Worker.scala:472)
>         at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:116)
>         at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204)
>         at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
>         at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>         at java.lang.Thread.run(Thread.java:745)
> TID: [-1] [] [2016-09-27 22:58:57,834] ERROR {org.apache.spark.ContextCleaner} - Error in cleaning thread {org.apache.spark.ContextCleaner}
> java.lang.InterruptedException
>         at java.lang.Object.wait(Native Method)
>         at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:143)
>         at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1.apply$mcV$sp(ContextCleaner.scala:176)
>         at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1181)
>         at org.apache.spark.ContextCleaner.org$apache$spark$ContextCleaner$$keepCleaning(ContextCleaner.scala:173)
>         at org.apache.spark.ContextCleaner$$anon$3.run(ContextCleaner.scala:68)
> TID: [-1] [] [2016-09-27 22:58:57,835] ERROR {org.apache.spark.util.Utils} - uncaught error in thread SparkListenerBus, stopping SparkContext {org.apache.spark.util.Utils}
> java.lang.InterruptedException
>         at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998)
>         at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>         at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
>         at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(AsynchronousListenerBus.scala:66)
>         at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(AsynchronousListenerBus.scala:65)
>         at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(AsynchronousListenerBus.scala:65)
>         at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
>         at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1.apply$mcV$sp(AsynchronousListenerBus.scala:64)
>         at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1181)
>         at org.apache.spark.util.AsynchronousListenerBus$$anon$1.run(AsynchronousListenerBus.scala:63)
> TID: [-1] [] [2016-09-27 22:58:57,871] ERROR {org.apache.spark.util.Utils} - throw uncaught fatal error in thread SparkListenerBus {org.apache.spark.util.Utils}
> java.lang.InterruptedException
>         at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998)
>         at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>         at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
>         at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(AsynchronousListenerBus.scala:66)
>         at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(AsynchronousListenerBus.scala:65)
>         at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(AsynchronousListenerBus.scala:65)
>         at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
>         at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1.apply$mcV$sp(AsynchronousListenerBus.scala:64)
>         at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1181)
>         at org.apache.spark.util.AsynchronousListenerBus$$anon$1.run(AsynchronousListenerBus.scala:63)
> TID: [-1] [] [2016-09-27 22:58:57,834] ERROR {org.apache.spark.rpc.netty.Inbox} - Ignoring error {org.apache.spark.rpc.netty.Inbox}
> org.apache.spark.SparkException: Error notifying standalone scheduler's driver endpoint
>         at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.removeExecutor(CoarseGrainedSchedulerBackend.scala:373)
>         at org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend.executorRemoved(SparkDeploySchedulerBackend.scala:144)
>         at org.apache.spark.deploy.client.AppClient$ClientEndpoint$$anonfun$receive$1.applyOrElse(AppClient.scala:184)
>         at org.apache.spark.rpc.netty.Inbox$$anonfun$process$1.apply$mcV$sp(Inbox.scala:116)
>         at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:204)
>         at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
>         at org.apache.spark.rpc.netty.Dispatcher$MessageLoop.run(Dispatcher.scala:215)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>         at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.InterruptedException
>         at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1039)
>         at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
>         at scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:208)
>         at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:218)
>         at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
>         at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
>         at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
>         at scala.concurrent.Await$.result(package.scala:107)
>         at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
>         at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:101)
>         at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:77)
>         at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.removeExecutor(CoarseGrainedSchedulerBackend.scala:370)
>         ... 9 more
>
> Appreciate your help!
>
> --
>
> Thanks,
> Gimantha
>
>


-- 
Gimantha Bandara
Software Engineer
WSO2 Inc. : http://wso2.com
Mobile : +94714961919
