[jira] [Updated] (SPARK-9446) Clear Active SparkContext in stop() method

2015-07-31 Thread Sean Owen (JIRA)

 [ https://issues.apache.org/jira/browse/SPARK-9446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen updated SPARK-9446:
-
Shepherd: Sean Owen

 Clear Active SparkContext in stop() method
 --

 Key: SPARK-9446
 URL: https://issues.apache.org/jira/browse/SPARK-9446
 Project: Spark
  Issue Type: Bug
  Components: Spark Core
Affects Versions: 1.4.1
Reporter: Ted Yu
Priority: Minor

 In the mailing-list thread 'stopped SparkContext remaining active', Andres 
 observed the following in the driver log:
 {code}
 15/07/29 15:17:09 WARN YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster has disassociated: address removed
 15/07/29 15:17:09 INFO YarnClientSchedulerBackend: Shutting down all executors
 Exception in thread Yarn application state monitor org.apache.spark.SparkException: Error asking standalone scheduler to shut down executors
     at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.stopExecutors(CoarseGrainedSchedulerBackend.scala:261)
     at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.stop(CoarseGrainedSchedulerBackend.scala:266)
     at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.stop(YarnClientSchedulerBackend.scala:158)
     at org.apache.spark.scheduler.TaskSchedulerImpl.stop(TaskSchedulerImpl.scala:416)
     at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:1411)
     at org.apache.spark.SparkContext.stop(SparkContext.scala:1644)
     at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend$$anon$1.run(YarnClientSchedulerBackend.scala:139)
 Caused by: java.lang.InterruptedException
     at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1325)
     at scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:208)
     at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:218)
     at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
     at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:190)
     at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
     at scala.concurrent.Await$.result(package.scala:190)
 15/07/29 15:17:09 INFO YarnClientSchedulerBackend: Asking each executor to shut down
     at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:102)
     at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:78)
     at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.stopExecutors(CoarseGrainedSchedulerBackend.scala:257)
     ... 6 more
 {code}
 The effect of the above exception is that a stopped SparkContext is returned to 
 the user, since SparkContext.clearActiveContext() is never reached.
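
 The description implies the fix is to make sure clearActiveContext() runs even 
 when stopping the scheduler throws. Below is a minimal, standalone sketch of that 
 try/finally pattern; the object and method names here are hypothetical stand-ins, 
 not the actual SparkContext internals or the eventual patch.
 {code}
 // Standalone sketch (hypothetical names, not Spark's real internals):
 // shows why the context-clearing step belongs in a finally block.
 object StopSketch {
   @volatile private var activeContext: Option[String] = Some("ctx-1")

   private def clearActiveContext(): Unit = { activeContext = None }

   // Simulates the scheduler stop failing with an InterruptedException,
   // as in the stack trace above.
   private def stopScheduler(): Unit =
     throw new InterruptedException("shutdown interrupted")

   def stop(): Unit = {
     try {
       stopScheduler()
     } finally {
       // Without this finally block, the exception above skips the clearing
       // step and leaves a stopped context registered as the active one.
       clearActiveContext()
     }
   }

   def main(args: Array[String]): Unit = {
     try stop() catch {
       case e: InterruptedException => println(s"stop() failed: ${e.getMessage}")
     }
     println(s"active context after stop(): $activeContext") // prints None
   }
 }
 {code}
 The point of the pattern is only that clearing the active-context reference is 
 unconditional: stop() may still report the failure, but it no longer leaves a 
 stopped SparkContext looking active.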



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-9446) Clear Active SparkContext in stop() method

2015-07-29 Thread Sean Owen (JIRA)

 [ https://issues.apache.org/jira/browse/SPARK-9446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen updated SPARK-9446:
-
Affects Version/s: 1.4.1
 Priority: Minor  (was: Major)
  Component/s: Spark Core

[~tedyu] you need to fill out the JIRA fields when creating one
