Jacek Laskowski created SPARK-19807:
---------------------------------------
Summary: Add reason for cancellation when a stage is killed using web UI
Key: SPARK-19807
URL: https://issues.apache.org/jira/browse/SPARK-19807
Project: Spark
Issue Type: Improvement
Components: Web UI
Affects Versions: 2.1.0
Reporter: Jacek Laskowski
Priority: Trivial
When a user kills a stage from the web UI (on the Stages page),
{{StagesTab.handleKillRequest}} asks {{SparkContext}} to cancel the stage
without giving a reason. {{SparkContext}} already offers {{cancelStage(stageId:
Int, reason: String)}}, which Spark could use to pass that information along
for monitoring/debugging purposes.
{code}
scala> sc.range(0, 5, 1, 1).mapPartitions { nums => { Thread.sleep(60 * 1000); nums } }.count
{code}
While the job is running, open http://localhost:4040/stages/ and click Kill for the active stage. The resulting exception only says the stage "was cancelled", with no reason:
{code}
org.apache.spark.SparkException: Job 0 cancelled because Stage 0 was cancelled
  at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1486)
  at org.apache.spark.scheduler.DAGScheduler.handleJobCancellation(DAGScheduler.scala:1426)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleStageCancellation$1.apply$mcVI$sp(DAGScheduler.scala:1415)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleStageCancellation$1.apply(DAGScheduler.scala:1408)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleStageCancellation$1.apply(DAGScheduler.scala:1408)
  at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
  at scala.collection.mutable.ArrayOps$ofInt.foreach(ArrayOps.scala:234)
  at org.apache.spark.scheduler.DAGScheduler.handleStageCancellation(DAGScheduler.scala:1408)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1670)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1656)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1645)
  at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
  at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:2019)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:2040)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:2059)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:2084)
  at org.apache.spark.rdd.RDD.count(RDD.scala:1158)
  ... 48 elided
{code}
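One possible shape of the fix, as a sketch: the web UI's kill handler could call the two-argument {{cancelStage}} overload with a human-readable reason, which the scheduler could then surface in the job-failure message. Note the class {{SparkContextStub}} and the simplified {{handleKillRequest}} signature below are hypothetical stand-ins for illustration, not the actual Spark sources.

{code:scala}
// Sketch only: SparkContextStub is a stub standing in for SparkContext's
// proposed cancelStage(stageId: Int, reason: String) overload.
class SparkContextStub {
  var lastCancellation: Option[(Int, String)] = None
  def cancelStage(stageId: Int, reason: String): Unit =
    lastCancellation = Some((stageId, reason))
}

// Hypothetical shape of StagesTab.handleKillRequest after the change:
// pass a reason string instead of cancelling silently.
def handleKillRequest(sc: SparkContextStub, stageId: Int): Unit =
  sc.cancelStage(stageId, s"Stage $stageId was killed from the web UI")
{code}

With a reason threaded through like this, the exception above could read along the lines of "Job 0 cancelled because Stage 0 was killed from the web UI" rather than the uninformative "was cancelled".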
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)