Github user kayousterhout commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17166#discussion_r107534312
  
    --- Diff: core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala ---
    @@ -239,14 +239,21 @@ private[spark] class TaskSchedulerImpl private[scheduler](
             //    simply abort the stage.
             tsm.runningTasksSet.foreach { tid =>
               val execId = taskIdToExecutorId(tid)
    -          backend.killTask(tid, execId, interruptThread)
    +          backend.killTask(tid, execId, interruptThread, reason = "stage cancelled")
             }
             tsm.abort("Stage %s cancelled".format(stageId))
             logInfo("Stage %d was cancelled".format(stageId))
           }
         }
       }
     
    +  override def killTaskAttempt(taskId: Long, interruptThread: Boolean, reason: String): Unit = {
    +    logInfo(s"Killing task ($reason): $taskId")
    +    val execId = taskIdToExecutorId.getOrElse(
    +      taskId, throw new IllegalArgumentException("Task not found: " + taskId))
    --- End diff --
    
    Also, it's a bit ugly that this throws an exception (it could come as an unhappy surprise to the user that their SparkContext threw an exception / died). How about instead changing the killTaskAttempt calls to return a boolean that's true if the task was successfully killed (and returning false here)?
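
    For concreteness, a minimal sketch of that alternative (illustrative only, not the committed implementation; it assumes the rest of the method forwards to backend.killTask, which isn't shown in this hunk):

        // Hypothetical sketch: return false instead of throwing when the task id
        // is unknown, so callers can check success without a try/catch.
        override def killTaskAttempt(taskId: Long, interruptThread: Boolean, reason: String): Boolean = {
          logInfo(s"Killing task ($reason): $taskId")
          taskIdToExecutorId.get(taskId) match {
            case Some(execId) =>
              backend.killTask(taskId, execId, interruptThread, reason)
              true
            case None =>
              logWarning(s"Could not kill task $taskId: no running task with that id")
              false
          }
        }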

