Github user yhuai commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16189#discussion_r92905603
  
    --- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
    @@ -432,6 +465,93 @@ private[spark] class Executor(
       }
     
       /**
    +   * Supervises the killing / cancellation of a task by sending the interrupted flag, optionally
    +   * sending a Thread.interrupt(), and monitoring the task until it finishes.
    +   */
    +  private class TaskReaper(
    +      taskRunner: TaskRunner,
    +      val interruptThread: Boolean)
    +    extends Runnable {
    +
    +    private[this] val taskId: Long = taskRunner.taskId
    +
    +    private[this] val killPollingIntervalMs: Long =
    +      conf.getTimeAsMs("spark.task.reaper.pollingInterval", "10s")
    +
    +    private[this] val killTimeoutMs: Long = conf.getTimeAsMs("spark.task.reaper.killTimeout", "2m")
    +
    +    private[this] val takeThreadDump: Boolean =
    +      conf.getBoolean("spark.task.reaper.threadDump", true)
    +
    +    override def run(): Unit = {
    +      val startTimeMs = System.currentTimeMillis()
    +      def elapsedTimeMs = System.currentTimeMillis() - startTimeMs
    +      def timeoutExceeded(): Boolean = killTimeoutMs > 0 && elapsedTimeMs > killTimeoutMs
    +      try {
    +        // Only attempt to kill the task once. If interruptThread = false then a second kill
    +        // attempt would be a no-op and if interruptThread = true then it may not be safe or
    +        // effective to interrupt multiple times:
    +        taskRunner.kill(interruptThread = interruptThread)
    +        // Monitor the killed task until it exits:
    +        var finished: Boolean = false
    +        while (!finished && !timeoutExceeded()) {
    +          taskRunner.synchronized {
    +            // We need to synchronize on the TaskRunner while checking whether the task has
    +            // finished in order to avoid a race where the task is marked as finished right after
    +            // we check and before we call wait().
    +            if (taskRunner.isFinished) {
    +              finished = true
    +            } else {
    +              taskRunner.wait(killPollingIntervalMs)
    +            }
    +          }
    +          if (taskRunner.isFinished) {
    +            finished = true
    +          } else {
    +            logWarning(s"Killed task $taskId is still running after 
$elapsedTimeMs ms")
    +            if (takeThreadDump) {
    +              try {
    +                Utils.getThreadDumpForThread(taskRunner.getThreadId).foreach { thread =>
    +                  if (thread.threadName == taskRunner.threadName) {
    +                    logWarning(s"Thread dump from task 
$taskId:\n${thread.stackTrace}")
    +                  }
    +                }
    +              } catch {
    +                case NonFatal(e) =>
    +                  logWarning("Exception thrown while obtaining thread 
dump: ", e)
    +              }
    +            }
    +          }
    +        }
    +
    +        if (!taskRunner.isFinished && timeoutExceeded()) {
    +          if (isLocal) {
    +            logError(s"Killed task $taskId could not be stopped within 
$killTimeoutMs ms; " +
    +              "not killing JVM because we are running in local mode.")
    +          } else {
    +            throw new SparkException(
    +              s"Killing executor JVM because killed task $taskId could not 
be stopped within " +
    +                s"$killTimeoutMs ms.")
    --- End diff --
    
    I guess I am not clear on how we kill the JVM here. Are we using this exception to kill the JVM?
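    
    For reference, a minimal, self-contained sketch of how this pattern can kill a JVM in general: an exception that escapes a Runnable reaches the worker thread's uncaught-exception handler, and a handler that calls System.exit brings down the whole process. The handler and thread-factory names below are illustrative stand-ins, not the actual Spark classes, and the exit code is an assumption:
    
        import java.util.concurrent.{Executors, ThreadFactory}
    
        object UncaughtExitSketch {
    
          // Illustrative stand-in for an uncaught-exception handler that turns an
          // escaped exception into a JVM exit (Spark installs its own handler on
          // executor threads; this is not that class).
          private val exitingHandler = new Thread.UncaughtExceptionHandler {
            override def uncaughtException(t: Thread, e: Throwable): Unit = {
              System.err.println(s"Uncaught exception in ${t.getName}: ${e.getMessage}")
              System.exit(50) // exits the entire JVM, not just the failing thread
            }
          }
    
          private val reaperThreadFactory = new ThreadFactory {
            override def newThread(r: Runnable): Thread = {
              val t = new Thread(r, "task-reaper-sketch")
              t.setUncaughtExceptionHandler(exitingHandler)
              t
            }
          }
    
          def main(args: Array[String]): Unit = {
            val pool = Executors.newSingleThreadExecutor(reaperThreadFactory)
            // execute() (unlike submit()) does not capture the exception in a
            // Future, so it propagates to the uncaught-exception handler above.
            pool.execute(new Runnable {
              override def run(): Unit =
                throw new RuntimeException("killed task could not be stopped")
            })
          }
        }
    
    If the reaper runs on a thread wired up like this, the SparkException thrown above would escape run(), reach the handler, and the handler's System.exit would be what actually terminates the JVM.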

