Github user vanzin commented on the issue:

    https://github.com/apache/spark/pull/18897
  
    >  have the capability to safely shut down the spark application
    
    That's not true; it works just like "yarn kill". This is the code in 
`DriverRunner.scala` that does it, which is where the kill is finally processed:
    
    ```scala
      /** Terminate this driver (or prevent it from ever starting if not yet started) */
      private[worker] def kill(): Unit = {
        logInfo("Killing driver process!")
        killed = true
        synchronized {
          process.foreach { p =>
            val exitCode = Utils.terminateProcess(p, DRIVER_TERMINATE_TIMEOUT_MS)
            if (exitCode.isEmpty) {
              logWarning("Failed to terminate driver process: " + p +
                  ". This process will likely be orphaned.")
            }
          }
        }
      }
    ```
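    
    In other words, this is a plain process kill. As a rough sketch of what a 
graceful-then-forceful termination along these lines amounts to (the helper name 
`terminateGracefully` and its exact timeout handling are illustrative, not 
Spark's actual `Utils.terminateProcess`):
    
    ```scala
    import java.util.concurrent.TimeUnit
    
    // Illustrative sketch, not Spark's implementation: ask the process to
    // exit politely, then escalate to a forced kill if it doesn't comply.
    def terminateGracefully(p: Process, timeoutMs: Long): Option[Int] = {
      p.destroy() // polite request (SIGTERM on Unix)
      if (p.waitFor(timeoutMs, TimeUnit.MILLISECONDS)) {
        Some(p.exitValue())
      } else {
        p.destroyForcibly() // escalate (SIGKILL on Unix)
        if (p.waitFor(timeoutMs, TimeUnit.MILLISECONDS)) Some(p.exitValue())
        else None // could not kill; the process may be orphaned
      }
    }
    ```
    
    Either way, the driver JVM gets at most its normal shutdown hooks on 
SIGTERM; there is no application-level coordination that would make the 
shutdown "safe" in the sense claimed.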
    
    This change also seems to be much more about creating a communication 
channel between arbitrary clients and the Spark AM. I'd like to see the bug and 
the PR explain that in much more detail, since the "kill" implementation 
doesn't really seem to be the meat of this change.


