Github user aarondav commented on a diff in the pull request:
https://github.com/apache/spark/pull/498#discussion_r11889127
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -381,16 +381,23 @@ class SparkContext(config: SparkConf) extends Logging {
* // In a separate thread:
* sc.cancelJobGroup("some_job_to_cancel")
* }}}
+ *
+ * If interruptOnCancel is set to true for the job group, then job cancellation will result
+ * in Thread.interrupt() being called on the job's executor threads. This is useful to help ensure
+ * that the tasks are actually stopped in a timely manner, but is off by default due to HDFS-1208,
+ * where HDFS may respond to Thread.interrupt() by marking nodes as dead.
*/
- def setJobGroup(groupId: String, description: String) {
+ def setJobGroup(groupId: String, description: String, interruptOnCancel: Boolean = false) {
--- End diff ---
Note that this is not the ideal way to set this property, since this API is mainly intended for initializing the job group name. However, it avoids changing a number of internal and external APIs (there are 4 call sites in this function alone that reach `DAGScheduler#failJobAndIndependentStages` through different routes into the cancellation API). It also provides the unique benefit that if the job is cancelled by another source (e.g., Spark itself fails the job, or the user uses the recently added cancel-job feature in the JobProgressTab), we can still set the interrupt flag based on this property.
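For context, here is a minimal usage sketch of the new parameter, assuming the signature in this diff; the group id, job, and timings are placeholders. Note that job group properties are thread-local, so the group must be set on the thread that submits the job:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object InterruptOnCancelSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("sketch").setMaster("local[2]"))

    val runner = new Thread {
      override def run(): Unit = {
        // Job group properties are thread-local, so set them on the thread that
        // actually submits the job. interruptOnCancel = true opts in to
        // Thread.interrupt() on the task threads when the group is cancelled.
        sc.setJobGroup("some_job_to_cancel", "long-running job", interruptOnCancel = true)
        try {
          sc.parallelize(1 to 100000).map { i => Thread.sleep(1); i }.count()
        } catch {
          case _: Exception => // expected once the group is cancelled
        }
      }
    }
    runner.start()

    Thread.sleep(1000) // let some tasks start (placeholder timing)
    // From a separate thread: cancellation now also interrupts running tasks.
    sc.cancelJobGroup("some_job_to_cancel")
    runner.join()
    sc.stop()
  }
}
```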