[jira] [Commented] (SPARK-20628) Keep track of nodes which are going to be shut down & avoid scheduling new tasks

2020-06-18 Thread Hyukjin Kwon (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-20628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17139132#comment-17139132
 ] 

Hyukjin Kwon commented on SPARK-20628:
--

[~holden] Looks like it had to be a SPIP.
Do we have a design doc too? It might be best to link it here if there is one.

> Keep track of nodes which are going to be shut down & avoid scheduling new 
> tasks
> 
>
> Key: SPARK-20628
> URL: https://issues.apache.org/jira/browse/SPARK-20628
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core
>Affects Versions: 2.2.0, 2.3.0
>Reporter: Holden Karau
>Assignee: Holden Karau
>Priority: Major
> Fix For: 3.1.0
>
>
> Keep track of nodes which are going to be shut down. We considered adding 
> this for YARN but took a different approach; however, for environments where 
> we can't control instance termination (EC2, GCE, etc.), this may make more sense.
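
To make the idea concrete, here is a minimal, hypothetical Scala sketch of what "tracking nodes that are going to be shut down and avoiding them when scheduling" means at a high level. This is illustrative only and is not Spark's scheduler code; every name in it is invented for the example.

// Hypothetical illustration of the ticket's idea, not Spark's implementation:
// remember hosts that are about to be shut down and skip them for new tasks.
object DecommissionTrackerSketch {
  private val decommissioningHosts = scala.collection.mutable.Set.empty[String]

  // Called when the cluster manager or cloud provider signals an upcoming shutdown.
  def markDecommissioning(host: String): Unit = synchronized {
    decommissioningHosts += host
  }

  // A scheduler would consult this before launching a new task on a resource offer.
  def canScheduleOn(host: String): Boolean = synchronized {
    !decommissioningHosts.contains(host)
  }

  def main(args: Array[String]): Unit = {
    markDecommissioning("spot-node-17.internal")
    println(canScheduleOn("spot-node-17.internal"))      // false: node is going away
    println(canScheduleOn("on-demand-node-3.internal"))  // true: still schedulable
  }
}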






[jira] [Commented] (SPARK-20628) Keep track of nodes which are going to be shut down & avoid scheduling new tasks

2020-05-06 Thread wuyi (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-20628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17100793#comment-17100793
 ] 

wuyi commented on SPARK-20628:
--

Hi [~holden], is this ticket resolved by 
[https://github.com/apache/spark/pull/26440]?

> Keep track of nodes which are going to be shut down & avoid scheduling new 
> tasks
> 
>
> Key: SPARK-20628
> URL: https://issues.apache.org/jira/browse/SPARK-20628
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core
>Affects Versions: 2.2.0, 2.3.0
>Reporter: Holden Karau
>Assignee: Holden Karau
>Priority: Major
> Fix For: 3.1.0
>
>
> Keep track of nodes which are going to be shut down. We considered adding 
> this for YARN but took a different approach; however, for environments where 
> we can't control instance termination (EC2, GCE, etc.), this may make more sense.






[jira] [Commented] (SPARK-20628) Keep track of nodes which are going to be shut down & avoid scheduling new tasks

2018-12-14 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-20628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16721909#comment-16721909
 ] 

ASF GitHub Bot commented on SPARK-20628:


vanzin closed pull request #19267: [WIP][SPARK-20628][CORE] Blacklist nodes 
when they transition to DECOMMISSIONING state in YARN
URL: https://github.com/apache/spark/pull/19267
 
 
   

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/core/src/main/scala/org/apache/spark/HostState.scala b/core/src/main/scala/org/apache/spark/HostState.scala
new file mode 100644
index 0..17b374c3fac26
--- /dev/null
+++ b/core/src/main/scala/org/apache/spark/HostState.scala
@@ -0,0 +1,35 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark
+
+import org.apache.hadoop.yarn.api.records.NodeState
+
+private[spark] object HostState extends Enumeration {
+
+  type HostState = Value
+
+  val New, Running, Unhealthy, Decommissioning, Decommissioned, Lost, Rebooted = Value
+
+  def fromYarnState(state: String): Option[HostState] = {
+    HostState.values.find(_.toString.toUpperCase == state)
+  }
+
+  def toYarnState(state: HostState): Option[String] = {
+    NodeState.values.find(_.name == state.toString.toUpperCase).map(_.name)
+  }
+}
diff --git a/core/src/main/scala/org/apache/spark/internal/config/package.scala b/core/src/main/scala/org/apache/spark/internal/config/package.scala
index 9495cd2835f97..84edcff707d44 100644
--- a/core/src/main/scala/org/apache/spark/internal/config/package.scala
+++ b/core/src/main/scala/org/apache/spark/internal/config/package.scala
@@ -154,6 +154,16 @@ package object config {
     ConfigBuilder("spark.blacklist.application.fetchFailure.enabled")
       .booleanConf
       .createWithDefault(false)
+
+  private[spark] val BLACKLIST_DECOMMISSIONING_ENABLED =
+    ConfigBuilder("spark.blacklist.decommissioning.enabled")
+      .booleanConf
+      .createWithDefault(false)
+
+  private[spark] val BLACKLIST_DECOMMISSIONING_TIMEOUT_CONF =
+    ConfigBuilder("spark.blacklist.decommissioning.timeout")
+      .timeConf(TimeUnit.MILLISECONDS)
+      .createOptional
   // End blacklist confs
 
   private[spark] val UNREGISTER_OUTPUT_ON_HOST_ON_FETCH_FAILURE =
diff --git a/core/src/main/scala/org/apache/spark/scheduler/BlacklistTracker.scala b/core/src/main/scala/org/apache/spark/scheduler/BlacklistTracker.scala
index cd8e61d6d0208..7bc3db8ce1bb9 100644
--- a/core/src/main/scala/org/apache/spark/scheduler/BlacklistTracker.scala
+++ b/core/src/main/scala/org/apache/spark/scheduler/BlacklistTracker.scala
@@ -61,7 +61,13 @@ private[scheduler] class BlacklistTracker (
   private val MAX_FAILURES_PER_EXEC = conf.get(config.MAX_FAILURES_PER_EXEC)
   private val MAX_FAILED_EXEC_PER_NODE = conf.get(config.MAX_FAILED_EXEC_PER_NODE)
   val BLACKLIST_TIMEOUT_MILLIS = BlacklistTracker.getBlacklistTimeout(conf)
-  private val BLACKLIST_FETCH_FAILURE_ENABLED = conf.get(config.BLACKLIST_FETCH_FAILURE_ENABLED)
+  val BLACKLIST_DECOMMISSIONING_TIMEOUT_MILLIS =
+    BlacklistTracker.getBlacklistDecommissioningTimeout(conf)
+  private val TASK_BLACKLISTING_ENABLED = BlacklistTracker.isTaskExecutionBlacklistingEnabled(conf)
+  private val DECOMMISSIONING_BLACKLISTING_ENABLED =
+    BlacklistTracker.isDecommissioningBlacklistingEnabled(conf)
+  private val BLACKLIST_FETCH_FAILURE_ENABLED =
+    BlacklistTracker.isFetchFailureBlacklistingEnabled(conf)
 
   /**
    * A map from executorId to information on task failures.  Tracks the time of each task failure,
@@ -89,13 +95,13 @@ private[scheduler] class BlacklistTracker (
    * successive blacklisted executors on one node.  Nonetheless, it will not grow too large because
    * there cannot be many blacklisted executors on one node, before we stop requesting more
    * executors on that node, and we clean up the list of blacklisted executors once an executor has
-   * been blacklisted for 
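
For context, the two configuration keys introduced in the diff above (spark.blacklist.decommissioning.enabled and spark.blacklist.decommissioning.timeout) were only proposed in this work-in-progress PR, which was later closed without being merged (see the 2018-12-14 message), so they are not recognized by any released Spark version. Purely as a sketch of how a job would have switched them on had the patch been applied:

import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

// Sketch only: these keys come from the unmerged PR #19267 above and are not
// part of any released Spark configuration surface.
val conf = new SparkConf()
  .setAppName("decommissioning-blacklist-sketch")
  .set("spark.blacklist.decommissioning.enabled", "true") // proposed flag
  .set("spark.blacklist.decommissioning.timeout", "1h")   // proposed blacklist duration

val spark = SparkSession.builder().config(conf).getOrCreate()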

[jira] [Commented] (SPARK-20628) Keep track of nodes which are going to be shut down & avoid scheduling new tasks

2018-12-11 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-20628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16718305#comment-16718305
 ] 

ASF GitHub Bot commented on SPARK-20628:


SparkQA commented on issue #19045: [WIP][SPARK-20628][CORE][K8S] Keep track of 
nodes (/ spot instances) which are going to be shutdown
URL: https://github.com/apache/spark/pull/19045#issuecomment-446420850
 
 
**[Test build #7 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/7/testReport)** for PR 19045 at commit [`af048f5`](https://github.com/apache/spark/commit/af048f5753cd99b68d2e5f8d268c52a119a2d84a).




> Keep track of nodes which are going to be shut down & avoid scheduling new 
> tasks
> 
>
> Key: SPARK-20628
> URL: https://issues.apache.org/jira/browse/SPARK-20628
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core
>Affects Versions: 2.2.0, 2.3.0
>Reporter: holdenk
>Priority: Major
>
> Keep track of nodes which are going to be shut down. We considered adding 
> this for YARN but took a different approach; however, for environments where 
> we can't control instance termination (EC2, GCE, etc.), this may make more sense.






[jira] [Commented] (SPARK-20628) Keep track of nodes which are going to be shut down & avoid scheduling new tasks

2017-09-18 Thread Apache Spark (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16170276#comment-16170276
 ] 

Apache Spark commented on SPARK-20628:
--

User 'juanrh' has created a pull request for this issue:
https://github.com/apache/spark/pull/19267

> Keep track of nodes which are going to be shut down & avoid scheduling new 
> tasks
> 
>
> Key: SPARK-20628
> URL: https://issues.apache.org/jira/browse/SPARK-20628
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core
>Affects Versions: 2.2.0, 2.3.0
>Reporter: holdenk
>
> Keep track of nodes which are going to be shut down. We considered adding 
> this for YARN but took a different approach; however, for environments where 
> we can't control instance termination (EC2, GCE, etc.), this may make more sense.






[jira] [Commented] (SPARK-20628) Keep track of nodes which are going to be shut down & avoid scheduling new tasks

2017-08-28 Thread Apache Spark (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144044#comment-16144044
 ] 

Apache Spark commented on SPARK-20628:
--

User 'holdenk' has created a pull request for this issue:
https://github.com/apache/spark/pull/19045

> Keep track of nodes which are going to be shut down & avoid scheduling new 
> tasks
> 
>
> Key: SPARK-20628
> URL: https://issues.apache.org/jira/browse/SPARK-20628
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core
>Affects Versions: 2.2.0, 2.3.0
>Reporter: holdenk
>
> Keep track of nodes which are going to be shut down. We considered adding 
> this for YARN but took a different approach; however, for environments where 
> we can't control instance termination (EC2, GCE, etc.), this may make more sense.






[jira] [Commented] (SPARK-20628) Keep track of nodes which are going to be shut down & avoid scheduling new tasks

2017-05-13 Thread holdenk (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16009261#comment-16009261
 ] 

holdenk commented on SPARK-20628:
-

I'm going to take a crack at implementing this - I'm traveling a lot until 
Spark Summit so might be on the slow side working on this.

> Keep track of nodes which are going to be shut down & avoid scheduling new 
> tasks
> 
>
> Key: SPARK-20628
> URL: https://issues.apache.org/jira/browse/SPARK-20628
> Project: Spark
>  Issue Type: Sub-task
>  Components: Spark Core
>Affects Versions: 2.2.0, 2.3.0
>Reporter: holdenk
>
> Keep track of nodes which are going to be shut down. We considered adding 
> this for YARN but took a different approach; however, for environments where 
> we can't control instance termination (EC2, GCE, etc.), this may make more sense.


