SaurabhChawla100 commented on a change in pull request #27636: 
[SPARK-30873][CORE][YARN] Handling Node Decommissioning for YARN cluster manager in Spark
URL: https://github.com/apache/spark/pull/27636#discussion_r390274710
 
 

 ##########
 File path: core/src/main/scala/org/apache/spark/internal/config/package.scala
 ##########
 @@ -1542,4 +1542,50 @@ package object config {
     .bytesConf(ByteUnit.BYTE)
     .createOptional
 
+  private[spark] val GRACEFUL_DECOMMISSION_ENABLE =
+    ConfigBuilder("spark.graceful.decommission.enable")
+      .doc("Whether to enable the node graceful decommissioning handling")
+      .booleanConf
+      .createWithDefault(false)
+
+  private[spark] val GRACEFUL_DECOMMISSION_FETCHFAILED_IGNORE_THRESHOLD =
+    ConfigBuilder("spark.graceful.decommission.fetchfailed.ignore.threshold")
+      .doc("Threshold on the number of fetch failures that are ignored due to" +
+        " node decommission. This is configurable per the needs of the user" +
+        " and the type of the cloud. If this is kept at a large value and" +
+        " nodes are decommissioned continuously, the stage will never abort" +
+        " and will keep retrying in an unbounded manner.")
+      .intConf
+      .createWithDefault(8)
+
+  private[spark] val GRACEFUL_DECOMMISSION_EXECUTOR_LEASETIME_PCT =
+    ConfigBuilder("spark.graceful.decommission.executor.leasetimePct")
+      .doc("Percentage of time to expiry after which executors are killed " +
+        "(if enabled) on the node. Value ranges between (0-100)")
+      .intConf
+      .checkValue(v => v >= 0 && v < 100, "The percentage should be positive.")
+      .createWithDefault(50) // Pulled out of thin air.
+
+  private[spark] val GRACEFUL_DECOMMISSION_SHUFFLEDATA_LEASETIME_PCT =
+    ConfigBuilder("spark.graceful.decommission.shuffedata.leasetimePct")
+      .doc("Percentage of time to expiry after which shuffle data " +
+        "cleaned up (if enabled) on the node. Value ranges between (0-100)")
+      .intConf
+      .checkValue(v => v >= 0 && v < 100, "The percentage should be positive.")
+      .createWithDefault(90) // Pulled out of thin air.
+
+  private[spark] val GRACEFUL_DECOMMISSION_MIN_TERMINATION_TIME_IN_SEC =
+    ConfigBuilder("spark.graceful.decommission.min.termination.time")
+      .doc("Minimum time to termination below which node decommissioning is 
performed immediately")
+      .timeConf(TimeUnit.SECONDS)
+      .createWithDefaultString("60s")
+
+  private[spark] val GRACEFUL_DECOMMISSION_NODE_TIMEOUT =
+    ConfigBuilder("spark.graceful.decommission.node.timeout")
 
 Review comment:
   For Hadoop 3.1 and later versions of Hadoop there is an interface to get the value of the decommissioning timeout: the getDecommissioningTimeout method. We call it here via reflection, so the code still compiles against older Hadoop versions:
   
   ```
   // `x` here is the YARN node report object, which exposes
   // getDecommissioningTimeout in Hadoop 3.1+.
   val decommissioningTimeout = x.getClass.getMethod(
     "getDecommissioningTimeout").invoke(x).asInstanceOf[Integer]
   if (decommissioningTimeout != null) {
     nodeTerminationTime = clock.getTimeMillis() + decommissioningTimeout * 1000
   }
   ```
   
   When we get the value of decommissioningTimeout from the RM, we use that value; otherwise we use the value specified in the config.
   
   Likewise, anyone who has backported the Hadoop 3.1 change to an older version of Hadoop (2.8, etc.) can rely on decommissioningTimeout instead of the GRACEFUL_DECOMMISSION_NODE_TIMEOUT config.
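   
   To make the fallback concrete, here is a minimal, self-contained sketch of the lookup-with-fallback logic (the helper name resolveNodeTimeoutSeconds and the FakeReport stub are illustrative, not the actual code in this PR):
   
   ```
   // Resolve the node decommission timeout in seconds: prefer the value the
   // RM reports via getDecommissioningTimeout (Hadoop 3.1+, or a backport),
   // and fall back to the configured timeout when the method is absent or
   // returns null.
   def resolveNodeTimeoutSeconds(report: AnyRef, configuredTimeoutSec: Long): Long = {
     try {
       val timeout = report.getClass.getMethod("getDecommissioningTimeout")
         .invoke(report).asInstanceOf[Integer]
       if (timeout != null) timeout.longValue() else configuredTimeoutSec
     } catch {
       // Hadoop < 3.1 without the backport: the method does not exist.
       case _: NoSuchMethodException => configuredTimeoutSec
     }
   }
   
   // Usage with a stub standing in for the YARN node report:
   class FakeReport { def getDecommissioningTimeout: Integer = 120 }
   assert(resolveNodeTimeoutSeconds(new FakeReport, 60L) == 120L)
   ```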
