dongjoon-hyun commented on a change in pull request #35109:
URL: https://github.com/apache/spark/pull/35109#discussion_r779250877



##########
File path: 
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/ExecutorRollPlugin.scala
##########
@@ -116,7 +117,39 @@ class ExecutorRollDriverPlugin extends DriverPlugin with Logging {
         listWithoutDriver.sortBy(e => e.totalDuration.toFloat / Math.max(1, e.totalTasks)).reverse
       case ExecutorRollPolicy.FAILED_TASKS =>
         listWithoutDriver.sortBy(_.failedTasks).reverse
+      case ExecutorRollPolicy.OUTLIER =>
+        // We build multiple outlier lists and concatenate them in the following
+        // importance order to find outliers from various perspectives:
+        //   AVERAGE_DURATION > TOTAL_DURATION > TOTAL_GC_TIME > FAILED_TASKS
+        // Since we choose only the first item, duplication is okay. If there is
+        // no outlier, we fall back to the TOTAL_DURATION policy.
+        outliers(listWithoutDriver.filter(_.totalTasks > 0), e => e.totalDuration / e.totalTasks) ++
+          outliers(listWithoutDriver, e => e.totalDuration) ++
+          outliers(listWithoutDriver, e => e.totalGCTime) ++
+          outliers(listWithoutDriver, e => e.failedTasks) ++
+          listWithoutDriver.sortBy(_.totalDuration).reverse
     }
     sortedList.headOption.map(_.id)
   }
+
+  /**
+   * Return executors whose metric is outstanding: '(value - mean) > 2 * stddev'. This is a
+   * best-effort approach because the snapshot of ExecutorSummary does not follow a normal
+   * distribution. For a normal distribution, this selects roughly 2.5 percent.
+   */
+  private def outliers(
+      list: Seq[v1.ExecutorSummary],
+      get: v1.ExecutorSummary => Float): Seq[v1.ExecutorSummary] = {
+    if (list.isEmpty) {
+      list
+    } else {
+      val size = list.size
+      val mean = list.map(get).sum / size
+      val sd = sqrt(list.map(e => (get(e) - mean) * (get(e) - mean)).sum / size)

Review comment:
       I could optimize it like the following, computing `get` once per element instead of three times.
   ```scala
         val size = list.size
         val values = list.map(get)
         val mean = values.sum / size
         val sd = sqrt(values.map(e => (e - mean) * (e - mean)).sum / size)
         list
           .filter(e => (get(e) - mean) > 2 * sd)
           .sortBy(e => get(e))
           .reverse
   ```
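For context, the 2-sigma selection discussed above can be sketched as a standalone snippet on plain numbers. This is not the PR's code: `OutlierSketch` and the sample `durations` below are hypothetical, chosen only to illustrate the filter.

```scala
import scala.math.sqrt

object OutlierSketch {
  /** Values more than two standard deviations above the mean, largest first.
    * Mirrors the 2-sigma rule from the review: (value - mean) > 2 * stddev. */
  def outliers(values: Seq[Float]): Seq[Float] = {
    if (values.isEmpty) {
      values
    } else {
      val size = values.size
      // Compute the values once, as the optimized version in the review suggests.
      val mean = values.sum / size
      val sd = sqrt(values.map(v => (v - mean) * (v - mean)).sum / size)
      values.filter(v => (v - mean) > 2 * sd).sorted.reverse
    }
  }
}
```

With `Seq(10f, 11f, 9f, 10f, 12f, 10f, 95f)` only `95f` exceeds the mean by more than two standard deviations, so it alone is returned; with no spread (or an empty input) the result is empty, which is why the plugin falls back to the TOTAL_DURATION ordering.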




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


