attilapiros commented on a change in pull request #24245: 
[SPARK-13704][CORE][YARN] Reduce rack resolution time
URL: https://github.com/apache/spark/pull/24245#discussion_r271566531
 
 

 ##########
 File path: core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala
 ##########
 @@ -186,8 +186,23 @@ private[spark] class TaskSetManager(
 
   // Add all our tasks to the pending lists. We do this in reverse order
   // of task index so that tasks with low indices get launched first.
-  for (i <- (0 until numTasks).reverse) {
-    addPendingTask(i)
+  addPendingTasks()
+
+  private def addPendingTasks(): Unit = {
+    val (_, duration) = Utils.timeTakenMs {
+      for (i <- (0 until numTasks).reverse) {
+        addPendingTask(i, resolveRacks = false)
+      }
 +      // Resolve the rack for each host. This can be slow, so de-dupe the list of hosts,
+      // and assign the rack to all relevant task indices.
+      val racks = sched.getRacksForHosts(pendingTasksForHost.keySet.toSeq)
 
 Review comment:
   I had the exact same thought when I reached that line.
   I even thought about possible solutions (see the sketch below):
   
   - Create a new val holding `racks.entrySet` and generate both the keys and the values from that entry set. Since each entry binds a key to its value, the ordering is fixed, and a single iteration yields both.
   - Another possible, and more elegant, solution is calling `racks.asScala.unzip`.
   
   Both solutions have some performance cost.
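   
   A minimal sketch of the two options, assuming the resolved racks arrive as a `java.util.Map[String, String]` from host to rack (the actual type and names in the PR may differ):

```scala
import scala.collection.JavaConverters._

object RackOrderingSketch {
  def main(args: Array[String]): Unit = {
    // Hypothetical stand-in for the host -> rack map produced by bulk rack resolution.
    val racks = new java.util.HashMap[String, String]()
    racks.put("host1", "/rack1")
    racks.put("host2", "/rack1")
    racks.put("host3", "/rack2")

    // Option 1: iterate the entry set once; each key stays bound to its value,
    // so the ordering of hosts and racks cannot drift apart.
    val hostRackPairs: Seq[(String, String)] =
      racks.entrySet().asScala.toSeq.map(e => e.getKey -> e.getValue)

    // Option 2: convert and unzip; both sequences come from the same traversal,
    // so their ordering matches by construction.
    val (hosts, rackNames) = racks.asScala.toSeq.unzip

    println(hostRackPairs)
    println(hosts.zip(rackNames))
  }
}
```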
