squito commented on a change in pull request #23951: [SPARK-13704][CORE][YARN] Re-implement RackResolver to reduce resolving time
URL: https://github.com/apache/spark/pull/23951#discussion_r266918813
 
 

 ##########
 File path: core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala
 ##########
 @@ -192,17 +192,17 @@ private[spark] class TaskSetManager(
     val (_, duration) = Utils.timeTakenMs {
       val hostToIndices = new HashMap[String, ArrayBuffer[Int]]()
       for (i <- (0 until numTasks).reverse) {
-        addPendingTask(i, Option(hostToIndices))
+        addPendingTask(i, Some(hostToIndices))
       }
-      // Convert preferred locations to racks in one invocation and zip with the origin indices.
-      // We de-duping the hosts to reduce this invocation further.
-      sched.getRacksForHosts(hostToIndices.keySet.toList).zip(hostToIndices.values) foreach {
-        case (Some(rack), indices) =>
-          pendingTasksForRack.getOrElseUpdate(rack, new ArrayBuffer) ++= indices
-        case _ =>
+      // Resolve the rack for each host. This can be somehow slow, so de-dupe the list of hosts,
 
 Review comment:
   sorry I had a typo in my suggested comment, no "somehow" :P 
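
  For readers following the change, here is a minimal standalone sketch of the batching pattern the new code relies on: de-dupe the preferred hosts while collecting the task indices per host, resolve racks for all distinct hosts in a single call, then fan the results back out into the per-rack pending lists. The resolveRacks stub and the sample host names are purely illustrative stand-ins for TaskSchedulerImpl.getRacksForHosts and real cluster data.

    import scala.collection.mutable.{ArrayBuffer, HashMap}

    object RackBatchingSketch {
      // Stand-in for TaskSchedulerImpl.getRacksForHosts: one batched lookup for all
      // hosts instead of one resolver call per task. The rack data is made up.
      def resolveRacks(hosts: Seq[String]): Seq[Option[String]] =
        hosts.map {
          case h if h.startsWith("host-a") => Some("/rack-1")
          case h if h.startsWith("host-b") => Some("/rack-2")
          case _ => None
        }

      def main(args: Array[String]): Unit = {
        // Task index -> preferred host, roughly what addPendingTask collects.
        val taskHosts = Seq(0 -> "host-a1", 1 -> "host-a1", 2 -> "host-b1", 3 -> "unknown")

        // De-dupe the hosts while remembering which task indices belong to each one.
        val hostToIndices = new HashMap[String, ArrayBuffer[Int]]()
        for ((index, host) <- taskHosts) {
          hostToIndices.getOrElseUpdate(host, new ArrayBuffer) += index
        }

        // Resolve racks for the distinct hosts in one invocation, then zip the
        // results back with the indices gathered for each host.
        val pendingTasksForRack = new HashMap[String, ArrayBuffer[Int]]()
        val hosts = hostToIndices.keySet.toList
        resolveRacks(hosts).zip(hosts.map(hostToIndices)).foreach {
          case (Some(rack), indices) =>
            pendingTasksForRack.getOrElseUpdate(rack, new ArrayBuffer) ++= indices
          case _ => // host whose rack could not be resolved: nothing to record
        }

        // /rack-1 ends up with indices 0 and 1, /rack-2 with index 2.
        println(pendingTasksForRack)
      }
    }

  The diff itself zips the result of getRacksForHosts directly with hostToIndices.values, relying on keySet and values iterating in the same order; the sketch maps the key list back through the map instead, which makes the pairing explicit.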

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
