tgravescs commented on a change in pull request #26696:
[WIP][SPARK-18886][CORE] Make locality wait time be the time since a TSM's
available slots were fully utilized
URL: https://github.com/apache/spark/pull/26696#discussion_r355629582
##########
File path: core/src/main/scala/org/apache/spark/scheduler/Pool.scala
##########
@@ -119,4 +119,28 @@ private[spark] class Pool(
      parent.decreaseRunningTasks(taskNum)
    }
  }
+
+  override def updateAvailableSlots(numSlots: Float): Unit = {
+    schedulingMode match {
+      case SchedulingMode.FAIR =>
+        val usableWeights = schedulableQueue.asScala
+          .map(s => if (s.getSortedTaskSetQueue.nonEmpty) (s, s.weight) else (s, 0))
+        val totalWeights = usableWeights.map(_._2).sum
+        usableWeights.foreach({ case (schedulable, usableWeight) =>
+          schedulable.updateAvailableSlots(
+            Math.max(numSlots * usableWeight / totalWeights, schedulable.minShare))
+        })
+      case SchedulingMode.FIFO =>
+        val sortedSchedulableQueue =
+          schedulableQueue.asScala.toSeq.sortWith(taskSetSchedulingAlgorithm.comparator)
+        var isFirst = true
+        for (schedulable <- sortedSchedulableQueue) {
+          schedulable.updateAvailableSlots(if (isFirst) numSlots else 0)
Review comment:
I'm not sure this makes sense. By default we run FIFO, and a single job can have
multiple tasksets running at the same time. So if we have plenty of slots to fit
both tasksets, we could end up with the same broken behavior we have now for the
second taskset, since it is reported 0 available slots. It's perhaps slightly
better, because once the first taskset finishes the second does go more quickly,
but in the meantime you could have wasted a lot of time.
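
To make the concern concrete, here is a minimal standalone sketch of the kind of
FIFO split that would avoid starving the second taskset: each taskset in queue
order takes first claim on whatever slots remain, so a later taskset that fits
alongside the head still sees nonzero slots. This is not the actual Spark code;
`Sched`, `demand`, `splitFifoSlots`, and the slot counts are all hypothetical
stand-ins for Schedulable/TaskSetManager, chosen just to illustrate the idea.

```scala
object FifoSplitSketch {
  // Hypothetical stand-in for a Schedulable/TaskSetManager; `demand` is the
  // number of slots the taskset could actually use right now.
  case class Sched(name: String, demand: Int)

  // In FIFO order, give each taskset up to its demand from the remaining
  // slots, instead of handing everything to the head and 0 to everyone else.
  def splitFifoSlots(numSlots: Float, queue: Seq[Sched]): Seq[(Sched, Float)] = {
    var remaining = numSlots
    queue.map { s =>
      val granted = math.min(remaining, s.demand.toFloat)
      remaining -= granted
      (s, granted)
    }
  }

  def main(args: Array[String]): Unit = {
    // Two concurrent tasksets from one job with 10 slots available: both fit,
    // so both report nonzero slots rather than (10, 0).
    splitFifoSlots(10f, Seq(Sched("ts1", 4), Sched("ts2", 3)))
      .foreach { case (s, n) => println(s"${s.name} -> $n") }
  }
}
```

Prints `ts1 -> 4.0` and `ts2 -> 3.0`, so in this scenario the second taskset's
slot count reflects what it can actually run instead of being pinned at 0.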