Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/3779#discussion_r23335344
--- Diff: core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
@@ -495,13 +495,39 @@ private[spark] class TaskSetManager(
    * Get the level we can launch tasks according to delay scheduling, based on current wait time.
    */
   private def getAllowedLocalityLevel(curTime: Long): TaskLocality.TaskLocality = {
-    while (curTime - lastLaunchTime >= localityWaits(currentLocalityIndex) &&
-        currentLocalityIndex < myLocalityLevels.length - 1)
-    {
-      // Jump to the next locality level, and remove our waiting time for the current one since
-      // we don't want to count it again on the next one
-      lastLaunchTime += localityWaits(currentLocalityIndex)
-      currentLocalityIndex += 1
+    // remove the emptyList from pendingTasks lazily
+    def hasNonEmptyList(pendingTasks: HashMap[String, ArrayBuffer[Int]]): Boolean = {
--- End diff ---
The for-loop ends as soon as it finds a non-empty list; otherwise it removes all the empty lists, so they are not visited again on later calls. I therefore think it will not be expensive even in a big cluster. We could also do the same for the task lists, which would fix the false-positive issue.
In the short term, this fixes the bug for the streaming examples. In the long term, it can reduce latency for streaming jobs (such as updateStateByKey or others that need a union).
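For illustration, here is a minimal, self-contained sketch of the lazy-removal pattern described above. The signature follows the helper in the diff; the body is an assumption for illustration, not the exact code in the PR:

```scala
import scala.collection.mutable.{ArrayBuffer, HashMap}

// Sketch: scan until a non-empty list is found, collecting the keys of
// empty lists seen along the way so they can be dropped and never
// re-scanned on later calls.
def hasNonEmptyList(pendingTasks: HashMap[String, ArrayBuffer[Int]]): Boolean = {
  val emptyKeys = new ArrayBuffer[String]
  val it = pendingTasks.iterator
  var found = false
  while (!found && it.hasNext) {
    val (key, tasks) = it.next()
    if (tasks.isEmpty) emptyKeys += key else found = true
  }
  // Remove after the scan: mutating a mutable.HashMap while iterating
  // over it is not safe.
  emptyKeys.foreach(pendingTasks.remove)
  found
}
```

Deferring the removals until after the scan keeps the iteration over the mutable HashMap safe, and each empty list is paid for at most once before it disappears from the map.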