tgravescs commented on a change in pull request #27207: [SPARK-18886][CORE] Make Locality wait time measure resource under utilization due to delay scheduling.
URL: https://github.com/apache/spark/pull/27207#discussion_r402360614
##########
File path: core/src/test/scala/org/apache/spark/scheduler/TaskSchedulerImplSuite.scala
##########
@@ -898,18 +1083,17 @@ class TaskSchedulerImplSuite extends SparkFunSuite with LocalSparkContext with B
     }
     // Here is the main check of this test -- we have the same offers again, and we schedule it
-    // successfully. Because the scheduler first tries to schedule with locality in mind, at first
-    // it won't schedule anything on executor1. But despite that, we don't abort the job. Then the
-    // scheduler tries for ANY locality, and successfully schedules tasks on executor1.
+    // successfully. Because the scheduler tries to schedule with locality in mind, at first
+    // it won't schedule anything on executor1. But despite that, we don't abort the job.
     val secondTaskAttempts = taskScheduler.resourceOffers(offers).flatten
-    assert(secondTaskAttempts.size == 2)
-    secondTaskAttempts.foreach { taskAttempt => assert("executor1" === taskAttempt.executorId) }
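
For reference, the locality-aware scheduling this diff comments on is governed by Spark's locality-wait settings. Below is a minimal sketch (illustrative only, not part of this PR) of those settings; the values shown match the defaults, and the per-level settings default to spark.locality.wait:

    import org.apache.spark.SparkConf

    object LocalityWaitConfSketch {
      // Illustrative only: the scheduler stays at a more local level such as
      // NODE_LOCAL until the corresponding wait elapses, then relaxes to a
      // less local level such as ANY.
      val conf: SparkConf = new SparkConf()
        .set("spark.locality.wait", "3s")       // base wait before relaxing locality
        .set("spark.locality.wait.node", "3s")  // wait at NODE_LOCAL
        .set("spark.locality.wait.rack", "3s")  // wait at RACK_LOCAL
    }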
Review comment:
I see we are still at node-local locality since we didn't reject any offers, so we have to wait out the timeout here before these are rejected at this point.
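
To make that concrete, here is a small standalone sketch of the wait-then-relax behavior being described -- a toy model, not Spark's actual TaskSetManager logic; the names and the 3s constant are illustrative:

    object DelaySchedulingSketch {
      sealed trait Locality
      case object NodeLocal extends Locality
      case object AnyLocality extends Locality

      // Mirrors the 3s spark.locality.wait default.
      val localityWaitMs = 3000L

      // Until the wait since the last node-local launch has elapsed, only
      // NODE_LOCAL offers are allowed; afterwards the level relaxes to ANY.
      def allowedLocality(lastNodeLocalLaunchMs: Long, nowMs: Long): Locality =
        if (nowMs - lastNodeLocalLaunchMs < localityWaitMs) NodeLocal else AnyLocality

      def main(args: Array[String]): Unit = {
        val lastLaunch = 0L
        // Right after a node-local launch, an ANY-only offer (like executor1 here) is skipped.
        assert(allowedLocality(lastLaunch, nowMs = 1000L) == NodeLocal)
        // Once the wait has elapsed, the level relaxes and executor1 can be used.
        assert(allowedLocality(lastLaunch, nowMs = 4000L) == AnyLocality)
        println("locality relaxed to ANY after the wait elapsed")
      }
    }

In the real scheduler the wait is tracked per locality level, and, as the PR title suggests, this change ties it to resources going unused (offers being rejected) due to delay scheduling; the sketch only shows the basic wait-then-relax fallback the comment refers to.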