squito commented on a change in pull request #23677: [SPARK-26755][SCHEDULER] : Optimize Spark Scheduler to dequeue speculative tasks…
URL: https://github.com/apache/spark/pull/23677#discussion_r304565953
##########
File path: core/src/test/scala/org/apache/spark/scheduler/TaskSetManagerSuite.scala
##########
@@ -1691,79 +1692,74 @@ class TaskSetManagerSuite extends SparkFunSuite with LocalSparkContext with Logg
// > 0ms, so advance the clock by 1ms here.
clock.advance(1)
assert(manager.checkSpeculatableTasks(0))
- assert(sched.speculativeTasks.toSet === Set(2, 3))
- assert(manager.copiesRunning(2) === 1)
+ assert(sched.speculativeTasks.toSet === Set(1, 3))
+ assert(manager.copiesRunning(1) === 1)
assert(manager.copiesRunning(3) === 1)
// Offer resource to start the speculative attempt for the running task. We offer more
// resources, and ensure that speculative tasks get scheduled appropriately -- only one extra
// copy per speculatable task
val taskOption2 = manager.resourceOffer("exec1", "host1", NO_PREF)
- val taskOption3 = manager.resourceOffer("exec1", "host1", NO_PREF)
+ val taskOption3 = manager.resourceOffer("exec2", "host2", NO_PREF)
assert(taskOption2.isDefined)
val task2 = taskOption2.get
+ // Ensure that task index 3 is launched on host1 and task index 1 on host2
assert(task2.index === 3)
assert(task2.taskId === 4)
assert(task2.executorId === "exec1")
assert(task2.attemptNumber === 1)
assert(taskOption3.isDefined)
val task3 = taskOption3.get
- assert(task3.index === 2)
+ assert(task3.index === 1)
assert(task3.taskId === 5)
- assert(task3.executorId === "exec1")
+ assert(task3.executorId === "exec2")
assert(task3.attemptNumber === 1)
clock.advance(1)
// Running checkSpeculatableTasks again should return false
assert(!manager.checkSpeculatableTasks(0))
- assert(manager.copiesRunning(2) === 2)
+ assert(manager.copiesRunning(1) === 2)
assert(manager.copiesRunning(3) === 2)
// Offering additional resources should not lead to any speculative tasks being respawned
assert(manager.resourceOffer("exec1", "host1", ANY).isEmpty)
assert(manager.resourceOffer("exec2", "host2", ANY).isEmpty)
assert(manager.resourceOffer("exec3", "host3", ANY).isEmpty)
- }
- test("SPARK-26755 Ensure that a speculative task obeys the original locality
preferences") {
- sc = new SparkContext("local", "test")
+ // Launch a new set of tasks with locality preferences
Review comment:
I meant to test everything we want to test with one scheduler and one
taskset. My reason for consolidating things is just so it's easier to find the
tests related to speculative execution and get a sense of what's already tested
(e.g. in the code today, I had a hard time telling if there were any tests
which make sure we don't launch speculative tasks on the same node as the
original).
Anyway, if this is a pain, just go back to having it be two separate tests.
It's really two very independent sets of tests, so it's better to keep them as
separate tests.
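For context, here is a rough sketch of the kind of same-node check I mean. It
reuses this suite's existing sched / manager / clock helpers and the
resourceOffer / checkSpeculatableTasks calls shown above; the executor and host
names, and the assumption that task index 1's original attempt is the only
remaining pending work and is running on "exec1"/"host1", are purely
illustrative:

    // Hypothetical sketch: assumes task index 1's original attempt is running on
    // "exec1"/"host1", that it is the only remaining pending/speculatable task, and
    // that enough time has passed for it to be flagged as speculatable.
    clock.advance(1)
    assert(manager.checkSpeculatableTasks(0))
    assert(sched.speculativeTasks.toSet.contains(1))

    // An offer from the node already running the original attempt should not get the copy...
    assert(manager.resourceOffer("exec1", "host1", ANY).isEmpty)

    // ...but an offer from a different node should.
    val speculativeCopy = manager.resourceOffer("exec2", "host2", ANY)
    assert(speculativeCopy.isDefined)
    assert(speculativeCopy.get.index === 1)
    assert(speculativeCopy.get.executorId === "exec2")
    assert(speculativeCopy.get.attemptNumber === 1)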