guiyanakuang commented on a change in pull request #34743:
URL: https://github.com/apache/spark/pull/34743#discussion_r759020789
##########
File path: core/src/test/scala/org/apache/spark/scheduler/TaskSchedulerImplSuite.scala
##########
@@ -1243,16 +1243,22 @@ class TaskSchedulerImplSuite extends SparkFunSuite with LocalSparkContext with B
      config.EXCLUDE_ON_FAILURE_ENABLED.key -> "true"
    )
-    val taskSet = FakeTask.createTaskSet(2, (0 until 2).map { _ => Seq(TaskLocation("host0")) }: _*)
-    taskScheduler.submitTasks(taskSet)
-    val tsm = taskScheduler.taskSetManagerForAttempt(taskSet.stageId, taskSet.stageAttemptId).get
-
    val offers = IndexedSeq(
      // each offer has more than enough free cores for the entire task set, so when combined
      // with the locality preferences, we schedule all tasks on one executor
      new WorkerOffer("executor0", "host0", 4),
      new WorkerOffer("executor1", "host1", 4)
    )
+    // SPARK-37488 modified this test. We need to make the WorkerOffers before submitting the
+    // task set so that the taskScheduler already has live executors registered; otherwise a
+    // task with a HostTaskLocation preference that finds no live executor on its host is not
+    // queued in the forHost list, which defeats the purpose of this test.
+    taskScheduler.resourceOffers(offers).flatten
Review comment:
I modified the existing test because it also needs to run with the executors already registered: after this PR's change, a `HostTaskLocation` task is no longer queued at the forHost level without first checking for a live executor on that host.
So I call `resourceOffers(offers)` before submitting the task set.
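To illustrate the ordering issue, here is a minimal, self-contained sketch of the pending-task bookkeeping being discussed. The names (`PendingTaskSketch`, `registerExecutor`, `addPendingTask`, `forHost`, `noPrefs`) are illustrative stand-ins, not Spark's actual internals; the point is only that a host-level pending list is populated when a live executor exists on the host at the time the task is added:

```scala
// Simplified model (not Spark's real API) of why resourceOffers must run
// before submitTasks: per-host pending lists are only used when the host
// already has a live executor.
object PendingTaskSketch {
  final case class HostTaskLocation(host: String)

  // Executors currently alive, keyed by host (populated by offers).
  private val executorsByHost =
    scala.collection.mutable.Map[String, Set[String]]()

  // Tasks pending per host (the "forHost" list in the discussion above).
  val forHost =
    scala.collection.mutable.Map[String, List[Int]]().withDefaultValue(Nil)

  // Tasks whose locality preference cannot be satisfied fall back here.
  var noPrefs: List[Int] = Nil

  // Simulates an executor becoming known via a WorkerOffer.
  def registerExecutor(execId: String, host: String): Unit =
    executorsByHost(host) = executorsByHost.getOrElse(host, Set.empty) + execId

  // After the change under review, a task is only queued per-host when a
  // live executor exists on that host; otherwise it is not added to forHost.
  def addPendingTask(taskId: Int, loc: HostTaskLocation): Unit =
    if (executorsByHost.contains(loc.host)) {
      forHost(loc.host) = taskId :: forHost(loc.host)
    } else {
      noPrefs = taskId :: noPrefs
    }
}
```

In this model, adding a task for "host0" before any executor on "host0" is registered puts it in `noPrefs` rather than `forHost("host0")`, which is exactly why the test now makes the offers first.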
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]