Ngone51 commented on a change in pull request #28257:
URL: https://github.com/apache/spark/pull/28257#discussion_r411427954
##########
File path: core/src/test/scala/org/apache/spark/scheduler/BarrierTaskContextSuite.scala
##########
@@ -276,4 +276,20 @@ class BarrierTaskContextSuite extends SparkFunSuite with LocalSparkContext {
initLocalClusterSparkContext()
testBarrierTaskKilled(interruptOnKill = true)
}
+
+ test("SPARK-31485: barrier stage should fail if only partial tasks are
launched") {
+ initLocalClusterSparkContext(2)
+ val rdd0 = sc.parallelize(Seq(0, 1, 2, 3), 2)
+ val dep = new OneToOneDependency[Int](rdd0)
+    // set up a barrier stage with 2 tasks and both tasks prefer executor 0 (only 1 core) for
Review comment:
> this seems wrong to me, locality preference is causing failure?

Yeah, I have to say so. We do have enough slots in this case, but that doesn't guarantee that all tasks in a task set can be scheduled in a single round of resource offers, because of delay scheduling. While partial scheduling is fine for a normal task set, it's unacceptable for a barrier task set, since a barrier stage requires all of its tasks to be launched together.
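
For reference, a rough sketch of how the rest of the new test could exercise this scenario, assuming the `MyRDD` test helper (as used in `DAGSchedulerSuite`) with per-partition preferred locations; the locality hints and the asserted error text below are illustrative assumptions, not necessarily the final code:

```scala
// Both tasks prefer the same single-core executor, so delay scheduling can
// hold back the non-local task: only one of the two barrier tasks gets
// launched in that round of resource offers. The stage must then fail fast
// instead of letting the launched task block forever in barrier().
val rdd = new MyRDD(sc, 2, List(dep),
  Seq(Seq("executor_h_0"), Seq("executor_h_0"))) // hypothetical locality hints
val errorMsg = intercept[SparkException] {
  rdd.barrier().mapPartitions { iter =>
    // every task must reach the barrier, which never happens if one of the
    // tasks was never launched
    BarrierTaskContext.get().barrier()
    iter
  }.collect()
}.getMessage
assert(errorMsg.contains("barrier")) // assumed: message mentions the barrier stage failure
```

Failing the stage up front is preferable to the alternative, where the single launched task waits at `barrier()` for a peer that was never scheduled.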