tgravescs commented on a change in pull request #28257:
URL: https://github.com/apache/spark/pull/28257#discussion_r411419641
##########
File path: core/src/test/scala/org/apache/spark/scheduler/BarrierTaskContextSuite.scala
##########
@@ -276,4 +276,20 @@ class BarrierTaskContextSuite extends SparkFunSuite with LocalSparkContext {
initLocalClusterSparkContext()
testBarrierTaskKilled(interruptOnKill = true)
}
+
+ test("SPARK-31485: barrier stage should fail if only partial tasks are
launched") {
+ initLocalClusterSparkContext(2)
+ val rdd0 = sc.parallelize(Seq(0, 1, 2, 3), 2)
+ val dep = new OneToOneDependency[Int](rdd0)
+ // set up a barrier stage with 2 tasks and both tasks prefer executor 0
(only 1 core) for
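
The quoted diff is cut off at the commented line. For orientation only, here is a minimal, self-contained sketch of the public barrier API such a test exercises; the object name and cluster sizing are illustrative, not the PR's actual code (local-cluster masters generally only work inside a Spark build):

    import org.apache.spark.{BarrierTaskContext, SparkConf, SparkContext}

    // Sketch only: run a 2-task barrier stage on 2 workers x 1 core each.
    object BarrierStageSketch {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setMaster("local-cluster[2, 1, 1024]")
          .setAppName("barrier-stage-sketch")
        val sc = new SparkContext(conf)
        val result = sc.parallelize(Seq(0, 1, 2, 3), 2)
          .barrier()                 // mark the stage as a barrier stage
          .mapPartitions { iter =>
            // every task in the stage must reach this point before any continues
            BarrierTaskContext.get().barrier()
            iter
          }
          .collect()
        assert(result.sorted.sameElements(Array(0, 1, 2, 3)))
        sc.stop()
      }
    }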
Review comment:
This seems wrong to me: locality preference is causing the failure? I thought we should be looking at all available slots and only erroring if you don't have enough. Here you have 2 workers with 1 core each, so you should be able to fit 2 tasks on them eventually.
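
To pin down the expectation in that last sentence: the stage should fail only when the barrier task count exceeds the total slots the cluster can ever offer, locality aside. A hypothetical helper (not Spark's actual scheduler code, which also accounts for dynamic allocation and more) makes the arithmetic explicit:

    // Hypothetical check, for illustration only.
    def barrierStageFits(numBarrierTasks: Int, totalCores: Int, coresPerTask: Int): Boolean = {
      val totalSlots = totalCores / coresPerTask
      numBarrierTasks <= totalSlots
    }

    // 2 workers x 1 core, 1 core per task => 2 slots for 2 tasks: the stage
    // fits, so locality preference alone should not fail it.
    assert(barrierStageFits(numBarrierTasks = 2, totalCores = 2, coresPerTask = 1))

If the stage genuinely did not fit (say, 3 barrier tasks on 2 slots), failing fast would be the right behavior; that is the case such a check is meant to catch.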