LuciferYang commented on code in PR #49350:
URL: https://github.com/apache/spark/pull/49350#discussion_r1918489929
##########
core/src/test/scala/org/apache/spark/deploy/ExternalShuffleServiceSuite.scala:
##########
@@ -102,14 +77,18 @@ class ExternalShuffleServiceSuite extends ShuffleSuite with BeforeAndAfterAll wi
     // Invalidate the registered executors, disallowing access to their shuffle blocks (without
     // deleting the actual shuffle files, so we could access them without the shuffle service).
-    rpcHandler.applicationRemoved(sc.conf.getAppId, false /* cleanupLocalDirs */)
+    LocalSparkCluster.get.get.workers.foreach(_.askSync[Boolean](
+      ApplicationRemoveTest(sc.conf.getAppId, false)
+    ))
     // Now Spark will receive FetchFailed, and not retry the stage due to "spark.test.noStageRetry"
     // being set.
-    val e = intercept[SparkException] {
-      rdd.count()
+    eventually(timeout(60.seconds), interval(100.milliseconds)) {
Review Comment:
Every time I see `eventually(timeout ...`, I worry that the test might be flaky.
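
For context, this is a minimal sketch of how ScalaTest's `Eventually` behaves (the construct used in the diff above): the block is re-run at each `interval` until it stops throwing, and only fails once the `timeout` elapses. The suite and test names here are hypothetical and assume only that scalatest is on the classpath.

```scala
import org.scalatest.concurrent.Eventually._
import org.scalatest.funsuite.AnyFunSuite
import org.scalatest.time.SpanSugar._

class EventuallyRetrySketch extends AnyFunSuite {
  test("eventually retries the block until it passes or the timeout expires") {
    @volatile var attempts = 0
    // The block is re-executed every 100 ms until it no longer throws an
    // exception; if it still fails after 60 s, the last failure is rethrown.
    eventually(timeout(60.seconds), interval(100.milliseconds)) {
      attempts += 1
      assert(attempts >= 3) // passes on the third retry
    }
  }
}
```

The retry loop is what raises the flakiness concern: a generous timeout can hide a condition that only becomes true nondeterministically, so the assertion inside should check a state that is guaranteed to converge.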
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]