JoshRosen commented on code in PR #49350:
URL: https://github.com/apache/spark/pull/49350#discussion_r1908122195
##########
core/src/test/scala/org/apache/spark/deploy/ExternalShuffleServiceSuite.scala:
##########
@@ -102,14 +73,18 @@ class ExternalShuffleServiceSuite extends ShuffleSuite with BeforeAndAfterAll wi
     // Invalidate the registered executors, disallowing access to their shuffle blocks (without
     // deleting the actual shuffle files, so we could access them without the shuffle service).
-    rpcHandler.applicationRemoved(sc.conf.getAppId, false /* cleanupLocalDirs */)
+    LocalSparkCluster.get.get.workers.head.askSync[Boolean](
+      ApplicationRemoveTest(sc.conf.getAppId, false)
+    )
Review Comment:
The original version of this test created subcomponents of `ExternalShuffleService` in the test JVM and then had two executor JVMs (via `local-cluster[2,1,1024]`) run tasks.
The updated version launches the ESS in the workers themselves, so we now have two handlers.
Given this, I'm wondering if this should perhaps be `workers.foreach` instead of `workers.head` (both here and in all of the other cases below as well)?
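For concreteness, a minimal, untested sketch of that suggestion, reusing the `LocalSparkCluster` accessor and `ApplicationRemoveTest` message from the diff above:
```scala
// Sketch: send the removal message to every worker's shuffle
// service handler, not just the first one.
LocalSparkCluster.get.get.workers.foreach { worker =>
  worker.askSync[Boolean](
    ApplicationRemoveTest(sc.conf.getAppId, false /* cleanupLocalDirs */)
  )
}
```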
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]