vanzin commented on a change in pull request #25299: [SPARK-27651][Core] Avoid the network when shuffle blocks are fetched from the same host
URL: https://github.com/apache/spark/pull/25299#discussion_r308945696
##########
File path: core/src/test/scala/org/apache/spark/ExternalShuffleServiceSuite.scala
##########
@@ -96,6 +98,35 @@ class ExternalShuffleServiceSuite extends ShuffleSuite with BeforeAndAfterAll wi
     e.getMessage should include ("Fetch failure will not retry stage due to testing config")
   }
+  test("SPARK-27651: host local disk reading avoids external shuffle service on the same node") {
+    sc = new SparkContext("local-cluster[2,1,1024]", "test", conf)
+    conf.get(config.SHUFFLE_HOST_LOCAL_DISK_READING_ENABLED) should equal(true)
+    sc.env.blockManager.externalShuffleServiceEnabled should equal(true)
+    sc.env.blockManager.shuffleClient.getClass should equal(classOf[ExternalShuffleClient])
+
+    // On a slow machine, one slave may register hundreds of milliseconds ahead of the other one.
+    // If we don't wait for all slaves, it's possible that only one executor runs all jobs. Then
+    // all shuffle blocks will be in this executor, ShuffleBlockFetcherIterator will directly fetch
+    // local blocks from the local BlockManager and won't send requests to ExternalShuffleService.
+    // In this case, we won't receive FetchFailed, which would make this test fail.
+    // Therefore, we should wait until all slaves are up.
+    TestUtils.waitUntilExecutorsUp(sc, 2, 60000)
+
+    val rdd = sc.parallelize(0 until 1000, 10).map(i => (i, 1)).reduceByKey(_ + _)
Review comment:
`.map { i => ... }`
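Presumably the suggestion is to use Scala's brace syntax for the lambda passed to `map`. Applied to the new `rdd` line above, it would read roughly like this (a sketch of the suggested style, reusing the test's existing `sc`, not the reviewer's exact wording):

```scala
// Same pipeline as in the diff, with the map lambda written in brace style.
val rdd = sc.parallelize(0 until 1000, 10).map { i => (i, 1) }.reduceByKey(_ + _)
```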