Ngone51 commented on code in PR #49350:
URL: https://github.com/apache/spark/pull/49350#discussion_r1903256111


##########
core/src/test/scala/org/apache/spark/shuffle/HostLocalShuffleReadingSuite.scala:
##########
@@ -78,7 +78,13 @@ class HostLocalShuffleReadingSuite extends SparkFunSuite with Matchers with Loca
     test(s"host local shuffle reading with external shuffle service $essStatus") {
       conf.set(SHUFFLE_SERVICE_ENABLED, isESSEnabled)
         .set(STORAGE_LOCAL_DISK_BY_EXECUTORS_CACHE_SIZE, 5)
-      sc = new SparkContext("local-cluster[2,1,1024]", "test-host-local-shuffle-reading", conf)
+      val master = if (isESSEnabled) {
+        conf.set(EXECUTOR_CORES, 1)
+        "local-cluster[1,2,2048]"
+      } else {
+        "local-cluster[2,1,1024]"
+      }
+      sc = new SparkContext(master, "test-host-local-shuffle-reading", conf)

Review Comment:
   With the original configuration (`local-cluster[2,1,1024]`), the two executors are launched in two different workers, so each of them registers with a different external shuffle service, e.g., exec-A -> ess-A and exec-B -> ess-B. Since exec-A and exec-B share the same host (which satisfies the requirement for host-local shuffle reading), exec-A may try to fetch shuffle data from ess-B when the host-local shuffle reading feature is enabled. This leads to fetch failures because exec-A is not registered with ess-B.
   
   With the new configuration (`local-cluster[1,2,2048]`), the two executors are indeed launched within the same worker, so they share the same external shuffle service. The host-local shuffle reading feature can therefore be exercised correctly under this setup.
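   
   For reference, the master string follows the pattern `local-cluster[workers, coresPerWorker, memoryPerWorkerMb]`, and with `spark.executor.cores=1` a 2-core worker launches two executors that both register with that worker's single external shuffle service. A minimal sketch of the topology arithmetic (the `LocalClusterTopology` helper and its parser are hypothetical, written only to illustrate; this is not Spark's actual master-URL parsing):
   
   ```scala
   // Hypothetical illustration of what a "local-cluster[N,C,M]" master string
   // encodes: N workers, C cores per worker, M MiB of memory per worker.
   object LocalClusterTopology {
     case class Topology(workers: Int, coresPerWorker: Int, memoryPerWorkerMb: Int) {
       // With spark.executor.cores = 1, each core hosts one executor, so the
       // number of executors sharing a single worker (and hence a single
       // external shuffle service) is coresPerWorker.
       def executorsPerWorker(executorCores: Int): Int = coresPerWorker / executorCores
     }
   
     private val Pattern = """local-cluster\[(\d+),(\d+),(\d+)\]""".r
   
     def parse(master: String): Topology = master match {
       case Pattern(n, c, m) => Topology(n.toInt, c.toInt, m.toInt)
       case _ => throw new IllegalArgumentException(s"not a local-cluster master: $master")
     }
   
     def main(args: Array[String]): Unit = {
       // Original setup: two workers, one core each -> one executor per worker,
       // and two distinct shuffle services on the same host.
       val before = parse("local-cluster[2,1,1024]")
       // New setup: one worker with two cores and executor.cores = 1 ->
       // both executors register with the same shuffle service.
       val after = parse("local-cluster[1,2,2048]")
       println(s"before: ${before.workers} workers; after: ${after.executorsPerWorker(1)} executors on 1 worker")
     }
   }
   ```
   
   The sketch only makes the worker/executor counting explicit; the actual executor placement is done by the standalone scheduler.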



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

