Ngone51 commented on a change in pull request #31643:
URL: https://github.com/apache/spark/pull/31643#discussion_r585353311



##########
File path: common/network-shuffle/src/main/java/org/apache/spark/network/shuffle/OneForOneBlockFetcher.java
##########
@@ -138,9 +138,24 @@ private FetchShuffleBlocks createFetchShuffleBlocksMsg(
     }
     long[] mapIds = Longs.toArray(mapIdToReduceIds.keySet());
     int[][] reduceIdArr = new int[mapIds.length][];
+    int blockIdIndex = 0;
     for (int i = 0; i < mapIds.length; i++) {
       reduceIdArr[i] = Ints.toArray(mapIdToReduceIds.get(mapIds[i]));
+      // The `blockIds` order must match the read order specified in FetchShuffleBlocks,
+      // because the shuffle data's return order should match the `blockIds` order to ensure
+      // each blockId matches its data.
+      if (!batchFetchEnabled) {
+        for (int j = 0; j < reduceIdArr[i].length; j++) {
+          this.blockIds[blockIdIndex++] =
+              "shuffle_" + shuffleId + "_" + mapIds[i] + "_" + reduceIdArr[i][j];

Review comment:
       Then, how about we carry the raw `blockId` with the split block parts? For example, we could put the raw `blockId` at index 0 (we can override the useless "shuffle" prefix) to stay compatible between batch-fetch and non-batch-fetch.
   
   I think using one map instead of two would simplify the code and be easier to maintain.
   
   BTW, I saw your latest updates, and I don't think the `LinkedHashMap` helps anything in the current way (option 1), while it does help in option 2.
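
The ordering invariant discussed in the diff can be sketched in isolation. The class and method below are hypothetical illustrations, not Spark's actual API: a `LinkedHashMap` preserves insertion order, so iterating it yields blockIds in exactly the order the `FetchShuffleBlocks` message lists the map/reduce IDs, which is what keeps each returned data chunk matched to its blockId.

```java
import java.util.*;

// Hypothetical sketch (not the actual Spark code): build blockIds in the
// exact order the server will read them from FetchShuffleBlocks, i.e.
// map order first, then reduce order within each map.
public class BlockIdOrderSketch {
  static List<String> buildBlockIds(int shuffleId,
      LinkedHashMap<Long, List<Integer>> mapIdToReduceIds) {
    List<String> blockIds = new ArrayList<>();
    // LinkedHashMap iterates in insertion order, so the produced blockIds
    // line up with the mapIds/reduceIds arrays sent in the fetch message.
    for (Map.Entry<Long, List<Integer>> e : mapIdToReduceIds.entrySet()) {
      for (int reduceId : e.getValue()) {
        blockIds.add("shuffle_" + shuffleId + "_" + e.getKey() + "_" + reduceId);
      }
    }
    return blockIds;
  }

  public static void main(String[] args) {
    LinkedHashMap<Long, List<Integer>> m = new LinkedHashMap<>();
    m.put(5L, Arrays.asList(0, 1));
    m.put(7L, Arrays.asList(2));
    System.out.println(buildBlockIds(0, m));
    // prints [shuffle_0_5_0, shuffle_0_5_1, shuffle_0_7_2]
  }
}
```

With a plain `HashMap` instead, iteration order would be unspecified and the blockId-to-data pairing could silently break, which is the ordering concern the diff's comment is guarding against.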


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
