xuanyuanking commented on a change in pull request #25620: [SPARK-25341][Core] 
Support rolling back a shuffle map stage and re-generate the shuffle files
URL: https://github.com/apache/spark/pull/25620#discussion_r319122941
 
 

 ##########
 File path: 
common/network-shuffle/src/main/java/org/apache/spark/network/shuffle/ExternalBlockHandler.java
 ##########
 @@ -300,7 +300,7 @@ public ShuffleMetrics() {
     }
 
     ManagedBufferIterator(FetchShuffleBlocks msg, int numBlockIds) {
-      final int[] mapIdAndReduceIds = new int[2 * numBlockIds];
+      final long[] mapIdAndReduceIds = new long[2 * numBlockIds];
 
 Review comment:
   Actually, the message already carries a `long[]` for the map ids and an `int[]` for the reduce ids; what we need here is essentially assembly work to flatten each reduce id together with its corresponding map id.
   The current approach wastes memory. We could instead do it in a CPU-consuming way: for each index, compute which map id and reduce id the `idx` corresponds to, as in the sketch below.
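   A minimal sketch of that index-based alternative, assuming the request exposes a `long[]` of map ids and an `int[][]` of reduce ids as `FetchShuffleBlocks` does; the class and method names are illustrative only, not a drop-in change to `ExternalBlockHandler`:

```java
// Illustrative sketch only (class/method names are made up): resolve the
// (mapId, reduceId) pair for a flat block index on demand instead of
// materializing a flattened 2 * numBlockIds array up front.
final class LazyBlockIndex {
  private final long[] mapIds;     // one map id per map task in the request
  private final int[][] reduceIds; // reduce ids requested for each map task

  LazyBlockIndex(long[] mapIds, int[][] reduceIds) {
    this.mapIds = mapIds;
    this.reduceIds = reduceIds;
  }

  // Walk the ragged reduceIds array until idx falls inside one row; that row's
  // map id and the remaining offset give the pair. Costs O(#map tasks) per call.
  long mapIdAt(int idx) {
    for (int i = 0; i < reduceIds.length; i++) {
      if (idx < reduceIds[i].length) {
        return mapIds[i];
      }
      idx -= reduceIds[i].length;
    }
    throw new IllegalArgumentException("block index out of range");
  }

  int reduceIdAt(int idx) {
    for (int i = 0; i < reduceIds.length; i++) {
      if (idx < reduceIds[i].length) {
        return reduceIds[i][idx];
      }
      idx -= reduceIds[i].length;
    }
    throw new IllegalArgumentException("block index out of range");
  }
}
```

   This avoids the extra `long[]`, but every lookup walks the list of map ids, so it only pays off when the number of blocks is large relative to the number of map tasks (or if a prefix sum of row lengths is kept to make lookups logarithmic).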
