advancedxy commented on code in PR #811:
URL: https://github.com/apache/incubator-uniffle/pull/811#discussion_r1163502169


##########
client-spark/common/src/main/java/org/apache/spark/shuffle/writer/WriteBufferManager.java:
##########
@@ -130,7 +131,7 @@ public WriteBufferManager(
     this.bufferSize = bufferManagerOptions.getBufferSize();
     this.spillSize = bufferManagerOptions.getBufferSpillThreshold();
     this.instance = serializer.newInstance();
-    this.buffers = Maps.newHashMap();
+    this.buffers = Maps.newConcurrentMap();

Review Comment:
   If every access to `buffers` is already synchronized, is a `ConcurrentMap` still needed?
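   A minimal sketch of the alternative, assuming all callers go through the map itself (the `Buffer` class and `getOrCreate` helper are illustrative, not the PR's `WriterBuffer` API): a `ConcurrentHashMap` can replace the external lock only if the compound check-then-act is expressed atomically, e.g. via `computeIfAbsent`; otherwise a plain `HashMap` under `synchronized (buffers)` is sufficient and the concurrent map adds nothing.

   ```java
   import java.util.Map;
   import java.util.concurrent.ConcurrentHashMap;

   // Hypothetical sketch, not the PR's code: atomic get-or-create on a
   // ConcurrentHashMap without holding an external lock on the whole map.
   public class BuffersSketch {
       // stand-in for WriterBuffer; name is illustrative
       static class Buffer {
           int size;
       }

       static final Map<Integer, Buffer> buffers = new ConcurrentHashMap<>();

       static Buffer getOrCreate(int partitionId) {
           // computeIfAbsent performs the containsKey/get/put sequence
           // atomically for this key, so no synchronized block is needed
           return buffers.computeIfAbsent(partitionId, id -> new Buffer());
       }

       public static void main(String[] args) {
           Buffer a = getOrCreate(1);
           Buffer b = getOrCreate(1);
           assert a == b : "same partition must map to the same buffer";
       }
   }
   ```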



##########
client-spark/common/src/main/java/org/apache/spark/shuffle/writer/WriteBufferManager.java:
##########
@@ -173,25 +174,27 @@ public List<ShuffleBlockInfo> addRecord(int partitionId, Object key, Object valu
       return null;
     }
     List<ShuffleBlockInfo> result = Lists.newArrayList();
-    if (buffers.containsKey(partitionId)) {
-      WriterBuffer wb = buffers.get(partitionId);
-      if (wb.askForMemory(serializedDataLength)) {
+    synchronized (buffers) {

Review Comment:
   This seems heavy; is there a better way?
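   One lighter-weight option, sketched under illustrative names (the `Buffer` class and `addRecord` body below are stand-ins, not the PR's `WriterBuffer` logic): `ConcurrentHashMap.compute` serializes updates per partition key, so writers for different partitions do not contend on a single `synchronized (buffers)` lock.

   ```java
   import java.util.concurrent.ConcurrentHashMap;

   // Hypothetical sketch: per-key atomicity via compute() instead of a
   // coarse lock around the whole add path.
   public class PerKeySketch {
       static class Buffer {
           final StringBuilder data = new StringBuilder();
           void append(String record) { data.append(record); }
       }

       static final ConcurrentHashMap<Integer, Buffer> buffers = new ConcurrentHashMap<>();

       static void addRecord(int partitionId, String record) {
           // compute() runs atomically per key: concurrent writers to the
           // same partition are serialized, other partitions proceed in parallel
           buffers.compute(partitionId, (id, wb) -> {
               if (wb == null) {
                   wb = new Buffer();
               }
               wb.append(record);
               return wb;
           });
       }

       public static void main(String[] args) {
           addRecord(0, "a");
           addRecord(0, "b");
           assert buffers.get(0).data.toString().equals("ab");
       }
   }
   ```

   Note that whole-map operations such as `clear()` would still need coordination with in-flight per-key updates, for example by swapping the map reference.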



##########
client-spark/common/src/main/java/org/apache/spark/shuffle/writer/WriteBufferManager.java:
##########
@@ -204,26 +207,29 @@ public List<ShuffleBlockInfo> addRecord(int partitionId, Object key, Object valu
   }
 
  // transform all [partition, records] to [partition, ShuffleBlockInfo] and clear cache
-  public synchronized List<ShuffleBlockInfo> clear() {
+  public List<ShuffleBlockInfo> clear() {
     List<ShuffleBlockInfo> result = Lists.newArrayList();
     long dataSize = 0;
     long memoryUsed = 0;
-    for (Entry<Integer, WriterBuffer> entry : buffers.entrySet()) {
-      WriterBuffer wb = entry.getValue();
-      dataSize += wb.getDataLength();
-      memoryUsed += wb.getMemoryUsed();
-      result.add(createShuffleBlock(entry.getKey(), wb));
-      copyTime += wb.getCopyTime();
+    synchronized (buffers) {
+      final Set<Entry<Integer, WriterBuffer>> entrySet = buffers.entrySet();
+      for (Entry<Integer, WriterBuffer> entry : entrySet) {
+        WriterBuffer wb = entry.getValue();
+        dataSize += wb.getDataLength();
+        memoryUsed += wb.getMemoryUsed();
+        result.add(createShuffleBlock(entry.getKey(), wb));
+        copyTime += wb.getCopyTime();
+      }
+      LOG.info("Flush total buffer for shuffleId[" + shuffleId + "] with allocated["
+          + allocatedBytes + "], dataSize[" + dataSize + "], memoryUsed[" + memoryUsed + "]");
+      buffers.clear();
     }
-    LOG.info("Flush total buffer for shuffleId[" + shuffleId + "] with allocated["
-        + allocatedBytes + "], dataSize[" + dataSize + "], memoryUsed[" + memoryUsed + "]");
-    buffers.clear();
     return result;
   }
 
   // transform records to shuffleBlock
  protected ShuffleBlockInfo createShuffleBlock(int partitionId, WriterBuffer wb) {
-    byte[] data = wb.getData();
+    byte[] data = wb.finalizeAndGetData();

Review Comment:
   `finalizeAndGetData` has been removed; you should update this call site as well.



##########
client-spark/common/src/main/java/org/apache/spark/shuffle/writer/WriteBufferManager.java:
##########
@@ -76,7 +77,7 @@ public class WriteBufferManager extends MemoryConsumer {
   private Map<Integer, List<ShuffleServerInfo>> partitionToServers;
   private int serializerBufferSize;
   private int bufferSegmentSize;
-  private long copyTime = 0;
+  private volatile long copyTime = 0;

Review Comment:
   Is `volatile` actually necessary here?
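   A small sketch of why `volatile` may not be the right tool here: `volatile` guarantees visibility of a `long`'s reads and writes, but an accumulation like `copyTime += wb.getCopyTime()` is a read-modify-write, which `volatile` does not make atomic. If `copyTime` is only ever updated inside a lock, a plain `long` suffices; if lock-free updates are needed, a `LongAdder` (or `AtomicLong`) is the safe choice. The `recordCopyTime` name below is illustrative, not from the PR.

   ```java
   import java.util.concurrent.atomic.LongAdder;

   // Hypothetical sketch: lock-free accumulation without volatile's
   // lost-update problem on compound += operations.
   public class CopyTimeSketch {
       static final LongAdder copyTime = new LongAdder();

       static void recordCopyTime(long nanos) {
           copyTime.add(nanos); // atomic increment, no lock needed
       }

       public static void main(String[] args) {
           recordCopyTime(5);
           recordCopyTime(7);
           assert copyTime.sum() == 12;
       }
   }
   ```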



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

