codenohup commented on code in PR #2979:
URL: https://github.com/apache/celeborn/pull/2979#discussion_r1875418746


##########
worker/src/main/java/org/apache/celeborn/service/deploy/worker/storage/PartitionDataWriter.java:
##########
@@ -293,14 +293,15 @@ public void flush(boolean finalFlush, boolean fromEvict) throws IOException {
           // read flush buffer to generate correct chunk offsets
           // data header layout (mapId, attemptId, nextBatchId, length)
           if (numBytes > chunkSize) {
-            ByteBuffer headerBuf = ByteBuffer.allocate(16);
+            ByteBuffer headerBuf = ByteBuffer.allocate(PushDataHeaderUtils.BATCH_HEADER_SIZE);

Review Comment:
   If a worker receives a data buffer from an old-version client, or from an engine client that is not yet supported, will this modification still yield accurate results?
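
   To make the compatibility concern concrete, here is a minimal, hypothetical sketch (not code from this PR), assuming the header is four 4-byte ints as the layout comment above describes:

   ```java
   import java.nio.ByteBuffer;

   // Hypothetical illustration of the (mapId, attemptId, nextBatchId, length)
   // header. An old-version client writes 4 * 4 = 16 bytes; if the worker
   // assumes a different BATCH_HEADER_SIZE, it reads past the header into
   // payload bytes and the generated chunk offsets drift.
   public class BatchHeaderSketch {
     static final int OLD_BATCH_HEADER_SIZE = 4 * 4; // four int fields

     public static void main(String[] args) {
       ByteBuffer headerBuf = ByteBuffer.allocate(OLD_BATCH_HEADER_SIZE);
       headerBuf.putInt(7);   // mapId
       headerBuf.putInt(0);   // attemptId
       headerBuf.putInt(42);  // nextBatchId
       headerBuf.putInt(128); // length of the payload that follows
       headerBuf.flip();

       System.out.printf("mapId=%d attemptId=%d nextBatchId=%d length=%d%n",
           headerBuf.getInt(), headerBuf.getInt(), headerBuf.getInt(),
           headerBuf.getInt());
     }
   }
   ```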



##########
client-flink/common/src/main/java/org/apache/celeborn/plugin/flink/readclient/FlinkShuffleClientImpl.java:
##########
@@ -79,6 +76,7 @@ public class FlinkShuffleClientImpl extends ShuffleClientImpl {
   private ConcurrentHashMap<String, TransportClient> currentClient =
       JavaUtils.newConcurrentHashMap();
   private long driverTimestamp;
+  private final int BATCH_HEADER_SIZE = 4 * 4;

Review Comment:
   Does this feature currently support only Spark? Will Flink be supported in the future?
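
   For context, `4 * 4` mirrors the four int fields of the push-data header (mapId, attemptId, nextBatchId, length) noted in the worker-side hunk above. A sketch of one way to avoid the duplicated literal, assuming PushDataHeaderUtils is visible from this module (its package is an assumption here):

   ```java
   import org.apache.celeborn.common.util.PushDataHeaderUtils; // assumed package

   public class FlinkHeaderSizeSketch {
     // Reuse the shared constant instead of a local "4 * 4" literal so the
     // Flink client cannot drift from the worker-side header size.
     private static final int BATCH_HEADER_SIZE = PushDataHeaderUtils.BATCH_HEADER_SIZE;
   }
   ```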


