vanzin commented on a change in pull request #23602: [SPARK-26674][CORE] Consolidate CompositeByteBuf when reading large frame
URL: https://github.com/apache/spark/pull/23602#discussion_r253686204
 
 

 ##########
 File path: common/network-common/src/main/java/org/apache/spark/network/util/TransportFrameDecoder.java
 ##########
 @@ -141,10 +153,20 @@ private ByteBuf decodeNext() {
 
     // Otherwise, create a composite buffer.
     CompositeByteBuf frame = buffers.getFirst().alloc().compositeBuffer(Integer.MAX_VALUE);
+    long lastConsolidatedCapacity = 0L;
     while (remaining > 0) {
       ByteBuf next = nextBufferForFrame(remaining);
       remaining -= next.readableBytes();
       frame.addComponent(next).writerIndex(frame.writerIndex() + next.readableBytes());
+      if (frame.capacity() - lastConsolidatedCapacity >= consolidateBufsThreshold) {
+        // Because the bytebuf created is far less than it's capacity in most cases,
 
 Review comment:
   its (not "it's"); but really the comment just repeats the code, so it isn't useful.
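
   For context on the pattern under review: the change tracks how much capacity has been added to the composite buffer since the last consolidation and merges the buffer's components once that delta crosses a threshold, trading an occasional copy for a bounded component count. The sketch below illustrates that pattern against Netty's public CompositeByteBuf API; the class name, the hard-coded 20 MiB threshold, and the whole-buffer consolidate() call are illustrative assumptions for this sketch, not the PR's exact code.

       import io.netty.buffer.ByteBuf;
       import io.netty.buffer.CompositeByteBuf;
       import io.netty.buffer.Unpooled;

       public class ConsolidationSketch {
         // Assumed threshold for this sketch; the PR makes the real value configurable.
         private static final long CONSOLIDATE_THRESHOLD = 20L * 1024 * 1024;

         public static void main(String[] args) {
           CompositeByteBuf frame = Unpooled.compositeBuffer(Integer.MAX_VALUE);
           long lastConsolidatedCapacity = 0L;

           // Simulate a large frame arriving as many small network buffers.
           for (int i = 0; i < 1024; i++) {
             ByteBuf next = Unpooled.buffer(64 * 1024).writeZero(64 * 1024);
             frame.addComponent(next).writerIndex(frame.writerIndex() + next.readableBytes());

             // Once the capacity added since the last consolidation crosses the
             // threshold, copy the accumulated components into a single buffer.
             // This bounds the component count (and the cost of walking the
             // component list on every read) at the price of an occasional copy.
             if (frame.capacity() - lastConsolidatedCapacity >= CONSOLIDATE_THRESHOLD) {
               frame.consolidate();
               lastConsolidatedCapacity = frame.capacity();
             }
           }

           System.out.println("components after consolidation: " + frame.numComponents());
           frame.release();
         }
       }

   Note that consolidate() with no arguments re-copies bytes that were already merged on earlier passes; tracking lastConsolidatedCapacity, as the diff does, points toward consolidating only the components added since the last pass (via CompositeByteBuf.consolidate(int cIndex, int numComponents)) to avoid that re-copying.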
