vanzin commented on a change in pull request #23602: [SPARK-26674][CORE] Consolidate CompositeByteBuf when reading large frame
URL: https://github.com/apache/spark/pull/23602#discussion_r255276592
 
 

 ##########
 File path: common/network-common/src/main/java/org/apache/spark/network/util/TransportFrameDecoder.java
 ##########
 @@ -48,14 +49,29 @@
   private static final int LENGTH_SIZE = 8;
   private static final int MAX_FRAME_SIZE = Integer.MAX_VALUE;
   private static final int UNKNOWN_FRAME_SIZE = -1;
+  private static final long DEFAULT_CONSOLIDATE_FRAME_BUFS_DELTA_THRESHOLD = 20 * 1024 * 1024;
 
   private final LinkedList<ByteBuf> buffers = new LinkedList<>();
   private final ByteBuf frameLenBuf = Unpooled.buffer(LENGTH_SIZE, LENGTH_SIZE);
+  private CompositeByteBuf frameBuf = null;
+  private long consolidateFrameBufsDeltaThreshold;
+  private long consolidatedFrameBufSize = 0;
+  private int consolidatedNumComponents = 0;
 
   private long totalSize = 0;
   private long nextFrameSize = UNKNOWN_FRAME_SIZE;
+  private int frameRemainingBytes = UNKNOWN_FRAME_SIZE;
   private volatile Interceptor interceptor;
 
+  public TransportFrameDecoder() {
 
 Review comment:
   I thought you were going to make this configurable. Where are you reading the value from the configuration?
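
   A minimal sketch of what wiring the threshold through configuration might look like; the constructor overload and the config key below are illustrative assumptions, not the patch's actual API:

   ```java
   import io.netty.channel.ChannelInboundHandlerAdapter;

   public class TransportFrameDecoder extends ChannelInboundHandlerAdapter {

     // Hard-coded fallback, matching the 20 MiB default introduced in the diff above.
     private static final long DEFAULT_CONSOLIDATE_FRAME_BUFS_DELTA_THRESHOLD = 20 * 1024 * 1024;

     // The threshold becomes an instance field so callers can override it.
     private final long consolidateFrameBufsDeltaThreshold;

     // Existing callers keep the default behavior.
     public TransportFrameDecoder() {
       this(DEFAULT_CONSOLIDATE_FRAME_BUFS_DELTA_THRESHOLD);
     }

     // Whichever component builds the channel pipeline would read the value from the
     // transport configuration and pass it in; the key name here is purely hypothetical:
     //   new TransportFrameDecoder(
     //       conf.getLong("spark.network.io.consolidateThreshold",
     //                    DEFAULT_CONSOLIDATE_FRAME_BUFS_DELTA_THRESHOLD));
     public TransportFrameDecoder(long consolidateFrameBufsDeltaThreshold) {
       this.consolidateFrameBufsDeltaThreshold = consolidateFrameBufsDeltaThreshold;
     }
   }
   ```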
