vanzin commented on a change in pull request #23602: 
[SPARK-26674][CORE] Consolidate CompositeByteBuf when reading large frame
URL: https://github.com/apache/spark/pull/23602#discussion_r253685345
 
 

 ##########
 File path: 
common/network-common/src/main/java/org/apache/spark/network/util/TransportFrameDecoder.java
 ##########
 @@ -56,6 +56,18 @@
   private long nextFrameSize = UNKNOWN_FRAME_SIZE;
   private volatile Interceptor interceptor;
 
+  private long consolidateBufsThreshold = Long.MAX_VALUE;
+  long consolidatedCount = 0L;
+  long consolidatedTotalTime = 0L;
 
 Review comment:
   I see you added these for the perf test, but I'd rather have the test itself 
handle this. The frame decoder should just consolidate the frame as needed.
   
   From outside this class, the perf metric that matters is "how long does 
it take to build a frame of size x from y input buffers, with various 
consolidation threshold values", and you can answer that without keeping 
these internal perf counters in the decoder.
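
   To illustrate, a timing harness along these lines can live entirely in the test. This is a hypothetical, simplified sketch (it models consolidation with plain byte arrays rather than Netty's `CompositeByteBuf`, and the class and method names are illustrative, not part of the PR):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical micro-benchmark sketch: measure how long it takes to
// assemble a frame of frameSize bytes from bufCount input buffers,
// consolidating whenever the buffered bytes reach the threshold.
// This models the decoder's behavior externally; no counters inside
// the decoder are needed.
public class FrameConsolidationBench {

  static long timeFrameBuild(int frameSize, int bufCount, long threshold) {
    int chunk = frameSize / bufCount;
    List<byte[]> pending = new ArrayList<>();
    long pendingBytes = 0;
    byte[] frame = new byte[chunk * bufCount];
    int written = 0;
    long start = System.nanoTime();
    for (int i = 0; i < bufCount; i++) {
      // Simulate one incoming network buffer.
      pending.add(new byte[chunk]);
      pendingBytes += chunk;
      if (pendingBytes >= threshold) {
        // Consolidate: copy the pending chunks into the frame buffer.
        for (byte[] b : pending) {
          System.arraycopy(b, 0, frame, written, b.length);
          written += b.length;
        }
        pending.clear();
        pendingBytes = 0;
      }
    }
    // Flush whatever is left below the threshold.
    for (byte[] b : pending) {
      System.arraycopy(b, 0, frame, written, b.length);
      written += b.length;
    }
    long elapsed = System.nanoTime() - start;
    if (written != chunk * bufCount) {
      throw new AssertionError("incomplete frame: " + written);
    }
    return elapsed;
  }

  public static void main(String[] args) {
    // Compare thresholds for the same frame size and buffer count.
    for (long threshold : new long[] {64 * 1024, 256 * 1024, Long.MAX_VALUE}) {
      long nanos = timeFrameBuild(1 << 20, 64, threshold);
      System.out.println("threshold=" + threshold + " -> " + nanos + " ns");
    }
  }
}
```

   The test can then sweep frame sizes, buffer counts, and thresholds and report the timings itself, without the decoder exposing `consolidatedCount` or `consolidatedTotalTime`.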

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]