vanzin commented on a change in pull request #23602:
[SPARK-26674][CORE]Consolidate CompositeByteBuf when reading large frame
URL: https://github.com/apache/spark/pull/23602#discussion_r256556933
##########
File path:
common/network-common/src/test/java/org/apache/spark/network/util/TransportFrameDecoderSuite.java
##########
@@ -47,6 +51,88 @@ public void testFrameDecoding() throws Exception {
verifyAndCloseDecoder(decoder, ctx, data);
}
+ @Test
+ public void testConsolidationForDecodingNonFullyWrittenByteBuf() {
Review comment:
If I understand correctly, this is testing that consolidation is reducing
the amount of memory needed to hold a frame? But since you're writing just 1 MB
to the decoder, that's not triggering consolidation, is it?
Playing with `CompositeByteBuf`, I see that it adjusts its internal capacity based on
the readable bytes of its components, but the component buffers themselves are left
unchanged, so they still hold on to the original amount of memory:
```scala
scala> cb.numComponents()
res4: Int = 2
scala> cb.capacity()
res5: Int = 8
scala> cb.component(0).capacity()
res6: Int = 1048576
```
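For context, here's a minimal REPL-style sketch of a setup that should reproduce numbers like the above; the actual setup of `cb` isn't shown in this comment, so the 1 MiB buffer sizes, the 4-byte writes, and the use of `Unpooled` heap buffers are assumptions on my part:
```scala
import io.netty.buffer.Unpooled

// Two backing buffers with 1 MiB of capacity each, but only 4 readable bytes written.
val b1 = Unpooled.buffer(1024 * 1024)
b1.writeInt(1)
val b2 = Unpooled.buffer(1024 * 1024)
b2.writeInt(2)

// addComponent(true, ...) advances the writer index by the component's readable
// bytes, so the composite's capacity tracks readable bytes, not the ~2 MiB that
// the components actually keep allocated.
val cb = Unpooled.compositeBuffer()
cb.addComponent(true, b1)
cb.addComponent(true, b2)

cb.numComponents()          // 2
cb.capacity()               // 8
cb.component(0).capacity()  // 1048576 -- the 1 MiB component buffer is still held

// Only an explicit consolidate() copies the readable bytes into a single
// right-sized buffer and releases the original components.
cb.consolidate()
cb.numComponents()          // 1
cb.component(0).capacity()  // 8
```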
So I'm not sure this test is testing anything useful.
Also it would be nice not to use so many magic numbers.