Github user liyezhang556520 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12038#discussion_r57830669
--- Diff:
common/network-common/src/main/java/org/apache/spark/network/util/TransportFrameDecoder.java
---
@@ -139,14 +139,18 @@ private ByteBuf decodeNext() throws Exception {
return nextBufferForFrame(remaining);
}
- // Otherwise, create a composite buffer.
- CompositeByteBuf frame = buffers.getFirst().alloc().compositeBuffer();
--- End diff --
@vanzin , I'm not sure why Netty imposes a maximum number of components on a
composite buffer (the cap can be set as high as Integer.MAX_VALUE), yet the
default value is only 16, which seems very small before consolidation kicks in.
Does having too many small buffers under a `compositeBuffer` cause other
problems? Is that why Netty consolidates once the number of small buffers
reaches `maxNumComponents`?
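To make the question concrete, here is a minimal sketch (assuming Netty 4.1 on the classpath; the class name `CompositeDemo` is mine) showing that passing a large `maxNumComponents` keeps every small buffer as a separate component, whereas the default of 16 would trigger consolidation, i.e. copying all components into a single backing buffer:

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.CompositeByteBuf;
import io.netty.buffer.Unpooled;

public class CompositeDemo {
    public static void main(String[] args) {
        // Unpooled.compositeBuffer() defaults maxNumComponents to 16;
        // passing a larger cap delays the consolidation (copy) step.
        CompositeByteBuf frame = Unpooled.compositeBuffer(Integer.MAX_VALUE);

        for (int i = 0; i < 100; i++) {
            ByteBuf small = Unpooled.wrappedBuffer(new byte[]{(byte) i});
            // 'true' advances the composite's writerIndex as we add.
            frame.addComponent(true, small);
        }

        // With a large maxNumComponents, all 100 one-byte buffers remain
        // distinct components; no data has been copied.
        System.out.println(frame.numComponents());
        System.out.println(frame.readableBytes());

        frame.release();
    }
}
```

The tradeoff is that a composite with many components pays an index-lookup cost on reads, while consolidation pays a one-time copy; which is cheaper depends on how the frame is consumed afterwards.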