[
https://issues.apache.org/jira/browse/CASSANDRA-8670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14386007#comment-14386007
]
Benedict commented on CASSANDRA-8670:
-------------------------------------
I've pushed some suggestions for further refactoring
[here|https://github.com/belliottsmith/cassandra/tree/8670-suggestions]. So far
I've only looked at the overall class hierarchy; I haven't yet reviewed the
method implementation changes.
Mostly these changes flatten the class hierarchy; it has gotten deep enough
that I don't think there's a good reason to maintain the distinction between
DataStreamOutputPlus and DataStreamOutputPlusAndChannel, especially since we
often just mock up a Channel based off the OutputStream. I've also flattened
NIODataOutputStream and DataOutputStreamByteBufferPlus into
BufferedDataOutputStreamPlus, since we only write to the buffer if we don't
exceed its size. At the same time, since we are refactoring this whole
hierarchy anyway, I made DataOutputBuffer extend BufferedDataOutputStreamPlus
so that it just ensures the buffer grows as necessary, and removed
FastByteArrayOutputStream since we no longer need it.
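To illustrate the flattening, here is a minimal sketch (hypothetical class names, not the actual Cassandra code) of the relationship described above: a buffered stream that spills downstream when its buffer fills, and a subclass that instead grows the buffer on overflow, which is roughly the BufferedDataOutputStreamPlus / DataOutputBuffer split:

```java
import java.nio.ByteBuffer;

// Hypothetical sketch: BufferedSketch plays the role of
// BufferedDataOutputStreamPlus, GrowingSketch that of DataOutputBuffer.
class BufferedSketch
{
    protected ByteBuffer buffer;

    BufferedSketch(int size) { buffer = ByteBuffer.allocate(size); }

    public void write(int b)
    {
        if (!buffer.hasRemaining())
            overflow();          // buffered behaviour: spill downstream
        buffer.put((byte) b);
    }

    protected void overflow()
    {
        // in the real class this would write the buffer to the channel
        buffer.clear();
    }
}

class GrowingSketch extends BufferedSketch
{
    GrowingSketch(int size) { super(size); }

    @Override
    protected void overflow()    // override: grow instead of spilling
    {
        ByteBuffer bigger = ByteBuffer.allocate(buffer.capacity() * 2);
        buffer.flip();
        bigger.put(buffer);
        buffer = bigger;
    }

    int length() { return buffer.position(); }
}
```

The only behavioural difference between the two classes is what happens when the buffer is exhausted, which is why a single base class suffices.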
I've also stopped SequentialWriter implementing WritableByteChannel, and now
pass in its internal Channel instead, since that is the only way the
operations will benefit. As a follow-up ticket, we should probably move
SequentialWriter to using BufferedDataOutputStreamPlus directly, so that it
can benefit from faster encoding of primitives.
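For context on why exposing the real Channel matters: wrapping an OutputStream as a Channel (the JDK's Channels.newChannel is the standard way to "mock up" one off the other) still funnels every write through the stream, so nothing downstream can take a faster path. A small sketch (the helper name is mine, not from the branch):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.WritableByteChannel;

class ChannelWrapSketch
{
    // Adapts an OutputStream into a WritableByteChannel; the data is still
    // copied through the stream, so callers gain no zero-copy benefit.
    // Handing out the writer's *real* underlying channel avoids this.
    static int writeThroughWrapper(ByteArrayOutputStream out, byte[] data)
    {
        try (WritableByteChannel ch = Channels.newChannel(out))
        {
            ch.write(ByteBuffer.wrap(data));
        }
        catch (IOException e)
        {
            throw new RuntimeException(e);
        }
        return out.size();
    }
}
```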
Let me know what you think of the changes to the hierarchy, and once we've
ironed that out we can move on to the home stretch and confirm the code
changes. One other thing we could consider is dropping the "Plus" from
everything except the interface, since it seems superfluous, and it's all
fairly verbose.
> Large columns + NIO memory pooling causes excessive direct memory usage
> -----------------------------------------------------------------------
>
> Key: CASSANDRA-8670
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8670
> Project: Cassandra
> Issue Type: Bug
> Components: Core
> Reporter: Ariel Weisberg
> Assignee: Ariel Weisberg
> Fix For: 3.0
>
> Attachments: largecolumn_test.py
>
>
> If you provide a large byte array to NIO and ask it to populate the byte
> array from a socket, it will allocate a thread-local byte buffer the size of
> the requested read, no matter how large it is. Old IO wraps new IO for
> sockets (but not files), so old IO is affected as well.
> Even if you are using Buffered{Input | Output}Stream you can end up passing a
> large byte array to NIO: the byte-array read method will pass the array to
> NIO directly if it is larger than the internal buffer.
> Passing large cells between nodes as part of intra-cluster messaging can
> cause the NIO pooled buffers to quickly reach a high watermark and stay
> there. This ends up costing 2x the largest cell size, because there is one
> buffer for input and one for output, since they are used by different
> threads. This is further multiplied by the number of nodes in the cluster
> minus 1, since each node has a dedicated thread pair with separate thread
> locals.
> Anecdotally it appears that the cost is doubled beyond that, although it
> isn't clear why; possibly the control connections, or possibly there is some
> way in which multiple
> Need a workload in CI that tests the advertised limits of cells on a cluster.
> It would be reasonable to ratchet down the max direct memory for the test to
> trigger failures if a memory pooling issue is introduced. I don't think we
> need to test concurrently pulling in a lot of them, but it should at least
> work serially.
> The obvious fix for this issue is to read in smaller chunks when dealing
> with large values. I think "small" should still be relatively large (4
> megabytes) so that code reading from a disk can amortize the cost of a
> seek. In some of the contexts where we might choose to implement switching
> to chunked reads, it can be hard to tell what the underlying source being
> read from will be.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)