[
https://issues.apache.org/jira/browse/CASSANDRA-13630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16137116#comment-16137116
]
Ariel Weisberg commented on CASSANDRA-13630:
--------------------------------------------
I added some comments on
https://github.com/jasobrown/cassandra/commit/2d58ad5f0ca5a63cc0fbead0b9234876d2dbd770#diff-55d5a06a8f012c31e11a06fc3f5bb960R265
At a high level, the thing that worries me most is fan-out message patterns. I
thought the worst-case memory amplification from this NIO approach was 2x
message size, which is worse than our current 1x message size, but it's not:
it's cluster size * message size when a message is fanned out to all nodes in
the cluster. At the barest of bare minimums we need to detect this condition
(large message + fan-out) and log it. But really, I would need to be convinced
that we never send large messages to the entire cluster. Just by the nature of
the problem, serialization is faster than networking, so a large message would
be serialized to all the connections faster than the bytes can be drained out.
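To make the concern concrete, here is a minimal sketch of the "detect and log"
check suggested above. The class name, thresholds, and method names are all
hypothetical, not anything in the patch; it just shows the worst-case math and
the condition worth warning on.

```java
// Hypothetical sketch of a "large message + fan-out" guard; the class,
// thresholds, and method names are assumptions for illustration only.
public class FanOutGuard {
    static final long LARGE_MESSAGE_BYTES = 1L << 20; // 1 MiB, assumed threshold
    static final int FAN_OUT_THRESHOLD = 8;           // assumed recipient count

    /** Worst-case bytes buffered if the message is serialized once per connection. */
    static long worstCaseAmplification(long messageBytes, int recipients) {
        return messageBytes * recipients; // cluster size * message size
    }

    /** The condition the comment says we should at minimum detect and log. */
    static boolean shouldWarn(long messageBytes, int recipients) {
        return messageBytes >= LARGE_MESSAGE_BYTES && recipients >= FAN_OUT_THRESHOLD;
    }

    public static void main(String[] args) {
        long msg = 2L << 20; // a 2 MiB message
        int cluster = 100;   // fanned out to 100 nodes
        // Worst case, ~200 MiB sits in outbound buffers for one logical send.
        System.out.println(FanOutGuard.worstCaseAmplification(msg, cluster));
        System.out.println(FanOutGuard.shouldWarn(msg, cluster));
    }
}
```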
I think you have the gist of it on the receive side, where a thread is forced
to block for large messages. You are also creating a thread per large-message
channel. I really wonder if that could instead be a shared pool of threads,
sized generously. Heck, use the same pool for send and receive.
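A rough sketch of that alternative, under stated assumptions: one shared,
bounded executor handling the blocking I/O for both directions, rather than a
thread per large-message channel. The class name, pool size, and method names
are hypothetical.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch of a shared pool for blocking large-message I/O,
// used for both send and receive; names and sizing are assumptions.
public class LargeMessagePool {
    private final ExecutorService pool;

    LargeMessagePool(int threads) {
        // One fixed, "generously" sized pool instead of a thread per channel.
        this.pool = Executors.newFixedThreadPool(threads);
    }

    /** Submit a blocking send or receive task to the shared pool. */
    Future<?> submitBlockingIo(Runnable task) {
        return pool.submit(task);
    }

    void shutdown() {
        pool.shutdown();
    }

    public static void main(String[] args) throws Exception {
        LargeMessagePool shared = new LargeMessagePool(16);
        Future<?> send = shared.submitBlockingIo(() -> { /* drain bytes to socket */ });
        Future<?> recv = shared.submitBlockingIo(() -> { /* block reading a large message */ });
        send.get();
        recv.get();
        shared.shutdown();
    }
}
```

The upside is a hard cap on thread count regardless of how many large-message
channels exist at once; the trade-off is that a saturated pool delays other
large messages, which is why the comment suggests sizing it generously.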
Looking over the tests now.
> support large internode messages with netty
> -------------------------------------------
>
> Key: CASSANDRA-13630
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13630
> Project: Cassandra
> Issue Type: Task
> Components: Streaming and Messaging
> Reporter: Jason Brown
> Assignee: Jason Brown
> Fix For: 4.0
>
>
> As part of CASSANDRA-8457, we decided to punt on large messages to reduce the
> scope of that ticket. However, we still need that functionality to ship a
> correctly operating internode messaging subsystem.
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)