[ https://issues.apache.org/jira/browse/CASSANDRA-8457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14243008#comment-14243008 ]
Ariel Weisberg commented on CASSANDRA-8457:
-------------------------------------------

bq. To establish if there's likely a benefit to exploit, we could most likely refactor this code comparatively minimally (than rewriting to NIO/Netty) to make use of the SharedExecutorPool to establish if such a positive effect is indeed to be had, as this would reduce the number of threads in flight to those actually serving work on the OTCs. This wouldn't affect the ITC, but I am dubious of their contribution. We should probably also actually test if this is indeed a problem from clusters at scale performing in-memory CL>1 reads.

I wonder what there is to be gained by having a single socket for inbound/outbound?

Running a representative test will take some doing. cstar doesn't support multiple stress clients, and it seems like the clusters only have 3 nodes? This is another argument for getting decent-size performance runs working in CI rather than doing one-off manual tests. Having profiling artifacts collected as part of this would also make performance research and validation easier. I feel pretty underinformed when we discuss what to do next, due to the lack of profiling information and the lack of canonical, repeatable performance data and workloads.

> nio MessagingService
> --------------------
>
>                 Key: CASSANDRA-8457
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-8457
>             Project: Cassandra
>          Issue Type: New Feature
>          Components: Core
>            Reporter: Jonathan Ellis
>            Assignee: Ariel Weisberg
>              Labels: performance
>             Fix For: 3.0
>
>
> Thread-per-peer (actually two each, incoming and outbound) is a big
> contributor to context switching, especially for larger clusters. Let's look
> at switching to nio, possibly via Netty.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
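For illustration, the approach the ticket proposes can be sketched with plain java.nio: a single selector thread services every peer connection, where the current MessagingService spends a dedicated thread (or two) per peer. This is a minimal, hypothetical sketch under stated assumptions; none of the class or variable names below come from the Cassandra codebase.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
import java.util.Iterator;

public class SelectorSketch {
    public static void main(String[] args) throws IOException {
        // One Selector multiplexes all channels onto a single thread,
        // instead of one blocking-I/O thread per peer socket.
        Selector selector = Selector.open();

        // Listening socket on an ephemeral loopback port.
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        // A simulated remote peer connects and sends a small message
        // (blocking mode is fine for the client side of the demo).
        SocketChannel peer = SocketChannel.open(server.getLocalAddress());
        peer.write(ByteBuffer.wrap("ping".getBytes(StandardCharsets.UTF_8)));

        // Single-threaded event loop: accept and read readiness for every
        // registered channel are handled here, with no per-peer threads.
        String received = null;
        while (received == null) {
            selector.select();
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel ch = server.accept();
                    ch.configureBlocking(false);
                    ch.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    ByteBuffer buf = ByteBuffer.allocate(64);
                    ((SocketChannel) key.channel()).read(buf);
                    buf.flip();
                    received = StandardCharsets.UTF_8.decode(buf).toString();
                }
            }
        }
        System.out.println("selector thread got: " + received);
        peer.close();
        server.close();
        selector.close();
    }
}
```

Netty builds the same idea out into event-loop groups with pipelines of handlers, which is why the ticket suggests it as the likely vehicle.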