[ https://issues.apache.org/jira/browse/CASSANDRA-14746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17212457#comment-17212457 ]
Joey Lynch edited comment on CASSANDRA-14746 at 10/12/20, 3:45 PM:
-------------------------------------------------------------------
Hi [~jmckenzie], thanks for pinging on this!

{quote}Was this the goal of the MS rewrite? I have no horse in this race - I just thought the goal of it was to tighten up some of the things that were present / still troublesome after Jason's rewrite of things rather than specifically targeting performance improvements.
{quote}
I think phrasing it as "no regression" is also fine. Our testing so far identified major regressions after both refactors: stability issues after the first (e.g. not delivering mutations across datacenters) and some significant performance regressions, especially around TLS, after the second that had to be fixed up.

{quote}And fwiw, the benchmarks I've seen on 4.0 show a pretty significant improvement in throughput if nothing else, but in terms of bar - no regression for a rewrite seems like a good low water mark to block on.
{quote}
Some of the issues only surfaced on multi-hundred-node clusters spanning multiple datacenters and in various configurations (e.g. TLS on + compression off, compression on + TLS off). I haven't seen very many large-scale tests outside this ticket (most are 6 node clusters with a single datacenter) or tests that span uncommon configurations (e.g. which options are enabled or disabled, token setup, etc...). If you know of any results that are public we can certainly link them to this ticket :) and aggregate all the verification work in one place!

{quote}What do you think about this as acceptance criteria for the work here?
{quote}
I think the three remaining tests identified in the sub-tasks are good enough to call this done from our end, but if there are more test setups from the public let's get those recorded as well:
1. CASSANDRA-14764 - breaking point comparison with 3.0. This will give us good signal on regressions from the 3.0 series with a "typical" multi-DC setup.
2. CASSANDRA-14747 - 200 node cluster with all options disabled (e.g. if someone is using VPC and direct connects to peer their VPCs so they don't have to pay TLS compute costs). This will tell us if we broke the performance of the non-TLS path.
3. CASSANDRA-15181 - Can we successfully stream to nodes, how long does that take, and how long does it take with and without TLS on?

[~jmckenzie] For what it's worth, these kinds of scientific and rigorous tests are hard to run and expensive (in dollars and engineering time), which is probably why they don't usually get run beforehand; e.g. 2.1, 3.0 and 3.11 all failed these kinds of tests, resulting in numerous regression bug reports in the 6-12 months after release. Our hope is that for 4.0 we can invest the time and money ahead of the release instead of after it.
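To illustrate why the coverage discussed above is expensive, here is a hypothetical sketch (not from the ticket; all option names and sizes are made up for illustration) enumerating how quickly the configuration matrix grows once versions, TLS, compression, and cluster sizes are crossed:

```python
# Hypothetical sketch: the cross-product of test dimensions mentioned in the
# discussion above. Each tuple is one cluster configuration to provision and test.
from itertools import product

versions = ["3.0.17", "4.0-alpha"]              # Cassandra versions under comparison
tls = ["tls-on", "tls-off"]                     # internode SSL on or off
compression = ["compress-on", "compress-off"]   # internode compression on or off
cluster_sizes = [6, 50, 100, 200]               # illustrative multi-DC node counts

matrix = list(product(versions, tls, compression, cluster_sizes))
print(len(matrix))  # 2 * 2 * 2 * 4 = 32 distinct cluster configurations
for cfg in matrix[:3]:
    print(cfg)
```

Even before adding workload shapes (BATCH, INSERT, SELECT point, SELECT range) and request rates, a full sweep is dozens of multi-node, multi-datacenter clusters, which is why most published benchmarks cover only a corner of the matrix.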
> Ensure Netty Internode Messaging Refactor is Solid
> --------------------------------------------------
>
>             Key: CASSANDRA-14746
>             URL: https://issues.apache.org/jira/browse/CASSANDRA-14746
>         Project: Cassandra
>      Issue Type: Improvement
>      Components: Legacy/Streaming and Messaging
>        Reporter: Joey Lynch
>        Assignee: Joey Lynch
>        Priority: Normal
>          Labels: 4.0-QA
>         Fix For: 4.0-beta
>
> Before we release 4.0 let's ensure that the internode messaging refactor is
> 100% solid. As internode messaging is naturally used in many code paths and
> widely configurable, we have a large number of cluster configurations and
> test configurations that must be vetted.
> We plan to vary the following:
> * Version of Cassandra: 3.0.17 vs 4.0-alpha
> * Cluster sizes with *multi-dc* deployments ranging from 6 - 100 nodes
> * Client request rates varying between 1k QPS and 100k QPS of varying sizes
>   and shapes (BATCH, INSERT, SELECT point, SELECT range, etc ...)
> * Internode compression
> * Internode SSL (as well as openssl vs jdk)
> * Internode coalescing options
> We are looking to measure the following as appropriate:
> * Latency distributions of reads and writes (lower is better)
> * Scaling limit, aka maximum throughput before violating a p99 latency
>   deadline of 10ms @ LOCAL_QUORUM, on a fixed hardware deployment for 100%
>   writes, 100% reads and 50-50 writes+reads (higher is better)
> * Thread counts (lower is better)
> * Context switches (lower is better)
> * On-CPU time of tasks (longer periods without a context switch are better)
> * GC allocation rates / throughput for a fixed size heap (lower allocation
>   is better)
> * Streaming recovery time for a single node failure, i.e. can Cassandra
>   saturate the NIC
>
> The goal is that 4.0 should have better latency, more throughput, fewer
> threads, fewer context switches, less GC allocation, and faster recovery
> time. I'm putting Jason Brown as the reviewer since he implemented most of
> the internode refactor.
> Current collaborators driving this QA task: Dinesh Joshi, Jordan West,
> Joey Lynch (Netflix), Vinay Chella (Netflix)
> Owning committer(s): Jason Brown

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
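The "scaling limit" acceptance criterion quoted above (maximum throughput before the p99 latency exceeds the 10 ms LOCAL_QUORUM deadline) can be sketched as a small check over benchmark runs. This is a hypothetical illustration, not tooling from the ticket; the sample data and function names are invented:

```python
# Hypothetical sketch: find the highest offered throughput whose p99 latency
# still meets a fixed deadline, per the acceptance criterion described above.

def p99(samples_ms):
    """Nearest-rank 99th percentile of a list of latency samples (ms)."""
    ordered = sorted(samples_ms)
    rank = max(0, int(round(0.99 * len(ordered))) - 1)
    return ordered[rank]

def scaling_limit(runs, deadline_ms=10.0):
    """runs: list of (qps, latency_samples_ms) tuples, ordered by rising QPS.
    Returns the highest QPS whose p99 latency stays under the deadline."""
    limit = 0
    for qps, samples in runs:
        if p99(samples) <= deadline_ms:
            limit = qps
        else:
            break  # latency deadline violated; stop scaling up
    return limit

# Made-up latency samples (ms) at three offered request rates:
runs = [
    (10_000, [2.0] * 99 + [4.0]),
    (50_000, [5.0] * 99 + [9.0]),
    (100_000, [8.0] * 90 + [15.0] * 10),  # p99 breaches the 10 ms deadline
]
print(scaling_limit(runs))  # -> 50000
```

In a real comparison the same sweep would be run against 3.0.17 and 4.0-alpha on identical hardware, and "no regression" means the 4.0 limit is at least as high.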