[
https://issues.apache.org/jira/browse/CASSANDRA-14660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16591758#comment-16591758
]
Benedict edited comment on CASSANDRA-14660 at 8/24/18 2:53 PM:
---------------------------------------------------------------
Thanks. I edited the patch just slightly, to do the efficient copy for both
methods, and committed to
[3.0|https://github.com/apache/cassandra/commit/5c4ce600c4e24a656fd538f14ec5f4951d231e6e],
[3.11|https://github.com/apache/cassandra/commit/68f8966d5bc1a11a50091290b534c0e33903dd4d]
and
[trunk|https://github.com/apache/cassandra/commit/ffde38a2567517da780c0411b0338d5a445ea551].
> Improve TokenMetaData cache populating performance for large cluster
> --------------------------------------------------------------------
>
> Key: CASSANDRA-14660
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14660
> Project: Cassandra
> Issue Type: Improvement
> Components: Coordination
> Environment: Benchmark is on MacOSX 10.13.5, 2017 MBP
> Reporter: Pengchao Wang
> Assignee: Pengchao Wang
> Priority: Critical
> Labels: Performance
> Fix For: 3.0.x, 3.11.x, 4.x
>
> Attachments: 14660-3.0.txt, 14660-trunk.txt,
> TokenMetaDataBenchmark.java
>
>
> TokenMetaData#cachedOnlyTokenMap is the method C* uses to get a consistent
> token and topology view on coordinators without paying the read-lock cost.
> On the first read, the method acquires a synchronized lock, copies the major
> token metadata structures, and caches the copy; on every token metadata
> change (due to gossip), the cache is cleared and the next read repopulates
> it.
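> The cache-on-first-read scheme described above can be sketched roughly as
> follows (a simplified, hypothetical class with a single map; the real
> TokenMetadata maintains several structures):

```java
import java.util.Map;
import java.util.TreeMap;

// Simplified sketch of the caching scheme described above (hypothetical
// names): readers share a cached copy; any topology change invalidates it,
// and the next reader rebuilds it under the same lock writers take.
class CachedViewSketch {
    private final TreeMap<Long, String> tokenToEndpoint = new TreeMap<>();
    private volatile TreeMap<Long, String> cachedView;

    synchronized void update(long token, String endpoint) {
        tokenToEndpoint.put(token, endpoint);
        cachedView = null; // gossip-driven change clears the cache
    }

    Map<Long, String> cachedOnlyView() {
        TreeMap<Long, String> view = cachedView;
        if (view == null) {
            synchronized (this) { // blocks writers and other populating readers
                if (cachedView == null)
                    cachedView = new TreeMap<>(tokenToEndpoint); // the costly copy
                view = cachedView;
            }
        }
        return view;
    }
}
```

> On a large cluster the copy inside the synchronized block is exactly the
> critical section whose duration is measured below.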
> For small to medium clusters this strategy works well, but large clusters
> can suffer from the locking because cache population is much slower. On one
> of our largest clusters (~1000 nodes, 125k tokens, C* 3.0.15) each cache
> population takes about 500~700ms, during which no requests can go through
> because the synchronized lock is held. This causes waves of timeout errors
> whenever a large volume of gossip messages propagates across the cluster,
> such as during a cluster restart.
> Based on profiling, we found that the cost mostly comes from copying
> tokenToEndpointMap. It is a SortedBiMultiValueMap built from a forward map
> backed by a TreeMap and a reverse map backed by a Guava TreeMultimap. TreeMap
> has an optimization that reduces copy complexity from O(N*log(N)) to O(N)
> when copying already-ordered data, but Guava's TreeMultimap misses that
> optimization, making the copy ~10 times slower than it needs to be at our
> cluster size.
> The patch attached to this issue replaces the reverse TreeMultimap<K, V>
> with a vanilla TreeMap<K, TreeSet<V>> in SortedBiMultiValueMap so that it
> can be copied in O(N) time.
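> Assuming the reverse map is a plain TreeMap<K, TreeSet<V>> as in the patch,
> the whole structure can be deep-copied from sorted sources in linear time.
> A sketch of the idea (not the patch itself):

```java
import java.util.TreeMap;
import java.util.TreeSet;

// Sketch (not the actual patch) of why TreeMap<K, TreeSet<V>> copies in O(N):
// both TreeMap(SortedMap) and TreeSet(SortedSet) rebuild directly from
// already-sorted input instead of performing N individual O(log N) inserts.
class ReverseMapCopy {
    static TreeMap<String, TreeSet<Long>> deepCopy(TreeMap<String, TreeSet<Long>> src) {
        TreeMap<String, TreeSet<Long>> dst = new TreeMap<>(src); // linear structural copy
        // The copy above still shares the value sets; copy each one (also linear).
        dst.replaceAll((endpoint, tokens) -> new TreeSet<>(tokens));
        return dst;
    }
}
```

> Guava's TreeMultimap, by contrast, rebuilds its per-key collections
> entry by entry, which is where the extra log factor comes from.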
> I also attached a benchmark script (TokenMetaDataBenchmark.java), which
> simulates a large cluster and then measures the average latency of
> TokenMetaData cache population.
> Benchmark result before and after that patch:
> {code:java}
> trunk:
> before 100ms, after 13ms
> 3.0.x:
> before 199ms, after 15ms
> {code}
> (On 3.0.x even the forward TreeMap copy is slow: the O(N*log(N)) to O(N)
> optimization does not apply because the key comparator is created
> dynamically, so TreeMap cannot determine that the source and destination are
> in the same order.)
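> A minimal illustration of that comparator check (the map contents here are
> hypothetical, and the difference shows up only in timing, not in the
> result): TreeMap.putAll takes the linear buildFromSorted path only when the
> destination's comparator equals the source's, so a comparator instantiated
> fresh each time (with no equals override) silently degrades the copy to N
> individual put() calls.

```java
import java.util.Comparator;
import java.util.TreeMap;

class ComparatorIdentity {
    public static void main(String[] args) {
        Comparator<Integer> c1 = Comparator.naturalOrder(); // shared instance
        TreeMap<Integer, String> src = new TreeMap<>(c1);
        for (int i = 0; i < 5; i++) src.put(i, "n" + i);

        TreeMap<Integer, String> fast = new TreeMap<>(c1);  // same comparator:
        fast.putAll(src);                                   // linear bulk-build path

        TreeMap<Integer, String> slow = new TreeMap<>((a, b) -> a.compareTo(b));
        slow.putAll(src);            // unequal comparator: falls back to
                                     // per-entry puts, O(N*log(N))

        System.out.println(fast.equals(slow)); // prints true: same contents either way
    }
}
```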
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]