[
https://issues.apache.org/jira/browse/CASSANDRA-10855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15086257#comment-15086257
]
Robert Stupp commented on CASSANDRA-10855:
------------------------------------------
It would help if we had some more workloads available as cassandra-stress
profiles. Could you help with that?
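For anyone picking this up, here is a rough sketch of what such a cassandra-stress
user profile could look like. The keyspace, {{trades}} schema, and distributions
below are purely illustrative assumptions, not the actual workload definitions
used in the runs above:
{code:yaml}
# Hypothetical cassandra-stress user profile (illustrative only).
keyspace: trades_stress
keyspace_definition: |
  CREATE KEYSPACE trades_stress
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};

table: trades
table_definition: |
  CREATE TABLE trades (
    symbol text,
    ts timestamp,
    price double,
    volume bigint,
    PRIMARY KEY (symbol, ts)
  ) WITH compaction = {'class': 'LeveledCompactionStrategy'}
    AND compression = {'enabled': false};

columnspec:
  - name: symbol
    size: fixed(8)
    population: uniform(1..5000)   # number of distinct partitions touched
  - name: ts
    cluster: uniform(1..1000)      # rows per partition

insert:
  partitions: fixed(1)
  select: fixed(1)/1000
  batchtype: UNLOGGED

queries:
  read_trade:
    cql: select * from trades where symbol = ? limit 100
    fields: samerow
{code}
Such a profile would then be driven with something like
{{cassandra-stress user profile=trades.yaml ops(insert=1,read_trade=3)}} to get a
mixed read/write load.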
BTW, I have to partly correct my last statement: _trades-fwd-lcs-nolz4_ has the
99.x% key-cache hit ratio (using 10% of the key cache's capacity), and
_regression r/w_ has the poor ~10% hit ratio. (Sorry for the confusion, I
accidentally swapped the console log files when reading the hit ratios.)
_trades-fwd-lcs-nolz4_ [Operation
1|http://cstar.datastax.com/graph?command=one_job&stats=6fcb6cbc-aafa-11e5-947f-0256e416528f&metric=op_rate&operation=1_user&smoothing=1&show_aggregates=true&xmin=0&xmax=840.18&ymin=0&ymax=145796.2]
(mixed writes+reads) gives a nice perf improvement during the first seconds
but then levels off to match the existing implementation. [Operations
2|http://cstar.datastax.com/graph?command=one_job&stats=6fcb6cbc-aafa-11e5-947f-0256e416528f&metric=op_rate&operation=2_user&smoothing=1&show_aggregates=true&xmin=0&xmax=62.81&ymin=0&ymax=211599.3],
4 and 6 (all read-only) show either a slight regression or no difference.
OTOH, the _cassci regression test r/w_ shows a perf regression of about 5%.
> Use Caffeine (W-TinyLFU) for on-heap caches
> -------------------------------------------
>
> Key: CASSANDRA-10855
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10855
> Project: Cassandra
> Issue Type: Improvement
> Reporter: Ben Manes
> Labels: performance
>
> Cassandra currently uses
> [ConcurrentLinkedHashMap|https://code.google.com/p/concurrentlinkedhashmap]
> for performance-critical caches (key, counter) and Guava's cache for
> non-critical ones (auth, metrics, security). All of these usages have been
> replaced by [Caffeine|https://github.com/ben-manes/caffeine], written by the
> author of the previously mentioned libraries.
> The primary incentive is to switch from the LRU policy to W-TinyLFU, which
> provides [near optimal|https://github.com/ben-manes/caffeine/wiki/Efficiency]
> hit rates. It performs particularly well in database and search traces, is
> scan resistant, and adds only a very small time/space overhead on top of LRU.
> Secondarily, Guava's caches never obtained similar
> [performance|https://github.com/ben-manes/caffeine/wiki/Benchmarks] to CLHM
> due to some optimizations not being ported over. This change results in
> faster reads without creating garbage as a side effect.
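To make the proposed API concrete, here is a minimal, self-contained sketch of a
weight-bounded Caffeine cache roughly in the spirit of the key cache. The
key/value types, weights and capacity are illustrative assumptions on my part,
not the actual patch:
{code:java}
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;

public class CaffeineKeyCacheSketch {
    public static void main(String[] args) {
        // Weight-bounded cache; eviction uses the W-TinyLFU policy that Caffeine
        // applies by default. Key/value types and sizes are placeholders.
        Cache<String, byte[]> keyCache = Caffeine.newBuilder()
                .maximumWeight(64L * 1024 * 1024)          // ~64 MiB budget (illustrative)
                .weigher((String key, byte[] value) -> key.length() + value.length)
                .executor(Runnable::run)                   // run maintenance on the calling thread
                .recordStats()                             // hit/miss counters for metrics
                .build();

        keyCache.put("sstable-1:partition-42", new byte[128]);
        byte[] entry = keyCache.getIfPresent("sstable-1:partition-42");
        System.out.println((entry != null ? "hit; " : "miss; ") + keyCache.stats());
    }
}
{code}
Whether the actual patch pins maintenance to the calling thread via
{{executor(Runnable::run)}} or leaves it on the default ForkJoinPool is a detail
of the patch itself; the sketch only shows the general builder shape.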