[ https://issues.apache.org/jira/browse/CASSANDRA-9754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15565823#comment-15565823 ]

Michael Kjellman commented on CASSANDRA-9754:
---------------------------------------------

Stable all night! My large test partitions have grown to ~12.5GB and it's just as stable 
-- latencies are unchanged. I'm so happy!!! ~7ms average p99.9th and ~925 
microsecond average read latency. GC is basically non-existent -- and for what GC 
is happening, the instances are averaging 111 microsecond ParNew collections 
-- almost NO CMS! Compaction is keeping up.

On the converse side, the control 2.1 cluster running the same load has 
instances OOMing left and right -- CMS is frequently running 250 ms 
collections, and ParNew is running 1.28 times a second on average with 75 ms 
average ParNew times. Horrible! And that's the average -- the upper percentiles are 
a mess, so I won't bore everyone. Read latencies are currently 380 ms on average, 
with many 15 *second* read latencies at the p99.9.



> Make index info heap friendly for large CQL partitions
> ------------------------------------------------------
>
>                 Key: CASSANDRA-9754
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-9754
>             Project: Cassandra
>          Issue Type: Improvement
>            Reporter: sankalp kohli
>            Assignee: Michael Kjellman
>            Priority: Minor
>             Fix For: 4.x
>
>         Attachments: 9754_part1-v1.diff, 9754_part2-v1.diff
>
>
>  Looking at a heap dump of a 2.0 cluster, I found that the majority of the objects 
> are IndexInfo and its ByteBuffers. This is especially bad on endpoints with 
> large CQL partitions. If a CQL partition is, say, 6.4GB, it will have 100K 
> IndexInfo objects and 200K ByteBuffers. This creates a lot of churn for 
> GC. Can this be improved by not creating so many objects?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
