[ 
https://issues.apache.org/jira/browse/CASSANDRA-9754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15564840#comment-15564840
 ] 

Michael Kjellman commented on CASSANDRA-9754:
---------------------------------------------

I fixed the last (pretty nasty) bug today/tonight! The issue was in 
IndexedSliceReader#IndexedBlockFetcher, where I was failing to properly 
initialize a new iterator to the given start of the slice for the read query. 
This caused every read to iterate over all indexed entries every time. 
Fortunately, that bug brought some performance concerns in the underlying read 
logic to my attention, which I also addressed while I thought they were the 
root cause.
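
To illustrate the bug class (a minimal, self-contained sketch only -- 
IndexBlock, blocksForSliceBuggy/blocksForSliceFixed and the 64KB block spacing 
are made-up stand-ins, not the actual IndexedSliceReader/IndexedBlockFetcher 
code): if the block fetcher's iterator never gets positioned at the start of 
the slice, a read near the end of a big partition still walks every index 
entry.

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class BlockFetcherSketch
{
    // Hypothetical stand-in for one indexed block: the last clustering
    // position it covers and its offset in the data file.
    static final class IndexBlock
    {
        final long lastPosition;
        final long offset;

        IndexBlock(long lastPosition, long offset)
        {
            this.lastPosition = lastPosition;
            this.offset = offset;
        }
    }

    private final List<IndexBlock> blocks;

    BlockFetcherSketch(List<IndexBlock> blocks)
    {
        this.blocks = blocks;
    }

    // Buggy shape: the iterator is never positioned at the slice start, so a
    // slice read near the end of the partition still visits every entry.
    Iterator<IndexBlock> blocksForSliceBuggy(long sliceStart)
    {
        return blocks.iterator();
    }

    // Fixed shape: binary-search for the first block whose lastPosition covers
    // sliceStart and start iterating there, skipping everything before it.
    Iterator<IndexBlock> blocksForSliceFixed(long sliceStart)
    {
        int lo = 0, hi = blocks.size();
        while (lo < hi)
        {
            int mid = (lo + hi) >>> 1;
            if (blocks.get(mid).lastPosition < sliceStart)
                lo = mid + 1;
            else
                hi = mid;
        }
        return blocks.listIterator(lo);
    }

    // Count how many index entries a reader visits before reaching the block
    // that actually covers the slice start.
    static int touched(Iterator<IndexBlock> it, long sliceStart)
    {
        int n = 0;
        while (it.hasNext())
        {
            n++;
            if (it.next().lastPosition >= sliceStart)
                break;
        }
        return n;
    }

    public static void main(String[] args)
    {
        // 100K blocks spaced 64 "positions" apart, i.e. a very wide partition.
        List<IndexBlock> index = new ArrayList<>();
        for (int i = 0; i < 100_000; i++)
            index.add(new IndexBlock((i + 1) * 64L, i * 65_536L));

        BlockFetcherSketch fetcher = new BlockFetcherSketch(index);
        long sliceStart = 6_000_000L;
        System.out.println("entries touched (buggy): " + touched(fetcher.blocksForSliceBuggy(sliceStart), sliceStart));
        System.out.println("entries touched (fixed): " + touched(fetcher.blocksForSliceFixed(sliceStart), sliceStart));
    }
}

The actual fix is against the real index structures, of course, but the effect 
is the same: initialize the iterator to the slice start once, so only the 
blocks the slice actually covers get touched.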

I'm currently running my performance/stress load in three separate performance 
clusters: two with a build that has Birch and one control running stock 2.1.16. 
I'm currently pushing 700 reads/sec per instance and 1.5k writes/sec.

Read Latencies in both Birch perf clusters are showing (at the storage proxy 
level) an average of 838 microseconds and only 7.4 milliseconds at the p99.9th!
Write Latencies in both Birch perf clusters are showing (at the storage proxy 
level) an average of 138 microseconds and 775 microseconds at the p99.9th!

There is basically no GC to speak of, and latencies have been very stable for 
the past hour, ever since I restarted the load with the iterator fix mentioned 
above.

The best thing about all these stats is that many of the reads are hitting rows 
that are (currently) 8.5GB! The control cluster's latencies are 7-8x the Birch 
clusters' so far, its GC is out of control, and instances are starting to 
constantly OOM. It's hard to compare anything against the control cluster, as 
things start to fall apart very significantly once the test CQL partitions 
grow above ~4GB.... eek.
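
For a rough sense of why the control cluster falls over at these partition 
sizes (back-of-the-envelope only, assuming the default 64KB 
column_index_size_in_kb and the two ByteBuffers per IndexInfo from the issue 
description below), the index metadata the pre-Birch read path can end up 
materializing on heap grows linearly with partition size:

public class IndexInfoEstimate
{
    public static void main(String[] args)
    {
        // Assumptions (not measured): default 64KB column_index_size_in_kb,
        // and two ByteBuffers (first/last name) per IndexInfo entry.
        long indexIntervalBytes = 64L * 1024;
        long gb = 1L << 30;
        long[] partitionSizes = { 4 * gb, (long) (8.5 * gb), 50 * gb };

        for (long partitionBytes : partitionSizes)
        {
            long indexInfoCount = partitionBytes / indexIntervalBytes;
            long byteBufferCount = indexInfoCount * 2;
            System.out.printf("%5.1f GB partition -> ~%,d IndexInfo, ~%,d ByteBuffers%n",
                              partitionBytes / (double) gb, indexInfoCount, byteBufferCount);
        }
    }
}

At 50GB that works out to on the order of 800K IndexInfo objects plus 1.6M 
ByteBuffers for a single partition's index, which is consistent with the GC 
pain on the control cluster.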

I'm going to let the load continue overnight to grow the partitions larger (I'm 
targeting 50GB for this first performance milestone). 

It's pretty hard to not be happy when you see these numbers. This could end up 
being very, very epic for our little project. I'm *pretty*, pretty, pretty 
(okay, *really*) happy tonight!!

> Make index info heap friendly for large CQL partitions
> ------------------------------------------------------
>
>                 Key: CASSANDRA-9754
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-9754
>             Project: Cassandra
>          Issue Type: Improvement
>            Reporter: sankalp kohli
>            Assignee: Michael Kjellman
>            Priority: Minor
>             Fix For: 4.x
>
>         Attachments: 9754_part1-v1.diff, 9754_part2-v1.diff
>
>
>  Looking at a heap dump of a 2.0 cluster, I found that the majority of the 
> objects are IndexInfo and its ByteBuffers. This is especially bad in 
> endpoints with large CQL partitions. If a CQL partition is, say, 6.4GB, it 
> will have 100K IndexInfo objects and 200K ByteBuffers. This will create a lot 
> of churn for GC. Can this be improved by not creating so many objects?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
