[ https://issues.apache.org/jira/browse/CASSANDRA-9754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15583224#comment-15583224 ]

Michael Kjellman commented on CASSANDRA-9754:
---------------------------------------------

Here is cfstats from one of the instances.

{code}
Keyspace: test_keyspace
        Read Count: 114179492
        Read Latency: 1.6377607135701742 ms.
        Write Count: 662747473
        Write Latency: 0.030130128499184786 ms.
        Pending Flushes: 0
                Table: largetext1
                SSTable count: 26
                SSTables in each level: [0, 3, 7, 8, 8, 0, 0, 0, 0]
                Space used (live): 434883821719
                Space used (total): 434883821719
                Space used by snapshots (total): 0
                Off heap memory used (total): 67063584
                SSTable Compression Ratio: 0.7882047641965452
                Number of keys (estimate): 14
                Memtable cell count: 58930
                Memtable data size: 25518748
                Memtable off heap memory used: 0
                Memtable switch count: 3416
                Local read count: 71154231
                Local read latency: 2.468 ms
                Local write count: 410631676
                Local write latency: 0.030 ms
                Pending flushes: 0
                Bloom filter false positives: 0
                Bloom filter false ratio: 0.00000
                Bloom filter space used: 496
                Bloom filter off heap memory used: 288
                Index summary off heap memory used: 1144
                Compression metadata off heap memory used: 67062152
                Compacted partition minimum bytes: 20924301
                Compacted partition maximum bytes: 91830775932
                Compacted partition mean bytes: 19348020195
                Average live cells per slice (last five minutes): 0.9998001524322566
                Maximum live cells per slice (last five minutes): 1.0
                Average tombstones per slice (last five minutes): 0.0
                Maximum tombstones per slice (last five minutes): 0.0

                Table: largeuuid1
                SSTable count: 59
                SSTables in each level: [1, 10, 48, 0, 0, 0, 0, 0, 0]
                Space used (live): 9597872057
                Space used (total): 9597872057
                Space used by snapshots (total): 0
                Off heap memory used (total): 3960428
                SSTable Compression Ratio: 0.2836031289299396
                Number of keys (estimate): 27603
                Memtable cell count: 228244
                Memtable data size: 7874514
                Memtable off heap memory used: 0
                Memtable switch count: 521
                Local read count: 18463741
                Local read latency: 0.271 ms
                Local write count: 108570121
                Local write latency: 0.031 ms
                Pending flushes: 0
                Bloom filter false positives: 0
                Bloom filter false ratio: 0.00000
                Bloom filter space used: 22008
                Bloom filter off heap memory used: 21536
                Index summary off heap memory used: 11308
                Compression metadata off heap memory used: 3927584
                Compacted partition minimum bytes: 42511
                Compacted partition maximum bytes: 4866323
                Compacted partition mean bytes: 1290148
                Average live cells per slice (last five minutes): 0.9992537806937392
                Maximum live cells per slice (last five minutes): 1.0
                Average tombstones per slice (last five minutes): 0.0
                Maximum tombstones per slice (last five minutes): 0.0

                Table: timeuuid1
                SSTable count: 7
                SSTables in each level: [0, 1, 3, 3, 0, 0, 0, 0, 0]
                Space used (live): 103161816378
                Space used (total): 103161816378
                Space used by snapshots (total): 0
                Off heap memory used (total): 13820716
                SSTable Compression Ratio: 0.9105016396444802
                Number of keys (estimate): 6
                Memtable cell count: 150596
                Memtable data size: 41378801
                Memtable off heap memory used: 0
                Memtable switch count: 1117
                Local read count: 24561527
                Local read latency: 0.264 ms
                Local write count: 143545778
                Local write latency: 0.033 ms
                Pending flushes: 0
                Bloom filter false positives: 0
                Bloom filter false ratio: 0.00000
                Bloom filter space used: 128
                Bloom filter off heap memory used: 72
                Index summary off heap memory used: 308
                Compression metadata off heap memory used: 13820336
                Compacted partition minimum bytes: 25109161
                Compacted partition maximum bytes: 76525646610
                Compacted partition mean bytes: 13722083374
                Average live cells per slice (last five minutes): 0.9993586310818542
                Maximum live cells per slice (last five minutes): 1.0
                Average tombstones per slice (last five minutes): 0.0
                Maximum tombstones per slice (last five minutes): 0.0
{code}
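
For scale: assuming the default column_index_size_in_kb of 64 (the setting itself isn't shown above), the 91,830,775,932-byte maximum compacted partition in largetext1 would be covered by roughly 91,830,775,932 / 65,536 ≈ 1.4 million IndexInfo entries, and about twice that many ByteBuffers, which is the kind of per-partition index volume this ticket is concerned with.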

> Make index info heap friendly for large CQL partitions
> ------------------------------------------------------
>
>                 Key: CASSANDRA-9754
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-9754
>             Project: Cassandra
>          Issue Type: Improvement
>            Reporter: sankalp kohli
>            Assignee: Michael Kjellman
>            Priority: Minor
>             Fix For: 4.x
>
>         Attachments: gc_collection_times_with_birch.png, 
> gc_collection_times_without_birch.png, gc_counts_with_birch.png, 
> gc_counts_without_birch.png, 
> perf_cluster_1_with_birch_read_latency_and_counts.png, 
> perf_cluster_1_with_birch_write_latency_and_counts.png, 
> perf_cluster_2_with_birch_read_latency_and_counts.png, 
> perf_cluster_2_with_birch_write_latency_and_counts.png, 
> perf_cluster_3_without_birch_read_latency_and_counts.png, 
> perf_cluster_3_without_birch_write_latency_and_counts.png
>
>
>  Looking at a heap dump of a 2.0 cluster, I found that the majority of the 
> objects are IndexInfo and its ByteBuffers. This is especially bad on endpoints 
> with large CQL partitions. If a CQL partition is, say, 6.4GB, it will have 100K 
> IndexInfo objects and 200K ByteBuffers. This creates a lot of churn for the GC. 
> Can this be improved by not creating so many objects?
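
As a minimal sketch of where the 100K / 200K figures come from, assuming the default column_index_size_in_kb of 64 and that each IndexInfo carries two ByteBuffers (the first and last clustering names of its index block), the arithmetic is just partition size divided by the index interval. The class below is illustrative only, not Cassandra code:

{code}
// Illustrative estimate only (not Cassandra code). Assumes one IndexInfo per
// column_index_size_in_kb (default 64KB) of serialized partition data, and two
// ByteBuffers (firstName/lastName) per IndexInfo.
public final class IndexInfoEstimate
{
    private static final long INDEX_INTERVAL_BYTES = 64L * 1024; // column_index_size_in_kb default

    static long indexInfoCount(long partitionSizeBytes)
    {
        // one index entry per 64KB block of the partition, rounded up
        return (partitionSizeBytes + INDEX_INTERVAL_BYTES - 1) / INDEX_INTERVAL_BYTES;
    }

    static long byteBufferCount(long partitionSizeBytes)
    {
        // two ByteBuffers (first and last name) per IndexInfo
        return 2 * indexInfoCount(partitionSizeBytes);
    }

    public static void main(String[] args)
    {
        long described = 6_400_000_000L;      // the ~6.4GB partition from the description
        long observedMax = 91_830_775_932L;   // largetext1's compacted partition maximum above
        System.out.printf("6.4GB  -> ~%,d IndexInfo, ~%,d ByteBuffers%n",
                          indexInfoCount(described), byteBufferCount(described));
        System.out.printf("91.8GB -> ~%,d IndexInfo, ~%,d ByteBuffers%n",
                          indexInfoCount(observedMax), byteBufferCount(observedMax));
    }
}
{code}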



