[ 
https://issues.apache.org/jira/browse/CASSANDRA-7184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-7184:
----------------------------------------
    Component/s: Compaction

> improvement  of  SizeTieredCompaction
> -------------------------------------
>
>                 Key: CASSANDRA-7184
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-7184
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Compaction
>            Reporter: Jianwei Zhang
>            Assignee: Jianwei Zhang
>            Priority: Minor
>              Labels: compaction
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> 1. In our usage scenario, there are no duplicate inserts and no deletes. The 
> data grows continuously, and some large sstables are generated (100 GB, for 
> example). We don't want these sstables to participate in 
> SizeTieredCompaction any more, so we added a max threshold, set to 
> 100 GB: sstables larger than the threshold are not compacted. Should this 
> strategy be added to trunk?
> 2. In our usage scenario, hundreds of sstables may need to be compacted in 
> a major compaction, with a total size of up to 5 TB. During the 
> compaction, when the bytes written reach a configured threshold (200 GB, for 
> example), the writer switches to a new sstable. This avoids generating 
> overly large sstables, which have some bad consequences: 
>  (1) An sstable can grow larger than the capacity of a disk;
>  (2) If an sstable is corrupted, many objects are affected.
> Should this strategy be added to trunk?
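The two proposals above can be sketched as follows. This is a minimal illustration with hypothetical names, not actual Cassandra internals: a max-size cutoff that excludes oversized sstables from size-tiered candidate selection, and a per-output cap at which the compaction writer would roll to a new sstable.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the two proposed thresholds (illustrative names only).
public class CompactionThresholdSketch {
    static final long MAX_INPUT_SSTABLE_BYTES  = 100L * 1024 * 1024 * 1024; // 100 GB
    static final long MAX_OUTPUT_SSTABLE_BYTES = 200L * 1024 * 1024 * 1024; // 200 GB

    // Proposal 1: drop sstables above the cap from the compaction candidates,
    // so very large sstables never re-enter size-tiered buckets.
    static List<Long> filterCandidates(List<Long> sstableSizes) {
        List<Long> kept = new ArrayList<>();
        for (long size : sstableSizes)
            if (size <= MAX_INPUT_SSTABLE_BYTES)
                kept.add(size);
        return kept;
    }

    // Proposal 2: if the writer rolls to a new sstable every time it has
    // written MAX_OUTPUT_SSTABLE_BYTES, a compaction of totalBytes produces
    // this many output sstables (ceiling division).
    static long outputSstableCount(long totalBytes) {
        return (totalBytes + MAX_OUTPUT_SSTABLE_BYTES - 1) / MAX_OUTPUT_SSTABLE_BYTES;
    }
}
```

For example, a 5 TB major compaction with a 200 GB output cap would roll into 26 output sstables instead of one 5 TB file, so no single output exceeds a disk and a corrupted sstable affects only a fraction of the data.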



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
