Since you moved from size-tiered compaction, all your sstables are in L0, so
you might be hitting this. Copied from the code:
// LevelDB gives each level a score of how much data it contains vs its
// ideal amount, and compacts the level with the highest score. But this
// falls apart spectacularly once you get behind. Consider this set of
// levels:
// L0: 988 [ideal: 4]
// L1: 117 [ideal: 10]
// L2: 12  [ideal: 100]
//
// The problem is that L0 has a much higher score (almost 250) than L1 (11),
// so what we'll do is compact a batch of MAX_COMPACTING_L0 sstables with
// all 117 L1 sstables, and put the result (say, 120 sstables) in L1. Then
// we'll compact the next batch of MAX_COMPACTING_L0, and so forth. So we
// spend most of our i/o rewriting the L1 data with each batch.
//
// If we could just do *all* L0 a single time with L1, that would be ideal.
// But we can't -- see the javadoc for MAX_COMPACTING_L0.
//
// LevelDB's way around this is to simply block writes if L0 compaction
// falls behind. We don't have that luxury.
//
// So instead, we
// 1) force compacting higher levels first, which minimizes the i/o needed
//    to compact optimally, which gives us a long-term win, and
// 2) if L0 falls behind, we will size-tiered compact it to reduce read
//    overhead until we can catch up on the higher levels.
//
// This isn't a magic wand -- if you are consistently writing too fast for
// LCS to keep up, you're still screwed. But if instead you have
// intermittent bursts of activity, it can help a lot.
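
To make that concrete, here is a rough Java sketch of the scoring logic the
comment describes, using the example counts above. The names (Level,
naivePick, preferHigherLevels) are mine for illustration, not Cassandra's
actual classes:

import java.util.Arrays;
import java.util.List;

class Level {
    final int index;      // 0 = L0
    final long sstables;  // sstables currently in this level
    final long ideal;     // ideal sstable count for this level

    Level(int index, long sstables, long ideal) {
        this.index = index;
        this.sstables = sstables;
        this.ideal = ideal;
    }

    double score() {
        return (double) sstables / ideal;
    }
}

public class ScoreDemo {

    // Naive LevelDB-style choice: take the highest score. With the counts
    // above this always picks L0, which is the failure mode the comment
    // describes.
    static Level naivePick(List<Level> levels) {
        Level best = levels.get(0);
        for (Level l : levels)
            if (l.score() > best.score())
                best = l;
        return best;
    }

    // The workaround (point 1 above): among levels that are over their
    // ideal size, prefer the highest one, so the repeated L0 -> L1 batches
    // later run against an already-slimmed L1.
    static Level preferHigherLevels(List<Level> levels) {
        Level chosen = null;
        for (Level l : levels)
            if (l.score() > 1.0 && (chosen == null || l.index > chosen.index))
                chosen = l;
        return chosen;
    }

    public static void main(String[] args) {
        List<Level> levels = Arrays.asList(
                new Level(0, 988, 4),    // score ~247
                new Level(1, 117, 10),   // score ~12
                new Level(2, 12, 100));  // score 0.12, under ideal
        System.out.println("naive pick:    L" + naivePick(levels).index);
        System.out.println("prefer higher: L" + preferHigherLevels(levels).index);
    }
}

Running it prints L0 for the naive pick and L1 for the workaround; point 2
(the size-tiered fallback within L0) is layered on top of this ordering in
the real implementation.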


On Tue, Jul 9, 2013 at 3:23 PM, PARASHAR, BHASKARJYA JAY <bp1...@att.com> wrote:

>  Thanks Sankalp… I will look at these.
>
> From: sankalp kohli [mailto:kohlisank...@gmail.com]
> Sent: Tuesday, July 09, 2013 3:22 PM
> To: user@cassandra.apache.org
> Subject: Re: Leveled Compaction, number of SStables growing.
>
> Do you have a lot of sstables in L0?
>
> Since you moved from size-tiered compaction with a lot of data, it will
> take time for it to compact.
>
> You might want to increase the compaction settings to speed it up.
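> (For example, compaction_throughput_mb_per_sec in cassandra.yaml, or
> nodetool setcompactionthroughput at runtime; which knobs apply depends on
> your version.)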
>
> On Tue, Jul 9, 2013 at 12:33 PM, PARASHAR, BHASKARJYA JAY <bp1...@att.com>
> wrote:
>
> Thanks Jake. Guess we will have to increase the size.
>
>
>
> From: Jake Luciani [mailto:jak...@gmail.com]
> Sent: Tuesday, July 09, 2013 2:05 PM
> To: user
> Subject: Re: Leveled Compaction, number of SStables growing.
>
>
> We run with 128 MB; some run with 256 MB. Leveled compaction creates
> fixed-size sstables by design, so this is the only way to lower the file
> count.
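> (In CQL3 this is the sstable_size_in_mb sub-option of the table's
> compaction settings; exact syntax depends on the Cassandra version.)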
>
>
> On Tue, Jul 9, 2013 at 2:56 PM, PARASHAR, BHASKARJYA JAY <bp1...@att.com>
> wrote:
>
> Hi,
>
> We recently switched from size-tiered compaction to leveled compaction. We
> made this change because our rows are frequently updated. We also have a
> lot of data.
>
> With size-tiered compaction, we had about 5-10 sstables per CF, so with
> about 15 CFs we had about 100 sstables.
>
> With the sstable default size of 5 MB, after leveled compaction we now
> have about 130k sstables, and the number is growing as the writes
> increase. There are a lot of compaction jobs pending.
>
> If we increase the sstable size to 20 MB, that will be about 30k sstables,
> but it's still a lot.
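> (A rough back-of-envelope: 130k sstables at ~5 MB each is on the order of
> 650 GB of data, so a 20 MB sstable size would give roughly 32k files, and
> 128 MB roughly 5k.)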
>
>
> Is this common? Any solutions or hints on reducing the sstables are
> welcome.
>
> Thanks
>
> -Jay
>
> --
> http://twitter.com/tjake
>
