Welcome to the wonderland of the SSTable size for LCS. There is some discussion 
around it, but no guidelines yet. 

I asked people on IRC; someone is running it as high as 128M in production with 
no problems. I guess you have to test it on your own system and see how it 
performs. 
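
If you want to experiment, the change itself is just a schema tweak. Roughly 
(cassandra-cli on 1.1.x, from memory, so please verify against your version; 
the KS/CF names are just placeholders): 

[default@KS] update column family CF 
    with compaction_strategy = 'LeveledCompactionStrategy' 
    and compaction_strategy_options = {sstable_size_in_mb: 10}; 

As far as I know, only sstables written by later flushes/compactions come out 
at the new size. 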

Attached is the related thread for your reference.

-Wei

----- Original Message -----
From: "Andras Szerdahelyi" <andras.szerdahe...@ignitionone.com>
To: user@cassandra.apache.org
Sent: Wednesday, March 27, 2013 1:19:06 AM
Subject: Re: bloom filter fp ratio of 0.98 with fp_chance of 0.01


Aaron, 

> What version are you using ? 

1.1.9 

> Have you changed the bf_chance ? The sstables need to be rebuilt for it to 
> take effect. 

I did ( several times ) and I ran upgradesstables afterwards. 

> Not sure what this means. 
> Are you saying it's in a boat on a river, with tangerine trees and marmalade 
> skies ? 

You nailed it. A significant number of reads are done from hundreds of 
sstables. ( I have to add, compaction is apparently constantly 6000-7000 tasks 
behind, and the vast majority of the reads access recently written data. ) 

> Take a look at the nodetool cfhistograms to get a better idea of the row size 
> and use that info when considering the sstable size. 

It's around 1-20K. What should I optimise the LCS sstable size for? I suppose 
"I want to fit as many complete rows as possible into a single sstable to keep 
the file count down, while avoiding compactions of oversized ( double-digit 
gigabytes? ) sstables at higher levels"? 
Do I have to run a major compaction after a change to sstable_size_in_mb ? The 
larger sstable size wouldn't really affect sstables on levels above L0 , would 
it? 
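
( Back-of-envelope, using the mean compacted row size of ~3 KB from cfstats: 
a 5 MB sstable holds roughly 1,700 rows, while a 128 MB sstable holds roughly 
44,000. ) 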






Thanks!! 
Andras 

From: aaron morton <aa...@thelastpickle.com> 
Reply-To: "user@cassandra.apache.org" <user@cassandra.apache.org> 
Date: Tuesday 26 March 2013 21:46 
To: "user@cassandra.apache.org" <user@cassandra.apache.org> 
Subject: Re: bloom filter fp ratio of 0.98 with fp_chance of 0.01 

What version are you using ? 
1.2.0 allowed a null bf chance, and I think it returned .1 for LCS and .01 for 
STS compaction. 
Have you changed the bf_chance ? The sstables need to be rebuilt for it to 
take effect. 

> and sstables read is in the skies 

Not sure what this means. 
Are you saying it's in a boat on a river, with tangerine trees and marmalade 
skies ? 

> SSTable count: 22682 

Lots of files there; I imagine this would dilute the effectiveness of the key 
cache, since it is caching (sstable, key) tuples. 
You may want to look at increasing the sstable_size with LCS. 
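
( A quick way to see how many files a CF is carrying, assuming the usual 1.1 
data layout; adjust the path to your install: 

  ls /var/lib/cassandra/data/KS/CF/ | wc -l ) 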





> Compacted row minimum size: 104 
> Compacted row maximum size: 263210 
> Compacted row mean size: 3041 

Take a look at the nodetool cfhistograms output to get a better idea of the 
row size distribution and use that info when considering the sstable size. 
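
( Something like the following, using the KS/CF names from your describe 
output; the "Row Size" column gives the distribution, and the "SSTables" 
column shows how many sstables each read touched: 

  nodetool -h localhost cfhistograms KS CF ) 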


Cheers 

----------------- 
Aaron Morton 
Freelance Cassandra Consultant 
New Zealand 


@aaronmorton 
http://www.thelastpickle.com 


On 26/03/2013, at 6:16 AM, Andras Szerdahelyi 
<andras.szerdahe...@ignitionone.com> wrote: 




Hello list, 


Could anyone shed some light on how an FP chance of 0.01 can coexist with a 
measured FP ratio of... 0.98 ? Am I reading this wrong, or do 98% of the 
requests hitting the bloom filter produce a false positive while the "target" 
false-positive ratio is 0.01? 
( Also, the key cache hit ratio is around 0.001 and sstables read per query is 
sky-high ( a non-exponential (non-)drop-off for LCS ), but that should probably 
be filed under "effect" rather than "cause"? ) 



[default@unknown] use KS; 
Authenticated to keyspace: KS 
[default@KS] describe CF; 
ColumnFamily: CF 
Key Validation Class: org.apache.cassandra.db.marshal.BytesType 
Default column value validator: org.apache.cassandra.db.marshal.BytesType 
Columns sorted by: org.apache.cassandra.db.marshal.BytesType 
GC grace seconds: 691200 
Compaction min/max thresholds: 4/32 
Read repair chance: 0.1 
DC Local Read repair chance: 0.0 
Replicate on write: true 
Caching: ALL 
Bloom Filter FP chance: 0.01 
Built indexes: [] 
Compaction Strategy: 
org.apache.cassandra.db.compaction.LeveledCompactionStrategy 
Compaction Strategy Options: 
sstable_size_in_mb: 5 
Compression Options: 
sstable_compression: org.apache.cassandra.io.compress.SnappyCompressor 



Keyspace: KS 
Read Count: 628950 
Read Latency: 93.19921121869784 ms. 
Write Count: 1219021 
Write Latency: 0.14352380885973254 ms. 
Pending Tasks: 0 
Column Family: CF 
SSTable count: 22682 
Space used (live): 119771434915 
Space used (total): 119771434915 
Number of Keys (estimate): 203837952 
Memtable Columns Count: 13125 
Memtable Data Size: 33212827 
Memtable Switch Count: 15 
Read Count: 629009 
Read Latency: 88.434 ms. 
Write Count: 1219038 
Write Latency: 0.095 ms. 
Pending Tasks: 0 
Bloom Filter False Positives: 37939419 
Bloom Filter False Ratio: 0.97928 
Bloom Filter Space Used: 261572784 
Compacted row minimum size: 104 
Compacted row maximum size: 263210 
Compacted row mean size: 3041 
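
( For scale: 37,939,419 false positives over ~629,009 reads works out to 
roughly 60 false-positive sstable probes per read, and ~119.7 GB of live data 
at 5 MB per sstable is roughly 22,800 sstables, consistent with the reported 
count of 22,682. ) 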


I upgraded sstables after changing the FP chance 


Thanks! 
Andras 
--- Begin Message ---
"I'm still wondering about how to chose the size of the sstable under LCS.
Default is 5MB, people use to configure it to 10MB and now you configure it
at 128MB. What are the benefits or disadvantages of a very small size
(let's say 5 MB) vs big size (like 128MB) ?"

This seems to be the biggest question about LCS, and it is still
unanswered. Does anyone (commiters maybe) know about it ?

It would help at least the five of us, and probably more people.

Alain


2013/3/8 Michael Theroux <mthero...@yahoo.com>

> I've asked this myself in the past... I fairly arbitrarily chose 10MB based
> on Wei's experience.
>
> -Mike
>
> On Mar 8, 2013, at 1:50 PM, Hiller, Dean wrote:
>
> > +1  (I would love to know this info).
> >
> > Dean
> >
> > From: Wei Zhu <wz1...@yahoo.com>
> > Reply-To: "user@cassandra.apache.org" <user@cassandra.apache.org>, Wei Zhu
> > <wz1...@yahoo.com>
> > Date: Friday, March 8, 2013 11:11 AM
> > To: "user@cassandra.apache.org" <user@cassandra.apache.org>
> > Subject: Re: Size Tiered -> Leveled Compaction
> >
> > I have the same wonder.
> > We started with the default 5M, and the compaction after repair took too
> > long on a 200G node, so we increased the size to 10M, sort of arbitrarily,
> > since there is not much documentation around it. Our tech op team still
> > thinks there are too many files in one directory. To fulfil their
> > guidelines (I don't remember the exact number, but something in the range
> > of 50K files), we would need to increase the size to around 50M. I think
> > the latency of opening one file is not impacted much by the number of
> > files in one directory on a modern file system, but "ls" and other
> > operations suffer.
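> >
> > ( Doing the same math as below: 200G / 50M is roughly 4,000 SSTables, or
> > about 16,000 files at 4 files per SSTable, which would be well under the
> > 50K guideline. )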
> >
> > Anyway, I asked about the side effects of a bigger SSTable on IRC, and
> > someone mentioned that during a read C* reads the whole SSTable from disk
> > in order to access the row, which causes more disk IO compared with a
> > smaller SSTable. I don't know enough about the internals of Cassandra to
> > say whether that's the case or not. If it is the case (question mark), is
> > the whole SSTable or just the row kept in memory? Hope someone can confirm
> > the theory here, or I will have to dig into the source code to find out.
> >
> > Another concern is repair: does it stream the whole SSTable or just part
> > of it when a mismatch is detected? I have seen both claims; can someone
> > please confirm this as well?
> >
> > The last thing is the effectiveness of the parallel LCS in 1.2. It takes
> > quite some time for compaction to finish after a repair with LCS on 1.1.X.
> > Both CPU and disk util are low during the compaction, which means LCS
> > doesn't fully utilise the resources. It will make life easier if that
> > issue is addressed in 1.2.
> >
> > Bottom line is that there is not much documentation, guidance or success
> > stories around LCS, although it sounds beautiful on paper.
> >
> > Thanks.
> > -Wei
> > ________________________________
> > From: Alain RODRIGUEZ <arodr...@gmail.com>
> > To: user@cassandra.apache.org
> > Cc: Wei Zhu <wz1...@yahoo.com>
> > Sent: Friday, March 8, 2013 1:25 AM
> > Subject: Re: Size Tiered -> Leveled Compaction
> >
> > I'm still wondering about how to choose the size of the sstable under
> > LCS. The default is 5MB, people tend to configure it to 10MB, and now you
> > configure it at 128MB. What are the benefits or disadvantages of a very
> > small size (let's say 5 MB) vs a big size (like 128MB)?
> >
> > Alain
> >
> >
> > 2013/3/8 Al Tobey <a...@ooyala.com>
> > We saw exactly the same thing as Wei Zhu: more than 100k tables in a
> > directory causing all kinds of issues. We're running 128MiB sstables with
> > LCS and have disabled compaction throttling. 128MiB was chosen to get file
> > counts under control and reduce the number of files C* has to manage and
> > search. I just looked, and a ~250GiB node is using about 10,000 files,
> > which is quite manageable. This configuration is running smoothly in
> > production under a mixed read/write load.
> >
> > We're on RAID0 across 6 15k drives per machine. When we migrated data to
> > this cluster we were pushing well over 26k inserts/s with CL_QUORUM. With
> > compaction throttling enabled at any rate, it just couldn't keep up. With
> > throttling off, it runs smoothly and does not appear to have an impact on
> > our applications, so we always leave it off, even in EC2. An 8GiB heap is
> > too small for this config on 1.1. YMMV.
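> >
> > ( For reference, turning throttling off is just setting the throughput cap
> > to 0, either in cassandra.yaml as compaction_throughput_mb_per_sec: 0 or
> > at runtime with nodetool setcompactionthroughput 0. )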
> >
> > -Al Tobey
> >
> > On Thu, Feb 14, 2013 at 12:51 PM, Wei Zhu <wz1...@yahoo.com> wrote:
> > I haven't tried to switch compaction strategy. We started with LCS.
> >
> > For us, after massive data imports (5000 w/second for 6 days), the first
> > repair is painful since there is quite some data inconsistency. For 150G
> > nodes, repair brought in about 30G and created thousands of pending
> > compactions. It took almost a day to clear those. Just be prepared: LCS is
> > really slow in 1.1.X. System performance degrades during that time since
> > reads can hit more SSTables; we saw 20 SSTable lookups for one read. (We
> > tried everything we could and couldn't speed it up. I think it's single
> > threaded... and it's not recommended to turn on multithreaded compaction.
> > We even tried that; it didn't help.) There is parallel LCS in 1.2 which is
> > supposed to alleviate the pain. We haven't upgraded yet; hope it works :)
> >
> >
> http://www.datastax.com/dev/blog/performance-improvements-in-cassandra-1-2
> >
> >
> > Since our cluster is not write intensive (only 100 w/second), I don't see
> > any pending compactions during regular operation.
> >
> > One thing worth mentioning is the size of the SSTable: the default is 5M,
> > which is kind of small for a 200G (all in one CF) data set, and we are on
> > SSD. That is more than 150K files in one directory (200G / 5M = 40K
> > SSTables, and each SSTable creates 4 files on disk). You might want to
> > watch that when deciding on the SSTable size.
> >
> > By the way, there is no concept of Major compaction for LCS. Just for
> fun, you can look at a file called $CFName.json in your data directory and
> it tells you the SSTable distribution among different levels.
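> >
> > ( For example, something like this should pretty-print it; the exact path
> > depends on your data directory layout and version:
> >
> >   python -m json.tool /var/lib/cassandra/data/KS/CF/CF.json )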
> >
> > -Wei
> >
> > ________________________________
> > From: Charles Brophy <cbro...@zulily.com>
> > To: user@cassandra.apache.org
> > Sent: Thursday, February 14, 2013 8:29 AM
> > Subject: Re: Size Tiered -> Leveled Compaction
> >
> > I second these questions: we've been looking into changing some of our
> > CFs to use leveled compaction as well. If anybody here has the wisdom to
> > answer them, it would be a wonderful help.
> >
> > Thanks
> > Charles
> >
> > On Wed, Feb 13, 2013 at 7:50 AM, Mike <mthero...@yahoo.com> wrote:
> > Hello,
> >
> > I'm investigating the transition of some of our column families from
> Size Tiered -> Leveled Compaction.  I believe we have some high-read-load
> column families that would benefit tremendously.
> >
> > I've stood up a test DB node to investigate the transition. I
> > successfully altered the column family and immediately noticed a large
> > number (1000+) of pending compaction tasks appear, but no compactions get
> > executed.
> >
> > I tried running "nodetool upgradesstables" on the column family, and the
> > compaction tasks don't move.
> >
> > I also noticed no changes to the size and distribution of the existing
> > SSTables.
> >
> > I then ran a major compaction on the column family. All pending
> > compaction tasks got run, and the SSTables had a distribution that I
> > would expect from LeveledCompaction (lots and lots of 10MB files).
> >
> > Couple of questions:
> >
> > 1) Is a major compaction required to transition from size-tiered to
> > leveled compaction?
> > 2) Are major compactions as much of a concern for LeveledCompaction as
> > they are for Size Tiered?
> >
> > All the documentation I found concerning transitioning from Size Tiered
> > to Leveled compaction discusses the ALTER TABLE CQL command, but I haven't
> > found much on what else needs to be done after the schema change.
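> >
> > ( For what it's worth, the cassandra-cli equivalent of that change on
> > 1.1.x is roughly the following; syntax from memory, so double-check it
> > against your version:
> >
> >   update column family CF
> >     with compaction_strategy = 'LeveledCompactionStrategy'
> >     and compaction_strategy_options = {sstable_size_in_mb: 10}; )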
> >
> > I did these tests with Cassandra 1.1.9.
> >
> > Thanks,
> > -Mike
> >

--- End Message ---
