Hi Amit,

The size recommendations are based on balancing CPU against the amount of data stored on a node. LCS requires less disk space but generally much more CPU to keep up with compaction for the same amount of data, which is why its size recommendation is smaller. There is nothing wrong with attaching a larger disk, of course; the sizes are starting recommendations for when you have nothing else to go by. If your cluster is light on writes, you may be able to store much larger amounts of data than the suggested sizes and have no problem keeping up with LCS compaction. If your cluster is heavy on writes, you may find you can only store a small fraction of the data per node that you could with STCS. You will have to benchmark for your use case.
The 10 TB number comes from a theoretical situation in which LCS would need to read a maximum of 7 SSTables to serve a read -- if LCS compaction can keep up.

Cheers,
Mark

On Thu, Apr 13, 2017 at 8:23 AM, Amit Singh F <amit.f.si...@ericsson.com> wrote:
> Hi All,
>
> We are in the process of migrating from STCS to LCS and were doing some
> reading online. Below is the excerpt from the Datastax recommendation on
> data size:
>
> Doc link: https://docs.datastax.com/en/landing_page/doc/landing_page/planning/planningHardware.html
>
> There is also one more recommendation hinting that disk size can be
> limited to 10 TB (worst case). Excerpt below:
>
> Doc link: http://www.datastax.com/dev/blog/leveled-compaction-in-apache-cassandra
>
> So are there any restrictions/scenarios due to which 600 GB is the
> preferred size for LCS?
>
> Thanks & Regards
> Amit Singh
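For intuition, the arithmetic behind the 7-SSTable worst case can be sketched as a rough model. This assumes the LCS defaults of 160 MB SSTables and a 10x size fanout between levels (so L1 holds about 1.6 GB, L2 about 16 GB, and so on); the function name here is mine, not anything from Cassandra:

```python
# Rough model of LCS level count vs. data size per node.
# Assumptions (Cassandra LCS defaults): 160 MB SSTables, and each
# level holds 10x the data of the previous one.

SSTABLE_MB = 160
FANOUT = 10

def lcs_levels(data_tb: float) -> int:
    """Levels needed to hold data_tb terabytes under the model above."""
    data_mb = data_tb * 1024 * 1024
    level, capacity_mb = 0, 0.0
    while capacity_mb < data_mb:
        level += 1
        capacity_mb += SSTABLE_MB * FANOUT ** level
    return level

# A worst-case read touches roughly one SSTable per level, plus any
# uncompacted SSTables still sitting in L0.
print(lcs_levels(0.6))   # 600 GB -> 4 levels
print(lcs_levels(10))    # 10 TB  -> 5 levels
```

At 10 TB that is five leveled SSTables plus a couple in L0, which is roughly where a worst case of around 7 comes from; at the recommended 600 GB the hierarchy is one level shallower, and compaction has correspondingly less data to rewrite to keep it that way.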