On Wed, Sep 26, 2012 at 6:05 AM, Sylvain Lebresne sylv...@datastax.com wrote:

On Wed, Sep 26, 2012 at 2:35 AM, Rob Coli rc...@palominodb.com wrote:
> 150,000 sstables seem highly unlikely to be performant. As a simple
> example of why, on the read path the bloom filter for every sstable
> must be consulted...

Unfortunately that's a bad example, since that's not true. Leveled
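For readers following the exchange above: a bloom filter is the small probabilistic structure each sstable carries so that a read can cheaply skip sstables that definitely do not contain the requested key. The sketch below is illustrative only, not Cassandra's actual implementation (Cassandra sizes its filters to the data and uses different hash functions); the class name and parameters here are made up:

```python
import hashlib

class BloomFilter:
    """Tiny illustrative bloom filter: k hash probes into a bit array."""

    def __init__(self, size_bits=1024, num_hashes=3):
        self.size = size_bits
        self.k = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, key):
        # Derive k independent bit positions from the key (md5 here is
        # just a convenient stand-in hash for the sketch).
        for i in range(self.k):
            digest = hashlib.md5(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, key):
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, key):
        # False is definitive: the read path can skip this sstable entirely.
        # True may be a false positive, forcing a disk read to confirm.
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(key))

bf = BloomFilter()
bf.add("row-42")
print(bf.might_contain("row-42"))   # True
print(bf.might_contain("row-999"))  # almost certainly False
```

The point of contention in the thread is what the per-read cost of these checks is once a node has very many sstables, not whether the check itself is correct.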
On Sun, Sep 23, 2012 at 12:24 PM, Aaron Turner synfina...@gmail.com wrote:
> Leveled compaction has tamed space for us. Note that you should set
> sstable_size_in_mb to a reasonably high value (it is 512 for us, with ~700GB
> per node) to prevent creating a lot of small files.
512MB per sstable? Wow,
On Mon, Sep 24, 2012 at 10:02 AM, Віталій Тимчишин tiv...@gmail.com wrote:

Why so? What are the pluses and minuses?
As for me, I am looking at the number of files in the directory.
700GB/512MB*5 (files per SSTable) = ~7,000 files, which is OK in my view.
700GB/5MB*5 = ~700,000 files, which is too many for a single directory, too much
memory used for SSTable data, and too huge a compaction queue (that
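The file-count arithmetic above can be checked directly. The node size and sstable sizes come from the thread; the factor of 5 is the thread's estimate of on-disk component files per sstable (Data, Index, Filter, etc.):

```python
NODE_SIZE_GB = 700            # ~700GB per node, as stated upthread
COMPONENTS_PER_SSTABLE = 5    # Data, Index, Filter, etc. files per sstable

def file_count(sstable_size_mb):
    """Approximate on-disk file count for a node at a given sstable size."""
    sstables = NODE_SIZE_GB * 1024 // sstable_size_mb
    return sstables * COMPONENTS_PER_SSTABLE

print(file_count(512))  # 7000 files with 512MB sstables
print(file_count(5))    # 716800 files with the 5MB default, i.e. ~700,000
```

So raising sstable_size_in_mb from the default 5MB to 512MB cuts the file count by two orders of magnitude at this data volume.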
If you are using ext3 there is a hard limit of 32K on the number of files in a
directory. EXT4 has a much higher limit (can't remember exactly). So it is
true that having many files is not a problem for the file
system, though your VFS cache could be less efficient since you would
have a higher
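To see how close a data directory is to such a per-directory cap, one can simply count entries. A small sketch; the path is an assumption, so substitute whatever data_file_directories points at in your cassandra.yaml:

```python
import os

# Hypothetical path -- substitute data_file_directories from cassandra.yaml.
DATA_DIR = "/var/lib/cassandra/data"
EXT3_DIR_LIMIT = 32000  # the ~32K per-directory cap mentioned above

def files_per_directory(root):
    """Map each directory under root to its number of direct entries."""
    return {path: len(dirs) + len(files)
            for path, dirs, files in os.walk(root)}

def near_limit(root, threshold=0.8):
    """Directories whose entry count exceeds threshold * EXT3_DIR_LIMIT."""
    return {d: n for d, n in files_per_directory(root).items()
            if n > threshold * EXT3_DIR_LIMIT}
```

Running `near_limit(DATA_DIR)` periodically would flag a keyspace directory well before it hits the filesystem's hard limit.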
On Sun, Sep 23, 2012 at 8:18 PM, Віталій Тимчишин tiv...@gmail.com wrote:

If you think about space, use Leveled compaction! Not only will it let you
fill more of the disk, it will also shrink your data much faster in the case of
updates. Size-tiered compaction can leave you using 3x-4x more space than there is
live data. Consider the following (oversimplified) scenario:
1) The data
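The 3x-4x figure can be illustrated with a toy model. This is a sketch under assumed numbers (a pure-overwrite workload, three stale tiers, ten in-flight sstables), not measurements from the thread:

```python
live_data_gb = 100  # illustrative live data set

# Size-tiered: an overwritten row can survive as stale copies in several
# older tiers until those tiers finally get compacted together, so peak
# disk usage is a multiple of the live data.
stale_tiers = 3  # assumed number of tiers still holding obsolete copies
size_tiered_peak_gb = live_data_gb * (1 + stale_tiers)
print(size_tiered_peak_gb)  # 400 -> the 3x-4x range from the thread

# Leveled compaction rewrites small fixed-size sstables continuously, so
# the transient overhead is only a handful of in-flight sstables.
inflight_sstables, sstable_gb = 10, 0.512  # assumed, at 512MB sstables
leveled_peak_gb = live_data_gb + inflight_sstables * sstable_gb
print(leveled_peak_gb)  # ~105
```

The same mechanism explains the "shrink much faster" claim: leveled compaction reclaims overwritten space continuously instead of waiting for a rare whole-tier merge.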
On Thu, Sep 20, 2012 at 8:10 PM, Hiller, Dean dean.hil...@nrel.gov wrote:
> While disk space is cheap, nodes are not that cheap, and usually systems have a
> 1TB limit on each node, which means we would love to not add more nodes
> until we hit 70% disk space instead of the normal 50% that we have read about
> due to compaction.
> Is there any way to use less disk space

1. Use compression
2. Use Leveled Compaction

Also, 1TB/node is a lot larger than the normal recommendation...
generally speaking more in the 300-400GB range.