Are you using leveled compaction?  If so, what do you have the SSTable size
set to?  If you're using the default, you'll end up with a ton of really
small files.  I believe Albert Tobey recommended setting the
sstable_size_in_mb table property to 256MB to avoid this problem.
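
If it helps, here's a minimal sketch of that change in CQL3 (the keyspace
and table names are placeholders, substitute your own):

    -- Hypothetical keyspace/table names, for illustration only.
    -- Switches the table to LCS with 256MB target sstable size.
    ALTER TABLE my_keyspace.my_table
      WITH compaction = {'class': 'LeveledCompactionStrategy',
                         'sstable_size_in_mb': 256};

As far as I know, existing sstables aren't rewritten immediately; they only
pick up the new size as compaction churns through them.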


On Sun, Jul 14, 2013 at 5:10 PM, Paul Ingalls <paulinga...@gmail.com> wrote:

> I'm running into a problem where instances of my cluster are hitting over
> 450K open files.  Is this normal for a 4 node 1.2.6 cluster with
> replication factor of 3 and about 50GB of data on each node?  I can push
> the file descriptor limit up, but I plan on having a much larger load so
> I'm wondering if I should be looking at something else….
>
> Let me know if you need more info…
>
> Paul
>
>
>


-- 
Jon Haddad
http://www.rustyrazorblade.com
skype: rustyrazorblade