Are you still seeing the same exceptions about too many open files?



On Thu, Jul 17, 2014 at 6:28 AM, Bhaskar Singhal <bhaskarsing...@yahoo.com>
wrote:

> Even after changing ulimits and moving to the recommended production
> settings, we are still seeing the same issue.
>
> root@lnx148-76:~# cat /proc/17663/limits
> Limit                     Soft Limit           Hard Limit           Units
> Max cpu time              unlimited            unlimited            seconds
> Max file size             unlimited            unlimited            bytes
> Max data size             unlimited            unlimited            bytes
> Max stack size            8388608              unlimited            bytes
> Max core file size        0                    unlimited            bytes
> Max resident set          unlimited            unlimited            bytes
> Max processes            256502               256502               processes
> Max open files            4096                 4096                 files
> Max locked memory         65536                65536                bytes
> Max address space         unlimited            unlimited            bytes
> Max file locks            unlimited            unlimited            locks
> Max pending signals       256502               256502               signals
> Max msgqueue size         819200               819200               bytes
> Max nice priority         0                    0
> Max realtime priority     0                    0
> Max realtime timeout      unlimited            unlimited            us
>
>
> Regards,
> Bhaskar
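One quick sanity check on the output above: the soft limit for open files is still 4096, so the raised ulimit was apparently not picked up by the running process (limits set in /etc/security/limits.conf only apply to processes started after a new login session, and some init scripts set their own ulimits). A minimal sketch for comparing the process's actual limit against its current descriptor count, using PID 17663 from the output above (substitute your Cassandra PID; the CommitLog-* file naming is the default and may differ if you have changed commitlog settings):

```shell
# Compare the running process's effective open-files limit with how
# many descriptors it currently holds, and how many of those are
# commit log segments.
pid=17663   # Cassandra PID from the /proc output above
limit=$(awk '/Max open files/ {print $4}' "/proc/$pid/limits")
open=$(ls "/proc/$pid/fd" | wc -l)
commitlog=$(ls -l "/proc/$pid/fd" | grep -c 'CommitLog')
echo "open fds: $open / $limit (commit log segments: $commitlog)"
```

If `open` is near `limit`, the raised ulimit is not in effect for this process and Cassandra must be restarted from a session that has the new limit.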
>
>
>   On Thursday, 10 July 2014 12:09 AM, Robert Coli <rc...@eventbrite.com>
> wrote:
>
>
> On Tue, Jul 8, 2014 at 10:17 AM, Bhaskar Singhal <bhaskarsing...@yahoo.com>
> wrote:
>
> But I am wondering why Cassandra needs to keep 3000+ commit log
> segment files open?
>
>
> Because you are writing faster than you can flush to disk.
>
> =Rob
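Following on from that explanation: the number of segments that can accumulate is bounded by two cassandra.yaml settings, since total commit log space divided by segment size caps the segment count. A hedged sketch with the 2.0-era defaults (verify the names and defaults against your version's cassandra.yaml):

```yaml
# cassandra.yaml (2.0-era defaults shown; confirm for your version).
# Total space / segment size bounds the number of segment files:
# 8192 MB / 32 MB = 256 segments.
commitlog_segment_size_in_mb: 32
commitlog_total_space_in_mb: 8192
```

At the default 32 MB segment size, 3000+ open segments would imply roughly 96 GB of commit log, far past the default cap, which suggests memtable flushing is badly backed up rather than the segment count being a normal steady state.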
>
