Scrub apparently dies because it cannot acquire a file descriptor. Scrub does
not correctly close files
(https://issues.apache.org/jira/browse/CASSANDRA-2669)
so that may be part of why that happens. However, a simple fix is probably to
raise the file descriptor limit.
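For example, something along these lines should do it (the user name and the
limit value below are just placeholders, adjust them for your setup):

  # check what the running process actually has
  grep 'open files' /proc/<cassandra-pid>/limits

  # raise it for the user that runs Cassandra, e.g. in /etc/security/limits.conf
  cassandra  soft  nofile  100000
  cassandra  hard  nofile  100000

  # or in the shell/init script that starts Cassandra, before launching it
  ulimit -n 100000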
--
Sylvain
As far as scrub goes, that could be it. I'm already running with unlimited file
handles though, so ulimit isn't the answer, unfortunately.
Dominic
On Fri, Jun 17, 2011 at 1:51 PM, Dominic Williams
dwilli...@system7.co.uk wrote:
As far as scrub goes, that could be it. I'm already running with unlimited file
handles though, so ulimit isn't the answer, unfortunately.
Are you sure? How many file descriptors are open on the system when you get
that scrub failure?
Even without lsof, you should be able to get the data from /proc/$pid
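Something like this should work (the pgrep pattern for finding the Cassandra
pid is just a guess, substitute the real pid if it doesn't match):

  # count open file descriptors of the Cassandra process
  ls /proc/$(pgrep -f CassandraDaemon)/fd | wc -l

  # ls -l on the same directory shows which files the descriptors point to,
  # and /proc/<pid>/limits shows the limit the process actually got
  ls -l /proc/$(pgrep -f CassandraDaemon)/fd
  grep 'open files' /proc/$(pgrep -f CassandraDaemon)/limits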
-ryan
On Fri, Jun 17, 2011 at 5:08 AM, Dominic Williams
dwilli...@system7.co.uk wrote:
Unfortunately I shut down that node, and anyway lsof wasn't installed.
But $ ulimit gives
unlimited
Yeah, that would get the count (although I don't think you can see filenames -
or maybe I just don't know how). Unfortunately that node was shut down. I then
tried restarting with storage port 7001 to isolate it, since it was quite toxic
for the performance of the cluster, but it now gets an OOM on restart.
If it's