Hi Eric,

As far as I can tell, this is only an issue with the localBroker. The HdfsBroker does not have this problem, and I'm not sure about KFS (CloudStore). The problem with this situation is that there is no graceful way to recover: the operator needs to increase the number of file descriptors and then restart the process. So, in some sense, the abort is the correct thing to do. However, it would be possible to implement some sort of open file cache inside the localBroker, which would let the process close descriptors that haven't been used in a while and reuse them for newly opened files.
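One way to do it would be for the broker to hand out virtual handles, remember the path and current offset for each file, and close the least-recently-used real descriptor whenever a configurable limit is reached, reopening lazily on the next access. A minimal sketch of the idea (purely illustrative -- these names are not from the Hypertable tree, it assumes plain POSIX I/O, and it handles reads only):

#include <fcntl.h>
#include <sys/types.h>
#include <unistd.h>
#include <cstdint>
#include <list>
#include <string>
#include <unordered_map>

class OpenFileCache {
 public:
  explicit OpenFileCache(size_t max_open) : m_max_open(max_open) {}

  // Register a file; returns a virtual handle that stays valid even
  // after the underlying OS descriptor has been closed by eviction.
  uint32_t open_file(const std::string &path, int flags) {
    uint32_t handle = m_next_handle++;
    m_files[handle] = FileState{path, flags, -1, 0};
    return handle;
  }

  // Read through the virtual handle, reopening the real descriptor
  // (and restoring the file offset) if it was evicted earlier.
  ssize_t read(uint32_t handle, void *buf, size_t len) {
    FileState &fs = m_files.at(handle);
    if (fs.fd == -1 && !reopen(handle, fs))
      return -1;
    touch(handle);
    ssize_t n = ::read(fs.fd, buf, len);
    if (n > 0)
      fs.offset += n;
    return n;
  }

  void close_file(uint32_t handle) {
    auto it = m_files.find(handle);
    if (it == m_files.end())
      return;
    if (it->second.fd != -1) {
      ::close(it->second.fd);
      m_lru.remove(handle);
    }
    m_files.erase(it);
  }

 private:
  struct FileState {
    std::string path;
    int flags;
    int fd;        // -1 when the real descriptor has been evicted
    off_t offset;  // remembered so a reopen can resume where it left off
  };

  bool reopen(uint32_t handle, FileState &fs) {
    if (m_lru.size() >= m_max_open)
      evict_oldest();
    fs.fd = ::open(fs.path.c_str(), fs.flags);
    if (fs.fd == -1)
      return false;
    ::lseek(fs.fd, fs.offset, SEEK_SET);
    m_lru.push_front(handle);
    return true;
  }

  // Close the least recently used real descriptor; the entry's state
  // is kept, so the file can be transparently reopened later.
  void evict_oldest() {
    if (m_lru.empty())
      return;
    uint32_t victim = m_lru.back();
    m_lru.pop_back();
    FileState &fs = m_files.at(victim);
    ::close(fs.fd);
    fs.fd = -1;
  }

  void touch(uint32_t handle) {
    m_lru.remove(handle);      // O(n), fine for a sketch
    m_lru.push_front(handle);
  }

  size_t m_max_open;
  uint32_t m_next_handle = 1;
  std::unordered_map<uint32_t, FileState> m_files;
  std::list<uint32_t> m_lru;
};

The std::list::remove calls make the LRU bookkeeping O(n); a real implementation would keep a list iterator per entry, but that would just clutter the sketch.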
Please go ahead and file a ticket on this issue.

- Doug

On Sun, Dec 14, 2008 at 9:33 PM, EricHolmberg <[email protected]> wrote:
>
> I've run ulimit -n 2048 to increase the maximum file count to 2048 and
> will re-run the test.
>
> The test passes now -- thanks for your help!
>
> After looking at the DFS Broker message of "Too many open files", I
> think that the message is reasonable. I wonder if there is a better
> way to handle it on the RangeServer or Master side such that the
> RangeServer doesn't segfault.
>
> Should I open a ticket for this?
>
> Thanks,
>
> Eric
>
> P.S. In case anyone else runs into this, I actually edited
> /etc/security/limits.conf and added the following two lines to
> increase the maximum number of file handles to 4096 for all users:
>
> * soft nofile 4096
> * hard nofile 4096
>
> It looks like the maximum number of handles used was around 1500 near
> the end of the import, so for now, 4096 file handles seems
> conservative.
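For what it's worth, a process can also inspect its descriptor limit at startup and raise the soft limit up to the hard limit configured in limits.conf, which would at least let the broker log a meaningful warning before descriptors run out. A minimal sketch using the standard POSIX getrlimit/setrlimit calls (not something the broker does today):

#include <cstdio>
#include <sys/resource.h>

int main() {
  struct rlimit rl;
  if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
    perror("getrlimit");
    return 1;
  }
  printf("soft limit: %llu, hard limit: %llu\n",
         (unsigned long long)rl.rlim_cur,
         (unsigned long long)rl.rlim_max);
  // Raise the soft limit as far as the hard limit allows.
  if (rl.rlim_cur < rl.rlim_max) {
    rl.rlim_cur = rl.rlim_max;
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
      perror("setrlimit");
  }
  return 0;
}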
