Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change 
notification.

The "TooManyOpenFiles" page has been changed by SteveLoughran:
http://wiki.apache.org/hadoop/TooManyOpenFiles?action=diff&rev1=1&rev2=2

Comment:
fix the link

  
  You can see this on Linux machines in client-side applications, server code or even in test runs.
  
- It is caused by per-process limits on the number of files that a single user/process can have open, which was introduced in [[http://lkml.indiana.edu/hypermail/linux/kernel/0812.0/01183.html|2.6.27]. The default value, 128, was chosen because "that should be enough".
+ It is caused by per-process limits on the number of files that a single user/process can have open, which was introduced in [[http://lkml.indiana.edu/hypermail/linux/kernel/0812.0/01183.html|the 2.6.27 kernel]]. The default value, 128, was chosen because "that should be enough".
  
  In Hadoop, it isn't. To fix this log in/su/ssh as root and edit {{{/etc/sysctl.conf}}}
  
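Although the excerpt does not show it, the limit and the number of descriptors a process actually has open can be checked from the shell before touching any configuration; a quick check along these lines (the pid 12345 is only a placeholder for the Hadoop process in question):

{{{
# per-process open-file limit for the current shell
ulimit -n

# limit and live descriptor count of a running process, e.g. a DataNode
grep "open files" /proc/12345/limits
ls /proc/12345/fd | wc -l
}}}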

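The diff excerpt cuts off before the actual setting, so what follows is only a sketch of the kind of {{{/etc/sysctl.conf}}} edit the page goes on to describe, assuming the limit in question is the per-user epoll instance cap that 2.6.27 introduced; the key name and the value 2048 are assumptions, not taken from the page:

{{{
# /etc/sysctl.conf -- raise the per-user epoll instance limit
# (assumed key; the cap defaulted to 128 when it appeared in 2.6.27)
fs.epoll.max_user_instances = 2048
}}}

Running {{{sysctl -p}}} as root then reloads the file without a reboot.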