Thanks, Peter, for the heads-up.
Note that the problem is more severe with the JVM's use of per-thread
selectors. https://issues.apache.org/jira/browse/HADOOP-4346 avoids
relying on those JVM-created selectors. Even with HADOOP-4346, though,
the default limit of 128 epoll instances is too small. I wish 4346 had
gone into earlier versions of Hadoop.
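To make the failure mode concrete, here is a minimal standalone sketch
(the class name and loop are mine, not Hadoop code) that simply opens
java.nio Selectors until the kernel refuses. On 2.6.27 each Selector is
backed by its own epoll instance, so it should fail around the default
limit of 128 even with a generous ulimit -n:

import java.io.IOException;
import java.nio.channels.Selector;
import java.util.ArrayList;
import java.util.List;

public class EpollLimitDemo {
    public static void main(String[] args) {
        List<Selector> selectors = new ArrayList<Selector>();
        try {
            // Each Selector.open() creates a new epoll instance on Linux,
            // which counts against fs.epoll.max_user_instances.
            while (true) {
                selectors.add(Selector.open());
            }
        } catch (IOException e) {
            // With the 2.6.27 default of 128 instances this fails with
            // "Too many open files" regardless of the ulimit -n setting.
            System.out.println("Failed after " + selectors.size()
                    + " selectors: " + e.getMessage());
        } finally {
            for (Selector s : selectors) {
                try { s.close(); } catch (IOException ignored) { }
            }
        }
    }
}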
Raghu.
Peter Romianowski wrote:
Hi,
we just came across a very serious problem with Hadoop (and any other
NIO-intensive Java application) and kernel 2.6.27.
Short story:
Increase the epoll max_user_instances limit
(/proc/sys/fs/epoll/max_user_instances) to prevent "Too many open files"
errors regardless of your ulimit -n settings.
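For anyone hitting this, the tunable can be raised on a live system
(1024 here is just an illustrative value; size it to your workload):

  echo 1024 > /proc/sys/fs/epoll/max_user_instances

and made persistent across reboots with an entry in /etc/sysctl.conf:

  fs.epoll.max_user_instances = 1024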
Long story:
http://pero.blogs.aprilmayjune.org/2009/01/22/hadoop-and-linux-kernel-2627-epoll-limits/
I just wanted to drop this note since it took us 2 days to figure it
out... :(
Regards
Peter