Yeah. I see this in our dump of ulimits:

[HBase-TRUNK] $ /bin/bash -xe /tmp/hudson8597777921705333113.sh
+ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 128588
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 128588
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
Let me add hostname to the output so when I go to infrastructure I can tell them the name of the machine w/ this regression.

St.Ack

On Mon, Aug 15, 2011 at 8:09 PM, Ted Yu <[email protected]> wrote:
> From:
> https://builds.apache.org/view/G-L/view/HBase/job/HBase-TRUNK/lastCompletedBuild/testReport/org.apache.hadoop.hbase.master/TestDistributedLogSplitting/testOrphanLogCreation/
>
> Caused by: java.io.IOException: Too many open files
>         at sun.nio.ch.IOUtil.initPipe(Native Method)
>         at sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:49)
>
> FYI
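P.S. Something like the below in the Jenkins shell build step should do it. This is only a rough sketch of the idea; the actual step in the job config may look different:

#!/bin/bash -xe
# Print the build host so infra can be told which machine has the low limit
hostname
# Dump resource limits; "open files (-n)" is the value behind the
# "Too many open files" failures
ulimit -a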
