Hi, I've had a problem with a service on a CT failing with "too many open files".
The maximum number of file descriptors was 1024; I've now increased the max number of open files and max user processes to 65535 on the hardware node, per the info here: http://ithubinfo.blogspot.co.uk/2013/07/how-to-increase-ulimit-open-file-and.html

I'm running CentOS v6.5, as follows:

[root@machine ~]# uname -a
Linux example.com 2.6.32-042stab084.17 #1 SMP Fri Dec 27 17:10:20 MSK 2013 x86_64 x86_64 x86_64 GNU/Linux

The output of ulimit -a on the hardware node is as follows:

[root@example ~]# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 126948
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65535
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65535
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

The output of ulimit -a on the CT is the same.

Question: is this set-up 'correct' and unlikely to cause future issues with access to files and resources, i.e. is there a danger that I have over-committed the server?

Many thanks and best regards,
Chip Scooter
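
P.S. For anyone hitting the same thing, here is a minimal sketch of the sort of changes involved in making the limits persistent and checking where the old 1024 cap was coming from. This assumes the stock pam_limits setup on CentOS 6; the paths and the 65535 values are illustrative, mirroring what is described above, and the last check only matters if the CT still has a finite numfile UBC limit.

    # Persist the higher limits so they survive a re-login (pam_limits
    # applies /etc/security/limits.conf at login; 65535 mirrors the
    # values already set by hand above)
    printf '%s\n' \
        '*    soft    nofile    65535' \
        '*    hard    nofile    65535' \
        '*    soft    nproc     65535' \
        '*    hard    nproc     65535' >> /etc/security/limits.conf

    # CentOS 6 also ships /etc/security/limits.d/90-nproc.conf, which caps
    # soft nproc at 1024 for non-root users and takes precedence, so it
    # needs raising too if the service runs as an unprivileged user
    grep nproc /etc/security/limits.d/90-nproc.conf

    # Kernel-wide ceiling and current usage (allocated / unused / max)
    cat /proc/sys/fs/file-max
    cat /proc/sys/fs/file-nr

    # Inside the CT: a non-zero failcnt on the numfile row would mean the
    # container's beancounter limit, not ulimit, is what ran out
    grep -E 'numfile|failcnt' /proc/user_beancounters

Worth noting: a daemon started at boot from an init script doesn't go through PAM, so limits.conf alone may not raise the limit for the service itself; adding a "ulimit -n 65535" line to the service's init script is a common workaround.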