"Spink, Gary R." wrote:
> But I get the same result regardless of where I try to do the 101st connect.
> Isn't every PID a separate process? If not, what constitutes a separate
> process?
"Mileski, Andrew E." wrote:
> Then you should be able to have N processes, each with X connections, until
> you hit the system-wide limit N*X. If not, then something is amiss.
I could use some advice on finding what is amiss. After rebooting the
RedHat Linux 6.2 box, the output of the "dmesg" command contains this line:
pty: 2048 Unix98 ptys configured
"lsof|wc" shows that I have 477 open files.
"cat /proc/sys/fs/file-nr" says "283 16 6000".
So it should be possible to open "6000 - 477 = 5523" more files - right?
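For reference, the numbers above come from commands along these lines (a
rough sketch; the exact flags I typed may have differed slightly):

    # one line per open file (plus a header), so roughly the open-file count
    lsof | wc -l

    # allocated handles, free handles, and the system-wide maximum
    cat /proc/sys/fs/file-nr

    # the headroom estimate used above: maximum minus the lsof line count
    echo $((6000 - 477))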
I then run a script from the Linux console that forks off 99 pppd processes,
which successfully create 99 PPP connections to a router. That increases
the "lsof" command output to 3248 lines and changes "file-nr" to
"2660 1304 6000". So it should still be possible to open
"6000 - 3248 = 2752" more files - correct?
The 100th connect is generated from a telnet session and opens up 28
more files (14 for the master pppd process and 14 for the slave pppd
process). But when I try to repeat that for the 101st connect, the
kernel immediately reports these two errors:
ppp: dev_alloc_name failed (-23)
ppp_alloc failed
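In case it helps with the diagnosis, a quick way to see how many ppp
units the kernel has actually handed out at that point is something like:

    # count the ppp interfaces the kernel currently knows about
    ifconfig -a | grep -c '^ppp'

    # or the same information straight from /proc
    grep -c ppp /proc/net/dev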
In another instance of this test, I executed the 100th and 101st
connect commands using "strace -f -o x.log" and was surprised to find no
difference between the traces for the successful and unsuccessful connect.
There doesn't seem to be any way to get strace to follow the forked pppd
processes.
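For completeness, the trace was taken by wrapping the connect command in
strace roughly like this (the traced command name is only a placeholder
here, since the real one goes through a couple of layers):

    # -f follows child processes; all output goes into x.log
    strace -f -o x.log ./do-101st-connect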
Thanks in advance for any help with this problem. It would be really
nice to make use of more of that "2048 Unix98 PTY" feature for internal
load testing. Please reply to me as well as to this mailing list.
Gary Spink
[EMAIL PROTECTED]