I was seeing the segfaults in a different location on Solaris, probably because my testing method was different: I created idle connections to the httpd to use up all the descriptors, rather than sending a flood of real requests. So we probably had (at least) two different bugs related to running out of file descriptors.

--Brian
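[For reference, a minimal sketch of the kind of idle-connection test described above, assuming the httpd listens on 127.0.0.1:80 (an assumption, not from the mail). In practice the client's own descriptor limit would need raising, or several clients run, so that the server side is what runs out first:]

    /* Open TCP connections to the httpd and hold them idle, without
     * sending any request, until no more descriptors are available. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        struct sockaddr_in sa;
        int count = 0;

        memset(&sa, 0, sizeof(sa));
        sa.sin_family = AF_INET;
        sa.sin_port = htons(80);                /* assumed httpd port */
        inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);

        for (;;) {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            if (fd < 0) {
                perror("socket");               /* this client ran out first */
                break;
            }
            if (connect(fd, (struct sockaddr *) &sa, sizeof(sa)) < 0) {
                perror("connect");
                close(fd);
                break;
            }
            count++;                            /* connection held open, idle */
        }
        printf("holding %d idle connections\n", count);
        pause();                                /* keep them open until killed */
        return 0;
    }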
Jeff Trawick wrote:
>[EMAIL PROTECTED] writes:
>
>>brianp      02/01/11 00:07:07
>>
>>  Modified:    .        STATUS
>>  Log:
>>  Updated STATUS to cover the worker segfault fixes
>>
>
>>  -    * The worker MPM on Solaris segfaults when it runs out of file
>>  -      descriptors.  (This may affect other MPMs and/or platforms.)
>
>I can still readily hit this on current code (the same code that no
>longer segfaults with graceful restart).
>
>[Fri Jan 11 07:26:37 2002] [error] (24)Too many open files:
>apr_accept: (client socket)
>[Fri Jan 11 07:26:37 2002] [error] [client 127.0.0.1] (24)Too many
>open files: file permissions deny server access:
>/export/home/trawick/apacheinst/htdocs/index.html.en
>[Fri Jan 11 07:26:37 2002] [error] [client 127.0.0.1] (24)Too many
>open files: cannot access type map file:
>/export/home/trawick/apacheinst/error/HTTP_FORBIDDEN.html.var
>[Fri Jan 11 07:26:38 2002] [notice] child pid 25493 exit signal
>Segmentation fault (11), possible coredump in
>/export/home/trawick/apacheinst
>
>This is the same coredump I saw before:
>
>#0  0xff33a3cc in apr_wait_for_io_or_timeout (sock=0x738360,
>    for_read=1) at sendrecv.c:70
>70          FD_SET(sock->socketdes, &fdset);
>
>The socket has already been closed, so trying to set bit -1 segfaults.
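[A minimal sketch of why that FD_SET faults; this is not the actual sendrecv.c code, and the struct and guard below are illustrative only. fd_set is a fixed-size bitmask indexed by descriptor number, and a closed APR socket leaves socketdes at -1, so FD_SET writes outside the bitmask:]

    #include <sys/select.h>

    struct fake_socket {        /* stand-in for the relevant apr_socket_t field */
        int socketdes;          /* -1 once the socket has been closed */
    };

    static int wait_for_io(struct fake_socket *sock)
    {
        fd_set fdset;

        FD_ZERO(&fdset);
        if (sock->socketdes < 0) {
            /* Hypothetical guard: fail early with a bad-descriptor
             * error instead of reaching FD_SET on a closed socket. */
            return -1;
        }
        FD_SET(sock->socketdes, &fdset);    /* the line 70 that crashes */
        /* select(sock->socketdes + 1, &fdset, NULL, NULL, &timeout) ... */
        return 0;
    }

    int main(void)
    {
        struct fake_socket closed = { -1 };
        return wait_for_io(&closed) == 0;   /* guard catches the closed socket */
    }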
