However, in my experience it is unusual for a too-low limit on the number
of open files to result in a segmentation fault, especially in a well
written program like Apache HTTPD. A well written program will normally
check whether open (or any syscall which returns a file descriptor)
failed and handle the error rather than crash.
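A minimal sketch of that defensive check, assuming a POSIX system (the limit value 16 and the use of /dev/null are just illustrative): lower this process's own soft limit on open files, then show that open() fails cleanly with EMFILE once the limit is reached, instead of segfaulting.

```python
import errno
import os
import resource

# Lower the soft limit on open files for THIS process only
# (illustrative value; the hard limit is left untouched).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (16, hard))

fds, err = [], None
try:
    while True:  # keep opening /dev/null until the kernel refuses
        fds.append(os.open("/dev/null", os.O_RDONLY))
except OSError as e:
    err = e  # a well written program reaches this branch and handles it
finally:
    for fd in fds:
        os.close(fd)
    resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard))

print("open() failed with:", errno.errorcode[err.errno])
```

The point is only that hitting RLIMIT_NOFILE surfaces as an ordinary error return, which the caller can check; a crash would have to come from the program ignoring that return value.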
I have set my siege concurrency level a bit lower (20 users) and that seems
to have resolved the segfault issue. It's strange that I hadn't read
anywhere else that a lack of resources could cause that, but there it is. I
guess that running Debian 8, Apache 2.4.10, php-fpm and MariaDB was just a
bit much for the VPS.
On Fri, Aug 21, 2015 at 6:14 PM, Daryl King allnatives.onl...@gmail.com
wrote:
Thanks Ryan. Strangely, when running ulimit -n it returns 65536 in an ssh
session, but 1024 in Webmin. Which one is correct?
Limits set by the ulimit command (and the setrlimit syscall) are correct for
the process in which they are set, and for its children. A login shell over
ssh and a service like Webmin are started by different parent processes, so
each can quite legitimately report a different limit.
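That per-process inheritance can be shown directly (a sketch, assuming a Linux box with /bin/sh available): a child process simply inherits whatever limit its parent carries at the moment it is spawned.

```python
import resource
import subprocess

# rlimits are per-process attributes inherited from the parent.
# sshd and the init script that starts a service such as Webmin are
# different parents, so the shells they spawn can carry different
# RLIMIT_NOFILE values.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"this process: soft={soft} hard={hard}")

# A child inherits whatever this process has right now.
child = subprocess.run(["sh", "-c", "ulimit -n"],
                       capture_output=True, text=True)
print("child shell reports:", child.stdout.strip())
```

Run under ssh this prints the ssh session's limit; run from the service's startup environment it would print that service's limit, which is exactly the discrepancy seen between ssh and Webmin.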
I am running Apache 2.4.10 with mpm_event on a Debian 8 VPS. When I run
Siege against my setup it runs well, except for a Segmentation Fault at the
very end [child pid exit signal Segmentation fault (11)]. I have run GDB on
a core dump of the segfault and it returned this:
[Using host libthread_db
Hi Daryl,
Typically when I see a core dump when running siege, it is a resource
issue: out of memory, and/or I've reached the ulimit on my machine and need
to set it higher. The default limit is often 1024 (displayed via ulimit -n),
and can be changed via ulimit -n value. This change isn't persistent - and
the new value only applies to the current shell session.
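The same adjustment can be made programmatically, which also shows why it isn't persistent (a sketch; the 4096 value is just an example): the soft limit can be raised only up to the hard limit, and the change dies with the process, exactly like an ulimit -n tweak dies with its shell.

```python
import resource

# Raise this process's soft limit on open files, capped at the hard
# limit (only root may raise the hard limit itself). The change is
# scoped to this process and its future children; nothing persists
# after the process exits.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
target = 4096 if hard == resource.RLIM_INFINITY else min(4096, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))
print("soft limit is now", resource.getrlimit(resource.RLIMIT_NOFILE)[0])
```

For a persistent change, the usual place on Debian is /etc/security/limits.conf, which pam applies at login.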
On Sat, Aug 22, 2015 at 12:52 AM, R T i.r.dshiz...@gmail.com wrote: