[modwsgi] Re: Segmentation fault - premature end of script headers
2.0.8 definitely fixed some segfaults on my end. So far, since September 30, I have not seen any segmentation faults. It seems that upgrading to 2.0.8 and using %{GLOBAL} solved the problem. Graham, thank you for the support! :)

-- Maciej Wisniowski

You received this message because you are subscribed to the Google Groups modwsgi group. To post to this group, send email to modwsgi@googlegroups.com. To unsubscribe from this group, send email to [EMAIL PROTECTED]. For more options, visit this group at http://groups.google.com/group/modwsgi?hl=en
[modwsgi] Re: Segmentation fault - premature end of script headers
Today I was not able to start my application as I got segmentation faults constantly. I've attached gdb and that is the result: (gdb) cont Continuing. Program received signal SIGSEGV, Segmentation fault. [Switching to Thread -1212216416 (LWP 29850)] PyErr_Occurred () at Python/errors.c:80 80 Python/errors.c: No such file or directory. in Python/errors.c (gdb) bt #0 PyErr_Occurred () at Python/errors.c:80 #1 0x002ce167 in _PyObject_GC_Malloc (basicsize=40) at Modules/ gcmodule.c:1326 #2 0x002ce21c in _PyObject_GC_NewVar (tp=0x3083c0, nitems=7) at Modules/gcmodule.c:1352 #3 0x00267c33 in PyTuple_New (size=7) at Objects/tupleobject.c:68 #4 0x0041cdc0 in ?? () #5 0x0007 in ?? () #6 0x001c in ?? () #7 0xb7beab18 in ?? () #8 0x0041cd4e in ?? () #9 0xb7be9af8 in ?? () #10 0xb758b22c in ?? () #11 0xb7be9b80 in ?? () #12 0x0042fed4 in ?? () #13 0x0042fed4 in ?? () #14 0xb7be9acc in ?? () #15 0xb7be9a2c in ?? () #16 0xfbad8001 in ?? () #17 0xb7be9ca0 in ?? () #18 0xb7be9ca0 in ?? () #19 0xb7be9ca0 in ?? () #20 0xb7be9ca0 in ?? () #21 0x0042fed4 in ?? () #22 0x08098608 in apr_bucket_type_eos () #23 0x09ba7920 in ?? () #24 0x002c in ?? () #25 0x in ?? () #26 0x0413 in ?? () #27 0x in ?? 
() (gdb) thread apply all bt Thread 4 (Thread -1211159648 (LWP 29848)): #0 0x00ad57a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2 #1 0x00bb33b1 in ___newselect_nocancel () from /lib/tls/libc.so.6 #2 0x0097826b in apr_sleep (t=29953) at time/unix/time.c:246 #3 0x00a76110 in wsgi_monitor_thread (thd=0x9a93420, data=0x9a92dd0) at mod_wsgi.c:8367 #4 0x0097783c in dummy_worker (opaque=0xfdfe) at threadproc/unix/ thread.c:142 #5 0x00c723cc in start_thread () from /lib/tls/libpthread.so.0 #6 0x00bba96e in clone () from /lib/tls/libc.so.6 Thread 3 (Thread -1211688032 (LWP 29849)): #0 0x00ad57a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2 #1 0x00bb33b1 in ___newselect_nocancel () from /lib/tls/libc.so.6 #2 0x0097826b in apr_sleep (t=100) at time/unix/time.c:246 #3 0x00a75f6a in wsgi_deadlock_thread (thd=0x9a93440, data=0x9a92dd0) at mod_wsgi.c:8279 #4 0x0097783c in dummy_worker (opaque=0xfdfe) at threadproc/unix/ thread.c:142 #5 0x00c723cc in start_thread () from /lib/tls/libpthread.so.0 #6 0x00bba96e in clone () from /lib/tls/libc.so.6 Thread 2 (Thread -1212216416 (LWP 29850)): #0 PyErr_Occurred () at Python/errors.c:80 #1 0x002ce167 in _PyObject_GC_Malloc (basicsize=40) at Modules/ gcmodule.c:1326 #2 0x002ce21c in _PyObject_GC_NewVar (tp=0x3083c0, nitems=7) at Modules/gcmodule.c:1352 #3 0x00267c33 in PyTuple_New (size=7) at Objects/tupleobject.c:68 #4 0x0041cdc0 in ?? () #5 0x0007 in ?? () #6 0x001c in ?? () #7 0xb7beab18 in ?? () #8 0x0041cd4e in ?? () #9 0xb7be9af8 in ?? () #10 0xb758b22c in ?? () #11 0xb7be9b80 in ?? () #12 0x0042fed4 in ?? () #13 0x0042fed4 in ?? () #14 0xb7be9acc in ?? () #15 0xb7be9a2c in ?? () #16 0xfbad8001 in ?? () #17 0xb7be9ca0 in ?? () #18 0xb7be9ca0 in ?? () #19 0xb7be9ca0 in ?? () #20 0xb7be9ca0 in ?? () #21 0x0042fed4 in ?? () #22 0x08098608 in apr_bucket_type_eos () #23 0x09ba7920 in ?? () #24 0x002c in ?? () #25 0x in ?? () #26 0x0413 in ?? () #27 0x in ?? 
() ---Type <return> to continue, or q <return> to quit--- Thread 1 (Thread -1208453440 (LWP 29847)): #0 0x00ad57a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2 #1 0x00c787c7 in do_sigwait () from /lib/tls/libpthread.so.0 #2 0x00c7888f in sigwait () from /lib/tls/libpthread.so.0 #3 0x009775ea in apr_signal_thread (signal_handler=0xa75e30 <wsgi_check_signal>) at threadproc/unix/signals.c:383 #4 0x00a76b61 in wsgi_start_process (p=0x9a0d0a8, daemon=0x9a92dd0) at mod_wsgi.c:8483 #5 0x00a7707a in wsgi_manage_process (reason=0, data=0x9a92dd0, status=11) at mod_wsgi.c:7708 #6 0x009703c8 in apr_proc_other_child_alert (proc=0xbfea8f80, reason=0, status=11) at misc/unix/otherchild.c:115 #7 0x080817ad in ap_mpm_run (_pconf=0x9a0d0a8, plog=0x9a3b160, s=0x9a0ef48) at worker.c:1611 #8 0x08061d9c in main (argc=3, argv=0xbfea90e4) at main.c:730 (gdb) cont Continuing. Program received signal SIGSEGV, Segmentation fault. PyErr_Occurred () at Python/errors.c:80 80 in Python/errors.c (gdb) cont Continuing. Program terminated with signal SIGSEGV, Segmentation fault. The program no longer exists. (gdb) quit

After switching to WSGIApplicationGroup %{GLOBAL} my application started, but I have a few more applications on this Apache instance, so I can't use this kind of setup. Is there anything interesting in the above gdb log? Any other commands that I could use next time?

-- Maciej Wisniowski
[modwsgi] Re: Segmentation fault - premature end of script headers
Not particularly useful, unfortunately. The next thing would be to determine whether the crash happens as a result of importing the WSGI script file itself, or due to the call of the WSGI application. Thus, at the head of the WSGI script file add:

import sys
print >> sys.stderr, 'START OF WSGI SCRIPT FILE'

and at the end of the WSGI script file add:

print >> sys.stderr, 'END OF WSGI SCRIPT FILE'

If it isn't crashing at load of the WSGI script file, both should appear in the Apache error log. If it does crash, add more debug output like that to ascertain which module being imported causes it to crash. If that is a big module, then you need to recursively work out which modules that module imports, do the imports at the start of the WSGI script file, and try to narrow down which module causes the crash. I can't remember, but will test later, whether one can set an environment variable to force Python to log all imports. That would help narrow it down quicker.

The other option, since it works with %{GLOBAL}, is, once everything is imported, to iterate over the modules in sys.modules and find all that have __file__ referencing a .so file, and print that out. That will tell you which C extension modules are being used. Standard ones should be okay, but third party ones would be worth a closer look.

More later.

Graham
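The sys.modules scan Graham describes can be sketched as follows. This is a hypothetical helper (the function name is mine, not from mod_wsgi); in a daemon process you would write the result to sys.stderr so it ends up in the Apache error log.

```python
import sys

def loaded_c_extensions():
    """Return (module name, path) pairs for every module in sys.modules
    whose __file__ points at a shared object, i.e. a C extension module."""
    found = []
    for name, module in sorted(sys.modules.items()):
        path = getattr(module, '__file__', None) or ''
        if path.endswith('.so'):
            found.append((name, path))
    return found

# Write the list to stderr; under mod_wsgi this lands in the error log.
for name, path in loaded_c_extensions():
    sys.stderr.write('%s -> %s\n' % (name, path))
```

Running this at the end of the WSGI script file, once everything is imported, shows exactly which third party C extension modules (psycopg2, etc.) are actually loaded.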
[modwsgi] Re: Segmentation fault - premature end of script headers
2008/9/30 Pigletto [EMAIL PROTECTED]:

> After switching to WSGIApplicationGroup %{GLOBAL} my application
> started, but I have a few more applications on this apache instance so
> I can't use this kind of setup.

Can you explain to me how the WebFaction process/memory limits work? If you don't have issues with the number of processes and only with overall memory usage, then create a separate daemon process group for each application, with each being forced to run in the main interpreter of its own process. Thus:

<VirtualHost *:2867>
ServerName my-domain.xyz

WSGIDaemonProcess rek-prod-app-1 user=xyz group=xyz processes=2 threads=1 \
    maximum-requests=500 inactivity-timeout=7200 stack-size=524288 \
    display-name=%{GROUP}

WSGIScriptAlias / /home2/(...)/rek_project-1.wsgi

<Directory /home2/(...)/rek_project-1/>
WSGIProcessGroup rek-prod-app-1
WSGIApplicationGroup %{GLOBAL}
Order deny,allow
Allow from all
</Directory>

WSGIDaemonProcess rek-prod-app-2 user=xyz group=xyz processes=2 threads=1 \
    maximum-requests=500 inactivity-timeout=7200 stack-size=524288 \
    display-name=%{GROUP}

WSGIScriptAlias /suburl /home2/(...)/rek_project-2.wsgi

<Directory /home2/(...)/rek_project-2/>
WSGIProcessGroup rek-prod-app-2
WSGIApplicationGroup %{GLOBAL}
Order deny,allow
Allow from all
</Directory>
</VirtualHost>

This would end up with similar memory usage, the difference being that the application instances are in separate processes rather than separate sub interpreters of the same process.

Graham
[modwsgi] Re: Segmentation fault - premature end of script headers
Now, again, my application is working with the same setup as before (without GLOBAL). I don't know why this started without a segfault now. Nothing has changed. I have to mention that the reason I was not able to start my application this morning was that my memory was over the limit (before this I was disconnected while gdb'ing my app on another Apache instance and the gdb process was hung using too much memory), so WebFaction killed my processes. After my processes were killed I had to start everything, and I was not able to make one of my apps run (as you have seen already). So, the important thing is that there were no changes in the application code and no changes in the Apache configuration. Currently it works again and I can't do more debugging - it doesn't want to segfault. I've added some print statements as you've suggested, but I think that the wsgi script was imported properly when the segmentation fault occurred, because LoggingMiddleware had written empty oheaders.. and ocontent.. files.

> Can you explain to me how the WebFaction process/memory limits work?

There are no limits on the number of processes, only on memory usage.

> If you don't have issues with the number of processes and only with
> overall memory usage, then create a separate daemon process group for
> each application, with each being forced to run in the main
> interpreter of its own process. Thus:
>
> <Directory /home2/(...)/rek_project-2/>
> WSGIProcessGroup rek-prod-app-2
> WSGIApplicationGroup %{GLOBAL}
> Order deny,allow
> Allow from all
> </Directory>
> </VirtualHost>
>
> This would end up with similar memory usage, the difference being that
> the application instances are in separate processes rather than
> separate sub interpreters of the same process.

OK, I'll try this. The strange thing is that I had no segmentation faults for two days (since my previous post), and this morning I've seen them one after another.

I'm thinking about things like: the maximum requests per child setting in apache; something with threading in apache; memcached - it was not started while I was trying to start my application, but when I switched to %{GLOBAL}, memcached was still down and it worked... I had segmentation faults before (with locmem caching), so it is not an issue with memcached. AFAIR I saw some segfaults before using django-compress. Maybe this is something nasty in psycopg2. I'm thinking about adding print statements to all my middlewares and functions. This thing is really hard to debug, especially on a server that is used by real users.

Thank you very much for your help so far.

-- Maciej Wisniowski
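The per-middleware print statements being considered here can be packaged as a small WSGI wrapper. This is a hypothetical sketch (names are mine, not from the thread): it logs entry and exit of each request to stderr, so the Apache error log shows how far a request got before a crash took the process down.

```python
import sys

class TraceMiddleware(object):
    """Wrap a WSGI application and log request entry/exit to stderr.

    If the process segfaults mid-request, the ENTER line appears in the
    error log without a matching LEAVE line, which pinpoints where the
    crash happened."""

    def __init__(self, application, name):
        self.application = application
        self.name = name

    def __call__(self, environ, start_response):
        sys.stderr.write('ENTER %s %s\n' % (self.name,
                                            environ.get('PATH_INFO', '')))
        try:
            return self.application(environ, start_response)
        finally:
            sys.stderr.write('LEAVE %s\n' % self.name)
```

In the .wsgi file you would wrap the Django handler, e.g. application = TraceMiddleware(django_application, 'django') (names hypothetical). Several wrappers with different names can be nested to bracket individual middlewares.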
[modwsgi] Re: Segmentation fault - premature end of script headers
What do you get if you run:

ulimit -a

Maybe they have some sort of hard memory limit in place and you are hitting that.

Graham
[modwsgi] Re: Segmentation fault - premature end of script headers
On 30 Sep, 14:41, Graham Dumpleton [EMAIL PROTECTED] wrote:

> What do you get if you run:
>
> ulimit -a
>
> Maybe they have some sort of hard memory limit in place and you are
> hitting that.

Output of ulimit -a is:
---
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
pending signals                 (-i) 1024
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) unlimited
open files                      (-n) 4096
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 200
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
---

AFAIK there is no hard limit at WebFaction. I have a 160 MB memory limit, but my processes were killed when memory usage was above 220 MB (ups..). Additionally, after every such incident I'm notified by WebFaction about the issue. So the other segmentation faults I've seen before are not connected with processes being killed due to memory problems.

One more question, as I'm a bit confused about the WSGIApplicationGroup directive. So far I was not using it at all. Does this mean that %{GLOBAL} was used implicitly - by default? I only had WSGIProcessGroup directives in use.

I've added a lot of print >> sys.stderr statements to my application and I will try to trigger a segmentation fault somehow...

-- Maciej Wisniowski
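Regarding Graham's earlier remark about an environment variable that forces Python to log all imports: CPython honours PYTHONVERBOSE, the environment equivalent of running python -v, which traces every import on stderr (and so, under mod_wsgi, into the Apache error log). A quick check from a shell (using python3 here; on a 2008-era host this would be plain python):

```shell
# PYTHONVERBOSE=1 behaves like `python -v`: each import is traced on
# stderr, so the last "import ..." line before a crash names the culprit.
PYTHONVERBOSE=1 python3 -c 'import json' 2>&1 | grep '^import ' | head -n 5
```

For a daemon process you would set the variable in the environment Apache is started with, rather than on a command line.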
[modwsgi] Re: Segmentation fault - premature end of script headers
I've managed to get a segmentation fault (I was just clicking around my application, I forced a few reloads of mod_wsgi by changing the wsgi script, etc.), and I was able to reproduce it a few times. Again, I connected to it with gdb, but this time I issued the command 'share' before 'bt'. Thanks to this I was able to see much more interesting things. The WSGI script is executed, processing reaches my function (a view in Django), and the exception is raised inside the view. Below is the long output of gdb. It seems to me that it is a psycopg2 issue...?

In my code it is like:

class OrManager(models.Manager):
    def latest(self, count=5):
        latest = cache.get('latest-offers')
        if latest is None:
            latest = self.filter(is_active=True).order_by('-date_added')[:count]
            print >> sys.stderr, latest # THIS LINE FAILS - real execution of the SQL

I wonder whether this issue might be solved by using %{GLOBAL}?

GDB session:

(...) (gdb) cont Continuing. Program received signal SIGSEGV, Segmentation fault. [Switching to Thread -1212707936 (LWP 9463)] PyErr_Occurred () at Python/errors.c:80 80 Python/errors.c: No such file or directory. in Python/errors.c (gdb) bt #0 PyErr_Occurred () at Python/errors.c:80 #1 0x00d65167 in _PyObject_GC_Malloc (basicsize=40) at Modules/gcmodule.c:1326 #2 0x00d6521c in _PyObject_GC_NewVar (tp=0xd9f3c0, nitems=7) at Modules/gcmodule.c:1352 #3 0x00cfec33 in PyTuple_New (size=7) at Objects/tupleobject.c:68 #4 0x00400dc0 in ?? () #5 0x0007 in ?? () #6 0x0009 in ?? () #7 0xb7b74aa8 in ?? () #8 0x00400d4e in ?? () #9 0x00d95980 in PyExc_IndexError () from /usr/lib/libpython2.5.so.1.0 #10 0x in ?? () (gdb) share
Symbols already loaded for /lib/tls/libm.so.6
Symbols already loaded for /home2/(...)/apache2.2//lib/libaprutil-1.so.0
Symbols already loaded for /usr/lib/libsqlite3.so.0
Symbols already loaded for /usr/lib/libexpat.so.0
Symbols already loaded for /home2/(...)/apache2.2//lib/libapr-1.so.0
Symbols already loaded for /lib/libuuid.so.1
Symbols already loaded for /lib/tls/librt.so.1
Symbols already loaded for /lib/libcrypt.so.1
Symbols already loaded for /lib/tls/libpthread.so.0
Symbols already loaded for /lib/libdl.so.2
Symbols already loaded for /lib/tls/libc.so.6
Symbols already loaded for /lib/ld-linux.so.2
Symbols already loaded for /lib/libnss_files.so.2
Symbols already loaded for /home2/(...)/apache2.2/modules/mod_wsgi.so
Symbols already loaded for /usr/lib/libpython2.5.so.1.0
Symbols already loaded for /lib/libutil.so.1
Symbols already loaded for /home2/(...)/apache2.2/modules/mod_log_config.so
Symbols already loaded for /home2/(...)/apache2.2/modules/mod_auth_basic.so
Symbols already loaded for /home2/(...)/apache2.2/modules/mod_authz_user.so
Symbols already loaded for /home2/(...)/apache2.2/modules/mod_authz_host.so
Symbols already loaded for /home2/(...)/apache2.2/modules/mod_env.so
Symbols already loaded for /home2/(...)/modules/mod_alias.so
Symbols already loaded for /home2/(...)/modules/mod_auth_tkt.so
Symbols already loaded for /home2/(...)/modules/mod_rewrite.so
Reading symbols from /usr/local/lib/python2.5/lib-dynload/time.so...done.
Loaded symbols for /usr/local/lib/python2.5/lib-dynload/time.so
Reading symbols from /usr/local/lib/python2.5/lib-dynload/collections.so...done.
Loaded symbols for /usr/local/lib/python2.5/lib-dynload/collections.so
Reading symbols from /usr/local/lib/python2.5/lib-dynload/cStringIO.so...done.
Loaded symbols for /usr/local/lib/python2.5/lib-dynload/cStringIO.so
Reading symbols from /usr/local/lib/python2.5/lib-dynload/strop.so...done.
Loaded symbols for /usr/local/lib/python2.5/lib-dynload/strop.so
Reading symbols from /usr/local/lib/python2.5/lib-dynload/cPickle.so...done.
Loaded symbols for /usr/local/lib/python2.5/lib-dynload/cPickle.so
Reading symbols from /usr/local/lib/python2.5/lib-dynload/_socket.so...done.
Loaded symbols for /usr/local/lib/python2.5/lib-dynload/_socket.so
Reading symbols from /usr/local/lib/python2.5/lib-dynload/_ssl.so...done.
Loaded symbols for /usr/local/lib/python2.5/lib-dynload/_ssl.so
Reading symbols from /lib/libssl.so.4...done.
Loaded symbols for /lib/libssl.so.4
Reading symbols from /lib/libcrypto.so.4...done.
Loaded symbols for /lib/libcrypto.so.4
Reading symbols from /usr/lib/libgssapi_krb5.so.2...done.
Loaded symbols for /usr/lib/libgssapi_krb5.so.2
Reading symbols from /usr/lib/libkrb5.so.3...done.
Loaded symbols for /usr/lib/libkrb5.so.3
Reading symbols from /lib/libcom_err.so.2...done.
Loaded symbols for /lib/libcom_err.so.2
Reading symbols from /usr/lib/libk5crypto.so.3...done.
Loaded symbols for /usr/lib/libk5crypto.so.3
Reading symbols from /lib/libresolv.so.2...done.
Loaded symbols for /lib/libresolv.so.2
Reading symbols from /usr/lib/libz.so.1...done.
Loaded symbols for /usr/lib/libz.so.1
Reading symbols from /usr/local/lib/python2.5/lib-dynload/operator.so...done.
Loaded symbols for
[modwsgi] Re: Segmentation fault - premature end of script headers
Currently I use psycopg2 from the svn - a version dated January 2008. I've just looked at initd.org's svn and I see there is psycopg2-2.0.8, and in the change log from March I found:

2008-03-07 James Henstridge [EMAIL PROTECTED]
* psycopg/pqpath.c (_pq_fetch_tuples): Don't call Python APIs without holding the GIL.

Maybe that is the problem? I'll give the newest psycopg2 a try.

-- Maciej Wisniowski
[modwsgi] Re: Segmentation fault - premature end of script headers
On Tue, Sep 30, 2008 at 4:22 PM, Pigletto [EMAIL PROTECTED] wrote:

> Maybe that is the problem? I'll give the newest psycopg2 a try.

2.0.8 definitely fixed some segfaults on my end.

Brett
[modwsgi] Re: Segmentation fault - premature end of script headers
> All I can suggest at this point are:
>
> 1. Disable the background email thread and see if the problem still
> occurs.
>
> 2. Force the application to run in the main interpreter by setting:
>
> WSGIApplicationGroup %{GLOBAL}
>
> This will eliminate the possibility that the problems are caused by a
> third party C extension module which isn't implemented correctly so as
> to work in secondary sub interpreters.
>
> 3. Try to attach gdb, if available, to the running daemon process and
> see if you can catch a traceback for what causes the process to crash.
> See:
>
> http://code.google.com/p/modwsgi/wiki/DebuggingTechniques#Debugging_C...

Thanks for these tips so far. You're writing about external threads... so I started wondering about django-compress. This app uses external applications, e.g. the YUI compressor or csstidy, to compress all js and css files into one big resource file (one for js and one for css). It does the merging during startup of the Django processes but, AFAIK, only if there are any changes in the js or css files, so this shouldn't have been an issue when the recent segmentation fault happened in my Apache, as there were no changes, but maybe it is somehow connected. I don't know the internals of django-compress too well.

-- Maciej Wisniowski
[modwsgi] Re: Segmentation fault - premature end of script headers
Are you using the Python installation and Apache modules that WebFaction supplies, or have you built any yourself from source code? Have you installed any third party Python modules yourself? Are you running PHP in the same Apache installation? Does your Django application create any background threads to perform processing? Am asking just to collect a bit more background information while I think about what to suggest.

Graham

2008/9/28 Pigletto [EMAIL PROTECTED]:

I'm using Django 1.0, mod_wsgi 2.3 in daemon mode. My application is hosted at WebFaction. Sometimes (it is not deterministic) I get errors like:

[Sat Sep 27 01:51:06 2008] [error] [client 127.0.0.1] Premature end of script headers: rek.wsgi
[Sat Sep 27 01:51:06 2008] [notice] child pid 15592 exit signal Segmentation fault (11)

My apache is not running mod_python. It uses mpm-worker threads. The Apache conf is like:

(...)
WSGIDaemonProcess rek-prod user=xyz group=xyz processes=2 threads=1 \
    maximum-requests=500 inactivity-timeout=7200 stack-size=524288 \
    display-name=%{GROUP}
(...)

<VirtualHost *:2867>
ServerName my-domain.xyz
WSGIProcessGroup rek-prod
WSGIScriptAlias / /home2/(...)/rek.wsgi
<Directory /home2/(...)/rek_project/>
Order deny,allow
Allow from all
</Directory>
</VirtualHost>

The user apache is running as is, say, 'xyz', the same as set in the WSGIDaemonProcess directive. I've set additional logging for wsgi (as described at http://code.google.com/p/modwsgi/wiki/DebuggingTechniques). What I see is that output headers and output content are both empty (0 bytes). Interesting is that the logged errors appear when the daemon processes are reaching the maximum-requests limit.
Saved files are:

size | date | filename
    0 Sep 27 00:15 1222492508.42-26504-498.ocontent
  136 Sep 27 00:15 1222492508.42-26504-498.oheaders
    0 Sep 27 00:20 1222492841.42-26505-500.icontent
 1813 Sep 27 00:20 1222492841.42-26505-500.iheaders
    0 Sep 27 00:20 1222492841.42-26505-500.ocontent
  136 Sep 27 00:20 1222492841.42-26505-500.oheaders
    0 Sep 27 00:30 1222493410.86-26504-499.icontent
 1813 Sep 27 00:30 1222493410.86-26504-499.iheaders
    0 Sep 27 00:30 1222493410.86-26504-499.ocontent
  136 Sep 27 00:30 1222493410.86-26504-499.oheaders
    0 Sep 27 00:35 1222493743.3-9633-1.icontent
 1813 Sep 27 00:35 1222493743.3-9633-1.iheaders
    0 Sep 27 00:35 1222493743.3-9633-1.ocontent
    0 Sep 27 00:35 1222493743.3-9633-1.oheaders
    0 Sep 27 00:45 1222494313.46-26504-500.icontent
 1813 Sep 27 00:45 1222494313.46-26504-500.iheaders
    0 Sep 27 00:45 1222494313.46-26504-500.ocontent
  136 Sep 27 00:45 1222494313.46-26504-500.oheaders
    0 Sep 27 00:50 1222494648.64-10774-1.icontent
 1813 Sep 27 00:50 1222494648.64-10774-1.iheaders
    0 Sep 27 00:50 1222494648.64-10774-1.ocontent
    0 Sep 27 00:50 1222494648.64-10774-1.oheaders
    0 Sep 27 01:00 1222495215.96-11502-1.icontent
 1813 Sep 27 01:00 1222495215.96-11502-1.iheaders
    0 Sep 27 01:00 1222495215.96-11502-1.ocontent
    0 Sep 27 01:00 1222495215.96-11502-1.oheaders
    0 Sep 27 01:05 1222495558.97-11941-1.icontent
 1813 Sep 27 01:05 1222495558.97-11941-1.iheaders
    0 Sep 27 01:05 1222495558.97-11941-1.ocontent
    0 Sep 27 01:05 1222495558.97-11941-1.oheaders
    0 Sep 27 01:15 1222496119.36-12730-1.icontent
 1813 Sep 27 01:15 1222496119.36-12730-1.iheaders
    0 Sep 27 01:15 1222496119.36-12730-1.ocontent
    0 Sep 27 01:15 1222496119.36-12730-1.oheaders
    0 Sep 27 01:21 1222496460.87-13182-1.icontent
 1813 Sep 27 01:21 1222496460.87-13182-1.iheaders
    0 Sep 27 01:21 1222496460.87-13182-1.ocontent
    0 Sep 27 01:21 1222496460.87-13182-1.oheaders
    0 Sep 27 01:30 1222497021.53-13961-1.icontent
 1813 Sep 27 01:30 1222497021.53-13961-1.iheaders
    0 Sep 27 01:30 1222497021.53-13961-1.ocontent
    0 Sep 27 01:30 1222497021.53-13961-1.oheaders
    0 Sep 27 01:36 1222497362.97-14450-1.icontent
 1813 Sep 27 01:36 1222497362.97-14450-1.iheaders
    0 Sep 27 01:36 1222497362.97-14450-1.ocontent
    0 Sep 27 01:36 1222497362.97-14450-1.oheaders
    0 Sep 27 01:45 1222497924.66-15209-1.icontent
 1813 Sep 27 01:45 1222497924.66-15209-1.iheaders
    0 Sep 27 01:45 1222497924.66-15209-1.ocontent
    0 Sep 27 01:45 1222497924.66-15209-1.oheaders
    0 Sep 27 01:51 1222498265.33-15592-1.icontent
 1813 Sep 27 01:51 1222498265.33-15592-1.iheaders
    0 Sep 27 01:51 1222498265.33-15592-1.ocontent
    0 Sep 27 01:51 1222498265.33-15592-1.oheaders
    0 Sep 27 01:56 1222498563.45-16320-1.icontent
 2300 Sep 27 01:56 1222498563.45-16320-1.iheaders
24720 Sep 27 01:56 1222498563.45-16320-1.ocontent
  136 Sep 27 01:56 1222498563.45-16320-1.oheaders
    0 Sep 27 01:56 1222498565.66-16794-1.icontent
 2275 Sep 27 01:56 1222498565.66-16794-1.iheaders
 3641 Sep 27 01:56 1222498565.66-16794-1.ocontent
  127 Sep 27 01:56 1222498565.66-16794-1.oheaders
    0 Sep 27 02:00 1222498840.89-16320-2.icontent
 1813 Sep 27 02:00
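The oheaders/ocontent capture described above can be approximated with a middleware along these lines. This is a hypothetical sketch in the spirit of the LoggingMiddleware on the mod_wsgi DebuggingTechniques wiki, not the actual wiki code; all names and the file naming scheme are mine.

```python
import os
import time

class CaptureMiddleware(object):
    """Save each request's response headers and body to per-request
    files, so a crash leaves evidence of how far the response got
    (e.g. an empty .ocontent file means no body was ever produced)."""

    def __init__(self, application, savedir):
        self.application = application
        self.savedir = savedir

    def __call__(self, environ, start_response):
        # One file name prefix per request: timestamp plus process id.
        prefix = os.path.join(self.savedir,
                              '%.2f-%d' % (time.time(), os.getpid()))

        def capturing_start_response(status, headers, exc_info=None):
            # Record the status line and headers as soon as they exist.
            with open(prefix + '.oheaders', 'w') as f:
                f.write(status + '\n')
                for name, value in headers:
                    f.write('%s: %s\n' % (name, value))
            return start_response(status, headers, exc_info)

        def generate():
            # Stream the body through, flushing each block to disk so a
            # later segfault cannot lose what was already produced.
            with open(prefix + '.ocontent', 'wb') as f:
                for block in self.application(environ,
                                              capturing_start_response):
                    f.write(block)
                    f.flush()
                    yield block

        return generate()
```

Wrapping the application in the .wsgi file with this, pointed at a writable directory, reproduces the pattern of saved files shown in the listing above.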