[uWSGI] Help with the --reaper option for experienced uwsgi users?

2012-10-22 Thread Andrew Fischer
I've been running uwsgi for about a year and I've run into a situation
that I can't seem to sort out. I'm not positive there is a good
solution, but maybe someone with far more knowledge than I could shed
some light. I'll give some background, please bear with me.

I run uwsgi behind nginx to serve a simple Mercurial hgweb instance. My
uwsgi configuration is pretty basic:

-M -p 4 -d /var/log/$daemon_name.log --pidfile /run/$daemon_name.pid
--pythonpath $hgwebpath --module $hgwebmodule

However, I recently added buildbot to our setup, which is triggered by
a commit hook in hgweb. It's all built-in stuff; I didn't write any of
it.

Unfortunately this hook uses fork, and so it leaves defunct (zombie)
uwsgi processes behind whenever it fires. It appears to be a known
issue with buildbot.

I decided uwsgi's --reaper option looked like it might help me out, and
it did the trick -- very handy, since I didn't want to wade into the
buildbot codebase. As the manual for --reaper says, you should fix your
process-spawning usage if you can... and I don't think I can.
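
(For the record, the textbook fix would be for the hook itself to reap or
detach its children -- something like the hypothetical double-fork sketch
below; it is NOT the actual buildbot hook, just the general pattern -- but
I'd rather not patch buildbot.)

import os

def run_detached(job):
    # Hypothetical sketch: run job() in a detached child so the parent
    # (the uwsgi worker) never ends up with an unreaped zombie.
    pid = os.fork()
    if pid > 0:
        os.waitpid(pid, 0)   # reap the first child immediately
        return
    # first child: fork again and exit right away, so the grandchild is
    # re-parented to init and reaped there instead of by the uwsgi worker
    if os.fork() > 0:
        os._exit(0)
    job()
    os._exit(0)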

However, after enabling the reaper I noticed that very large commit pushes
to hgweb over HTTP would cause the worker process to be killed. It happened
any time a push of 20 MB or larger went up to the server. (This is extremely
rare; we just happen to have one project that carries that much baggage.)

After a lot of reading and testing, I found that by removing the --reaper
option from uwsgi, the commits were no longer killed. I could push up as
large a bundle as I liked (100 MB+). However, without the reaper my buildbot
is back to leaving zombies all over the place.

Do any of you know more about the --reaper option, and whether there is any
additional control over how it decides what a zombie process is? Or is there
a different uwsgi option I should use? I fully realize uwsgi is not the
problem here; I blame uwsgi and buildbot. But since uwsgi is so flexible, I
wondered if there might be a way to have my cake and eat it too, so to speak.

Big thanks for any feedback.
-Andrew



-- 
Andrew Fischer


Re: [uWSGI] Help with the --reaper option for experienced uwsgi users?

2012-10-22 Thread Andrew Fischer
Sorry -- I meant to say at the end that I realize uwsgi is not the problem
here; I blame *hgweb* and buildbot.

-Andrew





-- 
Andrew Fischer
LT Engineering Software
http://ltengsoft.com


Re: [uWSGI] Help with the --reaper option for experienced uwsgi users?

2012-10-22 Thread Łukasz Mierzwa
2012/10/22 Andrew Fischer wizzr...@gmail.com:

 However, I recently added buildbot to our setup, which is triggered by
 a commit hook in hgweb. It's all built in stuff, I didn't write any of
 it.

Are you using this hook:

http://buildbot.net/buildbot/docs/0.8.7/manual/cfg-changesources.html#mercurial-hook

? If not, please share its code.

-- 
Łukasz Mierzwa


Re: [uWSGI] Help with the --reaper option for experienced uwsgi users?

2012-10-22 Thread Roberto De Ioris

On 22 Oct 2012, at 16:14, Andrew Fischer wizzr...@gmail.com wrote:

 [...]

 However, after enabling the reaper I noticed that very large commit pushes
 to hgweb over HTTP would cause the worker process to be killed. It happened
 any time a push of 20 MB or larger went up to the server.


Do you get a C traceback or any specific log lines when the worker dies?


--
Roberto De Ioris
http://unbit.it
JID: robe...@jabber.unbit.it



[uWSGI] why upstream timeout but upstream itself not busy

2012-10-22 Thread Samuel
Hi,

I recently found some "upstream timed out" entries in the nginx error log,
but I checked my upstream servers and they are not busy at all. The machine
load is only 2-3 (8 cores), and uwsgitop reports that a lot of the processes
are idle.

upstream: Django, 8 cores, 40 processes, listen backlog of 4000




-- 
吴焱红 (Samuel)

Blog: blog.shanbay.com
Weibo: Shanbay http://www.weibo.com/shanbay
Renren: "Memorize Words Together" public page http://page.renren.com/699128841?ref=lnkprofile


Re: [uWSGI] why upstream timeout but upstream itself not busy

2012-10-22 Thread Roberto De Ioris

On 22 Oct 2012, at 17:23, Samuel samuel.yh...@gmail.com wrote:

 I recently found some "upstream timed out" entries in the nginx error log,
 but I checked my upstream servers and they are not busy at all. [...]

nginx has an internal timeout on the response generation (60 seconds by
default).

Maybe some responses were not generated in time; you may want to check for
slow requests in the uwsgi logs, to see if any of them took longer than the
nginx timeout.
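
Which directive controls that timeout depends on how nginx talks to uwsgi. A
rough sketch (the address is only an example; adjust it to your own config):

location / {
    include uwsgi_params;
    uwsgi_pass 127.0.0.1:8081;
    # 60s by default when proxying with uwsgi_pass
    uwsgi_read_timeout 120s;
    # for proxy_pass setups the equivalent directive is proxy_read_timeout
}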

--
Roberto De Ioris
http://unbit.it
JID: robe...@jabber.unbit.it



Re: [uWSGI] why upstream timeout but upstream itself not busy

2012-10-22 Thread Samuel
On Mon, Oct 22, 2012 at 11:37 PM, Roberto De Ioris robe...@unbit.it wrote:



 nginx has an internal timeout on the response generation (60 seconds by
 default). [...]


I set nginx proxy_read_timeout to 120, and I also set harakiri = 20 in
uwsgi. If a request takes longer than 20 seconds, shouldn't the request be
killed and the process recycled, so that nginx never hits its read timeout
on those connections?

-- 
吴焱红 (Samuel)

Blog: blog.shanbay.com


Re: [uWSGI] *really* general-purpose container support (ie. no --socket requirement/need)

2012-10-22 Thread Roberto De Ioris

On 22 Oct 2012, at 12:20, C Anthony Risinger anth...@xtfx.me wrote:

 On Sun, Oct 21, 2012 at 6:05 PM, C Anthony Risinger anth...@xtfx.me wrote:
 
 [...]
 
 the only other issue thus far appears to be within the tracebacker
 itself... i am consistently (though at more-or-less random points in the
 code) hitting a SIGSEGV when it's enabled, within 30 seconds of it being
 loaded.
 
 [...]
 
 ... will keep investigating.
 
 i found the problem, and it's definitely with the tracebacker.  i don't
 understand exactly why it's happening, but the tuple()s created during the
 dict -> list conversion (shown as python code here, but really C of course):
 
 plugins/python/tracebacker.c:84-96
 sys._current_frames().items()
 
 ...are not getting DECREF'ed properly.  these tuples contain a
 thread_id and a PyFrame object... later on, the GC tries looping through
 this tuple **expecting a length of 2, per ob_size** but OOPS! the
 frame has already been deallocated.
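
 (for orientation, the pure-python equivalent of what that C code walks is
 roughly the sketch below -- just an illustration, not the real code, which
 does the same thing through the C API with manual refcounting:)

 import sys, traceback

 def dump_all_threads():
     # sys._current_frames() -> {thread_id: frame}; .items() yields the
     # (thread_id, frame) tuples described above
     for thread_id, frame in sys._current_frames().items():
         print("thread %s" % thread_id)
         print("".join(traceback.format_stack(frame)))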
 
 i tried a couple of things to fix it (decref'ing iterators, next items, and
 avoiding a list altogether...) but i kept triggering a double-free(!?)
 (current_frames[_items]...)
 
 the docs say PyObject_GetIter() and PyIter_Next() return new refs that
 must be released -- this is not happening consistently in tracebacker
 -- eg. `threads_list_iter` and `stacktrace_iter` are released but
 `frames_iter` is not.
 
 i'm clearly missing something from lack of experience; Roberto, you
 will probably know exactly what the issue is... i believe it lies with the
 handling of `_current_frame`.
 
 hopefully my probing will be of some use!
 
 thanks,
 
 

I am trying to address that -- can you report the uWSGI config you are using?

Thanks

--
Roberto De Ioris
http://unbit.it
JID: robe...@jabber.unbit.it



Re: [uWSGI] why upstream timeout but upstream itself not busy

2012-10-22 Thread Roberto De Ioris

On 22 Oct 2012, at 17:50, Samuel samuel.yh...@gmail.com wrote:

 
 [...]

 I set nginx proxy_read_timeout to 120, and I also set harakiri = 20 in
 uwsgi. If a request takes longer than 20 seconds, shouldn't the request be
 killed and the process recycled, so that nginx never hits its read timeout
 on those connections?

Are you using some kind of async/non-blocking mode (like the gevent plugin)?

In such a case, harakiri will be triggered only when ALL of your greenlets
are blocked.

Per-core soft-harakiri will be implemented in 1.4 (threads and greenlets
cannot be destroyed without impacting the whole process, so we can only
report long-running requests as a log line).
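
If you are on gevent, a minimal ini sketch of the relevant options (purely
illustrative; the values are just examples, adjust them to your app):

[uwsgi]
# async mode via the gevent plugin: 100 greenlets per worker
gevent = 100
# per the above, this only fires when ALL greenlets are blocked
harakiri = 20
harakiri-verbose = true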

--
Roberto De Ioris
http://unbit.it
JID: robe...@jabber.unbit.it



Re: [uWSGI] why upstream timeout but upstream itself not busy

2012-10-22 Thread Samuel
No, I don't think I'm using async mode, which seems quite difficult to set
up judging from the docs.

Here is my Django ini:

stats = /tmp/uwsgi_statsock
workers = 40
max-requests = 10
listen = 4000
socket = :8081
chdir = /home/django/envs/product/src/
home = ../
pythonpath = ./
module = django_wsgi
env = DJANGO_SETTINGS_MODULE=settings
module = django.core.handlers.wsgi:WSGIHandler()
daemonize = ./log/uwsgi.log
logdate = true
logslow = true
logbig = true
log-5xx = true
disable-logging = true
master = true
auto-procname = true
harakiri = 20
harakiri-verbose = true
single-interpreter = true
pidfile = ./uwsgi.pid
touch-reload = ./uwsgi.pid






-- 
吴焱红 (Samuel)

Blog: blog.shanbay.com


[uWSGI] How to handle unknown requests?

2012-10-22 Thread Jeff Van Voorst

Greetings,

I am using uWSGI to serve one Flask app via an apache2 reverse proxy 
from port 443 to a uWSGI http socket.


Is there a way to have uWSGI quickly filter/ignore non-existent paths?

Thanks,

Jeff Van Voorst


Re: [uWSGI] How to handle unknown requests?

2012-10-22 Thread Samuel
If only a few paths actually need to be proxied to Flask, you could
configure nginx to proxy only those paths, for example:

location /path/ {
    include uwsgi_params;
    uwsgi_pass ...;   # point this at your uwsgi socket
}

But if you have a lot of such paths, don't worry about the 404s generated by
Flask. A 404 should be faster than almost any other request; just ignore it.
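
For example, on the Flask side an unknown path only costs a routing miss and
one cheap error handler -- a hypothetical sketch, not your actual app:

from flask import Flask

app = Flask(__name__)

@app.errorhandler(404)
def not_found(error):
    # unknown paths never reach a view function; only this cheap handler
    # runs, so a 404 is usually among the fastest responses you serve
    return "not found", 404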






-- 
吴焱红 (Samuel)

Blog: blog.shanbay.com


Re: [uWSGI] *really* general-purpose container support (ie. no --socket requirement/need)

2012-10-22 Thread C Anthony Risinger
On Mon, Oct 22, 2012 at 10:53 AM, Roberto De Ioris robe...@unbit.it wrote:

 On 22 Oct 2012, at 12:20, C Anthony Risinger anth...@xtfx.me wrote:

 [...]

 I am trying to address that -- can you report the uWSGI config you are using?

only using args while i test/experiment:

# uwsgi --loop dumb --dumbloop-code=queue_worker.py
--dumbloop-function=main --enable-threads --venv ${VIRTUAL_ENV}
--py-tracebacker=./trace

...`queue_worker.py:Worker` is simply a subclass of kombu's ConsumerMixin (AMQP):

http://kombu.readthedocs.org/en/latest/reference/kombu.mixins.html#kombu.mixins.ConsumerMixin

... basically a shell for implementing a worker, and main() is little more than:

def main(core_id):
    Worker().run()

once uWSGI is running, i start up socat:

# watch -n0.5 socat UNIX-CLIENT:trace1,retry=5 STDOUT

... and if the queue is loaded, i can hit the SIGSEGV within 1-20 seconds.

i can't really post the queue_worker.py file (and it would be mostly
irrelevant anyway) but i'll try to quickly put together a simple repro case,
preferably one that doesn't use the dumbloop (though i don't think it's
related; the code is stable so long as no one is *listening* to the
tracebacker)

thanks,

-- 

C Anthony


Re: [uWSGI] *really* general-purpose container support (ie. no --socket requirement/need)

2012-10-22 Thread C Anthony Risinger
On Mon, Oct 22, 2012 at 12:19 PM, C Anthony Risinger anth...@xtfx.me wrote:

 [...]

i've created a barebones AMQP loopback worker capable of segfaulting
in ~5 seconds using rapid tracebacker polling + an in-memory transport.

...all documented here:

https://github.com/unbit/uwsgi/issues/29

thanks,

-- 

C Anthony


Re: [uWSGI] why upstream timeout but upstream itself not busy

2012-10-22 Thread Samuel
I'm thinking harakiri was not working as expected, even though uwsgi was
running in plain single-core mode.

I enabled uwsgi logging for each request, and I'd like to check whether any
request takes longer than 20 seconds to process. If any does, then harakiri
must not be working properly, since it should kill such processes.





-- 
吴焱红 (Samuel)

Blog: blog.shanbay.com
___
uWSGI mailing list
uWSGI@lists.unbit.it
http://lists.unbit.it/cgi-bin/mailman/listinfo/uwsgi