Thanks for the input, Mike. You have described our existing code, pretty
much, only we have nginx sitting on top, proxying between multiple workers.
It works and works well, so I guess my search for even greater simplicity
is a bit unfounded here :-) Thanks!

On Tue, Jun 1, 2010 at 1:38 AM, Mike Orr <[email protected]> wrote:

> On Mon, May 31, 2010 at 2:09 PM, Eugueny Kontsevoy <[email protected]>
> wrote:
> > Thanks for all your replies guys, yes you're right - there are better
> > tools for this job (we're looking at celery) but that only increases
> > the number of moving parts for something fundamentally very simple.
> >
> > Using a message broker like RabbitMQ makes our chain look like:
> >
> > nginx queue -> paster queue -> celery queue -> worker process
> >
> > Why so many queues and so many processes (and config files) to manage?
> > All we really need is just to receive a request, return "200 OK" and
> > start working on it. There is no "GUI", no browser, just a series of
> > HTTP POSTs that don't expect anything in return.
>
> There are ways to make it much simpler, without using celery or
> Pylons.  If you're only queuing a job and returning a quick dummy
> response, you can use a Queue and BaseHTTPServer, both in the Python
> standard library. Have one thread accept requests, put the job in the
> queue, and return 204 No Content on success, or 4xx or 5xx on error.
> BaseHTTPServer is synchronous, but that won't matter unless you're
> getting several requests a second.
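A minimal sketch of that queue-plus-BaseHTTPServer idea (note that in Python 3 the old BaseHTTPServer module lives in http.server; the port and the decision to enqueue the raw POST body are just placeholder assumptions):

```python
import queue
from http.server import BaseHTTPRequestHandler, HTTPServer

jobs = queue.Queue()  # worker threads/processes would consume from this


class JobHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        try:
            length = int(self.headers.get("Content-Length", 0))
        except (TypeError, ValueError):
            self.send_error(400, "Bad Request")
            return
        # Enqueue the raw request body as the "job".
        jobs.put(self.rfile.read(length))
        # Job accepted; nothing to send back.
        self.send_response(204)
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # silence the default per-request logging


# To run standalone:
#   HTTPServer(("", 8000), JobHandler).serve_forever()
```

Since this is synchronous, as noted above, one slow client blocks the rest; that is the trade-off for having zero extra moving parts.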
>
> Going up a level in complexity, you can use the wsgiref server, and
> write a basic WSGI application, optionally using WebOb and Routes. Or
> you can even plug in a Pylons application, with all the optional
> middlewares turned off, and skipping the Paste/INI stuff.
> (Specifically, you'd call make_app() in middleware.py, and pass that
> to wsgiref.simple_server.make_server().)  You could also use Routes
> manually, by defining a mapper and calling mapper.match() with the
> WSGI environ.
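The bare-WSGI-application variant might look like this sketch (again enqueuing the raw POST body; the 405 handling for non-POST requests is an assumption about what you'd want):

```python
import queue
from wsgiref.simple_server import make_server

jobs = queue.Queue()


def app(environ, start_response):
    if environ["REQUEST_METHOD"] != "POST":
        start_response("405 Method Not Allowed",
                       [("Content-Type", "text/plain")])
        return [b"POST only\n"]
    try:
        length = int(environ.get("CONTENT_LENGTH") or 0)
    except ValueError:
        length = 0
    # Queue the job, then send the quick dummy response.
    jobs.put(environ["wsgi.input"].read(length))
    start_response("204 No Content", [])
    return [b""]


# To run standalone:
#   make_server("", 8000, app).serve_forever()
```

Because app() is a plain WSGI callable, you could later swap wsgiref for any other WSGI server (or mount a stripped-down Pylons app in its place) without touching the queueing logic.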
>
> Or instead of a queue, the request handler could spawn a thread to
> handle the task, while the parent thread returns the dummy response.
> You may want to keep a thread count in this case, to prevent too many
> worker threads from running simultaneously. You could keep an integer
> count, or a set of thread IDs. It may be possible to use weakrefs to
> put the thread itself in a set and have it automatically disappear
> when the thread exits; I'm not sure about that. Otherwise you'd have
> to consider: runaway threads (too many requests), deadlocks (two
> threads waiting for each other to do something), hung threads (a
> thread gets stuck for some reason), and zombie threads (thread won't
> exit for some reason).  You'd also have to consider whether it's OK
> for jobs to disappear when the server exits, and for threads to be
> killed in progress.
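The "keep an integer count" idea above can be expressed with a bounded semaphore, which handles the locking for you. This is only a sketch, and MAX_WORKERS and handle_task are placeholder assumptions; it caps runaway threads but does nothing about the deadlock/hung/zombie cases, which need timeouts or monitoring:

```python
import threading

MAX_WORKERS = 10  # assumed cap on simultaneous worker threads
slots = threading.BoundedSemaphore(MAX_WORKERS)


def handle_task(payload):
    pass  # placeholder for the real work


def worker(payload):
    try:
        handle_task(payload)
    finally:
        slots.release()  # free the slot even if the task raised


def try_spawn(payload):
    """Spawn a worker if a slot is free; return False when at capacity."""
    if not slots.acquire(blocking=False):
        return False  # too many workers: the handler could respond 503
    threading.Thread(target=worker, args=(payload,), daemon=True).start()
    return True
```

The request handler would call try_spawn() and return the dummy response on True, or an error status on False. Note the daemon=True means in-progress threads are killed when the server exits, which is exactly the trade-off discussed above.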
>
> --
> Mike Orr <[email protected]>
>

-- 
You received this message because you are subscribed to the Google Groups 
"pylons-discuss" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to 
[email protected].
For more options, visit this group at 
http://groups.google.com/group/pylons-discuss?hl=en.
