We have an unusual server application that is essentially a task queue: it
takes job requests via HTTP and sticks them into a thread/process pool of
workers. It works well.

However, I have always felt uncomfortable having all this process/thread
logic in our app. It looks suspiciously similar to the cluster of Paster
processes it runs on. This got me thinking: what if we just do everything
"in-place"? Is it possible to do the following:

   - A request comes into a Pylons controller via HTTP.
   - The controller immediately tells the client "200 OK", so the client
   won't wait.
   - ... and then keeps working on that request right there, synchronously.
   Paster and Nginx will take care of routing the next request to another
   available process/thread.
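For the record, one way to sketch this at the plain WSGI level (not
Pylons-specific) is to return a generator: with Content-Length set, the
client can stop reading as soon as the body arrives, while any code after
the final yield still runs in the same process. All names here are
illustrative, and whether the client truly stops waiting depends on
buffering in the server and any proxy in front of it:

```python
jobs_done = []  # stand-in for the real job logic; hypothetical

def do_the_job(environ):
    # Placeholder for the slow, synchronous work.
    jobs_done.append(environ.get("PATH_INFO"))

def respond_then_work(body, environ):
    yield body           # the response body goes out to the client here...
    do_the_job(environ)  # ...then the work runs, still in this thread

def app(environ, start_response):
    body = b"accepted\n"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return respond_then_work(body, environ)
```

The worker process stays busy until the generator is exhausted, which is
exactly the "Paster finds another free process for the next request"
behaviour described above.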

This would make our code much leaner and cleaner, but I am not sure how to
implement the second step. It seems like the only way to end a request is
to return something (or raise an exception) from the controller.

Thoughts?
Thanks.

-- 
You received this message because you are subscribed to the Google Groups 
"pylons-discuss" group.
To post to this group, send email to [email protected].
