Thanks for your input, everybody.

Threads vs. Async is like Emacs vs. vi, and can go on forever. Let me try to put this back into the SA context a bit:


Is there some way to get a hybrid of the threaded and multiprocess models that would let the programmer control how each request is handled?

Serving static content using a multiprocess approach seems a waste of resources, and it seems to me that either a threaded or event-based select()/poll() approach would be much faster, and simple enough to make reliable.
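For the record, the event-based loop I have in mind is the standard single-threaded select()/poll() multiplexer. A rough sketch using Python's stdlib selectors module (socketpairs stand in for real HTTP clients, and the echo handler stands in for serving a static file):

```python
import selectors
import socket

# One thread, one event loop, many sockets -- the select()/poll()
# pattern suggested above for static content.
sel = selectors.DefaultSelector()

def serve(server_side):
    # stand-in for "read request, send static file"
    data = server_side.recv(1024)
    server_side.sendall(b"echo:" + data)

pairs = [socket.socketpair() for _ in range(3)]
for client, server in pairs:
    server.setblocking(False)
    sel.register(server, selectors.EVENT_READ, serve)

for i, (client, _) in enumerate(pairs):
    client.sendall(b"req%d" % i)

handled = 0
while handled < 3:
    for key, _ in sel.select(timeout=1):
        key.data(key.fileobj)    # invoke the registered handler
        handled += 1

replies = [client.recv(1024) for client, _ in pairs]
print(replies)   # → [b'echo:req0', b'echo:req1', b'echo:req2']
```

Three "requests" get serviced by a single thread with no locking at all, which is what makes this approach both fast and easy to keep reliable.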

Likewise, I would think those same in-process approaches should be OK for DB access -- if anyone out there is using SA in a multithreaded environment, can you please jump in here and let us know your experiences?
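The core trick that makes threaded DB access workable is one connection per thread -- SA's connection pool manages this for you, but the bare mechanism is just thread-local storage. A sketch of that mechanism, with plain sqlite3 standing in for the real driver (get_conn, run_demo, and worker are made-up names for illustration):

```python
import os
import sqlite3
import tempfile
import threading

_local = threading.local()

def get_conn(db_path):
    # Each thread lazily opens -- and keeps -- its own connection,
    # so no two threads ever share one.
    if not hasattr(_local, "conn"):
        _local.conn = sqlite3.connect(db_path)
    return _local.conn

def run_demo():
    fd, db_path = tempfile.mkstemp(suffix=".db")
    os.close(fd)
    results = [None] * 4

    def worker(i):
        conn = get_conn(db_path)   # private to this thread
        results[i] = conn.execute("SELECT ? * 10", (i,)).fetchone()[0]

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    os.unlink(db_path)
    return results

if __name__ == "__main__":
    print(run_demo())   # [0, 10, 20, 30]
```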

Here's where I'm concerned a multithreaded approach might break down: some of the more complex requests we handle involve a mix of these libraries:
    pycrypto
    PIL
    pyopenssl
    cubictemp (templating)
    dateutil (rrule calcs)
    reportlab toolkit

I think it's probably unreasonable to expect *all* of these libraries, plus our own code, to be perfectly thread-safe in every case. Ideally, I'd like to spin any request that does more than "hit the DB and reformat the results into JSON" off into a separate process that would handle the request and then die.

So assume we set up the new architecture like this:

    Apache / Lighttpd for static content
    FastCGI gateway to threaded "master" process with SA connection pool
    Simple HTML page and SA requests handled by master process using threads
    fork() / CreateProcess() handling for complex requests
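The dispatch policy in that last two bullets might look something like the sketch below: simple requests stay on threads in the master, complex ones get a short-lived worker process that handles one request and exits. handle_simple and handle_complex are hypothetical stand-ins for the real handlers (multiprocessing.Process is used here for portability, since CreateProcess has no fork()):

```python
import multiprocessing
import threading

def handle_simple(request, results):
    # stays in the threaded master: hit the DB, emit JSON
    results[request] = "json:" + request

def handle_complex(request, queue):
    # runs in a fresh process, so thread-unsafe libs are safe here
    queue.put((request, "pdf:" + request))

def dispatch(requests):
    """requests: list of (name, is_complex) tuples."""
    results = {}
    queue = multiprocessing.Queue()
    threads, procs = [], []
    n_complex = sum(1 for _, is_complex in requests if is_complex)
    for req, is_complex in requests:
        if is_complex:
            p = multiprocessing.Process(target=handle_complex, args=(req, queue))
            p.start()
            procs.append(p)
        else:
            t = threading.Thread(target=handle_simple, args=(req, results))
            t.start()
            threads.append(t)
    # drain the queue before joining, so a child never blocks on a full pipe
    for _ in range(n_complex):
        req, out = queue.get()
        results[req] = out
    for t in threads:
        t.join()
    for p in procs:
        p.join()
    return results

if __name__ == "__main__":
    print(dispatch([("a", False), ("b", True)]))
```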

For complex requests, is there any way for the "master" process to "claim" a DB connection from the connection pool and hand it off to the child? Ideally, this would pass off a DB engine or whatever that is becoming in 0.2....
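One caveat while we wait for an authoritative answer: handing an already-open pooled connection across a process boundary is generally unsafe (and impossible with CreateProcess, which inherits nothing like fork() does), so the usual pattern is the opposite -- the master passes the connection *parameters*, and the child opens its own connection after the fork and closes it when it dies. A sketch of that pattern, with plain sqlite3 standing in for the SA engine (child_request and run_demo are made-up names):

```python
import multiprocessing
import os
import sqlite3
import tempfile

def child_request(db_path, queue):
    # fresh connection opened *after* the fork -- never inherited
    conn = sqlite3.connect(db_path)
    total = conn.execute("SELECT SUM(qty) FROM orders").fetchone()[0]
    conn.close()   # child's connection dies with the child
    queue.put(total)

def run_demo():
    fd, db_path = tempfile.mkstemp(suffix=".db")
    os.close(fd)
    # the master uses its own pooled connection as usual
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE orders (qty INTEGER)")
    conn.executemany("INSERT INTO orders VALUES (?)", [(2,), (3,)])
    conn.commit()
    queue = multiprocessing.Queue()
    p = multiprocessing.Process(target=child_request, args=(db_path, queue))
    p.start()
    total = queue.get()
    p.join()
    conn.close()
    os.unlink(db_path)
    return total

if __name__ == "__main__":
    print(run_demo())   # 5
```

With SA you'd presumably do the analogous thing: build a new engine (or reset the pool) inside the child rather than reusing the parent's checked-out connection.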

Thanks again for your input,

Rick
