Small update. I solved the data injection routine using a wrapper
iterator that can control execution in either a microthreaded or a
threaded environment. The request handler looks as follows:
def handle_request(request):
    """
    This coroutine handles the request after the connection has been
    set up. The coroutine has control over how a code section is
    executed. Yielding _THREAD will put the coroutine into the
    threadpool, thus ensuring that the subsequent next() call will be
    performed inside a preemptive thread. Yielding _NO_THREAD yields
    control back to the microthread scheduler.
    """
    yield _THREAD  # parse_request calls rfile.readline, which can block
    request.parse_request()
    if not request.ready:
        request.terminate()
        raise StopIteration
    response = request.wsgi_app(request.environ, request.start_response)
    for data in response:
        if data == '':
            yield _NO_THREAD
            continue
        yield _THREAD
        try:
            request.write(data)
        except (KeyboardInterrupt, SystemExit):
            request.terminate()  # let's play nice and close the connection
            raise
        except Exception, e:
            if len(e.args) and e.args[0] in socket_errors_to_ignore:
                pass
            else:
                traceback.print_exc()
    if hasattr(response, "close"):
        response.close()
    request.terminate()
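To make the _THREAD / _NO_THREAD protocol concrete, here is a minimal
scheduler sketch (the names and structure are my own illustration, not
the server's actual internals): each yielded marker tells the loop
where the coroutine's next step should be performed.

```python
import queue
import threading

# Sentinels a coroutine yields to pick where its next step runs.
_THREAD = object()     # resume inside a preemptive worker thread
_NO_THREAD = object()  # resume inline on the scheduler loop

def run(coroutines):
    """Drive coroutines until all of them are exhausted."""
    inbox = queue.Queue()
    for coro in coroutines:
        inbox.put((coro, _NO_THREAD))  # first step runs inline
    active = len(coroutines)

    def step(coro):
        # Advance one step; report back where the *next* step belongs.
        try:
            inbox.put((coro, next(coro)))
        except StopIteration:
            inbox.put((coro, None))

    while active:
        coro, mode = inbox.get()
        if mode is None:
            active -= 1  # coroutine finished
        elif mode is _THREAD:
            threading.Thread(target=step, args=(coro,)).start()
        else:
            step(coro)  # _NO_THREAD: cooperative, stays on this loop

# demo: record which thread each code section runs on
main = threading.current_thread()
trace = []

def job():
    trace.append(threading.current_thread() is main)  # inline
    yield _THREAD
    trace.append(threading.current_thread() is main)  # worker thread
    yield _NO_THREAD
    trace.append(threading.current_thread() is main)  # inline again

run([job()])
# trace == [True, False, True]
```

The key property is that a blocking call placed after a _THREAD yield
can stall only a worker thread, never the scheduler loop itself.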
The microthreaded version can now handle up to 1000 pending
connections with just 4% CPU usage, and likely more, but ab won't let
me test more than 1000 concurrent connections. IO is still threaded,
which seems to be the current bottleneck, so my next task will be
writing a small benchmark suite around httperf and moving the threaded
IO over to select()-based IO.
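For the select() direction, a rough sketch of a readiness loop (a
hypothetical illustration, not the planned implementation): select()
reports which sockets are readable, so a single thread can service
many connections without ever blocking in recv().

```python
import select
import socket

def pump(socks, on_data, timeout=1.0):
    """One pass of a select() loop: read from every readable socket."""
    readable, _, _ = select.select(socks, [], [], timeout)
    for sock in readable:
        data = sock.recv(4096)
        if data:
            on_data(sock, data)
    return len(readable)

# demo: writing on one end of a pair makes the other end readable
a, b = socket.socketpair()
received = []
b.send(b'hello')
pump([a], lambda sock, data: received.append(data))
a.close(); b.close()
# received == [b'hello']
```

In a real server this pass would sit in a loop alongside the listening
socket, feeding completed reads back to the request coroutines.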
--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups
"TurboGears" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/turbogears
-~----------~----~----~----~------~----~------~--~---