On Dec 10, 6:05 am, Justin Davis <[email protected]> wrote:
> This is a good point -- why is that the default setting?  From flup
> code:
>
> Set multiplexed to True if you want to handle multiple requests
> per connection. Some FastCGI backends (namely mod_fastcgi) don't
> multiplex requests at all, so by default this is off (which saves
> on thread creation/locking overhead). If threads aren't available,
> this keyword is ignored; it's not possible to multiplex requests
> at all.
>
> A quick test with a lighttpd server shows a significant (40%) increase
> with this turned off.
>
> Someone correct me if I'm wrong, but the way I'm reading this is that
> it would handle multiple requests per client connection.  This is
> probably not a common occurrence for most web apps, since static
> content is usually served outside of the FastCGI code path.

It is talking about the FastCGI socket connection between the web
server and the FastCGI process; it has nothing to do with the user's
HTTP client.

Technically, two distinct users could make requests at the same time,
and the requests for those two users could be multiplexed across the
same socket connection between the web server and the FastCGI process.

In practice, few if any mainstream web server modules for FastCGI
support this, so there is no good reason to have it enabled.
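Concretely, the runfcgi helper quoted further down in this thread only
needs the flag flipped. A minimal sketch (hello_app and the bind
address are illustrative placeholders, not code from this thread):

```python
def hello_app(environ, start_response):
    # Placeholder WSGI application for demonstration.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from FastCGI\n"]

def runfcgi(app, addr=("localhost", 8000)):
    """Runs a WSGI app as a FastCGI server with multiplexing disabled."""
    import flup.server.fcgi as flups
    # multiplexed=False avoids the thread creation/locking overhead the
    # flup docstring mentions; most web server modules never multiplex anyway.
    return flups.WSGIServer(app, multiplexed=False, bindAddress=addr).run()

if __name__ == "__main__":
    runfcgi(hello_app)  # blocks, serving FastCGI on localhost:8000
```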

FWIW, Apache/mod_wsgi does not multiplex across the socket connection
between the web server processes and its daemon mode processes either.
The extra complexity in the code is just not worth it, and it would
likely be less efficient.

Overall, the only gain from multiplexing, if things even did support
it, would be keeping down the number of system file descriptors in
use. This is probably only going to be relevant to large-scale shared
web hosting operations, not your average self-managed site.

Although it would help with file descriptor usage, multiplexing risks
causing latency problems and reduced performance, both from the
additional complexity of the code needed to handle it and from the
fact that you are stuffing more data down a single socket pipe.
Depending on how the FastCGI protocol is implemented, one user's HTTP
client blocking on reading its response could possibly even block all
the other clients whose data is being multiplexed over the same socket
from the FastCGI process, because it isn't going to be realistic for
the web server process to buffer up data for one of the sessions just
so it can keep passing back data from another. Whether this hypothesis
is true I don't know, though, as I have never looked at the code of a
web server that tries to implement multiplexing.
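For context on what multiplexing means at the wire level: every
FastCGI record carries a request id in its 8-byte header, which is
what would let records for different requests be interleaved on one
socket. A sketch of that framing, following the record layout in the
FastCGI spec (the helper functions are illustrative, not flup's actual
implementation):

```python
import struct

FCGI_STDOUT = 6  # record type for response body data, per the FastCGI spec
# Header: version, type, requestId (big-endian), contentLength,
# paddingLength, one reserved byte -- 8 bytes total.
FCGI_HEADER = ">BBHHBx"

def pack_record(req_id, content, rec_type=FCGI_STDOUT):
    # Version 1, no padding; content follows the fixed 8-byte header.
    return struct.pack(FCGI_HEADER, 1, rec_type, req_id, len(content), 0) + content

def unpack_records(data):
    # Walk the byte stream, splitting it back into (request id, payload) pairs.
    records, offset = [], 0
    while offset < len(data):
        _ver, _type, req_id, clen, plen = struct.unpack_from(FCGI_HEADER, data, offset)
        offset += 8
        records.append((req_id, data[offset:offset + clen]))
        offset += clen + plen
    return records

# Two responses interleaved on one connection, kept apart by request id:
stream = (pack_record(1, b"part A1") +
          pack_record(2, b"part B1") +
          pack_record(1, b"part A2"))
print(unpack_records(stream))
```

The request id is the only thing separating the two responses, which
is why a slow reader on one request can stall the shared socket for
the other.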

Graham

> Counter arguments?
>
> On Dec 7, 3:31 am, s7v7nislands <[email protected]> wrote:
>
>
>
> > hi, all!
> >     Does any server support FastCGI 'multiplexing', which allows a
> > single client-server connection to be simultaneously shared by
> > multiple requests? After googling, I find that Apache, nginx, and
> > lighttpd all don't support it, and testing with nginx, it seems
> > that multiplexed = False is faster than True.
> >     def runfcgi(func, addr=('localhost', 8000)):
> >         """Runs a WSGI function as a FastCGI server."""
> >         import flup.server.fcgi as flups
> >         return flups.WSGIServer(func, multiplexed=True,
> >                                 bindAddress=addr).run()
>
> >     So why set this value to 'True'?  Thanks!

--

You received this message because you are subscribed to the Google Groups 
"web.py" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to 
[email protected].
For more options, visit this group at 
http://groups.google.com/group/webpy?hl=en.

