When you use the ab -c option with multiplexing on, flup leaves behind
many threads and sockets in the CLOSE_WAIT state, so it appears to use a
lot of memory. With multiplexing turned off, that rarely happens.
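A quick way to confirm the CLOSE_WAIT buildup is to count sockets in that state. Below is a Linux-only sketch (not part of flup) that parses /proc/net/tcp, where the fourth column is the TCP state and 08 is CLOSE_WAIT:

```python
# Linux-only sketch: count sockets stuck in CLOSE_WAIT by reading
# /proc/net/tcp. Column 4 ("st") holds the TCP state in hex; 08 means
# CLOSE_WAIT (per the kernel's TCP state numbering).
def count_close_wait(proc_file='/proc/net/tcp'):
    count = 0
    try:
        with open(proc_file) as f:
            next(f)  # skip the header line
            for line in f:
                fields = line.split()
                if len(fields) > 3 and fields[3] == '08':
                    count += 1
    except (IOError, OSError):
        pass  # not on Linux, or /proc unavailable
    return count

print(count_close_wait())
```

Run it before and after an `ab -c 5 -n 1000` run against the flup server to see whether the count climbs with multiplexing on.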

2009/12/11 Justin Davis <[email protected]>:
> Heh, you're too modest.
>
> Well, anyway, since it's a little clearer now to me, leaving
> multiplexing on is still hurting performance.  I ran my benchmarks
> using "ab -c 5 -n 1000", so this was *the* scenario where it was
> supposed to increase performance (multiple clients hitting at the same
> time), and I still got much better performance with multiplexing off
> than on.
>
> For the record:
>
> Multiplexing on
>
> Concurrency Level:      5
> Time taken for tests:   34.250515 seconds
> Complete requests:      1000
> Failed requests:        0
> Write errors:           0
> Total transferred:      9739000 bytes
> HTML transferred:       9465000 bytes
> Requests per second:    29.20 [#/sec] (mean)
> Time per request:       171.253 [ms] (mean)
> Time per request:       34.251 [ms] (mean, across all concurrent
> requests)
> Transfer rate:          277.66 [Kbytes/sec] received
>
>
>
> Multiplexing off
>
> Concurrency Level:      5
> Time taken for tests:   14.126215 seconds
> Complete requests:      1000
> Failed requests:        0
> Write errors:           0
> Total transferred:      9739000 bytes
> HTML transferred:       9465000 bytes
> Requests per second:    70.79 [#/sec] (mean)
> Time per request:       70.631 [ms] (mean)
> Time per request:       14.126 [ms] (mean, across all concurrent
> requests)
> Transfer rate:          673.22 [Kbytes/sec] received
>
> More than twice as many requests per second with it off.
>
>
> On Dec 9, 11:57 pm, Graham Dumpleton <[email protected]>
> wrote:
>> On Dec 10, 1:35 pm, Justin Davis <[email protected]> wrote:
>>
>> > > It is talking about the fastcgi socket connection between web server
>> > > and fastcgi process, nothing to do with user HTTP client.
>>
>> > Ahh, ok, that makes sense.  Good to have a real expert on this stuff
>> > around -- thanks Graham!
>>
>> I just bullshit and people are stupid enough to believe me. ;-)
>>
>> > -Justin
>>
>> > On Dec 9, 6:47 pm, Graham Dumpleton <[email protected]>
>> > wrote:
>>
>> > > On Dec 10, 6:05 am, Justin Davis <[email protected]> wrote:
>>
>> > > > This is a good point -- why is that the default setting?  From flup
>> > > > code:
>>
>> > > > 946  Set multiplexed to True if you want to handle multiple requests
>> > > > 947 per connection. Some FastCGI backends (namely mod_fastcgi) don't
>> > > > 948 multiplex requests at all, so by default this is off (which saves
>> > > > 949 on thread creation/locking overhead). If threads aren't available,
>> > > > 950 this keyword is ignored; it's not possible to multiplex requests
>> > > > 951 at all.
>>
>> > > > A quick test with a lighttpd server shows a significant (40%) increase
>> > > > with this turned off.
>>
>> > > > Someone correct me if I'm wrong, but the way I'm reading this is that
>> > > > it would handle multiple requests per client connection.  This is
>> > > > probably not a common occurrence for most web apps since static
>> > > > content is usually served outside of fastcgi code path.
>>
>> > > It is talking about the fastcgi socket connection between web server
>> > > and fastcgi process, nothing to do with user HTTP client.
>>
>> > > Technically, two distinct users could make requests at the same time
>> > > and requests for those two users could be multiplexed across same
>> > > socket connection between web server and fastcgi process.
>>
>> > > In practice, few if any main stream web server modules for fastcgi
>> > > support this, so there is no good reason to have it enabled.
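For web.py users the practical upshot is simply not to pass multiplexed=True. A hedged sketch of the runfcgi helper quoted later in the thread, with the flag off (flup's WSGIServer accepts multiplexed and bindAddress keywords; flup must be installed for this to actually serve):

```python
def runfcgi(func, addr=('localhost', 8000)):
    """Run a WSGI application as a FastCGI server with multiplexing off."""
    import flup.server.fcgi as flups
    # multiplexed already defaults to False in flup; passing it
    # explicitly just documents the intent.
    return flups.WSGIServer(func, multiplexed=False, bindAddress=addr).run()
```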
>>
>> > > FWIW, Apache/mod_wsgi does not multiplex across the socket connection
>> > > between the web server processes and its daemon mode processes either.
>> > > The extra complexity in the code is just not worth it and is likely
>> > > not to be as efficient.
>>
>> > > Overall, the only gain from multiplexing, if things even did support
>> > > it, would be keeping down the number of system file descriptors in
>> > > use. This is probably only going to be relevant to large scale shared
>> > > web hosting operations and not your average self managed site.
>>
>> > > Although it will help in the area of use of file descriptors, it does
>> > > risk causing latency problems and reduced performance due to
>> > > additional complexity of code to handle it plus the fact you are
>> > > stuffing more data down a single socket pipe. Depending on how the
>> > > fastcgi protocol is implemented, one user HTTP client blocking on
>> > > reading response could possibly even technically block all clients for
>> > > which data is being multiplexed over the same socket from fastcgi
>> > > process. This is because it isn't going to be realistic for web server
>> > > process to buffer up data for one of the sessions just so it can keep
>> > > passing back data from another. Whether this hypothesis is true I
>> > > don't know though as have never looked at code for a web server that
>> > > tries to implement multiplexing.
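For context on why multiplexing is possible at all: every FastCGI record begins with an 8-byte header that carries a request ID, so records belonging to different requests can in principle be interleaved on one socket and pulled apart again at the other end. A minimal sketch of that framing (this is not flup's actual code; the header layout and FCGI_STDOUT type follow the FastCGI 1.0 spec):

```python
import struct

FCGI_VERSION_1 = 1
FCGI_STDOUT = 6
# version, type, requestId, contentLength, paddingLength, reserved
HEADER = struct.Struct('>BBHHBx')

def pack_record(request_id, content):
    # One FastCGI record: 8-byte header followed by the payload
    # (no padding used in this sketch).
    return HEADER.pack(FCGI_VERSION_1, FCGI_STDOUT, request_id,
                       len(content), 0) + content

def demultiplex(stream):
    # Split a byte stream of records back into per-request payloads,
    # keyed by the request ID found in each record header.
    out, pos = {}, 0
    while pos < len(stream):
        _, _, req_id, length, padding = HEADER.unpack_from(stream, pos)
        pos += HEADER.size
        out[req_id] = out.get(req_id, b'') + stream[pos:pos + length]
        pos += length + padding
    return out

# Two requests' output interleaved on one "connection":
wire = (pack_record(1, b'hello ') + pack_record(2, b'foo')
        + pack_record(1, b'world') + pack_record(2, b'bar'))
print(demultiplex(wire))  # {1: b'hello world', 2: b'foobar'}
```

The head-of-line-blocking risk described above falls out of this picture: if the consumer of request 1's records stalls, records for request 2 queued behind them on the same socket stall too, unless the web server buffers them.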
>>
>> > > Graham
>>
>> > > > Counter arguments?
>>
>> > > > On Dec 7, 3:31 am, s7v7nislands <[email protected]> wrote:
>>
>> > > > > hi, all!
>> > > > >     Does any server support FastCGI 'multiplexing', which allows a
>> > > > > single client-server connection to be shared simultaneously by
>> > > > > multiple requests? After googling, I find that Apache, nginx, and
>> > > > > lighttpd all do not support it. And testing with nginx, it seems
>> > > > > multiplexed = False is faster than True.
>> > > > >     def runfcgi(func, addr=('localhost', 8000)):
>> > > > >         """Runs a WSGI function as a FastCGI server."""
>> > > > >         import flup.server.fcgi as flups
>> > > > >         return flups.WSGIServer(func, multiplexed=True,
>> > > > >                                 bindAddress=addr).run()
>>
>> > > > >     So why is this value set to 'True'?  Thanks!
>>
>>
>
> --
>
> You received this message because you are subscribed to the Google Groups 
> "web.py" group.
> To post to this group, send email to [email protected].
> To unsubscribe from this group, send email to 
> [email protected].
> For more options, visit this group at 
> http://groups.google.com/group/webpy?hl=en.
>
>
>
