On Jan 8, 2:55 am, David Cancel <[EMAIL PROTECTED]> wrote:
> I know.. I know... Simple benchmarks mean nothing but I couldn't help
> playing with the new(ish) mod_wsgi module for my favorite webserver
> Nginx.
>
> Nginx: http://nginx.net/
> Nginx mod_wsgi module: http://wiki.codemongers.com/NginxNgxWSGIModule
>
> I tested Nginx vs. the recommended setup of Lighttpd/Fastcgi. These
> very simple and flawed tests were run on Debian Etch running under
> virtualization (Parallels) on my Macbook Pro. Hey I said they were
> flawed.. :-)
>
> The results show Nginx/WSGI performing 3x as fast as Lighttpd/Fastcgi,
> over 1000 requests per second!!
>
> I tested both with Keep-Alives on and off. I'm not sure why Nginx/WSGI
> performed 2x as fast with keep-alives on.

I realise people love to show benchmarks with Keep-Alive enabled
because they make things look much better, but results from using Keep-
Alive are pretty bogus as they generally bear no relationship to
reality. You are just never going to get a browser client sending
hundreds of requests down the same socket connection for a start.

In reality what is more likely to happen is you get one request
against your WSGI application and then you might get subsequent
requests for linked images from the page. But then, if the page has
been visited before, it is likely that those linked images are already
cached by the browser and so even that will not happen. Thus, in the
majority of cases you will get one request only and the socket
connection will not even be used.
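To make that concrete, here is a rough sketch (mine, not from the post) of why Keep-Alive flatters benchmark numbers: reusing one socket skips the per-request TCP setup and teardown that a real-world first visit pays. It uses a trivial line-echo server rather than HTTP, since the connection cost is the point.

```python
# Sketch: compare N requests over one reused connection ("Keep-Alive"
# style) against N requests each on a fresh connection. All names and
# numbers here are illustrative.
import socket
import threading
import time

def serve(listener, n_requests):
    """Tiny echo server: handles n_requests lines in total, then exits."""
    handled = 0
    while handled < n_requests:
        conn, _ = listener.accept()
        with conn:
            # Echo every line arriving on this connection until the
            # client closes it or we have handled enough requests.
            for line in conn.makefile("rb"):
                conn.sendall(line)
                handled += 1
                if handled >= n_requests:
                    break
    listener.close()

N = 200
listener = socket.create_server(("127.0.0.1", 0))
port = listener.getsockname()[1]
threading.Thread(target=serve, args=(listener, 2 * N), daemon=True).start()

# "Keep-Alive" style: one connection carries all N requests.
start = time.perf_counter()
with socket.create_connection(("127.0.0.1", port)) as s:
    f = s.makefile("rb")
    for _ in range(N):
        s.sendall(b"GET\n")
        assert f.readline() == b"GET\n"
reused = time.perf_counter() - start

# Closer to reality: a fresh connection per request.
start = time.perf_counter()
for _ in range(N):
    with socket.create_connection(("127.0.0.1", port)) as s:
        s.sendall(b"GET\n")
        assert s.makefile("rb").readline() == b"GET\n"
fresh = time.perf_counter() - start

print("reused connection: %.4fs  fresh connections: %.4fs" % (reused, fresh))
```

On a loopback interface the difference is already visible; over a real network, with full TCP handshakes, the gap only widens.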

This is part of why, for a high performance web site, you are better
off hosting your media files on a different server. The main server
hosting the WSGI application would have Keep-Alive turned off so that
socket connections don't linger and consume resources. The media
server, on the other hand, can quite happily use Keep-Alive, as it is
the media files linked from a page that are most likely to produce
serialised requests on the same socket connection.
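As a concrete sketch of that split (the `keepalive_timeout` directive is real nginx; the server names and values are just illustrative):

```nginx
# Application server: disable Keep-Alive so connections don't linger.
server {
    listen 80;
    server_name app.example.com;
    keepalive_timeout 0;
    # ... WSGI application configuration here ...
}

# Media server: persistent connections help, since a page's linked
# images can be fetched serially over one socket.
server {
    listen 80;
    server_name media.example.com;
    keepalive_timeout 65;
    root /srv/media;
}
```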

The only time that Keep-Alive may be valid in testing for dynamic web
applications is where one is trying to remove the overhead of the
socket connection from the picture so as to evaluate the overheads of
any internal mechanisms applicable to the hosting mechanism.

For example, when contrasting the overheads of mod_python or embedded
mod_wsgi against systems which need to do a subsequent proxy to a
further process, such as mod_proxy, mod_wsgi daemon mode, or fastcgi,
scgi and ajp solutions. That is, use Keep-Alive so that it is easier
to see what the overhead of that proxying actually is. You really need
to know what you are looking for when doing that, though.
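For instance (a hedged sketch: the paths and process counts are made up, but these are the usual Apache mod_wsgi directives), the two hosting modes being contrasted would look like:

```apache
# Embedded mode: the WSGI application runs inside the Apache worker
# processes themselves -- no extra hop.
WSGIScriptAlias / /srv/app/app.wsgi

# Daemon mode: requests are handed off to a separate process group,
# which adds the internal proxying overhead that a Keep-Alive test
# can help isolate.
WSGIDaemonProcess myapp processes=2 threads=15
WSGIProcessGroup myapp
WSGIScriptAlias / /srv/app/app.wsgi
```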

Also be mindful that nginx mod_wsgi isn't necessarily intended as a
front-facing web server, as I don't think it is really set up to
handle static file serving at the same time. Thus, it is more likely
to sit behind mod_proxy, or perhaps in parallel to a media server. If
your only option is to run it behind mod_proxy, so that the front-end
server serves the static files, then you really need to take that
mod_proxy hop into consideration when testing.
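Assuming an Apache front end, that arrangement might look something like this (the directives are real Apache mod_proxy/mod_alias ones; paths and the backend port are illustrative — note the `ProxyPass ... !` exclusion must precede the general `ProxyPass`):

```apache
# Front-end Apache serves static files itself and proxies everything
# else through to the nginx mod_wsgi backend.
Alias /media/ /srv/media/
ProxyPass /media/ !
ProxyPass / http://127.0.0.1:8080/
ProxyPassReverse / http://127.0.0.1:8080/
```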

In the end, all the results are pretty meaningless anyway, as your
bottleneck isn't going to be the network, but the WSGI application
itself or any database it accesses.

Unless a solution has really bad performance relative to the others,
you are always better off using the mechanism you find easier to work
with, or which has specific features you need. :-)

Graham


--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"web.py" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/webpy?hl=en
-~----------~----~----~----~------~----~------~--~---