As I said, it was a flawed benchmark, as are all such attempts, in my
opinion. The reason for my quick test was to see what difference, if
any, there was between one interface option and another. This is
probably the only useful aspect of a benchmark; I assume that's why
you've used such benchmarks yourself when testing mod_wsgi.
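For reference, the "interface options" being compared only differ in how the server invokes a WSGI application; the application itself in a quick test like this is usually trivial. A minimal sketch of that kind of app (illustrative only, not the exact test app):

```python
# A minimal WSGI application of the kind a quick benchmark typically
# serves. Illustrative sketch only -- not the exact app from the test.

def application(environ, start_response):
    """Return a fixed plain-text body for every request."""
    body = b"Hello, world!\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]

# Sanity check without any server: call it the way a WSGI gateway would.
captured = {}
def fake_start_response(status, headers):
    captured["status"] = status

result = b"".join(application({"REQUEST_METHOD": "GET"}, fake_start_response))
print(result, captured["status"])
```

Since the app does almost no work, a benchmark against it mostly measures the hosting mechanism's overhead, which is the point of the comparison.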

The results were interesting (to me) given the ease of integration
with Nginx vs. Lighttpd. Also, since I tested both options using
keep-alives, it was not an attempt to make one look better than the
other. An interesting observation from testing both options with
keep-alives on was Nginx's more than 2x improvement (keep-alives on
vs. off) versus the much smaller improvement seen by Lighttpd. That is
something worth looking into a bit more, and its results may reinforce
Nginx's attractiveness as an asset server (images and other media).
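What keep-alive actually changes is the per-request TCP connection setup and teardown. The following self-contained sketch (Python stdlib only, against a throwaway local server rather than Nginx or Lighttpd, so the numbers are illustrative) times the same requests over one persistent connection vs. a fresh connection per request:

```python
# Sketch: time N requests over one persistent (keep-alive) connection
# vs. N fresh connections, against a trivial local HTTP server.
# Absolute numbers are illustrative only; the server is not Nginx.
import threading
import time
from http.client import HTTPConnection
from http.server import ThreadingHTTPServer, BaseHTTPRequestHandler

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # HTTP/1.1 defaults to persistent connections

    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        # Content-Length is required so the connection can stay open.
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the output clean
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

N = 50

# With keep-alive: every request reuses the same TCP connection.
conn = HTTPConnection("127.0.0.1", port)
t0 = time.perf_counter()
for _ in range(N):
    conn.request("GET", "/")
    last_body = conn.getresponse().read()
keepalive_time = time.perf_counter() - t0
conn.close()

# Without keep-alive: a fresh TCP connection (and teardown) per request.
t0 = time.perf_counter()
for _ in range(N):
    c = HTTPConnection("127.0.0.1", port)
    c.request("GET", "/")
    c.getresponse().read()
    c.close()
fresh_time = time.perf_counter() - t0

server.shutdown()
print(f"keep-alive: {keepalive_time:.4f}s  per-request connections: {fresh_time:.4f}s")
```

On loopback the difference is small; over a real network, where each new connection costs a round trip, the gap widens, which is consistent with the improvement seen in the test.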

Correct, a path served by Nginx's mod_wsgi isn't where you want to
serve media files from, but Nginx is itself a great front-facing
webserver; that is what most people use it for (as a proxy). My
instance was configured to serve static files via Nginx directly and
to pass only a certain path to mod_wsgi. In this case you can use
keep-alives to serve both media files and dynamic content from this
single front-facing web server. I agree, though, that if possible you
should split up the two operations for optimal performance.
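A sketch of that kind of setup: static paths served directly by Nginx, with only one location handed to the embedded WSGI module. The paths are made up, and the `wsgi_pass` directive name is from memory of the ngx_wsgi module's wiki, so treat this as illustrative only and check the module documentation for your version:

```nginx
server {
    listen 80;

    # Static assets served directly by Nginx.
    location /media/ {
        root /var/www;
        expires 30d;
    }

    # Only this path is handed to the embedded WSGI module.
    # Directive name is illustrative; check the module wiki.
    location /app {
        wsgi_pass /var/www/app/myapp.py;
    }
}
```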

Agreed; Nginx is the mechanism that I find easiest to work with.

Benchmarks are an interesting thing in that everyone agrees they're
flawed, yet everyone loves to perform them and to comment on them.
Interesting indeed.

David


On Jan 7, 8:23 pm, Graham Dumpleton <[EMAIL PROTECTED]>
wrote:
> On Jan 8, 2:55 am, David Cancel <[EMAIL PROTECTED]> wrote:
>
>
>
> > I know.. I know... Simple benchmarks mean nothing but I couldn't help
> > playing with the new(ish) mod_wsgi module for my favorite webserver
> > Nginx.
>
> > Nginx: http://nginx.net/
> > Nginx mod_wsgi module: http://wiki.codemongers.com/NginxNgxWSGIModule
>
> > I tested Nginx vs. the recommended setup of Lighttpd/Fastcgi. These
> > very simple and flawed tests were run on Debian Etch running under
> > virtualization (Parallels) on my Macbook Pro. Hey I said they were
> > flawed.. :-)
>
> > The results show Nginx/WSGI performing 3x as fast as Lighttpd/Fastcgi,
> > over 1000 requests per second!!
>
> > I tested both with Keep-Alives on and off. I'm not sure why Nginx/WSGI
> > performed 2x as fast with keep-alives on.
>
> I realise people love to show benchmarks with Keep-Alive enabled
> because they make things look much better, but results from using Keep-
> Alive are pretty bogus as they generally bear no relationship to
> reality. You are just never going to get a browser client sending
> hundreds of requests down the same socket connection for a start.
>
> In reality what is more likely to happen is you get one request
> against your WSGI application and then you might get subsequent
> requests for linked images from the page. But then, if the page has
> been visited before, it is likely that those linked images are already
> cached by the browser and so even that will not happen. Thus, in the
> majority of cases you will get one request only and the socket
> connection will not even be used.
>
> That this occurs is, in part, why for a high-performance web site you
> are better off hosting your media files on a different server. The main
> server hosting the WSGI application would have Keep-Alive turned off
> so that socket connections don't linger and consume resources. The
> media server on the other hand could quite happily use Keep-Alive, as
> it is the media files linked from a page that one is more likely to
> get the potential for serialised requests on the same socket
> connection.
>
> The only time that Keep-Alive may be valid in testing for dynamic web
> applications is where one is trying to remove the overhead of the
> socket connection from the picture so as to evaluate the overheads of
> any internal mechanisms applicable to the hosting mechanism.
>
> For example, in contrasting the overheads of using mod_python or
> mod_wsgi embedded vs. systems which need to do a subsequent proxy to a
> further process, such as mod_proxy, mod_wsgi daemon mode, or fastcgi,
> scgi, and ajp solutions. I.e., use Keep-Alive so that it is easier to see
> what the overhead of that proxying actually is. You really need to
> know what you are looking for in doing that though.
>
> Also be mindful that nginx mod_wsgi isn't necessarily seen as being a
> front-facing web server, as I don't think it is really set up to also
> handle static file serving at the same time. Thus, it is more seen as
> something that would sit behind mod_proxy, or maybe in parallel
> to a media server. If your only option is to sit behind mod_proxy, with
> the front-end server serving static files, then you really need to take
> that mod_proxy hop into consideration when doing testing.
>
> In any case, all the results are pretty meaningless, as your
> bottleneck isn't going to be the network, but the WSGI application
> itself or any database it accesses.
>
> Unless a solution has really bad performance relative to others, you
> are always better off using the mechanism which you find easiest to
> work with, or which has specific features you need. :-)
>
> Graham
You received this message because you are subscribed to the Google Groups 
"web.py" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/webpy?hl=en
