On May 22, 3:43 am, Cliff Wells <[EMAIL PROTECTED]> wrote:

> Again, I think this contrast is artificial.  You are setting up vertical
> scaling and horizontal scaling as mutually exclusive when they are
> anything but, and unless you have endlessly deep pockets, you should
> prefer to control the growth of your horizontal scaling.

Horizontal scaling is often better, but it's much more expensive: you need
2x the hardware (one machine for production, one for redundancy) and 2x
the dev hours.

It's also moot until you actually need it -- and you pretty much won't
need it until you can afford it.

> > As I originally pointed out, for Python web applications, in general
> > any solution will do as it isn't the network or the web server
> > arrangement that will be the bottleneck.
> If you try to scale a dynamic application and are going to pass part of
> the request off to Python on every request you are going to either fail
> spectacularly or spend an awful lot of money scaling horizontally.

The web server is often the bottleneck.  The only real bottleneck in an
app should be DB blocking and wait times, but when you have bloated
frontends or a small pool of workers, the server itself becomes the
bottleneck.  Tools like nginx help because they stave off the slow
clients and let FastCGI/Apache handle each dynamic request all at
once -- making the workers more efficient.  They also give you more
effective workers, because they have less bloat.
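To make that concrete, here's a minimal sketch of the arrangement I
mean (the paths and socket name are illustrative, not from a real
deployment):

```nginx
# nginx absorbs the slow clients and serves static files itself;
# the FastCGI pool only ever sees complete, buffered requests,
# so a backend worker is never tied up by a slow connection.
server {
    listen 80;

    # static content never touches a backend worker
    location /static/ {
        root /var/www/myapp;               # illustrative path
    }

    # dynamic requests are buffered, then handed to the pool at once
    location / {
        include fastcgi_params;
        fastcgi_pass unix:/tmp/myapp.sock; # illustrative socket
    }
}
```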

Two things to note:
 1- There is a comparison from a few years ago that charts Apache,
lighty, nginx, thttpd, and litespeed performance at every 100 r/s
increment on static content.  It let you see where each server's
strengths were.

 2- There is a law of diminishing marginal utility with workers.  On my
mod_perl setup, every worker I add after the first gets me 80 more r/s;
the 7th and 8th workers get me 20 r/s each; a 9th will get me 0, and
anything more will degrade performance.
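As a toy model of that curve (the per-worker numbers are just the rough
figures from my setup above, not measurements you should rely on):

```python
# Marginal r/s added by each successive worker (rough, illustrative
# numbers): early workers add ~80 r/s each, the 7th and 8th add ~20,
# and the 9th adds nothing.
MARGINAL_GAIN = [80, 80, 80, 80, 80, 80, 20, 20, 0]

def total_throughput(workers):
    """Total requests/sec you'd see with `workers` processes running."""
    return sum(MARGINAL_GAIN[:workers])

# Past worker 8 the total flatlines -- adding more buys you nothing,
# and in practice costs you via contention and memory pressure.
for n in range(1, len(MARGINAL_GAIN) + 1):
    print(n, "workers ->", total_throughput(n), "r/s")
```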


> I'd consider "increasing memory usage" to be a bug in the application
> and outside the scope of discussion.  

Perhaps not.  In many Apache setups, memory is allocated to the workers
as a speed boost.  In mod_perl, for example, each child retains/reserves
memory for every called function/variable so it doesn't have to
reallocate on future requests.  It's a tradeoff of speed vs. memory --
sometimes the speed isn't as necessary and you'd rather have the memory
back, but you can't turn that off.
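For what it's worth, the usual partial workaround is to recycle the
children rather than stop the caching itself -- a sketch using Apache
prefork directives (the values are illustrative, not recommendations):

```apache
# You can't stop a mod_perl child from caching, but you can bound
# how long each child lives, so its retained memory gets reclaimed.
StartServers          4
MaxClients            8      # hard cap on simultaneous children
MaxRequestsPerChild   500    # respawn each child after 500 requests
```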

> I disagree.  As I mentioned earlier, someone I know recently took an
> Apache/mod_php application consuming 1.2GB of RAM down to 200MB using
> Nginx/FastCGI with no loss in performance or functionality.  It's not
> clear to me why a Python application would be much different.

Most likely that happened because of the phenomenon above: each Apache
mod_php process was bloating on RAM.  Running PHP via FastCGI under
Apache can improve that, since you get better control of memory
allocation and use, but it's usually not as dramatic as going straight
to nginx.

>  By using up system resources, Apache
> limits the number of instances of the application that can be run on a
> single machine, and by extension across multiple machines.

Very well articulated!

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"pylons-discuss" group.
To post to this group, send email to pylons-discuss@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/pylons-discuss?hl=en
-~----------~----~----~----~------~----~------~--~---
