Re: [web2py] Re: Rocket vs mod_wsgi

2011-12-13 Thread pbreit
You can get free CDN from CloudFlare.

[web2py] Re: Rocket vs mod_wsgi

2011-12-11 Thread rif
And now the weirdest thing:

Enabling two CPUs on the virtual machine gave me the following weird 
results:

nginx: 31.92 [#/sec]
apache: 10.63 [#/sec]
rocket: 10.36 [#/sec]

So 1000 requests with a concurrency of 20 on 2 CPUs actually slow down 
apache and rocket. I thought that apache and rocket were hitting a memory 
limit, so I raised the memory to 1024 MB, but the results remained the same.

Is this even possible? I can understand that the performance could stay the 
same with 2 CPUs (the servers are not using the second one), but an actual 
drop in performance is not very intuitive.

I guess the default configuration of apache and rocket should be improved, 
since most servers have more than 1 CPU. Or maybe it's just my environment.


[web2py] Re: Rocket vs mod_wsgi

2011-12-11 Thread Massimo Di Pierro
Python multithreaded programs (all of them, including rocket and
mod_wsgi) decrease in performance the more CPUs you have. This is because
of the GIL. It is a well-known problem and, in my view, the biggest
problem with Python. In the case of apache, to improve things, you
have to configure apache to run one child per CPU.
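
For the record, a sketch of what "one child per CPU" can look like with mod_wsgi's daemon mode (the directive names are real mod_wsgi/Apache ones, but the process/thread counts and paths here are illustrative assumptions, not a tested configuration from this thread):

```apache
# Run the app in 2 daemon processes (one per CPU) so the GIL in one
# process does not serialize work across both cores; each process can
# still use a few threads for I/O-bound concurrency.
WSGIDaemonProcess web2py processes=2 threads=5 \
    python-path=/opt/web-apps/web2py
WSGIProcessGroup web2py
WSGIScriptAlias / /opt/web-apps/web2py/wsgihandler.py
```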

On Dec 11, 8:04 am, rif feric...@gmail.com wrote:


[web2py] Re: Rocket vs mod_wsgi

2011-12-11 Thread rif
This comparison was intended to help with writing the "why web2py" paragraph 
of the book (https://groups.google.com/d/topic/web2py/29jdfjejwZo/discussion).

[web2py] Re: Rocket vs mod_wsgi

2011-12-11 Thread Massimo Di Pierro
I understand and it is very much appreciated. I will correct it.

massimo

On Dec 11, 10:15 am, rif feric...@gmail.com wrote:


[web2py] Re: Rocket vs mod_wsgi

2011-12-11 Thread peter
Any chance of trying uwsgi on its own? Something like this:

uwsgi --pythonpath /opt/web-apps/web2py --module wsgihandler --http :80 -s /tmp/we2py.sock

Thanks
Peter

On Dec 11, 1:10 pm, rif feric...@gmail.com wrote:
 In the same environment I tested nginx configuration:

 nginx: 1.0.10
 uwsgi: 0.9.8.1

 ab -n 1000 -c 20 http://192.168.122.187/welcome/default/index
 This is ApacheBench, Version 2.3 $Revision: 655654 $
 Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
 Licensed to The Apache Software Foundation, http://www.apache.org/

 Benchmarking 192.168.122.187 (be patient)
 Completed 100 requests
 Completed 200 requests
 Completed 300 requests
 Completed 400 requests
 Completed 500 requests
 Completed 600 requests
 Completed 700 requests
 Completed 800 requests
 Completed 900 requests
 Completed 1000 requests
 Finished 1000 requests

 Server Software:        nginx/1.0.10
 Server Hostname:        192.168.122.187
 Server Port:            80

 Document Path:          /welcome/default/index
 Document Length:        11432 bytes

 Concurrency Level:      20
 Time taken for tests:   58.306 seconds
 Complete requests:      1000
 Failed requests:        0
 Write errors:           0
 Total transferred:      11819000 bytes
 HTML transferred:       11432000 bytes
 Requests per second:    17.15 [#/sec] (mean)
 Time per request:       1166.116 [ms] (mean)
 Time per request:       58.306 [ms] (mean, across all concurrent requests)
 Transfer rate:          197.96 [Kbytes/sec] received

 Connection Times (ms)
               min  mean[+/-sd] median   max
 Connect:        0    1   1.3      0      12
 Processing:   218 1155 180.3   1104    2045
 Waiting:      217 1154 180.3   1104    2045
 Total:        225 1156 180.0   1105    2046

 Percentage of the requests served within a certain time (ms)
   50%   1105
   66%   1124
   75%   1152
   80%   1177
   90%   1247
   95%   1507
   98%   2009
   99%   2022
  100%   2046 (longest request)

 Conclusion:
 Just a bit slower than Rocket.

 Nginx + uwsgi is my current server configuration.
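
For context, a minimal sketch of the nginx + uwsgi wiring being benchmarked here (hypothetical, assuming the Unix socket and paths used elsewhere in this thread):

```nginx
server {
    listen 80;
    server_name 192.168.122.187;

    # let nginx serve the welcome app's static files itself
    location /welcome/static/ {
        alias /opt/web-apps/web2py/applications/welcome/static/;
    }

    # hand everything else to uWSGI over the Unix socket
    location / {
        uwsgi_pass unix:/tmp/we2py.sock;
        include uwsgi_params;
    }
}
```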


[web2py] Re: Rocket vs mod_wsgi

2011-12-11 Thread rif
Compiled uwsgi 0.9.9.3 (the 0.9.8.1 did not know about pythonpath)

uwsgi --pythonpath /opt/web-apps/web2py --module wsgihandler --http :80 -s /tmp/we2py.sock > uwsgi.log 2>&1

1 CPU: 17.83 [#/sec] (better than rocket)
2 CPUs: 17.98 [#/sec]

uwsgi --pythonpath /opt/web-apps/web2py --module wsgihandler --http :80 -s /tmp/we2py.sock -M -p 2 > uwsgi.log 2>&1

2 CPUs: 31.30 [#/sec]

I guess even with -p 2 enabled it was still not as fast as nginx for static 
content.

Anyhow, is this a recommended setup? Doesn't it show the same behavior as 
gunicorn ("Without this [nginx] buffering Gunicorn will be easily 
susceptible to denial-of-service attacks.")?


Re: [web2py] Re: Rocket vs mod_wsgi

2011-12-11 Thread Roberto De Ioris

 Compiled uwsgi 0.9.9.3 (the 0.9.8.1 did not know about pythonpath)
 uwsgi --pythonpath /opt/web-apps/web2py --module wsgihandler --http :80 -s
 /tmp/we2py.sock > uwsgi.log 2>&1

 1 CPU: 17.83 [#/sec] (better than rocket)
 2 CPUs: 17.98 [#/sec]

 uwsgi --pythonpath /opt/web-apps/web2py --module wsgihandler --http :80 -s
 /tmp/we2py.sock -M -p 2 > uwsgi.log 2>&1

 2 CPUs: 31.30 [#/sec]

 I guess even with -p 2 enabled it was still not as fast as nginx for static
 content.


Because you are comparing a preforking (apache-style) approach with a
non-blocking one (nginx). For deterministic tasks (like serving static
files) there is no competition. This is why having a non-blocking server
in front of the application server is a common setup. Having a front-end
will obviously slow things down a little, but the impact (as you have
already noted) is practically non-existent.

(By the way, --http is still a proxied setup, as the uwsgi http server
will run in another process; if you want native http you have to use
--http-socket.)
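
A hedged illustration of the distinction being drawn here (both flags are real uWSGI options; the paths simply reuse the ones from earlier in the thread):

```shell
# --http spawns uWSGI's own HTTP proxy as a separate process in front
# of the workers, so requests are still proxied:
uwsgi --pythonpath /opt/web-apps/web2py --module wsgihandler --http :80

# --http-socket makes the workers themselves speak HTTP natively,
# with no intermediate proxy process:
uwsgi --pythonpath /opt/web-apps/web2py --module wsgihandler --http-socket :80
```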


In addition to this, you have to think about security: nginx, apache,
cherokee... are all developed to be front-line servers, and that
requires designing with a higher level of security in mind.


 Anyhow, is this a recommended setup? Doesn't it show the same behavior as
 gunicorn ("Without this [nginx] buffering Gunicorn will be easily
 susceptible to denial-of-service attacks.")?


As I have already said, having a front-line server (which is what --http
gives you) is always a good choice, whichever front-line server you pick.
Nginx obviously tolerates high concurrency better when serving static
files (have you tried the --static-map and --check-static options in
uWSGI 1.0.x?), but if you are developing the next Amazon, you should
really start thinking about using a CDN for your static assets instead
of your own webserver.
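
A quick sketch of the --static-map option mentioned above (a real uWSGI option; the mapping path is an assumption based on the web2py layout used earlier in the thread):

```shell
# Serve /welcome/static directly from uWSGI, bypassing the Python
# application for those requests:
uwsgi --pythonpath /opt/web-apps/web2py --module wsgihandler \
      --http :80 \
      --static-map /welcome/static=/opt/web-apps/web2py/applications/welcome/static
```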

Regarding gunicorn: it does only one thing and does it well (TM); its
philosophy is simplicity. Writing logic to check the good behaviour of
your app (like uWSGI morbidly does) is the opposite of that philosophy.
I perfectly agree with this approach (programmers should fix their
apps), but I work at an ISP where customers expect their apps to be
available as much as possible and tend to blame the ISP for their
faults (yes, it is an ugly world :).
That's why uWSGI exists and that's why you cannot compare it with
gunicorn (different philosophy and different target).


-- 
Roberto De Ioris
http://unbit.it