This is information I want to share.

I'm not discovering America here, but this may be useful to someone.
> Let's begin with the test results
All tests were done with web2py 1.96.4, Apache 2.2.16 / mod_wsgi 3.3 /
Python 2.6.6 / Debian 64-bit, on an AMD64 4x 3.2 GHz machine.
> Test #1
WSGIDaemonProcess web2py user=kmax group=kmax home=/home/kmax/web/web2py 
processes=1 threads=4 maximum-requests=1000
Memory used by the WSGI daemon:
#1 VIRT:356m  RES:45m SHR:5348 
$ ab -n 1000 -c 4 http://web2py.ru/welcome/default/index
Document Path:          /welcome/default/index
Document Length:        10442 bytes
Concurrency Level:      4
Time taken for tests:   54.334 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Requests per second:    18.40 [#/sec] (mean)
Time per request:       217.334 [ms] (mean)
Time per request:       54.334 [ms] (mean, across all concurrent requests)
Transfer rate:          195.29 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.1      0       1
Processing:   153  217  23.3    216     497
Waiting:      153  216  23.2    216     496
Total:        154  217  23.3    216     498

Percentage of the requests served within a certain time (ms)
  50%    216
  66%    224
  75%    228
  80%    231
  90%    241
  95%    247
  98%    257
  99%    264
 100%    498 (longest request)

###############################################
> Test #2
WSGIDaemonProcess web2py user=kmax group=kmax home=/home/kmax/web/web2py 
processes=4 threads=1 maximum-requests=1000
Memory used by the WSGI daemons:
#1 VIRT:201m  RES:26m SHR:5336
#2 VIRT:201m  RES:26m SHR:5336
#3 VIRT:201m  RES:26m SHR:5336
#4 VIRT:201m  RES:26m SHR:5336
$ ab -n 1000 -c 4 http://web2py.ru/welcome/default/index
Document Path:          /welcome/default/index
Document Length:        10442 bytes

Concurrency Level:      4
Time taken for tests:   5.324 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      10859098 bytes
HTML transferred:       10442000 bytes
Requests per second:    187.82 [#/sec] (mean)
Time per request:       21.297 [ms] (mean)
Time per request:       5.324 [ms] (mean, across all concurrent requests)
Transfer rate:          1991.74 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.1      0       1
Processing:    18   21  13.5     19     239
Waiting:       18   21  13.5     19     239
Total:         18   21  13.5     19     240

Percentage of the requests served within a certain time (ms)
  50%     19
  66%     20
  75%     20
  80%     20
  90%     23
  95%     25
  98%     29
  99%     46
 100%    240 (longest request)

###############################################
> Test #3
WSGIDaemonProcess web2py user=kmax group=kmax home=/home/kmax/web/web2py 
processes=1 threads=1 maximum-requests=1000
MEM by wsgi daemon
#1  VIRT:202m  RES:27m SHR:5344
$ ab -n 1000 -c 4 http://web2py.ru/welcome/default/index
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking web2py.ru (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests


Server Software:        nginx/1.0.4
Server Hostname:        web2py.ru
Server Port:            80

Document Path:          /welcome/default/index
Document Length:        10442 bytes

Concurrency Level:      4
Time taken for tests:   18.126 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      10870000 bytes
HTML transferred:       10442000 bytes
Requests per second:    55.17 [#/sec] (mean)
Time per request:       72.503 [ms] (mean)
Time per request:       18.126 [ms] (mean, across all concurrent requests)
Transfer rate:          585.65 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.1      0       2
Processing:    68   72  12.5     70     281
Waiting:       68   72  12.5     69     281
Total:         68   72  12.5     70     281

Percentage of the requests served within a certain time (ms)
  50%     70
  66%     72
  75%     73
  80%     73
  90%     73
  95%     74
  98%     92
  99%     92
 100%    281 (longest request)
==============
> Test #4
Same configuration as above, but with only one request at a time.
$ ab -n 1000 -c 1 http://web2py.ru/welcome/default/index

Server Software:        nginx/1.0.4
Server Hostname:        web2py.ru
Server Port:            80

Document Path:          /welcome/default/index
Document Length:        10442 bytes

Concurrency Level:      1
Time taken for tests:   22.554 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      10870000 bytes
HTML transferred:       10442000 bytes
Requests per second:    44.34 [#/sec] (mean)
Time per request:       22.554 [ms] (mean)
Time per request:       22.554 [ms] (mean, across all concurrent requests)
Transfer rate:          470.65 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       1
Processing:    18   22   9.5     18     251
Waiting:       17   22   9.5     18     251
Total:         18   22   9.5     19     251

Percentage of the requests served within a certain time (ms)
  50%     19
  66%     21
  75%     27
  80%     29
  90%     32
  95%     35
  98%     37
  99%     40
 100%    251 (longest request)
=============================================
> My Conclusions
Python threading has significant overhead.
This can be neglected in controllers that do a lot of computation,
but on light pages like the default welcome index it is a really big deal.
A single-threaded process runs more than twice as fast as the threaded
configuration under my test conditions.
One thread per process is preferable for light pages, provided you are sure
that every page will complete within a guaranteed time.
Many threads are good for heavy pages with lots of I/O and database I/O,
so that one request does not block the others.
The WSGI daemon should have at least one process per core/processor to get
maximum performance out of the CPU.
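On the 4-core machine used in these tests, a process-per-core setup is simply
the Test #2 configuration (adjust user, group, and home to your own setup):

```apache
# One single-threaded daemon process per core (4 cores here),
# following the same directives used in the tests above.
WSGIDaemonProcess web2py user=kmax group=kmax home=/home/kmax/web/web2py \
    processes=4 threads=1 maximum-requests=1000
```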

> The Final Battle:
1 process with 100 threads versus 100 processes with a single thread each.
$ ab -n 1000 -c 100 http://web2py.ru/welcome/default/index
Memory: 100 threads wins (under 100 MB for all threads, versus 26 MB per
process in the 100-process case).
Speed: 100 processes wins (Time taken for tests: 10.296 s for 100 processes
vs 59.165 s for 100 threads).
! Note the 54.334 s for 4 threads with 4 concurrent connections: the
multithreading overhead is almost constant and does not depend on the
number of running threads. Why?
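My guess (an assumption, not something I measured here): CPython's Global
Interpreter Lock (GIL) allows only one thread to execute Python bytecode at a
time, so CPU-bound requests in a threaded daemon are serialized no matter how
many threads exist. A minimal sketch of the effect:

```python
# Sketch: under the GIL, CPU-bound work in threads does not run in
# parallel, so 4 threads take roughly 4x the wall time of 1 thread.
import threading
import time

def busy(n=200_000):
    # Pure-Python CPU-bound work; holds the GIL the whole time.
    total = 0
    for i in range(n):
        total += i * i
    return total

def run_threads(k):
    threads = [threading.Thread(target=busy) for _ in range(k)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

def timed(fn):
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

one = timed(lambda: run_threads(1))
four = timed(lambda: run_threads(4))
# No parallel speedup: 4 threads of CPU-bound work take longer than 1.
print(f"1 thread: {one:.3f}s, 4 threads: {four:.3f}s")
```

Separate processes each have their own interpreter and their own GIL, which
would explain why the 100-process configuration wins on speed for this
CPU-bound page.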

PS It took me three days to finish this post. English is not my strong
side; I hope this information is useful.
