(Posting attempt no. 3, apologies for duplicates)
I run 3 sites on my VPS at www.redwoodvirtual.com

Here are some stats for a VPS with 64 MB of RAM and 64 MB of swap.
Here's the output of top:

top - 16:31:30 up 53 days,  4:19,  1 user,  load average: 0.09, 0.35, 0.79
Tasks:  64 total,   1 running,  57 sleeping,   0 stopped,   6 zombie
Cpu(s):   0.0% user,   0.7% system,   0.0% nice,  99.3% idle
Mem:     60368k total,    55452k used,     4916k free,     4344k buffers
Swap:    65528k total,    39296k used,    26232k free,    15992k cached


Here's my Apache config:
<IfModule worker.c>
StartServers         2
MaxClients         50
MinSpareThreads    10
MaxSpareThreads     25
ThreadsPerChild     25
MaxRequestsPerChild  0
</IfModule>

It still starts up 29 processes, as shown by:
# ps -e | grep apache2 | wc
     29     116     928
If someone could point me to an explanation of how it gets to 29
processes with that config, I would appreciate it.
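One thing worth checking (just a guess on my part): on some kernels each thread shows up as its own entry in ps, so the 29 may be counting threads rather than true processes. Comparing the plain process list against the per-thread list would tell:

```shell
# Count apache2 entries two ways (the [a] bracket keeps grep from matching itself):
apache_procs=$(ps -e | grep -c '[a]pache2' || true)    # one line per process
apache_threads=$(ps -eL | grep -c '[a]pache2' || true) # one line per thread (LWP)
echo "processes: $apache_procs  threads: $apache_threads"
```

If the two numbers differ, the 29 is a thread count, which would be consistent with the worker MPM config above.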

I ran the performance test remotely from a server with a 20 ms ping
time to my VPS. (Itself also a VPS, but with another provider.)

The test simply fetches the specified URL a given number of times from
each of a given number of threads.
Here's the code: http://dpaste.com/hold/10514/
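In case the paste expires, the script is roughly the following (a Python 3 reconstruction of the idea, not the exact code at the link; the injectable `fetch` parameter is my addition to make it testable):

```python
import sys
import time
import threading
import urllib.request


def run_load_test(url, num_threads, requests_per_thread,
                  fetch=lambda url: urllib.request.urlopen(url).read()):
    """Fetch `url` requests_per_thread times from each of num_threads threads."""
    errors = []

    def worker():
        for _ in range(requests_per_thread):
            try:
                fetch(url)
            except Exception as exc:  # 500s, timeouts, connection resets, ...
                errors.append(exc)

    threads = [threading.Thread(target=worker) for _ in range(num_threads)]
    start = time.time()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.time() - start
    total = num_threads * requests_per_thread
    print("Total requests:%d Time:%.1f Errors:%d" % (total, elapsed, len(errors)))
    return total, elapsed, len(errors)


if __name__ == "__main__" and len(sys.argv) >= 3:
    # e.g.: python http_getter.py 25 40
    run_load_test("http://www.redwoodvirtual.com/", int(sys.argv[1]), int(sys.argv[2]))
```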

The page I'm fetching is about 12 KB:
-rw-rw-r--  1 caz caz 12801 May 17 16:32 index.html
It includes 15 news entries from the database, each truncated after a
couple of words, plus a quote of the day which also gets fetched from
the DB. Regular Django templating, with PostgreSQL as the database.
No caching enabled whatsoever, and no optimisation.

This obviously ignores fetching all the images and other static
content. Stats for those should be obtainable too, though, and I'm not
using Django to serve them.

The times mentioned are in seconds.

Using 25 threads and 40 requests per thread:
$ python http_getter.py 25 40
Some 50 requests were not served and came back as 500 server errors,
and the machine had to swap severely.
Total requests:1000 Time:143.0

Lowering the concurrency by a factor of five speeds things up, with no
more server errors and no more swapping. It would seem I need to tune
my config some more.
$ python http_getter.py 5 40
Total requests:200 Time:28.2

And it scales pretty linearly.
$ python http_getter.py 5 400
Total requests:2000 Time:214.6

Here are some calls to a similar page on the same VPS but on another
virtual host. This one has file caching enabled in Django though, and
the page is 7 KB in size:
$ python http_getter.py 25 40
Total requests:1000 Time:38.7
Looks like about 26 requests per second.
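For reference, file-based caching in that era of Django is switched on in settings.py with something like the following (a sketch; the cache directory path here is just an example, and the exact setting names depend on your Django version):

```python
# settings.py fragment -- enable Django's file-based cache backend
# (0.96/1.0-era CACHE_BACKEND URI syntax; pick any writable directory).
CACHE_BACKEND = 'file:///var/tmp/django_cache'
CACHE_MIDDLEWARE_SECONDS = 600  # cache pages for 10 minutes
```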

With caching enabled I suspect serving all 10000 requests for your day
inside an hour will be fine: 10000/3600 ≈ 2.8 requests per second
needed, and my cached tiny VPS manages 26/second. You might even make
it given the other requests for static content, which should be cached
after the first hit.
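The back-of-the-envelope check, with the numbers from above:

```python
# Headroom check using the figures in this post.
needed = 10000 / 3600    # requests/sec to serve 10k requests in an hour
measured = 1000 / 38.7   # requests/sec from the cached test run above
print(round(needed, 1), round(measured, 1))  # prints: 2.8 25.8
```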

All of the above relies heavily on my little load-testing script, and
the load all comes from a single machine. Please have a look at the
script before taking these stats as meaningful :)


--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"Django users" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/django-users?hl=en
-~----------~----~----~----~------~----~------~--~---
