Hi Graham,

Thanks for your answer.  That link helped out quite a bit.

As for the number of threads, you're right that 50 per process is
most likely excessive.  The app handles most requests in 200-300 ms,
but with some caching I think that can be reduced to the 20-50 ms
range.  As for the number of users per box, I would be content with a
starting point of 1000 users each making a request every 10 s, which
comes out to about 100 req/s.  And if each request takes a fraction
of a second, then yes, we don't need anywhere near that many threads.
One thing I want to do is experiment to find the optimal number
there.
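
For the record, the back-of-the-envelope estimate I'm working from is
just arrival rate times response time (Little's law); the figures
below are my assumptions, not measurements:

import __future__  # plain Python, works on 2.x or 3.x

users = 1000                # target users per box (assumption)
interval = 10.0             # seconds between requests per user
response_time = 0.3         # seconds, roughly today's worst case

rate = users / interval                 # ~100 req/s
concurrent = rate * response_time       # ~30 requests in flight

print("%.0f req/s, ~%.0f concurrent requests" % (rate, concurrent))
# 100 req/s, ~30 concurrent requests

So even at the current response times, something like 6 processes
with 5-10 threads each should cover the load with headroom, and far
fewer threads once caching brings response times down.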


One thing I was wondering is: how do you get a stack dump of the
application running within mod_wsgi?  When I enumerate the threads
from within the app, only a single thread is returned, and it only
shows the WSGI request-handling stack trace.  I have a bottleneck
that shows up even under light load, and I've narrowed it down to our
own code, but it's very difficult to see where the threads are
currently waiting.
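
To make it concrete, the kind of thing I'd like to be able to do is
dump every thread's stack, e.g. via sys._current_frames(), roughly
like this (just a sketch of my own; dump_stacks() and the idea of
triggering it from a private URL are mine, not anything mod_wsgi
provides):

import sys
import traceback

def dump_stacks():
    # sys._current_frames() covers every thread that currently has a
    # Python frame, not just the ones the threading module created.
    chunks = []
    for thread_id, frame in sys._current_frames().items():
        chunks.append("Thread %d:\n" % thread_id)
        chunks.extend(traceback.format_stack(frame))
        chunks.append("\n")
    return "".join(chunks)

# Idea: expose this through a private URL (or a signal handler) and
# write the result to the log when the app feels stuck, e.g.
#   logging.getLogger("stacks").error(dump_stacks())

Is that the right thing to be looking at, or is there a better way to
get at the request threads under mod_wsgi?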

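On the logging side: if each daemon process really does get its own
interpreter, then rather than locking a shared file I'll probably
start by giving each process its own log file, something like this (a
sketch; the path and format are placeholders):

import logging
import os

def configure_logging():
    # One file per daemon process, keyed on the pid, so processes
    # never interleave writes in the same file.
    handler = logging.FileHandler(
        "/var/log/myapp/app-%d.log" % os.getpid())
    handler.setFormatter(logging.Formatter(
        "%(asctime)s %(process)d %(threadName)s %(levelname)s "
        "%(message)s"))
    root = logging.getLogger()
    root.addHandler(handler)
    root.setLevel(logging.INFO)

Does that sound sane, or is there a cleaner way people handle this in
daemon mode?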

Thanks again,

Ram




On Jan 15, 10:31 pm, Graham Dumpleton <[email protected]>
wrote:
> 2010/1/16 Ram <[email protected]>:
>
> > Hello all,
>
> > I've been working on configuring apache/mod_wsgi for a Django app that
> > I expect to get high volume and I've been stumbling over some details
> > that I don't quite understand about my configuration.
>
> > I am using Apache worker with mod_wsgi in daemon mode.  Since I have
> > an 8-core box, I am setting up mod_wsgi with 6 processes and 50
> > threads per process (with a similar setup for apache worker: 300
> > clients, 50 threads per child).
>
> 50 threads per process is quite excessive. Most applications would
> quite adequately handle loads with 5 threads. Even the default of 15
> threads where the 'threads' argument is not set is potentially excessive.
>
> Of course, how many you actually need is heavily dictated by the
> number of long lived requests. So, what is the average response time
> for requests? How many concurrent requests are you expecting? How long
> are your longest request times?
>
> > The questions I have are:
>
> > (1)  Do the 6 mod_wsgi processes use a shared Python interpreter and
> > heap?  For example, if my Django app is using the SessionStore
> > middleware, would all processes see the same SessionStore or are there
> 6 separate heaps and each has its own?
>
> > (2)  Is it possible to set the processes up in this way to not
> > conflict when writing to a single application log file (using python's
> > standard logging)?  I am getting a situation where the log files are
> > corrupted with interleaved or missing logging.  It almost appears that
> > not all the processes are even able to log to the logging file.
>
> Read:
>
>  http://code.google.com/p/modwsgi/wiki/ProcessesAndThreading
>
> It talks a bit about data sharing issues.
>
> In short, each process has its own memory and interpreter. If
> accessing shared resources, you need some form of locking.
>
> Graham