On Tue, 2004-08-17 at 17:16, Nathan Folkman wrote:
> Each addresses different issues. The thread pools are nice in that they
> give you a way to isolate (by URL I believe?) certain types of activities
> to particular pools of threads, thus preventing any one particular type
> of activity from utilizing all of the threads. Here's a real-world example.
>
> Imagine a single Web Server that serves both an application that does a
> lot of interaction with a database, and also serves up system status
> pages and other information. Suppose you have the server configured with
> a max of 10 connection threads. Now imagine a user comes along and
> decides to run some sort of degenerate query that takes on the order of
> minutes.
> The same user, seeing nothing happening, keeps hitting reload. In fact,
> he's so eager to see some results that he hits the reload button 10
> times. You've now got all 10 of your connection threads busy working on
> his long running database query. Any user wanting to see the system
> status would not be able to get to the page since the server is now
> thread maxed.
>
> Now imagine you've set things up with thread pools. The database tool
> gets a max of 5, and the system stats page gets a max of 5. Now our over
> anxious database application user can only tie up a maximum of 5
> connection threads working on his database query, leaving up to 5
> connection threads for users wishing to see the system stats page.
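The partitioning described above can be sketched in a few lines (Python here rather than AOLserver's actual Tcl/C implementation; the URL prefixes and handler are made up for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

# Two fixed pools of 5, mirroring the example: one for the database
# application, one for the system status pages.
db_pool = ThreadPoolExecutor(max_workers=5)
stats_pool = ThreadPoolExecutor(max_workers=5)

def handle(url):
    # stand-in for the real request handler
    return f"served {url}"

def dispatch(url):
    # route the request to the pool for its URL class
    pool = db_pool if url.startswith("/db") else stats_pool
    return pool.submit(handle, url)

print(dispatch("/db/query").result())
```

Even if all 5 db_pool workers are stuck on the slow query, stats_pool still has 5 workers free, which is exactly the isolation the example describes.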


How do you determine how many threads to add to each pool?

Let's say you determine the webserver can process 10 requests
concurrently.  You could assign 5 threads to each pool.

In this situation, one pool may be idle and the server may have spare
capacity, yet it will still reject requests assigned to the other pool.

Alternatively, you could have one pool of 10 threads and classify
connections as they are received.  You might then specify that both
users and admins can max out at 8 conn threads each.

Obviously they can't both have 8 conn threads, so what you're
effectively specifying here is that the *other* class of connection gets
a minimum of 2 conn threads.  The effect is more pronounced the more
resource pools you have.
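One way to see why the per-class max implies a minimum for everyone else is to model it with two counters: a global cap and a per-class cap, and only accept a connection when both have room. This is a sketch of the idea, not AOLserver's implementation (the class names and numbers are just the ones from the example above):

```python
import threading

TOTAL = 10          # total connection threads
CLASS_MAX = 8       # per-class cap; implies the other class keeps >= 2

global_slots = threading.BoundedSemaphore(TOTAL)
class_slots = {
    "user": threading.BoundedSemaphore(CLASS_MAX),
    "admin": threading.BoundedSemaphore(CLASS_MAX),
}

def try_accept(conn_class):
    """Accept a connection only if both the class cap and the global cap allow it."""
    if not class_slots[conn_class].acquire(blocking=False):
        return False
    if not global_slots.acquire(blocking=False):
        # class cap OK but server is full; give the class slot back
        class_slots[conn_class].release()
        return False
    return True

def release(conn_class):
    global_slots.release()
    class_slots[conn_class].release()

# 9 user connections arrive: 8 are accepted, the 9th hits the class cap,
# and admins can still claim the 2 remaining global slots.
accepted = sum(try_accept("user") for _ in range(9))
print(accepted, try_accept("admin"), try_accept("admin"))
```

With a single shared pool, the idle-capacity problem above goes away: either class can use up to 8 threads when the other is quiet, but neither can ever starve the other below 2.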


Also, nothing to do with the SEDA or thread pool approach but looking at
the current implementation, it's a little inflexible in that it assigns
requests to pools based on URL.  This lets you partition off your order
checkout path, for example.  It doesn't let you handle
quality of service situations like favouring logged in users over
anonymous users, or favouring fat-cat CEOs over common proles... :-)

A flexible callback interface would be nice to have here.
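By way of illustration, such a callback interface might look something like this (purely hypothetical names; nothing like this exists in the current implementation, and the request is modelled as a plain dict):

```python
# Hypothetical classification hook: the server asks registered callbacks
# which pool a connection belongs to, instead of matching on URL alone.
classifiers = []

def register_classifier(fn):
    classifiers.append(fn)

def classify(request):
    # first callback to return a pool name wins; fall back to "default"
    for fn in classifiers:
        pool = fn(request)
        if pool is not None:
            return pool
    return "default"

# e.g. favour logged-in users over anonymous ones
register_classifier(lambda req: "priority" if req.get("session") else None)

print(classify({"session": "abc123"}))
print(classify({}))
```

Because the callback sees the whole request, it could just as easily key off a login cookie, a client certificate, or the fat-cat-CEO flag as off the URL.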


--
AOLserver - http://www.aolserver.com/

To Remove yourself from this list, simply send an email to <[EMAIL PROTECTED]> with the
body of "SIGNOFF AOLSERVER" in the email message. You can leave the Subject: field of 
your email blank.
