- The Scheduler implementation can have m threads scheduling events for n
handlers.
- The Watchdog implementation as proposed has one additional thread per
handler. That means the number of threads available for handling protocols
can be at most half of the threads actually used and allocated by the VM.

2 suggestions:
- Do a comparison of scalability and performance while varying the thread
pool size. You may notice some patterns.
- Create a watchdog implementation that has m watchdog threads watching n
handlers, and you may notice the same multithreading issues in it that you
are referring to.

Peter: Are you saying we should always have this strategy - one additional
thread per handler to watch for timeouts? If not, please send an
implementation that allows an m-watchdog-to-n-handler mapping.

One watchdog thread per handler is too heavy just for timeouts. Timeouts can,
to some extent, also be handled by socket timeout settings.
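
For illustration, here is a minimal sketch (hypothetical handler code, not
anything that exists in James) of letting the socket itself enforce a read
timeout; the 30-second value is only an example:

    import java.io.IOException;
    import java.io.InputStream;
    import java.net.Socket;
    import java.net.SocketTimeoutException;

    class SoTimeoutExample {
        static void handle(Socket socket) throws IOException {
            // Let the socket enforce the read timeout instead of a watchdog thread.
            socket.setSoTimeout(30 * 1000);        // 30 seconds, illustrative value
            InputStream in = socket.getInputStream();
            try {
                int b = in.read();                 // blocks for at most 30 seconds
                // ... process protocol input ...
            } catch (SocketTimeoutException e) {
                socket.close();                    // client idle too long; give up
            }
        }
    }

This only covers reads that block on the socket; a handler that is busy
without reading still needs a watchdog or scheduler, which is why it helps
only to some extent.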

If you have an m-watchdogs-to-n-handlers implementation, you will have some
data structures shared by multiple threads, and some of the issues you are
talking about.
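
To make that concrete, here is a rough sketch of an m-to-n watchdog (invented
names, not existing code); note that every reset from a handler and every
sweep from a watchdog thread has to synchronize on the one shared expiry
table:

    import java.util.HashMap;
    import java.util.Iterator;
    import java.util.Map;

    class SharedExpiryWatchdog {
        // handler -> Long(expiry time); shared by all m watchdog threads
        private final Map expiryTimes = new HashMap();

        // Called by protocol handlers to push their deadline forward.
        void reset(Object handler, long timeoutMillis) {
            synchronized (expiryTimes) {            // contention point
                expiryTimes.put(handler,
                    new Long(System.currentTimeMillis() + timeoutMillis));
            }
        }

        // Each of the m watchdog threads calls this periodically.
        void sweep() {
            long now = System.currentTimeMillis();
            synchronized (expiryTimes) {            // contention point
                for (Iterator i = expiryTimes.entrySet().iterator(); i.hasNext();) {
                    Map.Entry e = (Map.Entry) i.next();
                    if (((Long) e.getValue()).longValue() <= now) {
                        i.remove();
                        // time out the corresponding handler here
                    }
                }
            }
        }
    }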

One way of reducing contention is to not have shared data structures. You may
notice that the implementation does that: the per-thread data structures are
mutually exclusive and don't contend with each other. The scheduler's
contention characteristics can be changed by using different values for the
thread count and idle time parameters.
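
As a contrasting sketch (again with invented names, not the actual Scheduler
code), partitioning the data structures per worker thread keeps workers from
ever touching each other's data and confines any locking to the handlers
assigned to one worker; the thread count and idle time are the tuning knobs
mentioned above:

    import java.util.HashMap;
    import java.util.Map;

    class PartitionedScheduler {
        private final Worker[] workers;

        PartitionedScheduler(int threadCount, long idleTimeMillis) {
            workers = new Worker[threadCount];
            for (int i = 0; i < threadCount; i++) {
                workers[i] = new Worker(idleTimeMillis);
                new Thread(workers[i]).start();
            }
        }

        // A handler is always routed to the same worker; others never see it.
        Worker workerFor(Object handler) {
            return workers[Math.abs(handler.hashCode() % workers.length)];
        }

        static class Worker implements Runnable {
            private final Map expiry = new HashMap();   // owned by this worker only
            private final long idleTimeMillis;

            Worker(long idleTimeMillis) {
                this.idleTimeMillis = idleTimeMillis;
            }

            void reset(Object handler, long timeoutMillis) {
                synchronized (expiry) {                 // only this partition's handlers
                    expiry.put(handler,
                        new Long(System.currentTimeMillis() + timeoutMillis));
                }
            }

            public void run() {
                while (true) {
                    synchronized (expiry) {
                        // walk 'expiry' and time out overdue handlers (omitted)
                    }
                    try {
                        Thread.sleep(idleTimeMillis);   // the "idle time" parameter
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            }
        }
    }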


BTW, I am not saying, and have never said, that the scheduler is the right
interface. I don't even want a general scheduler implementation for timeouts,
but I do not think the watchdog is there yet, especially because of the 1-1
association. Agreed, there can be other implementations, but I have seen only
one so far. If you have an implementation that supports an m-to-n mapping, I
will at least consider it a better alternative than the current 1-1 watchdog.
I also suspect (but don't know) that a good abstraction for such a watchdog
or timeout event scheduler would be a cross between the current scheduler
abstraction and the watchdog abstraction.
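
Purely as a speculative sketch of that cross-breed (none of these names exist
in James today), the interface might keep the Watchdog's per-operation
reset/stop while being backed by a shared pool of timeout threads like the
Scheduler:

    // Hypothetical hybrid abstraction, for discussion only.
    interface TimeGuard {
        // Start guarding one operation; 'target' is notified when time runs out.
        Guard watch(long timeoutMillis, TimeoutTarget target);

        interface Guard {
            void reset();   // push the deadline forward (Watchdog-style)
            void stop();    // operation finished normally; release the guard
        }

        interface TimeoutTarget {
            void onTimeout();
        }
    }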

As I have said numerous times: if there is a fix within the current
abstractions we should do that for now, and discuss or switch to a better
abstraction post-release. Speaking of the release, I had requested that
someone post a schedule or thoughts on it; it would help me at least. I would
also like to see more tests. One gain from this discussion has been better
tests, and I think that is good.

Harmeet

----- Original Message -----
From: "Peter M. Goldstein" <[EMAIL PROTECTED]>
To: "'James Developers List'" <[EMAIL PROTECTED]>
Sent: Tuesday, October 15, 2002 10:57 AM
Subject: RE: [VOTE] Interface for resettable, time-guarded, operations


>
> Noel et al,
>
> > In other words, you took my idea about spreading the load over multiple
> > shared workers.  There are still some issues, such as:
> >
> >   (1) It doesn't work.  You need to fix that.  :-)
> >   (2) It doesn't work as a scheduler.  The problem with inserting an
> >       earlier time.
> >   (3) Synchronization and data structure management are still
> >       heavyweight items.
> >
> > You should be able to resolve 1 and 2.
>
> Points 1 and 2 obviously make Harmeet's comments about his successful
> tests moot, since the code didn't actually work.  Obviously if all the
> threads exit immediately, that's going to be a problem.
>
> In addition, despite the danger of belaboring the obvious, this is a
> well-known pattern that explicitly doesn't address point 3.
>
> Specifically, consider the two limits in the number of threads in the
> scheduler.
>
> If we go to the one-thread limit, we're back at the high-contention
> case.  That's bad, for all the reasons we've described ad nauseam.
>
> If we go to the N thread case, where N is the number of connections,
> then we wind up with a more expensive and less elegant version of the
> Watchdog (since there will be some contention from triggers that have
> the same hash code mod N).  There will be two threads per connection,
> and all calls into the Scheduler have synchronized components.
>
> So we can disregard the two limits.  Now consider k, where 1 < k < N.
> As I've explained (and as the referenced article makes clear), contention
> scales non-linearly.  So if we have N total connections, on average each
> individual cache will have (N/k) + 1 contending threads, for a total of
> N + k threads.  Now, since the effect is non-linear, that means that
> small changes in k can lead to wide-ranging effects, most of which won't
> be obvious until you get to high N.  But the situation is actually worse
> than this - (N/k) + 1 is just the mean value of contending threads.
> There will be substantial statistical deviation from this mean, and you
> get penalized more grossly for upward deviations than you get rewarded
> for downward deviations.  I'm not going to run through all the math
> here, but suffice to say that it's not good - at any given point there
> is at least a (1/k)^m chance that m threads will be contending on a
> single cache.
>
> --Peter
>


