Noel et al,

> In other words, you took my idea about spreading the load over
> multiple shared workers.  There are still some issues, such as:
> 
>   (1) It doesn't work.  You need to fix that.  :-)
>   (2) It doesn't work as a scheduler.  The problem with inserting an
> earlier time.
>   (3) Synchronization and data structure management are still
> heavyweight items.
> 
> You should be able to resolve 1 and 2.

Points 1 and 2 obviously make Harmeet's comments about his successful
tests moot, since the code didn't actually work.  If all the threads
exit immediately, that's clearly going to be a problem.

In addition, despite the danger of belaboring the obvious, this is a
well-known pattern that explicitly doesn't address point 3.

Specifically, consider the two limiting cases for the number of threads
in the scheduler.

If we go to the one-thread limit, we're back at the high contention
case.  That's bad, for all the reasons we've described ad nauseam.

If we go to the N thread case, where N is the number of connections,
then we wind up with a more expensive and less elegant version of the
Watchdog (since there will be some contention from triggers that have
the same hash code mod N).  There will be two threads per connection,
and all calls into the Scheduler have synchronized components.
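To make the hash-mod-N contention concrete, here's a minimal sketch (not
from the original code; the class and method names are hypothetical) of
how triggers would be assigned to shared workers, and why two distinct
triggers can still land on the same worker and contend:

```java
public class HashPartitionSketch {
    // Pick the shared worker for a trigger by hashing mod n.
    // Math.floorMod keeps the result non-negative even for
    // negative hash codes.
    public static int workerFor(Object trigger, int n) {
        return Math.floorMod(trigger.hashCode(), n);
    }

    public static void main(String[] args) {
        int n = 8; // hypothetical worker count
        // Integer hash codes are the values themselves, so
        // triggers 3 and 11 (= 3 + n) map to the same worker
        // and will contend on that worker's lock.
        System.out.println("trigger 3  -> worker " + workerFor(3, n));
        System.out.println("trigger 11 -> worker " + workerFor(11, n));
    }
}
```

Any two triggers whose hash codes are congruent mod N collide this way,
which is the residual contention mentioned above.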

So we can disregard the two limits.  Now consider k, where 1 < k < N.
As I've explained (and as the referenced article makes clear),
contention scales non-linearly.  So if we have N total connections, on
average each individual cache will have (N/k) + 1 contending threads,
for a total of N + k threads.  Now, since the effect is non-linear, that
means that small changes in k can lead to wide-ranging effects, most of
which won't be obvious until you get to high N.  But the situation is
actually worse than this - (N/k) + 1 is just the mean value of
contending threads.  There will be substantial statistical deviation
from this mean, and you get penalized more grossly for upward deviations
than you get rewarded for downward deviations.  I'm not going to run
through all the math here, but suffice it to say that it's not good - at
any given point there is at least a (1/k)^m chance that m threads will
be contending on a single cache.
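The deviation argument is easy to check empirically.  Here's a quick
Monte Carlo sketch (not the actual scheduler code; N, k, the trial
count, and the seed are all made up for illustration) that hashes N
connections uniformly over k caches and compares the mean load N/k
against the average load on the *worst* cache:

```java
import java.util.Random;

public class ContentionSketch {
    // Average, over many trials, of the load on the most-loaded
    // cache when n connections are distributed uniformly at
    // random across k caches.
    public static double averageWorstCase(int n, int k, int trials, long seed) {
        Random rnd = new Random(seed);
        long sumOfMaxima = 0;
        for (int t = 0; t < trials; t++) {
            int[] counts = new int[k];
            for (int i = 0; i < n; i++) {
                counts[rnd.nextInt(k)]++; // assign connection to a cache
            }
            int max = 0;
            for (int c : counts) {
                max = Math.max(max, c);
            }
            sumOfMaxima += max;
        }
        return (double) sumOfMaxima / trials;
    }

    public static void main(String[] args) {
        int n = 1000, k = 10;            // hypothetical N and k
        double mean = (double) n / k;    // the (N/k) mean load
        double worst = averageWorstCase(n, k, 2000, 42L);
        System.out.printf("mean load N/k = %.1f, average worst cache = %.1f%n",
                          mean, worst);
    }
}
```

The worst cache consistently carries more than the N/k mean, which is
exactly the asymmetric penalty described above: you pay for the upward
deviations, and the downward ones don't pay you back.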

--Peter