On Sun, Nov 04, 2007 at 04:23:01PM -0800, Scott Lamb wrote:
> > It wasn't what I expected; I was fully confident at first that the
> > thread-pool, work-queue model would be the way to go, since it's one
> > I've implemented in many applications in the past. But the numbers said
> > otherwise.
> 
> Thanks for the case study. To rephrase (hopefully correctly), you tried
> these two models:
> 
> 1) one thread polls and puts events on a queue; a bunch of other threads
> pull from the queue. (resulted in high latency, and I'm not too
> surprised...an extra context switch before handling any events.)

So back to this..
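
For the archive, that hand-off (the "extra context switch" mentioned
above) looks roughly like this - a minimal pthreads sketch, where
handle_event(), the shared pollfd set, and the fixed-size queue are all
just stand-ins:

#include <poll.h>
#include <pthread.h>

#define QSIZE 256

static struct pollfd fds[64];          /* shared set, filled in at startup */
static int nfds;

static int queue[QSIZE];               /* ready fds waiting for a worker   */
static int qhead, qtail;               /* sketch: no overflow handling     */
static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  qcond = PTHREAD_COND_INITIALIZER;

void handle_event(int fd);             /* application callback (assumed)   */

/* the single dispatcher: polls, then hands every ready fd to the queue */
static void *dispatcher(void *arg)
{
    (void)arg;
    for (;;) {
        int n = poll(fds, nfds, -1);
        for (int i = 0; i < nfds && n > 0; i++) {
            if (!fds[i].revents)
                continue;
            n--;
            pthread_mutex_lock(&qlock);
            queue[qtail++ % QSIZE] = fds[i].fd;
            pthread_cond_signal(&qcond);   /* wake a sleeping worker       */
            pthread_mutex_unlock(&qlock);
            /* (real code would also deregister or one-shot the fd here)   */
        }
    }
    return NULL;
}

/* the worker pool: sleeps until the dispatcher hands over a ready fd */
static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&qlock);
        while (qhead == qtail)
            pthread_cond_wait(&qcond, &qlock);
        int fd = queue[qhead++ % QSIZE];
        pthread_mutex_unlock(&qlock);
        handle_event(fd);              /* not the thread that did the poll */
    }
    return NULL;
}

That signal/wait pair is where the extra wakeup lives: every ready fd
has to bounce from the polling thread to whichever worker the kernel
decides to run next.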

> 2) a bunch of threads read and handle events independently. (your
> current model.)

BTW: how exactly does this model exempt itself from the context-switching
issue of the former?
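
As far as I can tell, model 2 is just N copies of the same loop -
roughly the sketch below, with each thread owning its own descriptor
set and the same assumed handle_event():

#include <poll.h>
#include <pthread.h>

void handle_event(int fd);             /* application callback (assumed)   */

struct loop {                          /* one of these per thread          */
    struct pollfd *fds;
    int            nfds;
};

/* every thread runs the same loop; whichever thread sees the fd become
 * ready is the thread that services it, so there is no queue hand-off  */
static void *event_thread(void *arg)
{
    struct loop *lp = arg;
    for (;;) {
        int n = poll(lp->fds, lp->nfds, -1);
        for (int i = 0; i < lp->nfds && n > 0; i++) {
            if (!lp->fds[i].revents)
                continue;
            n--;
            handle_event(lp->fds[i].fd);   /* same thread that polled      */
        }
    }
    return NULL;
}

The only structural difference I see is that the ready fd is serviced
on the thread that polled it, so the queue hand-off above disappears;
the kernel still has to wake *some* thread either way.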

> Did you also try the so-called "leader/follower" model, in which the
> thread that does the polling handles the first event and puts the rest
> on a queue, and another thread takes over polling if otherwise idle
> while the first thread is still working? My impression is that this was
> a widely favored model, though I don't know the details of where each
> performs best.
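
If I follow that description, leader/follower boils down to something
like the sketch below - a plain mutex standing in for the "leader"
token, with enqueue()/dequeue()/handle_event() as placeholders:

#include <poll.h>
#include <pthread.h>

static pthread_mutex_t leader = PTHREAD_MUTEX_INITIALIZER;
static struct pollfd fds[64];          /* shared set, filled in at startup */
static int nfds;

void handle_event(int fd);             /* application callback (assumed)   */
void enqueue(int fd);                  /* hand surplus fds to idle threads */
int  dequeue(void);                    /* -1 when empty; both placeholders */

static void *lf_thread(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&leader);   /* become the leader...             */
        int n = poll(fds, nfds, -1);   /* ...and do the polling            */
        int first = -1;
        for (int i = 0; i < nfds && n > 0; i++) {
            if (!fds[i].revents)
                continue;
            n--;
            if (first < 0)
                first = fds[i].fd;     /* leader keeps the first event     */
            else
                enqueue(fds[i].fd);    /* the rest go on the queue         */
        }
        pthread_mutex_unlock(&leader); /* an idle follower can poll now    */
        if (first >= 0)
            handle_event(first);       /* work happens outside the lock    */
        int fd;
        while ((fd = dequeue()) >= 0)  /* drain queued events when done    */
            handle_event(fd);
    }
    return NULL;
}

So the poll() migrates from thread to thread, but the per-event work is
exactly the same.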

Something about this just seems like smoke and mirrors to me. At the end
of the day we still only have a finite number of CPU cores available to
us, and no amount of playing with the order of things is going to
magically extract *more* throughput out of a given box. Yes, some of
these methods influence recv/send buffers and have a cascading effect on
overall throughput, but efficient code and algorithms are going to make
the real difference - not goofy thread games.

(and this is coming from someone who *likes* comp.programming.threads)

-cl