Steven Grimm wrote:
> On Nov 4, 2007, at 3:07 PM, Christopher Layne wrote:
>> The issue in itself is having multiple threads monitor the *same* fd
>> via any kind of wait mechanism. It's short-circuiting application
>> layers so that a thread (*any* thread in that pool) can immediately
>> process new data. I think it would be much more structured, less
>> complex (i.e. better performance in the long run anyway), and a
>> cleaner design to have a set number of threads (or even 1) handle the
>> "controller" task of tending to new network events, push them onto a
>> per-connection PDU queue or pre-process them in some form or fashion,
>> signal the condition variable, and let the previously mentioned
>> thread pool handle it in an ordered fashion.
> 
> You've just pretty accurately described my initial implementation of
> thread support in memcached. It worked, but it was more CPU-intensive
> and had higher response latency (yes, I actually measured
> it) than the model I'm using now. The only practical downside of my
> current implementation is that when there is only one UDP packet waiting
> to be processed, some CPU time is wasted on the threads that don't end
> up winning the race to read it. But those threads were idle at that
> instant anyway (or they wouldn't have been in a position to wake up) so,
> according to my benchmarking, there doesn't turn out to be an impact on
> latency. And though I am wasting CPU cycles, my total CPU consumption
> still ends up being lower than passing messages around between threads.
> 
> It wasn't what I expected; I was fully confident at first that the
> thread-pool, work-queue model would be the way to go, since it's one
> I've implemented in many applications in the past. But the numbers said
> otherwise.

Thanks for the case study. To rephrase (hopefully correctly), you tried
these two models:

1) One thread polls and puts events on a queue; a bunch of other threads
pull from the queue. (This resulted in higher latency, which doesn't
surprise me much: there's an extra context switch before any event gets
handled.)

2) A bunch of threads read and handle events independently. (Your
current model. Rough sketches of both models follow below.)
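
To make the contrast concrete, here's a rough pthreads sketch of both
shapes. This is purely illustrative, not memcached's actual code: every
name, type, and buffer size is made up, and error handling, shutdown,
and thread/socket setup are omitted.

#include <errno.h>
#include <poll.h>
#include <pthread.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/socket.h>

/* ---- Model 1: one poller thread feeds a work queue ---- */

struct work {
    struct work *next;
    char buf[1500];
    ssize_t len;
};

static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  qcond = PTHREAD_COND_INITIALIZER;
static struct work *qhead, *qtail;    /* FIFO, protected by qlock */

static void *poller(void *arg)        /* the only thread touching the fd */
{
    int fd = *(int *)arg;
    for (;;) {
        struct work *w = malloc(sizeof(*w));
        w->len = recvfrom(fd, w->buf, sizeof(w->buf), 0, NULL, NULL);
        w->next = NULL;
        pthread_mutex_lock(&qlock);
        if (qtail)
            qtail->next = w;
        else
            qhead = w;
        qtail = w;
        pthread_cond_signal(&qcond);  /* the "condsig" step */
        pthread_mutex_unlock(&qlock); /* a worker wakes: extra switch */
    }
}

static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&qlock);
        while (qhead == NULL)
            pthread_cond_wait(&qcond, &qlock);
        struct work *w = qhead;
        qhead = w->next;
        if (qhead == NULL)
            qtail = NULL;
        pthread_mutex_unlock(&qlock);
        /* ... process w->buf here ... */
        free(w);
    }
}

/* ---- Model 2: every thread polls and reads the same fd ---- */

static void *racer(void *arg)
{
    int fd = *(int *)arg;    /* same nonblocking fd in every thread */
    char buf[1500];
    for (;;) {
        struct pollfd p = { .fd = fd, .events = POLLIN };
        poll(&p, 1, -1);     /* every idle thread wakes on readiness */
        ssize_t n = recvfrom(fd, buf, sizeof(buf), 0, NULL, NULL);
        if (n < 0)
            continue;        /* usually EAGAIN: lost the race, wait again */
        /* ... process n bytes of buf here ... */
    }
}

The extra latency in model 1 comes from the handoff: the datagram sits
in the queue until the scheduler runs a worker, whereas in model 2
whichever thread wins the recvfrom() race processes it immediately.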

Did you also try the so-called "leader/follower" model, in which the
thread that does the polling handles the first event and puts the rest
on a queue, with another thread taking over the polling if it's
otherwise idle while the first thread is still working? My impression
is that this is a widely favored model, though I don't know the details
of where each performs best.
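
For reference, the single-fd version of that model might look roughly
like this (again purely illustrative, with invented names and no error
handling); with one event per wakeup, the "puts the rest on a queue"
part doesn't arise:

#include <poll.h>
#include <pthread.h>
#include <sys/types.h>
#include <sys/socket.h>

static pthread_mutex_t lf_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  lf_cond = PTHREAD_COND_INITIALIZER;
static int have_leader;             /* protected by lf_lock */

static void *lf_thread(void *arg)
{
    int fd = *(int *)arg;
    char buf[1500];

    for (;;) {
        /* Become the leader, or wait as a follower until it's free. */
        pthread_mutex_lock(&lf_lock);
        while (have_leader)
            pthread_cond_wait(&lf_cond, &lf_lock);
        have_leader = 1;
        pthread_mutex_unlock(&lf_lock);

        /* Only the leader blocks on the fd. */
        struct pollfd p = { .fd = fd, .events = POLLIN };
        poll(&p, 1, -1);
        ssize_t n = recvfrom(fd, buf, sizeof(buf), 0, NULL, NULL);

        /* Promote a follower before doing the work, so polling
         * continues while this thread processes the event. */
        pthread_mutex_lock(&lf_lock);
        have_leader = 0;
        pthread_cond_signal(&lf_cond);
        pthread_mutex_unlock(&lf_lock);

        if (n > 0) {
            /* ... process n bytes of buf here ... */
        }
    }
}

The appeal is that the thread that polled also processes the event, so
there's no queue handoff on the hot path, while at most one thread
blocks in poll() at a time, avoiding the redundant wakeups of the
race-to-read model.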