On Sun, Nov 04, 2007 at 03:18:42PM -0800, Steven Grimm wrote:
> You've just pretty accurately described my initial implementation of
> thread support in memcached. It worked, but it was both more
> CPU-intensive and had higher response latency (yes, I actually
> measured it) than the model I'm using now. The only practical
> downside of my current implementation is that when there is only one
> UDP packet waiting to be processed, some CPU time is wasted on the
> threads that don't end up winning the race to read it. But those
> threads were idle at that instant anyway (or they wouldn't have been
> in a position to wake up) so, according to my benchmarking, there
> doesn't turn out to be an impact on latency. And though I am wasting
> CPU cycles, my total CPU consumption still ends up being lower than
> passing messages around between threads.

Is this on Linux? They addressed the thundering-herd problem years ago. If
you dig deep down in the kernel you'll see their wait-queue implementation for
non-blocking socket work (and lots of other stuff). Only one thread is ever
woken per event.
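The single-wakeup behavior is easy to observe from user space: if several threads all block in recvfrom() on one shared UDP socket, an incoming datagram is handed to exactly one of them. A minimal sketch (this is an illustration of the kernel behavior being discussed, not memcached's or libevent's actual code; all names are made up):

```python
# Several worker threads block in recvfrom() on a single shared UDP
# socket. The kernel hands each datagram to exactly one blocked thread,
# so there is no userspace race or message-passing between threads.
import socket
import threading

NUM_WORKERS = 4
results = []                 # (thread id, payload) pairs actually received
got_one = threading.Event()

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 0))  # ephemeral port
addr = sock.getsockname()

def worker(tid):
    # Each thread blocks directly in the kernel; no wakeup fan-out.
    data, _ = sock.recvfrom(2048)
    results.append((tid, data))
    got_one.set()

threads = [threading.Thread(target=worker, args=(i,), daemon=True)
           for i in range(NUM_WORKERS)]
for t in threads:
    t.start()

# Send a single datagram; exactly one worker consumes it, the other
# three simply stay blocked.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"get foo", addr)
got_one.wait(timeout=5)
print(len(results))  # one datagram -> one receiving thread
```

The race Steven describes is different: with non-blocking sockets and a level-triggered readiness notification, every thread polling the descriptor can be told it is readable, and all but one then get EAGAIN when they try to read.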
Libevent-users mailing list
