On Fri, 11.10.13 07:12, Tollef Fog Heen (tfh...@err.no) wrote:

> 
> ]] Lennart Poettering 
> 
> > On Thu, 10.10.13 13:12, David Strauss (da...@davidstrauss.net) wrote:
> > 
> > > I was actually planning to rewrite on top of libuv today, but I'm
> > > happy to port to the new, native event library.
> > > 
> > > Is there any best-practice for using it with multiple threads?
> > 
> > We are pretty conservative on threads so far, but I guess in this case
> > it makes some sense to distribute work on CPUs. Here's how I would do it:
> 
> [snip long description]
> 
> fwiw, if you want really high performance, this is not at all how I'd do
> it.  Spawning threads while under load is a recipe for disaster, for a
> start.  

Note that this isn't really done under heavy load; a new thread is only
spawned when some threshold is reached.

> I'd go with something how it's done in Varnish: Have an (or n)
> acceptor threads that schedule work to a pool of worker threads.  That

Well, then you have to hand off every accepted fd from yet another
thread; that's no recipe for making things efficient.

> scheduler should be careful about such things as treating the worker
> threads as LIFO (to preserve CPU cache).  The advice about only 2-3
> threads per CPU core looks excessively conservative. 

This proxy is exclusively bound by network IO, which is asynchronous. It
does not involve disk IO, which on Unix is synchronous. This means we'd
use threads only to make better use of the available CPUs, not to
parallelize disk IO. That's why the conservative approach appears to be
the right thing here...
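Concretely, the conservative layout could be sketched as one event loop per
online CPU, nothing more (plain epoll used here purely for illustration; the
thread is actually about the new native event library, and all helper names
below are hypothetical):

```c
#include <sys/epoll.h>
#include <unistd.h>

/* Register a fd for read-readiness on a worker's epoll instance. */
static int watch_fd(int epfd, int fd) {
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = fd };
        return epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);
}

/* Run one iteration of a worker's loop; returns the number of ready fds. */
static int worker_loop_once(int epfd, int timeout_ms) {
        struct epoll_event ev;
        return epoll_wait(epfd, &ev, 1, timeout_ms);
}

/* With no blocking disk IO in the picture, one worker per online CPU
 * is already enough to saturate the machine. */
static long recommended_workers(void) {
        long n = sysconf(_SC_NPROCESSORS_ONLN);
        return n > 0 ? n : 1;
}
```

Each worker thread would own one such epoll fd and never block on anything
but its own epoll_wait().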

> Using REUSEPORT might make sense in cases where you're happy to throw
> away performance for simplicity.  That's a completely valid tradeoff.

I am pretty sure the kernel is actually better at distributing
connections to SO_REUSEPORT listeners than your bouncer thread ever
could be, simply because it avoids the extra bouncer thread...

Regarding the performance of this:

https://lwn.net/Articles/542629/

Lennart

-- 
Lennart Poettering - Red Hat, Inc.
_______________________________________________
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel