On Sep 7, 2006, at 11:49 PM, William Ahern wrote:
> On Thu, Sep 07, 2006 at 11:29:50PM -0700, Scott Lamb wrote:
>> I think libevent's current multithreaded behavior is not terribly
>> useful:
>>
>> 1. You can't safely share a single event_base among a pool of
>> threads. This is actually what I'd like to do with threads,
>> especially now that four-core systems are becoming cheap. (My SSL
>> proxy should be able to put those extra cores to use.)
>> It's...tricky...to get right, though.

> Why would you ever want to do this? I mean, in one sense it could
> simplify some multi-threaded designs. The complexity it adds, however,
> hardly seems worth it compared to how simple this could be done on a
> per-application basis using the existing API.

How would you do it with the existing API? The best I've got is to have:

(1) an "acceptor" thread which just works on the listen sockets and throws accepted sockets to other threads based on some heuristic
(2) the "worker" threads that actually handle connections

The acceptor would lock, throw something into the target's "hey, add this" queue, then send it a wakeup.
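
Roughly, that handoff might look like the sketch below. (struct worker, the pending array, and the wakeup pipe are names I'm inventing for illustration; none of this is libevent API.)

    /* Sketch of the acceptor-to-worker handoff.  Each worker runs its
     * own event loop and watches the read end of wakeup_pipe. */
    #include <pthread.h>
    #include <unistd.h>

    #define MAX_PENDING 128

    struct worker {
        pthread_mutex_t lock;
        int pending[MAX_PENDING];  /* accepted fds waiting to be adopted */
        int npending;
        int wakeup_pipe[2];        /* worker's loop polls the read end */
    };

    /* Called from the acceptor thread: lock, enqueue, wake the worker. */
    static void hand_off(struct worker *w, int fd)
    {
        pthread_mutex_lock(&w->lock);
        if (w->npending < MAX_PENDING)
            w->pending[w->npending++] = fd;  /* real code would handle overflow */
        pthread_mutex_unlock(&w->lock);
        (void)write(w->wakeup_pipe[1], "", 1);  /* one byte = "check your queue" */
    }

    /* In the worker: the read callback on wakeup_pipe[0] drains the byte,
     * locks the queue, pops each fd, and registers events for it on the
     * worker's own event loop. */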

But there are a couple of performance aspects that I don't like:

(1) There's really no guarantee the workers are equally busy.
(2) No connection can actually proceed without being transferred across threads.

Maybe (1) could be addressed with some sort of rebalancing scheme...but at that point, it might get as complicated as the scheme below, and each application would have to implement that complexity.

> Actually, to get this right from both an aesthetic as well as efficiency
> perspective would require, I think, libevent to be able to poll on both
> a condition variable as well as traditional descriptor objects.

At a high level, I think it would require the basic poll algorithm to be:

    lock
    loop:
        while there are events:
            dequeue one
            unlock
            handle it
            lock
        if someThreadPolling:
            condition wait
        else:
            someThreadPolling = true
            unlock
            poll for events
            lock
            someThreadPolling = false
            fire condition
    unlock

so whatever thread happens to notice that it's out of events does a poll, and the others can see the results immediately. But I haven't addressed actually putting new fds into the poll array. I'm not sure what the behavior there has to be. I admit it - this approach is complicated.
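
In pthreads terms, that's roughly the following. (queue_pop(), queue_fill(), and handle() are placeholders for whatever libevent would do internally; none of them are real API.)

    /* Pthreads rendering of the loop above; every thread in the pool
     * runs this against the same shared state. */
    #include <pthread.h>
    #include <stdbool.h>

    struct queued_event;                   /* an already-ready event */
    struct queued_event *queue_pop(void);  /* call with lock held */
    void queue_fill(void);                 /* kevent/epoll/poll, then enqueue
                                              results (takes the lock itself) */
    void handle(struct queued_event *ev);  /* run the user callback */

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  have_events = PTHREAD_COND_INITIALIZER;
    static bool some_thread_polling;

    void *event_pool_thread(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&lock);
        for (;;) {
            struct queued_event *ev;
            while ((ev = queue_pop()) != NULL) {
                pthread_mutex_unlock(&lock);
                handle(ev);                /* callbacks run unlocked */
                pthread_mutex_lock(&lock);
            }
            if (some_thread_polling) {
                /* someone else is in the kernel; wait for its results */
                pthread_cond_wait(&have_events, &lock);
            } else {
                some_thread_polling = true;
                pthread_mutex_unlock(&lock);
                queue_fill();              /* the actual poll */
                pthread_mutex_lock(&lock);
                some_thread_polling = false;
                pthread_cond_broadcast(&have_events);
            }
        }
        /* unreachable; a real loop would break out on shutdown and unlock */
        return NULL;
    }

The broadcast is what lets the waiting threads pick up the freshly polled events without going back into the kernel themselves.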

Anyway, I'm not suggesting adopting this without actual proof that it's better. I need to blow the dust off my benchmark tools, but I'm willing to put the effort into trying things out if I hear ideas I like.

> Maybe the underlying capability could feasibly be added to kqueue's....
> Nonetheless it's a pretty far fetched proposition.

Well, I'm definitely not the first person to have suggested handling events in multiple threads simultaneously. Take a look at:

* Java's nio API. I don't know if it's horribly complicated inside, and I haven't used it in this way, much less actually benchmarked it, but they have some discussion of concurrency in their API docs. From <http://java.sun.com/j2se/1.5.0/docs/api/index.html>, take a look at java.nio.channels.Selector, "Concurrency" section.

* SEDA - http://www.eecs.harvard.edu/~mdw/proj/seda/

* Jeff Darcy's design notes - http://pl.atyp.us/content/tech/servers.html

Best regards,
Scott

--
Scott Lamb <http://www.slamb.org/>


_______________________________________________
Libevent-users mailing list
Libevent-users@monkey.org
http://monkey.org/mailman/listinfo/libevent-users
