>> Thanks for your fast answer. If I correctly understand what you're saying,
>> the same ev_async watcher that was ev_async_init()-ed and ev_async_start()-ed
>> in each worker thread has to be passed to the ev_async_send() call? That
>> means it cannot be a local variable anymore and has to be accessible from
>> both the worker threads and the main (accept) thread?
>>
>> I changed my code a little bit so that the ev_async watchers are now
>> "global" and persistent, and the whole thing seems to work (but keep
>> reading):
>
> Without analysing your code in detail, I think you might be entrenched too
> much in the special semantics of ev_async.
>
> ev_async, really, is like any other watcher: you ev_async_init it before
> use, and then you ev_async_start it with some loop.
>
> When you then want the callback invoked, you ev_async_send it some signal.
> This can be done at any time, from any other thread.
>
> It probably helps to think of ev_async watchers as if they were some kind
> of ev_signal watchers: instead of sending a signal, you ev_async_send.
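
To make the quoted explanation concrete, here is a minimal sketch of that
pattern: a persistent ev_async that is ev_async_init()-ed and
ev_async_start()-ed on the worker's loop, then ev_async_send()-ed from any
other thread. It assumes the libev 4 names (ev_run, ev_loop_new); the
worker/wakeup_cb identifiers and the sleep()-based shutdown are illustrative
only, not anything from the thread above.

    /* sketch: a persistent ev_async woken from another thread */
    #include <ev.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static struct ev_loop *worker_loop;
    static ev_async wakeup_w;            /* persistent: must not be a stack local */

    static void wakeup_cb (EV_P_ ev_async *w, int revents)
    {
        /* runs inside the worker thread whenever some thread ev_async_send()s */
        printf ("worker woken up\n");
    }

    static void *worker (void *arg)
    {
        ev_run (worker_loop, 0);         /* blocks; the async watcher keeps the loop alive */
        return 0;
    }

    int main (void)
    {
        pthread_t tid;

        worker_loop = ev_loop_new (EVFLAG_AUTO);
        ev_async_init (&wakeup_w, wakeup_cb);
        ev_async_start (worker_loop, &wakeup_w);   /* start before another thread runs the loop */

        pthread_create (&tid, 0, worker, 0);

        ev_async_send (worker_loop, &wakeup_w);    /* any thread, any time */

        sleep (1);                                 /* crude; a real program would ev_break and join */
        return 0;
    }
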
Ok, it makes total sense now.

>> So I guess I'm stuck with my piping queue mechanism in this case, because
>> a simple eventfd counter is not enough to hold a high-rate fd flow from
>
> What's wrong with using, well, a queue (C++ has some, there are many
> libraries that have some, it's easy to make one yourself using a
> doubly-linked list etc.) instead of a pipe? That way, you can easily pass
> around normal data structures inside your process, without having to do
> syscalls in common cases (I mean, you already use threads, so what's wrong
> with using a mutex and a queue?)

I guess nothing's wrong with it ;-)

> Since you know what threading model you use, it's trivial to create a
> queue - ev_async is still the fastest way to wake up an event loop.

Yes, I looked at your implementation and eventfd (when available) is
definitely the fastest way to go.

> There are other designs possible, see the other replies. Also, you could
> wait for your workers to finish before you give them new jobs, but I
> think it's easiest to use a queue. If you are unsure about how to queue
> using threads, you can look at libeio, which implements a threadpool that
> handles queued requests:
>
> http://cvs.schmorp.de/libeio/eio.c?view=markup
>
> - etp_submit submits a request to the pool.
> - etp_poll handles results returned from the pool.
> - etp_proc is the worker thread loop that reads requests.
>
> The manpage (http://pod.tst.eu/http://cvs.schmorp.de/libeio/eio.pod)
> briefly explains how ev_async would be plugged into this system.
>
> (It actually is a planned goal of libeio to be split into a reusable
> threadpool part and an io part, but it's not there yet.)
>
> Now, as a word of advice: multithreading is (imho) extremely complicated;
> expect that you will have to learn a lot.

Actually, the processing I will be doing in each worker thread will greatly
benefit from a multi-threaded architecture (and not so much from a
multi-process one). This is why I'm trying to implement it this way.

>> Is there a chance to see a similar generic API directly in libev sometime
>> soon?
>
> The generic API is called ev_async, really. You probably think too
> complicated: if you pay the inefficiencies of threads for a shared address
> space, why not use it to handle your data?
>
> Keep also in mind that threads are not very scalable (they are meant to
> improve performance on a single CPU only and decrease performance on
> multiple cores in general), and since the number of cores will increase
> more and more in the near future, they might not be such a good choice.

"Decrease performance on multiple cores in general"? But what about a
single-threaded, single-process program? It wouldn't benefit from multiple
cores (since the kernel wouldn't schedule this program on more than one core
at a time anyway), right?

Based on the replies I got, I think I will use a very simple lightweight
queue (_not_ pipe-based!) and an ev_async to wake the relevant thread to
read the queue.

Thanks a lot for all the advice,

Pierre-Yves
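
The queue-plus-ev_async design settled on above could look roughly like the
sketch below. It is a fragment under assumptions, not code from the thread:
the job_t type with its client_fd field and the queue_setup/submit_job names
are made up for illustration, and the worker thread and loop are wired up as
in the earlier sketch. Because ev_async_send() coalesces wake-ups, the
callback drains everything queued since the last wake-up, so a high rate of
accepted fds is fine even though the eventfd "counter" only ever signals
"there is work".

    /* sketch: mutex-protected job queue drained from an ev_async callback */
    #include <ev.h>
    #include <pthread.h>
    #include <stdlib.h>

    typedef struct job
    {
        struct job *next;
        int client_fd;              /* hypothetical payload: the accepted socket */
    } job_t;

    static job_t *queue_head, *queue_tail;
    static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

    static struct ev_loop *worker_loop;
    static ev_async queue_ready;

    /* worker loop: drain everything queued since the last wake-up */
    static void queue_cb (EV_P_ ev_async *w, int revents)
    {
        job_t *j, *next;

        pthread_mutex_lock (&queue_lock);
        j = queue_head;
        queue_head = queue_tail = 0;
        pthread_mutex_unlock (&queue_lock);

        for (; j; j = next)
        {
            next = j->next;
            /* handle j->client_fd here, e.g. start an ev_io watcher on it */
            free (j);
        }
    }

    /* call once in the thread that owns 'loop', before it calls ev_run */
    void queue_setup (struct ev_loop *loop)
    {
        worker_loop = loop;
        ev_async_init (&queue_ready, queue_cb);
        ev_async_start (worker_loop, &queue_ready);
    }

    /* accept thread: enqueue a job, then wake the worker loop */
    void submit_job (int client_fd)
    {
        job_t *j = malloc (sizeof *j);

        if (!j)
            abort ();

        j->client_fd = client_fd;
        j->next = 0;

        pthread_mutex_lock (&queue_lock);
        if (queue_tail)
            queue_tail->next = j;
        else
            queue_head = j;
        queue_tail = j;
        pthread_mutex_unlock (&queue_lock);

        ev_async_send (worker_loop, &queue_ready);  /* multiple sends coalesce into one wake-up */
    }
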
