We are doing something along those lines in a project I am working on...

We use a connected pipe pair as the fifo queue: foreign threads write 
pointers to 'work' objects into the pipe, and the libuv thread reads the 
pointers back out and processes the work items from there. The elegance 
(imho) is that no additional thread synchronization is needed, since 
reading from and writing to the pipe is inherently, if not lock-free, then 
at least thread safe (POSIX guarantees that pipe writes of at most 
PIPE_BUF bytes are atomic, and a pointer is well under that) - and it 
allows request-specific data (and a function) to be associated with each 
request.
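For context, here is a minimal sketch of what the work object and pipe pair might look like. The type and field names are my reconstruction from the snippets below, not the actual implementation:

```c
#include <unistd.h>

/* Reconstructed work object; names are illustrative. */
typedef enum {
    work_object_async,  /* heap-allocated, freed after execution */
    work_object_sync    /* stack-allocated, caller waits for completion */
} work_type_t;

typedef struct {
    work_type_t work_type;
    void (*work_fun)(void *);  /* user-specified function */
    void *work_data;           /* user-specified argument */
} work_object_t;

/* The connected pipe pair acting as the fifo queue: any thread may
 * write pointers to the send end; only the libuv thread reads. */
typedef struct {
    int recv_fd;  /* read end, watched by the libuv loop */
    int send_fd;  /* write end, used by foreign threads */
} work_queue_t;

static int work_queue_init(work_queue_t *q) {
    int fds[2];
    if (pipe(fds) != 0)
        return -1;
    q->recv_fd = fds[0];
    q->send_fd = fds[1];
    return 0;
}
```

On Windows the pipe(2) call would be replaced by a connected named-pipe pair, as mentioned below.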

We also spruced it up with a synchronous variant that blocks the calling 
thread (don't use this from the libuv thread though, or you will deadlock!) 
and waits for the i/o thread to process it. This part _does_ use a 
condition variable for blocking the calling thread and for marking 
completion, but that's fine for synchronous operations. E.g.:

/* called from read_cb on the libuv thread when we have a full pointer's 
worth of data */
static void process_work(work_object_t *work) {
    switch (work->work_type) { 
        case work_object_async: {
            work->work_fun(work->work_data);
            /* work was dynamically allocated so free it now */
            free(work);
            return;
        }
        case work_object_sync: {
            /* safe cast: work_object_t is the first member of
               sync_work_object_t */
            sync_work_object_t *sync_work = (sync_work_object_t *) work;
            work->work_fun(work->work_data);
            uv_mutex_lock(&sync_work->mux);
            sync_work->done = 1;
            uv_cond_signal(&sync_work->cond);
            uv_mutex_unlock(&sync_work->mux);
            /* don't free the work object; it was allocated on the
               caller's stack */
            return;
        }
    }
    assert(!"invalid work type");
}

Scheduling an async request is performed with a plain write():

work_object_t *work = (work_object_t *) calloc(1, sizeof(work_object_t));
work->work_type = work_object_async;
work->work_fun = work_fun;   /* user specified */
work->work_data = work_data; /* user specified */
/* note: we write the pointer itself, not the object */
if (write(self->send_fd, &work, sizeof(work_object_t *)) == -1) {
    free(work);  /* queueing failed; report the error */
}
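One caveat on the reading side: even though pointer-sized pipe writes are atomic, the stream layer's read callback is not guaranteed to hand you whole pointers at once, so the bytes may need to be accumulated until a full pointer's worth has arrived. A hypothetical helper for that (illustrative names, not the actual code):

```c
#include <string.h>

/* Accumulates raw bytes from the read callback and invokes dispatch()
 * once per completed pointer. */
typedef struct {
    char buf[sizeof(void *)];
    size_t have;
} ptr_accum_t;

static void accum_feed(ptr_accum_t *a, const char *data, size_t len,
                       void (*dispatch)(void *)) {
    while (len > 0) {
        size_t need = sizeof(void *) - a->have;
        size_t take = len < need ? len : need;
        memcpy(a->buf + a->have, data, take);
        a->have += take;
        data += take;
        len -= take;
        if (a->have == sizeof(void *)) {
            void *ptr;
            memcpy(&ptr, a->buf, sizeof ptr);
            a->have = 0;
            dispatch(ptr);  /* e.g. process_work */
        }
    }
}

/* Capture helper used by the usage example below. */
static void *last_dispatched;
static void capture(void *p) { last_dispatched = p; }
```

Feeding a pointer in two chunks, e.g. accum_feed(&a, raw, 3, capture) followed by accum_feed(&a, raw + 3, sizeof(void *) - 3, capture), dispatches exactly once with the reassembled pointer.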

I am not sure whether there would be any portability problems with this 
approach; we have been using it successfully on Windows (via named pipes), 
macOS and several Linux flavors for some years now without problems. If 
there is interest I can possibly tidy up the code and share the 
implementation.

Cheers

Poul Thomas Lomholt

On Monday, September 12, 2016 at 12:52:37 AM UTC-7, Saúl Ibarra Corretgé 
wrote:
>
> On 09/10/2016 12:46 PM, Bernardo Ramos wrote: 
> > Hi guys! 
> > 
> > I have a suggestion for v2: 
> > 
> > To change the uv_async_send() to contain a queue inside it so we don't 
> > need to implement it in our code. 
> > 
> > In this way we could just call uv_async_send() and the callback would be 
> > fired in the main (or other) event loop with the dequeued pointer from 
> > the queue. 
> >   
> > I guess it would be much easier for the libuv users as we would not have 
> > to care about creating a queue, using mutexes and even about the 
> > coalescing of calls. All this could be handled automatically by libuv. 
> > 
> > What do you think about this? 
> > 
> > Maybe in the future it could even have a lock-free approach to this 
> > instead of using a mutex to avoid deadlocks, using *lib*lfds.org or 
> > other library (Well, I know this is a dangerous zone. We can just use 
> > mutexes for now). 
> > 
> > Also, if the queue has a limited size and it is full then function 
> > should either block until succeeded or return an error. 
> > 
> > What is better, let it the way it is or make it easier to use? 
> > 
>
> Hi, 
>
> I had some other idea a while ago, which I described here: 
> https://github.com/libuv/leps/pull/2 (search for uv_callback_t). 
>
> The end goal is the same: have a way to queue a callback from another 
> thread without coalescing, but the difference being what the user sent 
> was a request. 
>
> I don't think what you suggest would help many, because uv_async_send 
> just calls the callback, so if we wouldn't coalesce calls you'd get the 
> callback called N times, but no data can be associated with each call. 
> Hence my proposal. 
>
> Alas, I think I packed too many things in a single LEP and it stalled. 
> I should probably break it into smaller pieces so we move forward / 
> discard them. 
>
>
> Cheers, 
>
> -- 
> Saúl Ibarra Corretgé 
> bettercallsaghul.com 
>
>
>
