2014-03-18 13:48 GMT+03:00 Konstantin Osipov <[email protected]>:
> Hi,
>
> In our program we have 2 threads, both running ev_loops, and would
> like to organize a simple producer-consumer message queue from one
> thread to the other.
> For the purpose of this discussion let's assume a message is a
> simple integer.
>
> It seems that we could implement such a data structure
> in a completely lock-less manner.
>
> Please consider the following implementation:
>
> enum { QUEUE_SIZE = 100 };
>
> struct queue {
>     int q[QUEUE_SIZE];
>     int wpos;                  /* todo: cacheline aligned. */
>     int rpos;                  /* todo: cacheline aligned. */
>     struct ev_loop *consumer;  /* event loop of the consumer thread */
>     ev_async async;
> };
>
> /* Done in the producer; cb is the handler run by the consumer loop. */
> void
> queue_init(struct queue *q, struct ev_loop *consumer,
>            void (*cb)(struct ev_loop *, ev_async *, int))
> {
>     q->rpos = q->wpos = 0;
>     q->consumer = consumer;
>     ev_async_init(&q->async, cb);
>     ev_async_start(consumer, &q->async);
> }
>
> /* For use in the producer thread only. */
> void
> queue_put(struct queue *q, int i)
> {
>     if (q->wpos == QUEUE_SIZE)
>         q->wpos = 0;
>     q->q[q->wpos++] = i;
>     ev_async_send(q->consumer, &q->async);
> }
>
> /*
>  * For use in the consumer thread only, in the event handler
>  * for q->async.
>  */
> int
> queue_get(struct queue *q)
> {
>     if (q->rpos == QUEUE_SIZE)
>         q->rpos = 0;
>     return q->q[q->rpos++];
> }
>
> Let's put aside the problem of rpos and wpos running over each other,
> for simplicity.
> The question is only: provided that QUEUE_SIZE is sufficient for
> our production loads, would the memory barrier built into
> ev_async_send be sufficient to ensure correct read ordering
> for this queue?
>
>
The reading order would be ok, of course. However, you should take into
account that ``multiple events might get compressed into a single
callback invocation``, so the consumer thread may have to consume several
items from the queue in one callback. You might need some logic to
prevent under- or over-consuming items.
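
For example, a consumer-side callback along these lines would drain
everything published so far. This is only a minimal sketch: it assumes
the struct queue layout and queue_get() quoted above, uses a made-up
consume() stand-in for the real per-item handler, and, like the original
code, ignores the wrap-around/overrun problem you already set aside.

#include <ev.h>
#include <stddef.h>   /* offsetof */
#include <stdio.h>

/* Stand-in for whatever the consumer actually does with an item. */
static void
consume(int item)
{
    printf("got %d\n", item);
}

/* The handler registered for q->async on the consumer loop. */
static void
queue_async_cb(struct ev_loop *loop, ev_async *w, int revents)
{
    (void) loop;
    (void) revents;

    /* Recover the queue from the embedded watcher (container_of style). */
    struct queue *q =
        (struct queue *) ((char *) w - offsetof(struct queue, async));

    /*
     * Several ev_async_send() calls may have been folded into this one
     * wakeup, so keep reading until rpos catches up with wpos.  Only the
     * producer writes wpos, and only this thread touches rpos.
     */
    while (q->rpos != q->wpos)
        consume(queue_get(q));
}

Because the loop compares rpos against wpos on every iteration, one wakeup
drains whatever the producer has published, no matter how many
ev_async_send() notifications were coalesced into it.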
--
lg
_______________________________________________
libev mailing list
[email protected]
http://lists.schmorp.de/cgi-bin/mailman/listinfo/libev