On Fri, Jul 16, 2010 at 05:04:39PM -0400, Mathieu Desnoyers wrote:
> * Paul E. McKenney ([email protected]) wrote:

[ . . . ]

> > Given the above implementation, the code for S2 must do something
> > like the following:
> > 
> >     for (;;) {
> >             while ((np = rcu_lfq_dequeue(&Q1, freenode)) != NULL) {
> >                     do_something_with(np);
> >                     np1 = copy_node(np);
> >                     rcu_lfq_enqueue(&Q2, np1);
> >                     urcu_ref_put(np);
> >             }
> >     }
> > 
> > Yes, you can reduce the copy overhead by having np and np1 reference
> > bare nodes with a pointer to the real data, so only a minimal amount
> > of data need be copied.
> 
> This is where I'd like to come up with a clever scheme to handle this
> cascading through cascaded refcounting rather than doing the copy.  I
> think it might be doable, but it does not come to mind at this very
> moment, which is busy packing and organising my vacation! ;) I'll be
> offline for 2 weeks starting this evening, back on August 2nd.

That would be very interesting!  I know of the following schemes, each
with restrictions:

o       Use a circular array of pointers to the messages.  This works
        quite well, but handles only concurrency between a single
        enqueuer and a single dequeuer.  (See sketch 1 below.)

o       Put all of the nodes in a big array, and use indexes in place
        of pointers.  Then keep a generation number with each index,
        and adapt the usual DCAS algorithms.  This works as long as
        the generation number cannot overflow -- perhaps by forcing
        a wait for a grace period upon overflow.  (See sketch 2 below.)

o       Just use locks on both ends with priority boosting.  If you
        have FIFO locks and priority boosting, this works fairly well
        for real-time use, even though it isn't lock-free, or even
        obstruction-free.  (See sketch 3 below.)
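
Sketch 1, the circular array of pointers: a rough illustration
assuming C11 atomics, with the spsc_queue name and the power-of-two
RING_SIZE made up for the example (not liburcu API).  The caller
zero-initializes the structure, and there must be exactly one
enqueuer and one dequeuer:

        #include <stdatomic.h>
        #include <stdbool.h>
        #include <stddef.h>

        #define RING_SIZE 128   /* must be a power of two */

        struct spsc_queue {
                void *slots[RING_SIZE];
                _Atomic size_t head;    /* advanced only by the dequeuer */
                _Atomic size_t tail;    /* advanced only by the enqueuer */
        };

        static bool spsc_enqueue(struct spsc_queue *q, void *msg)
        {
                size_t t = atomic_load_explicit(&q->tail, memory_order_relaxed);
                size_t h = atomic_load_explicit(&q->head, memory_order_acquire);

                if (t - h == RING_SIZE)
                        return false;   /* full */
                q->slots[t & (RING_SIZE - 1)] = msg;
                /* publish the slot contents before the new tail */
                atomic_store_explicit(&q->tail, t + 1, memory_order_release);
                return true;
        }

        static void *spsc_dequeue(struct spsc_queue *q)
        {
                size_t h = atomic_load_explicit(&q->head, memory_order_relaxed);
                size_t t = atomic_load_explicit(&q->tail, memory_order_acquire);
                void *msg;

                if (h == t)
                        return NULL;    /* empty */
                msg = q->slots[h & (RING_SIZE - 1)];
                /* release the slot only after the message is read */
                atomic_store_explicit(&q->head, h + 1, memory_order_release);
                return msg;
        }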
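
Sketch 2, the index-plus-generation trick: again a rough sketch
assuming C11 atomics and a 64-bit CAS, illustrated with a simple
LIFO free list rather than a full queue, and with all names made up.
A 32-bit index into a fixed node array and a 32-bit generation count
share one word, so an ordinary CAS stands in for DCAS, and the
generation count defeats ABA as long as it cannot wrap while some
thread still holds a stale copy -- hence the grace-period-on-overflow
idea above:

        #include <stdatomic.h>
        #include <stdint.h>

        #define NR_NODES 1024
        #define NIL ((uint32_t)NR_NODES)  /* out-of-range index == NULL */

        struct idx_node {
                uint32_t next;          /* index of next node, or NIL */
                void *data;
        };

        static struct idx_node nodes[NR_NODES];

        /* generation in the high 32 bits, index in the low 32 bits */
        static _Atomic uint64_t list_head = NIL;

        static inline uint64_t make_head(uint32_t gen, uint32_t idx)
        {
                return ((uint64_t)gen << 32) | idx;
        }

        static void idx_push(uint32_t idx)
        {
                uint64_t old = atomic_load(&list_head);

                do {
                        nodes[idx].next = (uint32_t)old;
                } while (!atomic_compare_exchange_weak(&list_head, &old,
                                make_head((uint32_t)(old >> 32) + 1, idx)));
        }

        static uint32_t idx_pop(void)
        {
                uint64_t old = atomic_load(&list_head);
                uint32_t idx;

                do {
                        idx = (uint32_t)old;
                        if (idx == NIL)
                                return NIL;     /* empty */
                        /* a stale ->next read is caught by the CAS below */
                } while (!atomic_compare_exchange_weak(&list_head, &old,
                                make_head((uint32_t)(old >> 32) + 1,
                                          nodes[idx].next)));
                return idx;
        }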
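
Sketch 3, the two-lock approach: just the usual two-lock queue with a
dummy node, sketched with POSIX threads and made-up names.
PTHREAD_PRIO_INHERIT supplies the priority boosting (FIFO ordering of
the locks themselves depends on the scheduling policy), and separate
head and tail locks let an enqueue and a dequeue proceed in parallel:

        #include <pthread.h>
        #include <stdlib.h>

        struct lq_node {
                struct lq_node *next;
                void *data;
        };

        struct locked_queue {
                pthread_mutex_t head_lock;      /* serializes dequeuers */
                pthread_mutex_t tail_lock;      /* serializes enqueuers */
                struct lq_node *head;           /* always a dummy node */
                struct lq_node *tail;
        };

        static int locked_queue_init(struct locked_queue *q)
        {
                pthread_mutexattr_t attr;
                struct lq_node *dummy = calloc(1, sizeof(*dummy));

                if (dummy == NULL)
                        return -1;
                pthread_mutexattr_init(&attr);
                /* a blocked high-priority thread boosts the lock holder */
                pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
                pthread_mutex_init(&q->head_lock, &attr);
                pthread_mutex_init(&q->tail_lock, &attr);
                pthread_mutexattr_destroy(&attr);
                q->head = q->tail = dummy;
                return 0;
        }

        static int locked_queue_enqueue(struct locked_queue *q, void *data)
        {
                struct lq_node *node = malloc(sizeof(*node));

                if (node == NULL)
                        return -1;
                node->data = data;
                node->next = NULL;
                pthread_mutex_lock(&q->tail_lock);
                q->tail->next = node;
                q->tail = node;
                pthread_mutex_unlock(&q->tail_lock);
                return 0;
        }

        static void *locked_queue_dequeue(struct locked_queue *q)
        {
                struct lq_node *old_dummy, *new_head;
                void *data;

                pthread_mutex_lock(&q->head_lock);
                old_dummy = q->head;
                new_head = old_dummy->next;
                if (new_head == NULL) {
                        pthread_mutex_unlock(&q->head_lock);
                        return NULL;    /* empty */
                }
                data = new_head->data;  /* new_head becomes the dummy */
                q->head = new_head;
                pthread_mutex_unlock(&q->head_lock);
                free(old_dummy);        /* or recycle it */
                return data;
        }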

There are probably others, but these come to mind immediately.

                                                        Thanx, Paul
