B) How do you implement a lock-free FIFO? Or rather: is there some ready-to-use implementation of it?



I just use a RingBuffer<Event*> (assuming you know C++ syntax). Pseudo-code:

delivery:

        Event* event;
        event = new Event (arg1, arg2, ...);
        pending_events.write (&event, 1);

receipt:

        Event* event;

        if (pending_events.read (&event, 1) == 1) {
            process_event (event);
        }



Sorry, I don't seem to understand this completely.
What is the underlying data structure (for pending_events)?
How can it be guaranteed that the write and read operations are atomic?
What is the second argument of write/read ("1")?
"&event" is a pointer to a pointer? Why not store a pointer to an Event object?


Or to put it differently: what is happening in the read/write methods?
My ad-hoc (and maybe naive) approach would look something like this (assuming ++some_ptr is atomic):


Event** read_ptr;
Event** write_ptr;

bool read(Event ** e){
   (... on "xrun" return false ...)    (*)
   *e = *read_ptr;
   ++read_ptr;
   (... wrap around at end of array ...)
   return true;
}

bool write(Event ** e){
   (... on "xrun" return false ...)    (*)
   *write_ptr = *e;
   ++write_ptr;
   (... wrap around at end of array ...)
   return true;
}

The critical line seems to be the one marked with (*).
How would this be done? Can it be implemented as an "atomic" operation?
To tell the truth, I'm getting pretty confused when trying to determine what would be an overrun and what would be an underrun in the above implementation. I guess a general xrun occurs when one of the two pointers overtakes the other. If the write_ptr overtakes the read_ptr in the write method, that would be an overrun; if the read_ptr overtakes the write_ptr in the read method, that would be an underrun. Is this correct?


And of course this only works with a single reader and a single writer thread. So if communication has to happen between event sources on several different threads and a single "event delegation" thread, there has to be one FIFO for each of the threads that create events. Right?

What happens if this runs on a multiprocessor system? Even if the operations are atomic, they could happen at the same time then, couldn't they?

Probably these are very basic issues, and probably I am not the first one to ask these questions. Please have patience with me :-)

Regards,
Lukas


