On 20 Sep 2007, at 23:13, Quentin Mathé wrote:

> I have a question about the ring buffer, what is the purpose of
> consumer/producer variables which are incremented or decremented?

It uses a variant of the classic ring buffer invented, I believe, by  
Keir Fraser (one of the authors of Xen).  It's a really nice
concurrent data
structure, because it removes a lot of the messy special cases you  
get with most ring buffer implementations.

There are two threads accessing the data: the thread owning the
ETThreadedObject, and the worker thread.  The first of these forwards
messages to the second.  Since the first puts messages into the
buffer while the second removes them, the first is the producer and
the second the consumer.  Every time a message is put into the
buffer, the producer counter is incremented; every time a message is
removed from the buffer, the consumer counter is incremented.

The producer and consumer variables are free-running counters.  The
size of the ring buffer must be a power of two (256 in this version),
so the array index can be found from a counter with a simple mask
(just take the lowest 8 bits).  This means you never need to worry
about the case where the write index wraps around and becomes smaller
than the read index, because logically the producer counter is always
ahead of (or equal to) the consumer counter, and unsigned arithmetic
keeps that true even across overflow.
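
Concretely, the counter-to-index mapping looks something like this (a
minimal sketch in C; RING_SIZE, RING_MASK and slot_for are
illustrative names I've made up, not the actual ETThreadedObject
ones):

#define RING_SIZE 256                 /* must be a power of two  */
#define RING_MASK (RING_SIZE - 1)     /* 0xFF: the lowest 8 bits */

/* The array slot for a free-running counter is just its low bits;
 * the high bits only record how many times it has wrapped. */
static unsigned int slot_for(unsigned int counter)
{
    return counter & RING_MASK;
}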

You can always find the number of messages in the buffer by computing
(producer - consumer), and the free space as the ring size minus
that.  This is true even when producer has overflowed and wrapped
around, because CPUs implement unsigned arithmetic modulo the word
size.  There are just a small number of requirements for implementing
this kind of structure (a sketch follows the list):

- The ring size must be a power of two.
- The counters must be able to range over at least 2^(n+1) values,
where the size of the ring buffer is 2^n (one bit more than the
index, so that a full buffer and an empty one are distinguishable).
- Only the producer thread may increment the producer counter.
- Only the consumer thread may increment the consumer counter.
- Neither counter may be decremented, except as a result of overflow.
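
Putting those rules together, the whole structure looks roughly like
this.  This is my sketch of the idea, not the actual ETThreadedObject
code, and it ignores the memory barriers a real SMP implementation
needs between writing a slot and publishing the counter:

#include <stdint.h>

#define RING_SIZE 256
#define RING_MASK (RING_SIZE - 1)

struct ring {
    void *slots[RING_SIZE];
    volatile uint32_t producer;   /* incremented only by the producer */
    volatile uint32_t consumer;   /* incremented only by the consumer */
};

/* Messages currently queued.  Correct even after the producer
 * counter overflows, because unsigned subtraction is modulo 2^32. */
static uint32_t ring_used(struct ring *r)
{
    return r->producer - r->consumer;
}

/* Producer side.  Returns 0 if the ring is full. */
static int ring_put(struct ring *r, void *msg)
{
    if (ring_used(r) == RING_SIZE)
        return 0;
    r->slots[r->producer & RING_MASK] = msg;
    r->producer++;                /* publish the new message */
    return 1;
}

/* Consumer side.  Returns NULL if the ring is empty. */
static void *ring_get(struct ring *r)
{
    if (ring_used(r) == 0)
        return NULL;
    void *msg = r->slots[r->consumer & RING_MASK];
    r->consumer++;                /* free the slot */
    return msg;
}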

It's very shiny, and took me a little while (and a page of maths) to  
get my head around the first time I saw it (I didn't believe it would  
work until I'd proved that it did)...

I've used a slightly modified version of this, which blocks when the  
queue is empty.

The consumer thread checks whether the queue is empty.  If it is,  
then it grabs a mutex.  It then checks again, to ensure that no data
has been put in the queue in the meantime, and then waits on a
condition variable (which atomically releases the mutex).
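
In pthread terms, the consumer side looks roughly like this (a sketch
only, reusing struct ring and ring_used() from the sketch above; lock
and nonempty are hypothetical names, and the inner test is a loop so
that spurious wakeups are harmless):

#include <pthread.h>

static pthread_mutex_t lock     = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  nonempty = PTHREAD_COND_INITIALIZER;

/* Consumer: called when the queue has been found empty. */
static void wait_for_message(struct ring *ring)
{
    if (ring_used(ring) == 0) {            /* cheap unlocked test    */
        pthread_mutex_lock(&lock);
        while (ring_used(ring) == 0) {     /* re-test under the lock */
            /* Atomically releases the mutex and sleeps until
             * signalled; reacquires the mutex on waking. */
            pthread_cond_wait(&nonempty, &lock);
        }
        pthread_mutex_unlock(&lock);
    }
}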

The producer thread, after inserting a message, checks whether the
queue was empty before the insert (i.e. it now contains exactly one
message).  If it was, then it acquires the mutex and signals the
condition variable.  Acquiring the mutex guarantees that the other
thread is not in the stretch of code between acquiring the mutex and
waiting on the condition variable.  If the consumer is already past
that point, the signal will wake it.  If it has not reached it yet,
the second test in the consumer thread will fail, and it will never
wait on the condition variable.
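
The matching producer side, under the same assumptions:

/* Producer: called just after a message has been inserted. */
static void wake_consumer(struct ring *ring)
{
    if (ring_used(ring) == 1) {   /* the queue was empty before us */
        /* Taking the mutex excludes the window between the
         * consumer's locked re-test and its cond_wait. */
        pthread_mutex_lock(&lock);
        pthread_cond_signal(&nonempty);
        pthread_mutex_unlock(&lock);
    }
}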

As you can probably see, there are a couple of corner cases where
some mutex operations will be performed needlessly, but I think
fixing these would complicate the code beyond my ability to debug it.

> I plan to use in EtoileUI, but probably not for 0.3. I will have to
> think about the right concurrency model for the framework.
> CoreObject will probably use it too…

Message passing between threads is now faster, but it's still a lot  
more expensive than message passing in the same thread, so I'd  
recommend against it in anything latency-sensitive.

> Do you think the following proposition is possible and/or
> interesting… We very often experience potential micro-blocking at the
> UI level (of one or two seconds, or even half a second), and threads
> aren't very efficient in such cases. My proposition is to have a base
> class ETActorObject, with two subclasses, ETCoroutinedObject and
> ETThreadedObject. Most of the time we would just instantiate an
> ETActorObject relying behind the scenes on (or returning) an
> ETCoroutinedObject by default. In addition to that, a rudimentary
> scheduler would monitor coroutines and turn any of them that runs
> for too long (an amount of time that can be adjusted per application)
> into an ETThreadedObject, to benefit from the real multiprocessing
> provided by the OS and hardware.

It could be interesting.  Of course, NSApp's NSRunLoop gives us
coroutines anyway, and NSTimer gives us scheduled coroutines.  It is
likely to be very hard to move a task from being a coroutine to being
a thread while it is running, however (you are basically implementing
an N:M threading library without any kernel support).  The other
problem is that threaded objects have to obey some constraints that
coroutines don't (which is why coroutines are so popular for UI
programming; they are easy).

Half a second is a long time.  On my CPU, it's around 1,000,000,000
clock cycles, which is enough to do a very respectable amount of
work, so I would suggest that anyone who has a task causing the UI to
block in this way consider moving it into a separate thread anyway.
If it's long enough for a user to notice, it's probably long enough
that the overhead of setting up a thread would not be too great.  And
don't forget that, with ETThreadedObject, you can keep the thread
around for the lifetime of the object, so it can be comparatively
cheap.

David