Hi Gunar,

A semaphore is a common way of implementing that. The semaphore value is the current number of elements in the queue. Reading from the queue is:


semaphore s, mutex m, element e, queue q

sem_wait(s)     /* Blocks only if the queue is empty */
mutex_lock(m)   
e = queue_pop_left(q)
mutex_unlock(m)

Writing consists of:

mutex_lock(m)
queue_push(q, e)
mutex_unlock(m)
sem_post(s)
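
To make that concrete, here is a minimal sketch of the whole thing in C. It uses POSIX semaphores and pthread mutexes purely for illustration (on RIOT the same structure could be built on mutex_t and the "sema" module), and the node/queue types and bq_* names are invented for this example, not an existing API:

/* Minimal sketch of an unbounded blocking queue (illustrative only). */
#include <pthread.h>
#include <semaphore.h>
#include <stdlib.h>

typedef struct node {
    struct node *next;
    void *data;
} node_t;

typedef struct {
    node_t *head;           /* oldest element, popped first */
    node_t *tail;           /* newest element */
    pthread_mutex_t lock;   /* protects head/tail */
    sem_t items;            /* counts elements currently in the queue */
} blocking_queue_t;

static void bq_init(blocking_queue_t *q)
{
    q->head = q->tail = NULL;
    pthread_mutex_init(&q->lock, NULL);
    sem_init(&q->items, 0, 0);          /* queue starts empty */
}

/* Writer side: push, then signal that one more element is available. */
static void bq_push(blocking_queue_t *q, void *data)
{
    node_t *n = malloc(sizeof(*n));
    if (!n) {
        return;                         /* allocation failed; dropped in this sketch */
    }
    n->data = data;
    n->next = NULL;

    pthread_mutex_lock(&q->lock);
    if (q->tail) {
        q->tail->next = n;
    }
    else {
        q->head = n;
    }
    q->tail = n;
    pthread_mutex_unlock(&q->lock);

    sem_post(&q->items);
}

/* Reader side: block until at least one element exists, then pop it. */
static void *bq_pop(blocking_queue_t *q)
{
    sem_wait(&q->items);                /* blocks only if the queue is empty */

    pthread_mutex_lock(&q->lock);
    node_t *n = q->head;                /* guaranteed non-NULL here */
    q->head = n->next;
    if (!q->head) {
        q->tail = NULL;
    }
    pthread_mutex_unlock(&q->lock);

    void *data = n->data;
    free(n);
    return data;
}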

The mechanism is a bit more complicated if one needs to handle the case where the queue can become full and writers should block.
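
For illustration, here is a rough sketch of that bounded variant, assuming the same headers as the sketch above: a second semaphore counts the free slots, so a writer blocks when the queue is full just like a reader blocks when it is empty. The bounded_queue_t/bdq_* names and the fixed-size ring buffer are again made up for this example:

/* Bounded variant (sketch): a second semaphore counts the free slots. */
#define QUEUE_CAPACITY 8

typedef struct {
    void *buf[QUEUE_CAPACITY];
    unsigned head, tail;    /* ring buffer indices */
    pthread_mutex_t lock;
    sem_t items;            /* elements in the queue, starts at 0 */
    sem_t space;            /* free slots, starts at QUEUE_CAPACITY */
} bounded_queue_t;

static void bdq_init(bounded_queue_t *q)
{
    q->head = q->tail = 0;
    pthread_mutex_init(&q->lock, NULL);
    sem_init(&q->items, 0, 0);
    sem_init(&q->space, 0, QUEUE_CAPACITY);
}

static void bdq_push(bounded_queue_t *q, void *data)
{
    sem_wait(&q->space);                    /* blocks while the queue is full */
    pthread_mutex_lock(&q->lock);
    q->buf[q->tail] = data;
    q->tail = (q->tail + 1) % QUEUE_CAPACITY;
    pthread_mutex_unlock(&q->lock);
    sem_post(&q->items);                    /* one more element available */
}

static void *bdq_pop(bounded_queue_t *q)
{
    sem_wait(&q->items);                    /* blocks while the queue is empty */
    pthread_mutex_lock(&q->lock);
    void *data = q->buf[q->head];
    q->head = (q->head + 1) % QUEUE_CAPACITY;
    pthread_mutex_unlock(&q->lock);
    sem_post(&q->space);                    /* one more free slot */
    return data;
}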

Also note that between the call to sem_wait() and mutex_lock() another thread could lock the mutex. This is not a problem. It only means that the second thread to wake up gets the first element and the first thread gets the second. Something similar happens when writing.

Each mutex already has a queue of threads waiting on it, so if multiple threads are blocked in mutex_lock(), the first one should be woken up by mutex_unlock(). The "sema" module implements semaphores on top of mutexes, so they should behave similarly.

I'm curious about the application of this: since RIOT does not run on multiprocessor systems, what is the advantage of this approach over serial processing in a single thread (other than avoiding blocking calls)?

Regards,

Juan.

On 07/12/2018 09:58 AM, Gunar Schorcht wrote:
Hi,

what would be the best way, if there is one, to use the existing
mechanisms to implement a message queue that is not bound to the
receiving thread?

What I'm looking for is a message queue that can be used by a number of
threads to send messages to and receive messages from the shared queue,
as it is possible in FreeRTOS, for example.

Regards
Gunar



