Hello,

Following the recent discussion with Jan, here is a patch that aims at making xnintr_shirq_lock/unlock() actually do what they were supposed to do in the first place.

In the shared-IRQ (shirq) case on SMP:

1) xnintr_shirq_lock/unlock()

We have to guarantee that any possible access to the shirq handlers' list happens inside the lock/unlock section. To this end, the modification of the shirq->active counter has to be separated by a memory barrier from the above-mentioned list access.

The patch takes care of this.
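
To illustrate what the barriers are for, here is a stripped-down sketch of the reader side; the types and names below are simplified stand-ins, not the real nucleus ones:

#include <asm/atomic.h>   /* atomic_t, atomic_inc(), atomic_dec() */
#include <asm/system.h>   /* smp_mb() */

/* Simplified stand-ins for the real nucleus types. */
typedef struct shirq {
    atomic_t active;          /* # of CPUs currently walking the list */
    struct handler *handlers; /* the shared handlers' list */
} shirq_t;

/* Reader side: runs on the interrupt path. */
static inline void shirq_lock(shirq_t *shirq)
{
    atomic_inc(&shirq->active);
    smp_mb(); /* make the inc globally visible before any list access */
}

static inline void shirq_unlock(shirq_t *shirq)
{
    smp_mb(); /* retire all list accesses before the dec becomes visible */
    atomic_dec(&shirq->active);
}

Without the first barrier, a list access could be reordered before the increment becomes visible, so the detaching CPU could see active == 0 while a handler walk is still in flight; the second barrier covers the symmetric case on exit.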

2) Now, (1) only works as long as the other side (xnintr_detach() -> xnintr_irq_detach() -> deletion of the element from the list of shirq handlers)
also respects the rules. Here, the modification of the list (element deletion) has to be completed before the shirq->active counter is accessed. But the deletion takes place inside the xnlock_get/put_irq* section (in xnintr_detach()), which always implies a memory barrier, so this side is already correct.
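
Continuing the sketch above, the detach side then looks roughly like this; the lock name and the list-unlink helper are illustrative only, the real code sits in xnintr_irq_detach():

/* Writer side: detach a handler, then wait out in-flight walkers. */
static void shirq_detach(shirq_t *shirq, struct handler *h)
{
    spl_t s;

    /* xnlock_get/put_irq* imply a barrier, so the unlink below is
     * globally visible before shirq->active is read. */
    xnlock_get_irqsave(&intrlock, s);  /* lock name is an assumption */
    remove_handler(shirq, h);          /* hypothetical unlink helper */
    xnlock_put_irqrestore(&intrlock, s);

    /* Spin until no CPU can still be walking the old list. */
    while (atomic_read(&shirq->active))
        cpu_relax();
}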

On Linux, smp_mb__after_atomic_inc() and smp_mb__before_atomic_dec() would do the job. So far, though, I have decided not to add something like xnarch_memory_barrier__after_atomic_inc() :), given that both seem to resolve to the same thing anyway, either mb() or barrier() depending on the architecture (this still has to be checked more thoroughly).
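
For reference, that variant would read as follows; the xnarch_* barrier wrappers below are exactly the hypothetical ones I decided not to introduce:

static inline void xnintr_shirq_lock(xnintr_shirq_t *shirq)
{
#ifdef CONFIG_SMP
    xnarch_atomic_inc(&shirq->active);
    /* hypothetical wrapper; would map to smp_mb__after_atomic_inc() */
    xnarch_memory_barrier__after_atomic_inc();
#endif
}

static inline void xnintr_shirq_unlock(xnintr_shirq_t *shirq)
{
#ifdef CONFIG_SMP
    /* hypothetical wrapper; would map to smp_mb__before_atomic_dec() */
    xnarch_memory_barrier__before_atomic_dec();
    xnarch_atomic_dec(&shirq->active);
#endif
}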

Any suggestions?

--
Best regards,
Dmitry Adamushko


--- xenomai/ksrc/nucleus/intr-old.c	2006-11-12 00:17:56.000000000 +0100
+++ xenomai/ksrc/nucleus/intr.c	2006-11-12 00:22:15.000000000 +0100
@@ -135,12 +135,14 @@ static inline void xnintr_shirq_lock(xni
 {
 #ifdef CONFIG_SMP
 	xnarch_atomic_inc(&shirq->active);
+	xnarch_memory_barrier();
 #endif
 }
 
 static inline void xnintr_shirq_unlock(xnintr_shirq_t *shirq)
 {
 #ifdef CONFIG_SMP
+	xnarch_memory_barrier();
 	xnarch_atomic_dec(&shirq->active);
 #endif
 }
