Jan Kiszka wrote:
additional barrier. Can you check this?

diff --git a/include/nucleus/sched.h b/include/nucleus/sched.h
index df56417..66b52ad 100644
--- a/include/nucleus/sched.h
+++ b/include/nucleus/sched.h
@@ -187,6 +187,7 @@ static inline int xnsched_self_resched_p(struct xnsched *sched)
   if (current_sched != (__sched__))    {                               \
       xnarch_cpu_set(xnsched_cpu(__sched__), current_sched->resched);       \
       setbits((__sched__)->status, XNRESCHED);                              \
+      xnarch_memory_barrier();                                         \
   }                                                                    \
 } while (0)

In progress; if nothing breaks before then, I'll report status tomorrow morning.

Mmmh -- not everything. The inlined XNRESCHED entry test in
xnpod_schedule runs outside nklock. But doesn't releasing nklock imply a
memory write barrier? Let me meditate...
Wouldn't we need a read barrier then? (But maybe the IRQ handling takes care of
that; I'm not familiar with the code yet.)

A read barrier is not required here, as we do not need to order load
operations with respect to each other in the reschedule IRQ handler.
Only if taking the interrupt is equivalent to:

  read interrupt status
  memory_read_barrier
  execute handler

Processor manuals should have the answer to this (or it might already be in the code)...

You can always help: there is a lot of boring^Winteresting tracepoint
conversion waiting in Xenomai; see the few already-converted nucleus
tracepoints.
As soon as I have my system running, I'll put some effort into this.

/Anders


_______________________________________________
Xenomai-core mailing list
[email protected]
https://mail.gna.org/listinfo/xenomai-core
