Dmitry Adamushko wrote:
> ...
> In the case of Linux, smp_mb__after_atomic_inc() and
> smp_mb__before_atomic_dec() would do the job. But so far, I decided
> not to add something like xnarch_memory_barrier__after_atomic_inc() :) ,
> given that both seem to end up as either mb() or barrier() anyway
> (have to check more thoroughly).
> Any suggestions?

As we now know that this patch solves a real issue on SMP (recently
tested by Paolo in RTAI's RTDM variant), I would say /some/ form of it
should go into the branches quickly.

Actually, this kind of locking via a reference counter, be it atomic
or per-cpu, is a generic pattern for many (RCU-like) use cases. We have
it in RTnet (and I guess it's broken there as well, sigh), and I could
imagine using it more broadly in the future. This means we should think
about a generic interface (to reduce the chance of getting it wrong...).
And such an interface will need efficient memory barriers.
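
To make the pattern concrete, here is a minimal sketch of the atomic
variant using the stock Linux primitives. All names except the Linux
atomic/barrier calls are made up for illustration, this is not an
existing Xenomai API:

#include <asm/atomic.h>
#include <asm/system.h>
#include <asm/processor.h>

static atomic_t ref_lock = ATOMIC_INIT(0);

/* Reader side: grab a reference before touching the protected data. */
static inline void ref_enter(void)
{
        atomic_inc(&ref_lock);
        /* Order the increment before any access to the protected data. */
        smp_mb__after_atomic_inc();
}

static inline void ref_exit(void)
{
        /* Complete all protected accesses before dropping the reference. */
        smp_mb__before_atomic_dec();
        atomic_dec(&ref_lock);
}

/* Updater side: wait until no reader holds a reference anymore. */
static void ref_synchronize(void)
{
        while (atomic_read(&ref_lock) != 0)
                cpu_relax();
        smp_mb();       /* see all reader-side effects before reclaiming */
}

This sketch deliberately ignores the unregistration race (a new reader
incrementing right after the updater saw zero); a real service would
additionally need some "going away" flag the reader checks after its
increment, and that check is exactly where the barrier placement
matters.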

I see two paths now:

A) If we go for atomic_inc/dec with such a locking service,
xnarch_memory_barrier__after_atomic_inc & friends will be needed anyway
and could be introduced right away (see the sketch after B below).

B) If we aim at per-cpu counters (which complicates things, but SMP
clearly benefits), we may simply merge Dmitry's patch as is.
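
To illustrate A): on the Linux/ipipe side, the wrappers could simply
map to the stock primitives (untested sketch, reusing the name from
Dmitry's mail):

#define xnarch_memory_barrier__after_atomic_inc() \
        smp_mb__after_atomic_inc()
#define xnarch_memory_barrier__before_atomic_dec() \
        smp_mb__before_atomic_dec()

Ports that lack such specialized primitives could fall back to a plain
xnarch_memory_barrier(), trading some cycles for correctness.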

Any thoughts?

