<snip>

> > > diff --git a/lib/librte_eal/x86/include/rte_atomic.h b/lib/librte_eal/x86/include/rte_atomic.h
> > > index b9dcd30..bd256e7 100644
> > > --- a/lib/librte_eal/x86/include/rte_atomic.h
> > > +++ b/lib/librte_eal/x86/include/rte_atomic.h
> > > @@ -83,6 +83,23 @@ rte_smp_mb(void)
> > >
> > >  #define rte_cio_rmb() rte_compiler_barrier()
> > >
> > > +/**
> > > + * Synchronization fence between threads based on the specified
> > > + * memory order.
> > > + *
> > > + * On x86, __atomic_thread_fence(__ATOMIC_SEQ_CST) generates a
> > > + * full 'mfence', which is quite expensive. The optimized
> > > + * implementation of rte_smp_mb() is used instead.
> > > + */
> > > +static __rte_always_inline void
> > > +rte_atomic_thread_fence(int mo)
> > > +{
> > > +	if (mo == __ATOMIC_SEQ_CST)
> > > +		rte_smp_mb();
> > > +	else
> > > +		__atomic_thread_fence(mo);
> > > +}
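
For context: the optimization works because on x86-64 any LOCK-prefixed
read-modify-write instruction acts as a full memory barrier, so a locked
no-op add is sufficient and avoids the more expensive 'mfence'. A minimal
standalone sketch of the idea, assuming GCC/Clang extended asm (the helper
name is illustrative, not DPDK code):

	/* Full barrier via a LOCK-prefixed no-op RMW: it orders all
	 * earlier loads and stores like 'mfence', but is usually
	 * cheaper. Adding 0 leaves the stack memory unchanged. */
	static inline void
	cheap_full_barrier(void)
	{
		__asm__ volatile("lock addl $0, -128(%%rsp)" ::: "memory");
	}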
> > I think __ATOMIC_SEQ_CST needs to be used rarely. IMO,
> > rte_atomic_thread_fence should be called only for the __ATOMIC_SEQ_CST
> > memory order. For all other orders, __atomic_thread_fence can be used
> > directly. This will help us stick to the atomic built-ins in most cases.
> >
> > Konstantin, is this ok for you?
> 
> My preference is to have one generic rte_atomic_thread_fence() for all
> cases (i.e., the current implementation from Phil looks good to me).
> I think it is a more consistent approach and would help avoid confusion.
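
To make the two options concrete, a caller-side sketch (the functions and
the flag/data variables here are illustrative only, not from the patch):

	static int data;
	static int flag;

	/* With the generic wrapper, callers use rte_atomic_thread_fence()
	 * for every memory order; on x86 only the __ATOMIC_SEQ_CST case
	 * takes the rte_smp_mb() fast path. */
	void
	producer(void)
	{
		data = 42;
		rte_atomic_thread_fence(__ATOMIC_RELEASE);
		__atomic_store_n(&flag, 1, __ATOMIC_RELAXED);
	}

	void
	consumer(void)
	{
		while (__atomic_load_n(&flag, __ATOMIC_RELAXED) == 0)
			;
		rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
		/* 'data' is now guaranteed to read 42. */
	}

	/* Under the alternative proposal, only the __ATOMIC_SEQ_CST fence
	 * would go through the wrapper; the release/acquire fences above
	 * would be written as __atomic_thread_fence(__ATOMIC_RELEASE) and
	 * __atomic_thread_fence(__ATOMIC_ACQUIRE) directly. */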
Ok, I am fine. The script currently checks for
__atomic_thread_fence(__ATOMIC_SEQ_CST); we will have to change it to check
for __atomic_thread_fence.

> Konstantin
