Reviewed-and-tested-by: Petri Savolainen <[email protected]>

> -----Original Message-----
> From: [email protected] [mailto:lng-odp-
> [email protected]] On Behalf Of ext Ola Liljedahl
> Sent: Wednesday, December 03, 2014 4:21 PM
> To: [email protected]
> Subject: Re: [lng-odp] [PATCH] linux-generic: odp_ticketlock.c:
> performance regression
> 
> Ping!
> 
> On 1 December 2014 at 14:34, Ola Liljedahl <[email protected]>
> wrote:
> > Signed-off-by: Ola Liljedahl <[email protected]>
> > ---
> > Replaced an atomic RMW add with separate load, add and store operations.
> > This avoids generating a "locked" instruction on x86, which implies
> > unnecessarily strong memory ordering, and improves performance. This
> > change could also prove beneficial on other architectures.
> >
> >  platform/linux-generic/odp_ticketlock.c | 5 ++++-
> >  1 file changed, 4 insertions(+), 1 deletion(-)
> >
> > diff --git a/platform/linux-generic/odp_ticketlock.c b/platform/linux-generic/odp_ticketlock.c
> > index 6c5e74e..1e67ff5 100644
> > --- a/platform/linux-generic/odp_ticketlock.c
> > +++ b/platform/linux-generic/odp_ticketlock.c
> > @@ -32,7 +32,10 @@ void odp_ticketlock_lock(odp_ticketlock_t *ticketlock)
> >
> >  void odp_ticketlock_unlock(odp_ticketlock_t *ticketlock)
> >  {
> > -       _odp_atomic_u32_add_mm(&ticketlock->cur_ticket, 1, _ODP_MEMMODEL_RLS);
> > +       uint32_t cur = _odp_atomic_u32_load_mm(&ticketlock->cur_ticket,
> > +                                              _ODP_MEMMODEL_RLX);
> > +       _odp_atomic_u32_store_mm(&ticketlock->cur_ticket, cur + 1,
> > +                                _ODP_MEMMODEL_RLS);
> >
> >  #if defined __OCTEON__
> >         odp_sync_stores(); /* SYNCW to flush write buffer */
> > --
> > 1.9.1
> >
> 
> _______________________________________________
> lng-odp mailing list
> [email protected]
> http://lists.linaro.org/mailman/listinfo/lng-odp
