Thanks for the clear explanation.

On Wed, Dec 3, 2014 at 2:42 PM, Mario Torrecillas Rodríguez <
[email protected]> wrote:

> Hi Bill,
>
> *cur_ticket*, in this implementation in particular at least, is only
> updated during the unlock operation (this, plus the fact that there is only
> one owner, is the reason atomicity is not required for *cur_ticket*).
> Threads attempting to acquire the lock will (atomically) increment
> *next_ticket*, and then spin on *cur_ticket* waiting for it to be equal
> to the ticket they grabbed.
> The only thing that will happen if an interrupt occurs between those two
> lines of code is that threads spinning will have to wait a bit longer, but
> this is true for any type of lock.
>
> I don’t see a potential risk of races myself so, *Ola*, you can take this
> as a review, unless there’s something I’m missing!
>
> Mario.
>
> From: Bill Fischofer <[email protected]>
> Date: Wednesday 3 December 2014 20:12
> To: Ola Liljedahl <[email protected]>
> Cc: Mario Torrecillas Rodríguez <[email protected]>, "
> [email protected]" <[email protected]>
> Subject: Re: [lng-odp] [PATCH] linux-generic: odp_ticketlock.c:
> performance regression
>
> Agreed that correct programming means only the owner attempts to unlock,
> but what about other threads simultaneously attempting to lock while the
> owner is unlocking?  Maybe I don't fully appreciate the memmodel
> semantics, but there can be arbitrary delays between any two instructions
> executed by a single core due to interrupts.
>
> On Wed, Dec 3, 2014 at 2:07 PM, Ola Liljedahl <[email protected]>
> wrote:
>
>> On 3 December 2014 at 18:29, Mario Torrecillas Rodriguez
>> <[email protected]> wrote:
>> > I don't think cur_ticket will be read/written by more than one thread
>> > simultaneously, only the one which owns the lock will attempt to update
>> it.
>> Exactly.
>>
>> Do I take that as a review, Mario?
>>
>> >
>> > Mario.
>> >
>> > On 03/12/2014 17:16, "Maxim Uvarov" <[email protected]> wrote:
>> >
>> >>On 12/03/2014 05:20 PM, Ola Liljedahl wrote:
>> >>> Ping!
>> >>
>> >>This needs more review.
>> >>>
>> >>> On 1 December 2014 at 14:34, Ola Liljedahl <[email protected]>
>> >>>wrote:
>> >>>> Signed-off-by: Ola Liljedahl <[email protected]>
>> >>>> ---
>> >>>> Replaced an atomic RMW add with separate load, add and store
>> >>>> operations. This avoids generating a "locked" instruction on x86,
>> >>>> which implies unnecessarily strong memory ordering, and improves
>> >>>> performance. This change could also prove beneficial on other
>> >>>> architectures.
>> >>>>
>> >>>>   platform/linux-generic/odp_ticketlock.c | 5 ++++-
>> >>>>   1 file changed, 4 insertions(+), 1 deletion(-)
>> >>>>
>> >>>> diff --git a/platform/linux-generic/odp_ticketlock.c
>> >>>>b/platform/linux-generic/odp_ticketlock.c
>> >>>> index 6c5e74e..1e67ff5 100644
>> >>>> --- a/platform/linux-generic/odp_ticketlock.c
>> >>>> +++ b/platform/linux-generic/odp_ticketlock.c
>> >>>> @@ -32,7 +32,10 @@ void odp_ticketlock_lock(odp_ticketlock_t *ticketlock)
>> >>>>
>> >>>>   void odp_ticketlock_unlock(odp_ticketlock_t *ticketlock)
>> >>>>   {
>> >>>> -       _odp_atomic_u32_add_mm(&ticketlock->cur_ticket, 1, _ODP_MEMMODEL_RLS);
>> >>>> +       uint32_t cur = _odp_atomic_u32_load_mm(&ticketlock->cur_ticket,
>> >>>> +                                              _ODP_MEMMODEL_RLX);
>> >>>> +       _odp_atomic_u32_store_mm(&ticketlock->cur_ticket, cur + 1,
>> >>>> +                                _ODP_MEMMODEL_RLS);
>> >>>>
>> >>Isn't this code racy? Some threads can read cur at the same time, then
>> >>delay (interrupt or whatever), then do the store.
>> >>As a result, the store will write a wrong cur_ticket value.
>> >>
>> >>Maxim.
>> >>
>> >>>>   #if defined __OCTEON__
>> >>>>          odp_sync_stores(); /* SYNCW to flush write buffer */
>> >>>> --
>> >>>> 1.9.1
>> >>>>
>> >>> _______________________________________________
>> >>> lng-odp mailing list
>> >>> [email protected]
>> >>> http://lists.linaro.org/mailman/listinfo/lng-odp
>> >>
>> >>
>> >>
>> >
>> >
>> >
>> >
>>
>>
>
>
