On 12/08/2016 03:50 PM, Savolainen, Petri (Nokia - FI/Espoo) wrote:
>
>> -----Original Message-----
>> From: lng-odp [mailto:lng-odp-boun...@lists.linaro.org] On Behalf Of
>> Nicolas Morey-Chaisemartin
>> Sent: Wednesday, December 07, 2016 10:12 AM
>> To: LNG ODP Mailman List <lng-odp@lists.linaro.org>
>> Subject: [lng-odp] odp_rwlock_read_trylock
>>
>> Hi,
>>
>>
>> While looking into the test/code, I noticed something weird in the
>> implementation of the rwlock read trylock on monarch.
>>
>> int odp_rwlock_read_trylock(odp_rwlock_t *rwlock)
>> {
>>     uint32_t zero = 0;
>>
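>>     /* succeeds only when cnt == 0, i.e. no readers and no writer */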
>>     return odp_atomic_cas_acq_u32(&rwlock->cnt, &zero, (uint32_t)1);
>> }
>>
>>
>> This means the trylock only succeeds if no one was using the lock, but it
>> will fail if someone already owns it, *even* if that owner is a reader.
>> Is this expected? If yes, it should probably be detailed in the API.
> /**
>  * Try to acquire read permission to a reader/writer lock.
>  *
>  * @param rwlock Pointer to a reader/writer lock
>  *
>  * @retval  0 Lock was not available for read access
>  * @retval !0 Read access to lock acquired
>  */
> int odp_rwlock_read_trylock(odp_rwlock_t *rwlock);
>
>
> The spec just says that "read access" was not available. The implementation 
> decides when it's available, and the application just needs to retry later 
> on. So, this implementation is OK by the spec. Supporting multiple readers 
> would improve parallelism, but it may not be optimal for performance if the 
> lock is free in the common case.
>
>
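For illustration, the retry pattern implied above might look like the
following minimal sketch (the odp_cpu_pause() back-off is just one plausible
choice, not something the spec mandates):

    /* Poll for read access, pausing between attempts */
    while (!odp_rwlock_read_trylock(rw_lock))
        odp_cpu_pause();

    /* ... read-side critical section ... */

    odp_rwlock_read_unlock(rw_lock);
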
>>
>> The lock implementation I wrote allows taking the lock if a reader already
>> owns it, which causes the testsuite to deadlock:
>>
>>     odp_rwlock_init(rw_lock);
>>     /* CU_ASSERT(odp_rwlock_is_locked(rw_lock) == 0); */
>>
>>     odp_rwlock_read_lock(rw_lock); => Lock is owned in read
>>
>>     rc = odp_rwlock_read_trylock(rw_lock); => Lock is owned a second
>> time in read (rc = 1)
>
> You cannot do this on a single thread (lock the same lock twice). You must 
> use recursive locks if the thread holding a lock will lock it again.
>  

The issue is that odp_rwlock_read_trylock fails here, while 
odp_rwlock_read_lock would succeed.
Usually rwlocks are recursive on the read side, as it doesn't really matter to 
the lock whether the same thread owns it N times or N threads own it once each.
IMHO, in a non-concurrent setup, read_trylock should always succeed whenever 
read_lock would succeed.
(I say non-concurrent here to leave aside the case of contention on the lock, 
where many retries could be needed to get it.)
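
For illustration, a reader-recursive trylock could look like the minimal
sketch below. It assumes cnt counts the current readers, that a writer marks
the lock with a negative value, and that the CAS writes the observed value
back into cnt on failure; this is one plausible encoding, not necessarily
what monarch does:

int odp_rwlock_read_trylock(odp_rwlock_t *rwlock)
{
    uint32_t cnt = odp_atomic_load_u32(&rwlock->cnt);

    /* Assumption: a writer sets cnt to a negative value */
    while ((int32_t)cnt >= 0) {
        /* Register one more reader; on CAS failure cnt is refreshed
         * with the current value and we retry, unless a writer took
         * the lock in the meantime. */
        if (odp_atomic_cas_acq_u32(&rwlock->cnt, &cnt, cnt + 1))
            return 1;
    }

    return 0;
}

With this encoding, N acquisitions by one thread and one acquisition each by
N threads are indistinguishable, which is exactly why the read side can be
recursive.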

>
>>     CU_ASSERT(rc == 0);
>>     rc = odp_rwlock_write_trylock(rw_lock); => Expected failure
>>     CU_ASSERT(rc == 0);
>>
>>     odp_rwlock_read_unlock(rw_lock); => Lock is still owned once.
>>
>>
>> So either the API should describe more accurately what the expected
>> success/failure cases are, or the test and implementation should be changed.
>>
>> On a side note, the API explicitly says that readers have priority over
>> writers on rwlocks. Is this really an API requirement?
>> Our rwlocks work the other way around to avoid the starvation issue. Do I
>> need to change them? Or is that OK with the API?
>>
> Do you mean this:
>  * A reader/writer lock allows multiple simultaneous readers but only one
>  * writer at a time. A thread that wants write access will have to wait until
>  * there are no threads that want read access. This causes a risk for
>  * starvation. The trylock variants can be used to avoid blocking when
>  * the lock is not immediately available.
>
> It could be softened with "may have to wait" and "may cause a risk". 
> Different RW lock implementations should be possible.
>
> -Petri
What bothers me is this:

    A thread that wants write access will have to wait until
    there are no threads that want read access.

This clearly states that a writer cannot take the write lock as long as any 
reader *wants* it.
I think

    A thread that wants write access will have to wait until
    there are no threads that own read permission on the lock.

would relax this constraint and allow for write-priority rwlocks.
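
For illustration, a write-priority read_trylock could look like the sketch
below. The WRITER_WAIT bit is hypothetical (one plausible encoding in cnt,
not the ODP implementation); the write side would set it before waiting for
the reader count to drain, so new readers back off as soon as a writer shows
up:

#define WRITER_WAIT 0x80000000u /* hypothetical "writer active/waiting" bit */

int rwlock_read_trylock_wprio(odp_rwlock_t *rwlock)
{
    uint32_t cnt = odp_atomic_load_u32(&rwlock->cnt);

    /* Back off as soon as a writer is active or waiting */
    while ((cnt & WRITER_WAIT) == 0) {
        /* Register one more reader; on CAS failure cnt holds the
         * current value and the writer check is redone. */
        if (odp_atomic_cas_acq_u32(&rwlock->cnt, &cnt, cnt + 1))
            return 1;
    }

    return 0;
}

The matching write lock would set WRITER_WAIT first and then wait until the
reader count (the low bits) drains to zero, which is what gives writers
priority over late-arriving readers.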
