On 13/09/2018 08:56, Fam Zheng wrote:
>> +    /* No need to order poll_disable_cnt writes against other updates;
>> +     * the counter is only used to avoid wasting time and latency on
>> +     * iterated polling when the system call will be ultimately necessary.
>> +     * Changing handlers is a rare event, and a little wasted polling until
>> +     * the aio_notify below is not an issue.
>> +     */
>> +    atomic_set(&ctx->poll_disable_cnt,
>> +               atomic_read(&ctx->poll_disable_cnt) + poll_disable_change);
>
> Why not atomic_add?

This is not lockless; the counter is protected by list_lock, so there's
no race condition between writers.  I'm just mimicking what is done in
other similar cases, for example those involving seqlocks.
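
A minimal sketch of the pattern in standalone C11 terms (the names and
the stdatomic mapping are illustrative, not QEMU's actual qemu/atomic.h
macros; in QEMU, atomic_read()/atomic_set() are relaxed accesses while
atomic_add() implies a sequentially consistent read-modify-write):

#include <stdatomic.h>
#include <stdbool.h>
#include <pthread.h>

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
static atomic_int poll_disable_cnt;

/* Writer: serialized by list_lock, so a relaxed load + store cannot
 * race with another writer; no atomic RMW (and no barrier) is needed. */
static void poll_disable_update(int change)
{
    pthread_mutex_lock(&list_lock);
    int old = atomic_load_explicit(&poll_disable_cnt, memory_order_relaxed);
    atomic_store_explicit(&poll_disable_cnt, old + change,
                          memory_order_relaxed);
    pthread_mutex_unlock(&list_lock);
}

/* Reader: may run without the lock; a stale value only costs a bit of
 * extra polling, as the comment in the patch explains. */
static bool poll_disabled(void)
{
    return atomic_load_explicit(&poll_disable_cnt, memory_order_relaxed) > 0;
}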

The alternative would be to add a full set of
atomic_{add,sub,...}_relaxed atomics.
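
In C11 terms the missing helper would just be a relaxed fetch-and-add,
something like this (hypothetical name, not an existing qemu/atomic.h
macro):

/* Hypothetical relaxed add: an atomic RMW without the seq-cst barrier
 * that the existing atomic_add() implies. */
static inline void atomic_add_relaxed(atomic_int *p, int v)
{
    atomic_fetch_add_explicit(p, v, memory_order_relaxed);
}

That would avoid the full barrier while still being a true RMW, but an
RMW is only needed when no lock serializes the writers.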

Paolo
