Re: [Qemu-block] [PATCH 06/10] aio-posix: remove walking_handlers, protecting AioHandler list with list_lock

2016-12-01 Thread Paolo Bonzini


On 30/11/2016 14:36, Paolo Bonzini wrote:
> 
> 
> On 30/11/2016 14:31, Stefan Hajnoczi wrote:
>> On Tue, Nov 29, 2016 at 12:47:03PM +0100, Paolo Bonzini wrote:
>>> @@ -272,22 +275,32 @@ bool aio_prepare(AioContext *ctx)
>>>  bool aio_pending(AioContext *ctx)
>>>  {
>>>  AioHandler *node;
>>> +bool result = false;
>>>  
>>> -QLIST_FOREACH(node, &ctx->aio_handlers, node) {
>>> +/*
>>> + * We have to walk very carefully in case aio_set_fd_handler is
>>> + * called while we're walking.
>>> + */
>>> +qemu_lockcnt_inc(&ctx->list_lock);
>>> +
>>> +QLIST_FOREACH_RCU(node, &ctx->aio_handlers, node) {
>>>  int revents;
>>>  
>>>  revents = node->pfd.revents & node->pfd.events;
>>>  if (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR) && node->io_read &&
>>>  aio_node_check(ctx, node->is_external)) {
>>> -return true;
>>> +result = true;
>>> +break;
>>>  }
>>>  if (revents & (G_IO_OUT | G_IO_ERR) && node->io_write &&
>>>  aio_node_check(ctx, node->is_external)) {
>>> -return true;
>>> +result = true;
>>> +break;
>>>  }
>>>  }
>>> +qemu_lockcnt_dec(&ctx->list_lock);
>>>  
>>> -return false;
>>> +return result;
>>>  }
>>>  
>>>  bool aio_dispatch(AioContext *ctx)
>>> @@ -308,13 +321,12 @@ bool aio_dispatch(AioContext *ctx)
>>>   * We have to walk very carefully in case aio_set_fd_handler is
>>>   * called while we're walking.
>>>   */
>>> -ctx->walking_handlers++;
>>> +qemu_lockcnt_inc(&ctx->list_lock);
>>>  
>>> -QLIST_FOREACH_SAFE(node, &ctx->aio_handlers, node, tmp) {
>>> +QLIST_FOREACH_SAFE_RCU(node, &ctx->aio_handlers, node, tmp) {
>>>  int revents;
>>>  
>>> -revents = node->pfd.revents & node->pfd.events;
>>> -node->pfd.revents = 0;
>>> +revents = atomic_xchg(&node->pfd.revents, 0) & node->pfd.events;
>>
>> Why is node->pfd.revents accessed with atomic_*() here and in aio_poll()
>> but not in aio_pending()?
> 
> It could use atomic_read there, indeed.

Actually, thanks to the (already committed) patches that limit aio_poll
to the I/O thread, these atomic accesses are not needed anymore.

Paolo
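The reader side of the lockcnt pattern discussed above can be sketched outside QEMU roughly as follows. This is a minimal illustration using C11 stdatomic rather than QEMU's qemu_lockcnt API; the Handler type and the any_pending name are invented for the sketch, and the writer side (which waits for the count to drop to zero, or marks nodes deleted) is omitted:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct Handler {
    struct Handler *next;   /* singly linked stand-in for QLIST */
    int revents;
    int events;
} Handler;

static Handler *handlers;            /* head of the handler list */
static atomic_int list_lock_count;   /* readers currently walking */

/* aio_pending-style walk: bump the reader count so a concurrent
 * aio_set_fd_handler knows it must defer freeing nodes, walk the
 * list, then drop the count.  Note the single exit point: an early
 * return would leak the count, which is why the patch replaces
 * "return true" with "result = true; break". */
static bool any_pending(void)
{
    bool result = false;

    atomic_fetch_add(&list_lock_count, 1);   /* qemu_lockcnt_inc */
    for (Handler *node = handlers; node != NULL; node = node->next) {
        if (node->revents & node->events) {
            result = true;
            break;
        }
    }
    atomic_fetch_sub(&list_lock_count, 1);   /* qemu_lockcnt_dec */
    return result;
}
```

The real QemuLockCnt is more involved (it pairs the counter with a mutex for writers); the sketch only shows why every path out of the loop must pass through the matching dec.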



Re: [Qemu-block] [PATCH 06/10] aio-posix: remove walking_handlers, protecting AioHandler list with list_lock

2016-11-30 Thread Paolo Bonzini


On 30/11/2016 14:31, Stefan Hajnoczi wrote:
> On Tue, Nov 29, 2016 at 12:47:03PM +0100, Paolo Bonzini wrote:
>> @@ -272,22 +275,32 @@ bool aio_prepare(AioContext *ctx)
>>  bool aio_pending(AioContext *ctx)
>>  {
>>  AioHandler *node;
>> +bool result = false;
>>  
>> -QLIST_FOREACH(node, &ctx->aio_handlers, node) {
>> +/*
>> + * We have to walk very carefully in case aio_set_fd_handler is
>> + * called while we're walking.
>> + */
>> +qemu_lockcnt_inc(&ctx->list_lock);
>> +
>> +QLIST_FOREACH_RCU(node, &ctx->aio_handlers, node) {
>>  int revents;
>>  
>>  revents = node->pfd.revents & node->pfd.events;
>>  if (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR) && node->io_read &&
>>  aio_node_check(ctx, node->is_external)) {
>> -return true;
>> +result = true;
>> +break;
>>  }
>>  if (revents & (G_IO_OUT | G_IO_ERR) && node->io_write &&
>>  aio_node_check(ctx, node->is_external)) {
>> -return true;
>> +result = true;
>> +break;
>>  }
>>  }
>> +qemu_lockcnt_dec(&ctx->list_lock);
>>  
>> -return false;
>> +return result;
>>  }
>>  
>>  bool aio_dispatch(AioContext *ctx)
>> @@ -308,13 +321,12 @@ bool aio_dispatch(AioContext *ctx)
>>   * We have to walk very carefully in case aio_set_fd_handler is
>>   * called while we're walking.
>>   */
>> -ctx->walking_handlers++;
>> +qemu_lockcnt_inc(&ctx->list_lock);
>>  
>> -QLIST_FOREACH_SAFE(node, &ctx->aio_handlers, node, tmp) {
>> +QLIST_FOREACH_SAFE_RCU(node, &ctx->aio_handlers, node, tmp) {
>>  int revents;
>>  
>> -revents = node->pfd.revents & node->pfd.events;
>> -node->pfd.revents = 0;
>> +revents = atomic_xchg(&node->pfd.revents, 0) & node->pfd.events;
> 
> Why is node->pfd.revents accessed with atomic_*() here and in aio_poll()
> but not in aio_pending()?

It could use atomic_read there, indeed.

Paolo





Re: [Qemu-block] [PATCH 06/10] aio-posix: remove walking_handlers, protecting AioHandler list with list_lock

2016-11-30 Thread Stefan Hajnoczi
On Tue, Nov 29, 2016 at 12:47:03PM +0100, Paolo Bonzini wrote:
> @@ -272,22 +275,32 @@ bool aio_prepare(AioContext *ctx)
>  bool aio_pending(AioContext *ctx)
>  {
>  AioHandler *node;
> +bool result = false;
>  
> -QLIST_FOREACH(node, &ctx->aio_handlers, node) {
> +/*
> + * We have to walk very carefully in case aio_set_fd_handler is
> + * called while we're walking.
> + */
> +qemu_lockcnt_inc(&ctx->list_lock);
> +
> +QLIST_FOREACH_RCU(node, &ctx->aio_handlers, node) {
>  int revents;
>  
>  revents = node->pfd.revents & node->pfd.events;
>  if (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR) && node->io_read &&
>  aio_node_check(ctx, node->is_external)) {
> -return true;
> +result = true;
> +break;
>  }
>  if (revents & (G_IO_OUT | G_IO_ERR) && node->io_write &&
>  aio_node_check(ctx, node->is_external)) {
> -return true;
> +result = true;
> +break;
>  }
>  }
> +qemu_lockcnt_dec(&ctx->list_lock);
>  
> -return false;
> +return result;
>  }
>  
>  bool aio_dispatch(AioContext *ctx)
> @@ -308,13 +321,12 @@ bool aio_dispatch(AioContext *ctx)
>   * We have to walk very carefully in case aio_set_fd_handler is
>   * called while we're walking.
>   */
> -ctx->walking_handlers++;
> +qemu_lockcnt_inc(&ctx->list_lock);
>  
> -QLIST_FOREACH_SAFE(node, &ctx->aio_handlers, node, tmp) {
> +QLIST_FOREACH_SAFE_RCU(node, &ctx->aio_handlers, node, tmp) {
>  int revents;
>  
> -revents = node->pfd.revents & node->pfd.events;
> -node->pfd.revents = 0;
> +revents = atomic_xchg(&node->pfd.revents, 0) & node->pfd.events;

Why is node->pfd.revents accessed with atomic_*() here and in aio_poll()
but not in aio_pending()?
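The distinction behind this question can be illustrated with a small C11 stdatomic sketch (stdatomic standing in for QEMU's atomic_xchg/atomic_read; take_revents and peek_revents are invented names). The dispatch side both reads and clears revents, a read-modify-write that must be indivisible with respect to the poller thread storing new events, while the pending side merely peeks:

```c
#include <stdatomic.h>

static atomic_int revents;   /* stands in for node->pfd.revents */

/* dispatch side: consume and clear in one indivisible step,
 * like atomic_xchg(&node->pfd.revents, 0) in the patch */
static int take_revents(int events)
{
    return atomic_exchange(&revents, 0) & events;
}

/* pending side: a plain load is enough because nothing is
 * cleared here; atomic_read would make the intent explicit */
static int peek_revents(int events)
{
    return atomic_load(&revents) & events;
}
```

If dispatch instead did a separate load followed by a store of 0, events posted by the poller between the two steps would be silently dropped; the exchange closes that window.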

