Markus Armbruster <arm...@redhat.com> writes:

> Just spotted this in my git-pull...
>
> Alexander Yarygin <yary...@linux.vnet.ibm.com> writes:
>
>> virtio_blk_reset() currently calls blk_drain_all(), which drains all
>> existing BlockDriverStates, even though only this device's one needs
>> to be drained.
>>
>> This patch replaces blk_drain_all() with blk_drain() in
>> virtio_blk_reset(). virtio_blk_data_plane_stop() should be called
>> after draining, because it restores vblk->complete_request.
>>
>> Cc: "Michael S. Tsirkin" <m...@redhat.com>
>> Cc: Christian Borntraeger <borntrae...@de.ibm.com>
>> Cc: Cornelia Huck <cornelia.h...@de.ibm.com>
>> Cc: Kevin Wolf <kw...@redhat.com>
>> Cc: Paolo Bonzini <pbonz...@redhat.com>
>> Cc: Stefan Hajnoczi <stefa...@redhat.com>
>> Signed-off-by: Alexander Yarygin <yary...@linux.vnet.ibm.com>
>> ---
>>  hw/block/virtio-blk.c | 15 ++++++++++-----
>>  1 file changed, 10 insertions(+), 5 deletions(-)
>>
>> diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
>> index e6afe97..d8a906f 100644
>> --- a/hw/block/virtio-blk.c
>> +++ b/hw/block/virtio-blk.c
>> @@ -651,16 +651,21 @@ static void virtio_blk_dma_restart_cb(void *opaque, int running,
>>  static void virtio_blk_reset(VirtIODevice *vdev)
>>  {
>>      VirtIOBlock *s = VIRTIO_BLK(vdev);
>> -
>> -    if (s->dataplane) {
>> -        virtio_blk_data_plane_stop(s->dataplane);
>> -    }
>> +    AioContext *ctx;
>>  
>>      /*
>>       * This should cancel pending requests, but can't do nicely until there
>>       * are per-device request lists.
>>       */
>> -    blk_drain_all();
>> +    ctx = blk_get_aio_context(s->blk);
>> +    aio_context_acquire(ctx);
>> +    blk_drain(s->blk);
>> +
>> +    if (s->dataplane) {
>> +        virtio_blk_data_plane_stop(s->dataplane);
>> +    }
>> +    aio_context_release(ctx);
>> +
>>      blk_set_enable_write_cache(s->blk, s->original_wce);
>>  }
>
> From bdrv_drain_all()'s comment:
>
>  * Note that completion of an asynchronous I/O operation can trigger any
>  * number of other I/O operations on other devices---for example a coroutine
>  * can be arbitrarily complex and a constant flow of I/O can come until the
>  * coroutine is complete.  Because of this, it is not possible to have a
>  * function to drain a single device's I/O queue.
>
> From bdrv_drain()'s comment:
>
>  * See the warning in bdrv_drain_all().  This function can only be called if
>  * you are sure nothing can generate I/O because you have op blockers
>  * installed.
>
> blk_drain() and blk_drain_all() are trivial wrappers.
>
> Ignorant questions:
>
> * Why does blk_drain() suffice here?
>
> * Is blk_drain() (created in PATCH 1) even a safe interface?
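
For reference, and only as a sketch of block/block-backend.c as I read
it (assuming a BlockBackend still exposes its single BlockDriverState
as blk->bs), the two wrappers are roughly:

    /* Added in PATCH 1 of this series: drain only this backend's BDS. */
    void blk_drain(BlockBackend *blk)
    {
        bdrv_drain(blk->bs);
    }

    /* What virtio_blk_reset() called before: drain every BDS there is. */
    void blk_drain_all(void)
    {
        bdrv_drain_all();
    }

So the question comes down to whether draining just s->blk's BDS is
enough while the device is being reset: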

* We only need to drain the requests of this one device's
  BlockDriverState, and blk_drain() does exactly that; blk_drain_all()
  would drain every BDS in the system.

* Ignorant answer: I was told that bdrv_drain_all()'s comment is
  obsolete and that bdrv_drain() can be used here. Here is a link to
  the earlier thread:
  http://marc.info/?l=qemu-devel&m=143154211017926&w=2. Since I don't
  see the full picture of this area yet, I'm relying on other people's
  judgement.

