Re: [Qemu-devel] [Qemu-stable] [PULL 10/25] virtio_error: don't invoke status callbacks

2018-02-14 Thread Peter Lieven
Am 13.02.2018 um 23:23 schrieb Michael S. Tsirkin:
> On Tue, Feb 13, 2018 at 09:53:58PM +0100, Peter Lieven wrote:
>> Am 21.12.2017 um 15:29 schrieb Michael S. Tsirkin:
>>> Backends don't need to know what frontend requested a reset,
>>> and notifying them from virtio_error is messy because
>>> virtio_error itself might be invoked from backend.
>>>
>>> Let's just set the status directly.
>>>
>>> Cc: qemu-sta...@nongnu.org
>>> Reported-by: Ilya Maximets 
>>> Signed-off-by: Michael S. Tsirkin 
>>> ---
>>>  hw/virtio/virtio.c | 2 +-
>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
>>> index ad564b0..d6002ee 100644
>>> --- a/hw/virtio/virtio.c
>>> +++ b/hw/virtio/virtio.c
>>> @@ -2469,7 +2469,7 @@ void GCC_FMT_ATTR(2, 3) virtio_error(VirtIODevice *vdev, const char *fmt, ...)
>>>  va_end(ap);
>>>  
>>>  if (virtio_vdev_has_feature(vdev, VIRTIO_F_VERSION_1)) {
>>> -virtio_set_status(vdev, vdev->status | VIRTIO_CONFIG_S_NEEDS_RESET);
>>> +vdev->status = vdev->status | VIRTIO_CONFIG_S_NEEDS_RESET;
>>>  virtio_notify_config(vdev);
>>>  }
>>>  
>>
>> Is it possible that this patch introduces an I/O stall and a deadlock on a
>> drain all?
>>
>> I have seen QEMU VMs stall on I/O and deadlock in blk_drain_all on a vm stop
>> command. This happened after a longer storage outage.
>>
>> I am asking only theoretically, because we first saw this behaviour after we
>> backported this patch to our stable 2.9 branch.
>>
>>
>> Thank you,
>>
>> Peter
> Well, this patch was introduced to fix a crash, but a
> well-behaved VM should not trigger VIRTIO_CONFIG_S_NEEDS_RESET.
> Did you see any error messages in the log when this triggered?

You mean in the guest or on the host? On the host I have seen nothing.

I actually did not know the reasoning behind this patch. I was just searching
for an explanation for the strange I/O stalls that I have seen.

And it was not only one guest but a few hundred, so I think I have to search
for another cause.

Thank you,
Peter


Re: [Qemu-devel] [Qemu-stable] [PULL 10/25] virtio_error: don't invoke status callbacks

2018-02-13 Thread Michael S. Tsirkin
On Tue, Feb 13, 2018 at 09:53:58PM +0100, Peter Lieven wrote:
> 
> Am 21.12.2017 um 15:29 schrieb Michael S. Tsirkin:
> > Backends don't need to know what frontend requested a reset,
> > and notifying them from virtio_error is messy because
> > virtio_error itself might be invoked from backend.
> >
> > Let's just set the status directly.
> >
> > Cc: qemu-sta...@nongnu.org
> > Reported-by: Ilya Maximets 
> > Signed-off-by: Michael S. Tsirkin 
> > ---
> >  hw/virtio/virtio.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
> > index ad564b0..d6002ee 100644
> > --- a/hw/virtio/virtio.c
> > +++ b/hw/virtio/virtio.c
> > @@ -2469,7 +2469,7 @@ void GCC_FMT_ATTR(2, 3) virtio_error(VirtIODevice *vdev, const char *fmt, ...)
> >  va_end(ap);
> >  
> >  if (virtio_vdev_has_feature(vdev, VIRTIO_F_VERSION_1)) {
> > -virtio_set_status(vdev, vdev->status | VIRTIO_CONFIG_S_NEEDS_RESET);
> > +vdev->status = vdev->status | VIRTIO_CONFIG_S_NEEDS_RESET;
> >  virtio_notify_config(vdev);
> >  }
> >  
> 
> 
> Is it possible that this patch introduces an I/O stall and a deadlock on a
> drain all?
> 
> I have seen QEMU VMs stall on I/O and deadlock in blk_drain_all on a vm stop
> command. This happened after a longer storage outage.
> 
> I am asking only theoretically, because we first saw this behaviour after we
> backported this patch to our stable 2.9 branch.
> 
> 
> Thank you,
> 
> Peter

Well, this patch was introduced to fix a crash, but a
well-behaved VM should not trigger VIRTIO_CONFIG_S_NEEDS_RESET.
Did you see any error messages in the log when this triggered?

-- 
MST



Re: [Qemu-devel] [Qemu-stable] [PULL 10/25] virtio_error: don't invoke status callbacks

2018-02-13 Thread Peter Lieven

Am 21.12.2017 um 15:29 schrieb Michael S. Tsirkin:
> Backends don't need to know what frontend requested a reset,
> and notifying them from virtio_error is messy because
> virtio_error itself might be invoked from backend.
>
> Let's just set the status directly.
>
> Cc: qemu-sta...@nongnu.org
> Reported-by: Ilya Maximets 
> Signed-off-by: Michael S. Tsirkin 
> ---
>  hw/virtio/virtio.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
> index ad564b0..d6002ee 100644
> --- a/hw/virtio/virtio.c
> +++ b/hw/virtio/virtio.c
> @@ -2469,7 +2469,7 @@ void GCC_FMT_ATTR(2, 3) virtio_error(VirtIODevice *vdev, const char *fmt, ...)
>  va_end(ap);
>  
>  if (virtio_vdev_has_feature(vdev, VIRTIO_F_VERSION_1)) {
> -virtio_set_status(vdev, vdev->status | VIRTIO_CONFIG_S_NEEDS_RESET);
> +vdev->status = vdev->status | VIRTIO_CONFIG_S_NEEDS_RESET;
>  virtio_notify_config(vdev);
>  }
>  


Is it possible that this patch introduces an I/O stall and a deadlock on a
drain all?

I have seen QEMU VMs stall on I/O and deadlock in blk_drain_all on a vm stop
command. This happened after a longer storage outage.

I am asking only theoretically, because we first saw this behaviour after we
backported this patch to our stable 2.9 branch.


Thank you,

Peter