On Wed, Mar 30, 2022 at 2:33 PM Si-Wei Liu <si-wei....@oracle.com> wrote:
>
> The previous commit prevents vhost-user and vhost-vdpa from using the
> userland vq handler via disable_ioeventfd_handler. The same needs to
> be done for host notifier cleanup too, as the
> virtio_queue_host_notifier_read handler still tends to read pending
> events left behind on the ioeventfd and attempts to handle
> outstanding kicks from the QEMU userland vq.
>
> If the vq handler is not disabled on cleanup, the leftover kick may
> lead to a crash, with an assertion failure in a recursive
> virtio_net_set_status call on the control vq:
>
> 0  0x00007f8ce3ff3387 in raise () at /lib64/libc.so.6
> 1  0x00007f8ce3ff4a78 in abort () at /lib64/libc.so.6
> 2  0x00007f8ce3fec1a6 in __assert_fail_base () at /lib64/libc.so.6
> 3  0x00007f8ce3fec252 in  () at /lib64/libc.so.6
> 4  0x0000558f52d79421 in vhost_vdpa_get_vq_index (dev=<optimized out>, 
> idx=<optimized out>) at ../hw/virtio/vhost-vdpa.c:563
> 5  0x0000558f52d79421 in vhost_vdpa_get_vq_index (dev=<optimized out>, 
> idx=<optimized out>) at ../hw/virtio/vhost-vdpa.c:558
> 6  0x0000558f52d7329a in vhost_virtqueue_mask (hdev=0x558f55c01800, 
> vdev=0x558f568f91f0, n=2, mask=<optimized out>) at ../hw/virtio/vhost.c:1557

I feel it's probably a bug elsewhere: e.g. when we fail to start
vhost-vDPA, it is QEMU's responsibility to poll the host notifier, and
we will fall back to the userspace vq handler.
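
As a minimal standalone sketch of that fallback (illustrative only, not
QEMU code: an eventfd stands in for the host notifier, and
userspace_vq_handler() for the userland vq handler):

#include <poll.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/eventfd.h>

static void userspace_vq_handler(void)
{
    /* In QEMU this would be virtio_queue_host_notifier_read()
     * dispatching to the device's userland vq handler. */
    printf("kick handled by the userspace vq handler\n");
}

int main(void)
{
    int kick_fd = eventfd(0, EFD_NONBLOCK); /* models the host notifier */
    int vhost_started = 0;                  /* pretend vhost-vDPA failed to start */
    uint64_t cnt;

    eventfd_write(kick_fd, 1);              /* a guest kick arrives */

    if (!vhost_started) {
        /* The backend never took over the notifier, so QEMU keeps
         * polling it and services the kick in userspace. */
        struct pollfd pfd = { .fd = kick_fd, .events = POLLIN };
        if (poll(&pfd, 1, 0) > 0 && read(kick_fd, &cnt, sizeof(cnt)) > 0) {
            userspace_vq_handler();
        }
    }
    close(kick_fd);
    return 0;
}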

Thanks

> 7  0x0000558f52c6b89a in virtio_pci_set_guest_notifier 
> (d=d@entry=0x558f568f0f60, n=n@entry=2, assign=assign@entry=true, 
> with_irqfd=with_irqfd@entry=false)
>    at ../hw/virtio/virtio-pci.c:974
> 8  0x0000558f52c6c0d8 in virtio_pci_set_guest_notifiers (d=0x558f568f0f60, 
> nvqs=3, assign=true) at ../hw/virtio/virtio-pci.c:1019
> 9  0x0000558f52bf091d in vhost_net_start (dev=dev@entry=0x558f568f91f0, 
> ncs=0x558f56937cd0, data_queue_pairs=data_queue_pairs@entry=1, 
> cvq=cvq@entry=1)
>    at ../hw/net/vhost_net.c:361
> 10 0x0000558f52d4e5e7 in virtio_net_set_status (status=<optimized out>, 
> n=0x558f568f91f0) at ../hw/net/virtio-net.c:289
> 11 0x0000558f52d4e5e7 in virtio_net_set_status (vdev=0x558f568f91f0, 
> status=15 '\017') at ../hw/net/virtio-net.c:370
> 12 0x0000558f52d6c4b2 in virtio_set_status (vdev=vdev@entry=0x558f568f91f0, 
> val=val@entry=15 '\017') at ../hw/virtio/virtio.c:1945
> 13 0x0000558f52c69eff in virtio_pci_common_write (opaque=0x558f568f0f60, 
> addr=<optimized out>, val=<optimized out>, size=<optimized out>) at 
> ../hw/virtio/virtio-pci.c:1292
> 14 0x0000558f52d15d6e in memory_region_write_accessor (mr=0x558f568f19d0, 
> addr=20, value=<optimized out>, size=1, shift=<optimized out>, 
> mask=<optimized out>, attrs=...)
>    at ../softmmu/memory.c:492
> 15 0x0000558f52d127de in access_with_adjusted_size (addr=addr@entry=20, 
> value=value@entry=0x7f8cdbffe748, size=size@entry=1, 
> access_size_min=<optimized out>, access_size_max=<optimized out>, 
> access_fn=0x558f52d15cf0 <memory_region_write_accessor>, mr=0x558f568f19d0, 
> attrs=...) at ../softmmu/memory.c:554
> 16 0x0000558f52d157ef in memory_region_dispatch_write 
> (mr=mr@entry=0x558f568f19d0, addr=20, data=<optimized out>, op=<optimized 
> out>, attrs=attrs@entry=...)
>    at ../softmmu/memory.c:1504
> 17 0x0000558f52d078e7 in flatview_write_continue (fv=fv@entry=0x7f8accbc3b90, 
> addr=addr@entry=103079215124, attrs=..., ptr=ptr@entry=0x7f8ce6300028, 
> len=len@entry=1, addr1=<optimized out>, l=<optimized out>, mr=0x558f568f19d0) 
> at ../../../include/qemu/host-utils.h:165
> 18 0x0000558f52d07b06 in flatview_write (fv=0x7f8accbc3b90, 
> addr=103079215124, attrs=..., buf=0x7f8ce6300028, len=1) at 
> ../softmmu/physmem.c:2822
> 19 0x0000558f52d0b36b in address_space_write (as=<optimized out>, 
> addr=<optimized out>, attrs=..., buf=buf@entry=0x7f8ce6300028, len=<optimized 
> out>)
>    at ../softmmu/physmem.c:2914
> 20 0x0000558f52d0b3da in address_space_rw (as=<optimized out>, 
> addr=<optimized out>, attrs=...,
>    attrs@entry=..., buf=buf@entry=0x7f8ce6300028, len=<optimized out>, 
> is_write=<optimized out>) at ../softmmu/physmem.c:2924
> 21 0x0000558f52dced09 in kvm_cpu_exec (cpu=cpu@entry=0x558f55c2da60) at 
> ../accel/kvm/kvm-all.c:2903
> 22 0x0000558f52dcfabd in kvm_vcpu_thread_fn (arg=arg@entry=0x558f55c2da60) at 
> ../accel/kvm/kvm-accel-ops.c:49
> 23 0x0000558f52f9f04a in qemu_thread_start (args=<optimized out>) at 
> ../util/qemu-thread-posix.c:556
> 24 0x00007f8ce4392ea5 in start_thread () at /lib64/libpthread.so.0
> 25 0x00007f8ce40bb9fd in clone () at /lib64/libc.so.6
>
> Fixes: 4023784 ("vhost-vdpa: multiqueue support")
> Cc: Jason Wang <jasow...@redhat.com>
> Signed-off-by: Si-Wei Liu <si-wei....@oracle.com>
> ---
>  hw/virtio/virtio-bus.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/hw/virtio/virtio-bus.c b/hw/virtio/virtio-bus.c
> index 0f69d1c..3159b58 100644
> --- a/hw/virtio/virtio-bus.c
> +++ b/hw/virtio/virtio-bus.c
> @@ -311,7 +311,9 @@ void virtio_bus_cleanup_host_notifier(VirtioBusState *bus, int n)
>      /* Test and clear notifier after disabling event,
>       * in case poll callback didn't have time to run.
>       */
> -    virtio_queue_host_notifier_read(notifier);
> +    if (!vdev->disable_ioeventfd_handler) {
> +        virtio_queue_host_notifier_read(notifier);
> +    }
>      event_notifier_cleanup(notifier);
>  }
>
> --
> 1.8.3.1
>
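
For reference, the failure mode the guard above avoids can be modeled
with a few lines of standalone code (again illustrative, not QEMU code:
handle_kick() stands in for the userland control-vq handler that ends
up re-entering virtio_net_set_status()):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/eventfd.h>

static void handle_kick(void)
{
    /* In the crash above this is the userland control-vq handler,
     * which ends up calling virtio_net_set_status() recursively. */
    printf("leftover kick dispatched to the userland vq handler\n");
}

int main(void)
{
    bool disable_ioeventfd_handler = true; /* set by the previous commit */
    int notifier = eventfd(0, EFD_NONBLOCK); /* models the host notifier */
    uint64_t cnt;

    eventfd_write(notifier, 1); /* a kick was left pending... */

    /* ...and cleanup test-and-clears the notifier.  Without the guard
     * the leftover event is read and handled right here, mid-cleanup;
     * with it, the stale event is simply dropped along with the fd. */
    if (!disable_ioeventfd_handler && read(notifier, &cnt, sizeof(cnt)) > 0) {
        handle_kick();
    }
    close(notifier);
    return 0;
}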

