On Mon, Aug 25, 2025 at 05:46:07AM -0000, Michael van Elst wrote:
> cme...@cmeerw.org (Christof Meerwald) writes:
>
> >So it does look like some kind of race condition, also I thought that
> >should be handled by qemu by first setting the vq->vq_used->flags to 0
> >and then checking vq->vq_used->idx again before relying on
> >notifications being sent.
>
> Maybe a memory ordering issue then ?
Yes, probably - I have added an mfence and haven't seen any issue since
then (more than 2 days now, with my sync loop running in the
background).

BTW, this is on an AMD Ryzen 9 9950X 16-Core Processor (with 2 CPUs
assigned to the VPS).

(A standalone sketch of the store-load reordering at play appears
after the patch below.)

Index: dev/pci/virtio.c
===================================================================
RCS file: /cvsroot/src/sys/dev/pci/virtio.c,v
retrieving revision 1.63.2.6
diff -u -r1.63.2.6 virtio.c
--- dev/pci/virtio.c	2 Oct 2024 18:20:48 -0000	1.63.2.6
+++ dev/pci/virtio.c	28 Aug 2025 21:42:47 -0000
@@ -38,6 +38,7 @@
 #include <sys/device.h>
 #include <sys/kmem.h>
 #include <sys/module.h>
+#include <x86/cpufunc.h>
 
 #define VIRTIO_PRIVATE
 
@@ -1244,6 +1248,7 @@
 	vq->vq_avail->idx = virtio_rw16(sc, vq->vq_avail_idx);
 	vq_sync_aring_header(sc, vq, BUS_DMASYNC_PREWRITE);
 	vq->vq_queued++;
+	x86_mfence();
 
 	if (sc->sc_active_features & VIRTIO_F_RING_EVENT_IDX) {
 		vq_sync_uring_avail(sc, vq, BUS_DMASYNC_POSTREAD);

--
https://cmeerw.org                              sip:cmeerw at cmeerw.org
mailto:cmeerw at cmeerw.org                   xmpp:cmeerw at cmeerw.org
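[Editor's sketch] The hazard the patch addresses is the one reordering
x86 permits: a later load may complete before an earlier store has
drained from the store buffer (store-load reordering). Below is a
minimal, self-contained C11 sketch of the flag-based (pre-EVENT_IDX)
notification-suppression handshake; it is not NetBSD or qemu code, the
names (avail_idx, used_flags, driver_enqueue_needs_kick,
device_reenable_and_recheck) are hypothetical, and
atomic_thread_fence(memory_order_seq_cst) stands in for the
x86_mfence() added in the patch:

    /*
     * Sketch of the virtio notification handshake. Both sides do a
     * store followed by a load of the *other* side's variable; without
     * a full fence between them, x86 may perform the load first.
     */
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define VRING_USED_F_NO_NOTIFY	1

    static _Atomic uint16_t avail_idx;	/* driver writes, device reads */
    static _Atomic uint16_t used_flags;	/* device writes, driver reads */

    /* Guest driver: publish a buffer, then decide whether to kick. */
    static bool
    driver_enqueue_needs_kick(uint16_t new_idx)
    {
    	/* 1. Publish the new available index. */
    	atomic_store_explicit(&avail_idx, new_idx, memory_order_relaxed);

    	/*
    	 * 2. Full fence (MFENCE on x86). Without it, the load in
    	 * step 3 may execute before the store in step 1 is visible
    	 * to the device.
    	 */
    	atomic_thread_fence(memory_order_seq_cst);

    	/* 3. Kick only if the device has not suppressed notifications. */
    	return (atomic_load_explicit(&used_flags, memory_order_relaxed)
    	    & VRING_USED_F_NO_NOTIFY) == 0;
    }

    /* Device (qemu) side: re-enable notifications, then re-check. */
    static bool
    device_reenable_and_recheck(uint16_t last_seen_idx)
    {
    	/* 1. Clear the suppression flag so future kicks get through. */
    	atomic_store_explicit(&used_flags, 0, memory_order_relaxed);

    	/* 2. Same fence for the symmetric store-load pair. */
    	atomic_thread_fence(memory_order_seq_cst);

    	/* 3. Catch buffers queued while notifications were suppressed. */
    	return atomic_load_explicit(&avail_idx, memory_order_relaxed)
    	    != last_seen_idx;
    }

    int
    main(void)
    {
    	/* Single-threaded smoke test; the race needs two CPUs. */
    	bool kick = driver_enqueue_needs_kick(1);
    	bool more = device_reenable_and_recheck(0);
    	return (kick && more) ? 0 : 1;
    }

If both fences are missing, both loads can return stale values at
once: the driver still sees VRING_USED_F_NO_NOTIFY and skips the kick,
while the device sees the old avail_idx and goes idle, so the buffer
is stranded until something else touches the queue. That lost-wakeup
pattern matches the hang described in the thread, and is the same
reason Dekker-style mutual exclusion needs a full fence on x86.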