On Thu, Mar 05, 2026 at 05:29:27PM +0800, Chaohai Chen wrote:
> KCSAN detected multiple data races when accessing the split virtqueue's
> used ring, which is shared memory concurrently accessed by both the CPU
> and the virtio device (hypervisor).
>
> The races occur when reading the following fields without proper atomic
> operations:
> - vring.used->idx
> - vring.used->flags
> - vring.used->ring[].id
> - vring.used->ring[].len
>
> These fields reside in DMA-shared memory and can be modified by the
> virtio device at any time. Without READ_ONCE(), the compiler may perform
> unsafe optimizations such as value caching or load tearing.
.... but does not.

> Example KCSAN report:
>
> [  109.277250] ==================================================================
> [  109.283600] BUG: KCSAN: data-race in virtqueue_enable_cb_delayed_split+0x10f/0x170
>
> [  109.295263] race at unknown origin, with read to 0xffff8b2a92ef2042 of 2 bytes by interrupt on cpu 1:
> [  109.306934] virtqueue_enable_cb_delayed_split+0x10f/0x170
> [  109.312880] virtqueue_enable_cb_delayed+0x3b/0x70
> [  109.318852] start_xmit+0x315/0x860 [virtio_net]
> [  109.324532] dev_hard_start_xmit+0x85/0x380
> [  109.329993] sch_direct_xmit+0xd3/0x680
> [  109.335360] __dev_xmit_skb+0x4ee/0xcc0
> [  109.340568] __dev_queue_xmit+0x560/0xe00
> [  109.345701] ip_finish_output2+0x49a/0x9b0
> [  109.350743] __ip_finish_output+0x131/0x250
> [  109.355789] ip_finish_output+0x28/0x180
> [  109.360712] ip_output+0xa0/0x1c0
> [  109.365479] __ip_queue_xmit+0x68d/0x9e0
> [  109.370156] ip_queue_xmit+0x33/0x40
> [  109.374783] __tcp_transmit_skb+0x1703/0x1970
> [  109.379467] __tcp_send_ack.part.0+0x1bb/0x320
> ...
> [  109.499585] do_idle+0x7a/0xe0
> [  109.502979] cpu_startup_entry+0x25/0x30
> [  109.506481] start_secondary+0x116/0x150
> [  109.509930] common_startup_64+0x13e/0x141
>
> [  109.516626] value changed: 0x0029 -> 0x002a
>
> Fix these races by wrapping all reads from the used ring with READ_ONCE()
> to ensure:
> 1. The compiler always loads values from memory (no caching)
> 2. Loads are atomic (no load tearing)
> 3. The concurrent access intent is documented for KCSAN and developers
>
> The changes affect the following functions:
> - virtqueue_kick_prepare_split(): used->flags and avail event
> - virtqueue_get_buf_ctx_split(): used->ring[].id and used->ring[].len
> - virtqueue_get_buf_ctx_split_in_order(): used->ring[].id and
>   used->ring[].len
> - virtqueue_enable_cb_delayed_split(): used->idx

These are not races, these are KCSAN false positives. I am not against
documenting things using these macros, but I am against confusing commit
log messages.
> Signed-off-by: Chaohai Chen <[email protected]>
> ---
>  drivers/virtio/virtio_ring.c | 17 +++++++++--------
>  1 file changed, 9 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> index 335692d41617..a792a3f05837 100644
> --- a/drivers/virtio/virtio_ring.c
> +++ b/drivers/virtio/virtio_ring.c
> @@ -810,10 +810,10 @@ static bool virtqueue_kick_prepare_split(struct vring_virtqueue *vq)
>
>  	if (vq->event) {
>  		needs_kick = vring_need_event(virtio16_to_cpu(vq->vq.vdev,
> -					vring_avail_event(&vq->split.vring)),
> +					READ_ONCE(vring_avail_event(&vq->split.vring))),
>  					      new, old);
>  	} else {
> -		needs_kick = !(vq->split.vring.used->flags &
> +		needs_kick = !(READ_ONCE(vq->split.vring.used->flags) &
>  					cpu_to_virtio16(vq->vq.vdev,
>  						VRING_USED_F_NO_NOTIFY));
>  	}
> @@ -940,9 +940,9 @@ static void *virtqueue_get_buf_ctx_split(struct vring_virtqueue *vq,
>
>  	last_used = (vq->last_used_idx & (vq->split.vring.num - 1));
>  	i = virtio32_to_cpu(vq->vq.vdev,
> -			vq->split.vring.used->ring[last_used].id);
> +			READ_ONCE(vq->split.vring.used->ring[last_used].id));
>  	*len = virtio32_to_cpu(vq->vq.vdev,
> -			vq->split.vring.used->ring[last_used].len);
> +			READ_ONCE(vq->split.vring.used->ring[last_used].len));
>
>  	if (unlikely(i >= vq->split.vring.num)) {
>  		BAD_RING(vq, "id %u out of range\n", i);
> @@ -1004,9 +1004,9 @@ static void *virtqueue_get_buf_ctx_split_in_order(struct vring_virtqueue *vq,
>  		virtio_rmb(vq->weak_barriers);
>
>  		vq->batch_last.id = virtio32_to_cpu(vq->vq.vdev,
> -				vq->split.vring.used->ring[last_used_idx].id);
> +				READ_ONCE(vq->split.vring.used->ring[last_used_idx].id));
>  		vq->batch_last.len = virtio32_to_cpu(vq->vq.vdev,
> -				vq->split.vring.used->ring[last_used_idx].len);
> +				READ_ONCE(vq->split.vring.used->ring[last_used_idx].len));
>  	}
>
>  	if (vq->batch_last.id == last_used) {
> @@ -1112,8 +1112,9 @@ static bool virtqueue_enable_cb_delayed_split(struct vring_virtqueue *vq)
>  			&vring_used_event(&vq->split.vring),
>  			cpu_to_virtio16(vq->vq.vdev, vq->last_used_idx + bufs));
>
> -	if (unlikely((u16)(virtio16_to_cpu(vq->vq.vdev, vq->split.vring.used->idx)
> -			- vq->last_used_idx) > bufs)) {
> +	if (unlikely((u16)(virtio16_to_cpu(vq->vq.vdev,
> +			READ_ONCE(vq->split.vring.used->idx))
> +			- vq->last_used_idx) > bufs)) {
>  		END_USE(vq);
>  		return false;

I also want to know what this does to performance, or at least code size.

>  	}
> --
> 2.43.7

