Alexey Lapitsky <lex.pub...@gmail.com> writes:
> Hi,
>
> I'm hitting this bug with both ext4 and btrfs.
>
> Here's an example of the backtrace:
> https://gist.github.com/vzctl/e888a821333979120932
>
> I tried raising this BUG only for direct ring and it solved the problem:
>
>  -       BUG_ON(total_sg > vq->vring.num);
> +       BUG_ON(total_sg > vq->vring.num && !vq->indirect);
>
> Shall I submit the patch or is a more elaborate fix required?

This is wrong.  It looks like the block layer is sending down more
sg elements than we have ring entries, even though we tell it not to:

        blk_queue_max_segments(q, vblk->sg_elems-2);
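
For reference, the reason for the "-2" is that a direct (non-indirect)
virtio-blk request occupies one ring descriptor for the request header,
one per data segment, and one for the status byte.  A rough user-space
sketch of that accounting (my illustration only, not kernel code; the
128-entry ring size is taken from the log below):

#include <stdio.h>

/*
 * Illustrative only: a direct virtio-blk request needs one descriptor
 * for the header, one per data segment, and one for the status byte,
 * so the data segments must fit in (ring entries - 2), which is what
 * blk_queue_max_segments() above announces to the block layer.
 */
static unsigned int max_direct_segments(unsigned int vring_entries)
{
	return vring_entries - 2;	/* reserve header + status */
}

int main(void)
{
	unsigned int ring = 128;	/* virtqueue size from the log below */

	printf("ring entries = %u, max data segments = %u\n",
	       ring, max_direct_segments(ring));
	return 0;
}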

If you apply this patch, what happens?  Here it prints:

[    0.616564] virtqueue elements = 128, max_segments = 126 (1 queues)
[    0.621244]  vda: vda1 vda2 < vda5 >
[    0.632290] virtqueue elements = 128, max_segments = 126 (1 queues)
[    0.683526]  vdb: vdb1 vdb2 < vdb5 >

Cheers,
Rusty.

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 0a581400de0f..aa9d4d313587 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -683,6 +683,13 @@ static int virtblk_probe(struct virtio_device *vdev)
        /* We can handle whatever the host told us to handle. */
        blk_queue_max_segments(q, vblk->sg_elems-2);
 
+       printk("virtqueue elements = %u, max_segments = %u (%u queues)\n",
+              virtqueue_get_vring_size(vblk->vqs[0].vq),
+              vblk->sg_elems-2,
+              vblk->num_vqs);
+
+       BUG_ON(vblk->sg_elems-2 > virtqueue_get_vring_size(vblk->vqs[0].vq));
+
        /* No need to bounce any requests */
        blk_queue_bounce_limit(q, BLK_BOUNCE_ANY);
 

