When vq->broken is true, virtqueue_get_buf_ctx_packed or
virtqueue_get_buf_ctx_split returns NULL, yet virtqueue_poll still
returns true, so virtnet_poll keeps rescheduling napi to receive
packets. This loop drives softirq cpu usage (si) to 100%.
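
For reference, the split ring path bails out early once the vq is
marked broken, roughly like this (simplified sketch of
drivers/virtio/virtio_ring.c; details may vary by kernel version):

static void *virtqueue_get_buf_ctx_split(struct virtqueue *_vq,
					 unsigned int *len, void **ctx)
{
	struct vring_virtqueue *vq = to_vvq(_vq);

	/* a broken vq never returns a buffer again */
	if (unlikely(vq->broken))
		return NULL;
	...
}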

The call trace is as below:
virtnet_poll
        virtnet_receive
                virtqueue_get_buf_ctx
                        virtqueue_get_buf_ctx_packed
                        virtqueue_get_buf_ctx_split
        virtqueue_napi_complete
                virtqueue_poll          // returns true
                virtqueue_napi_schedule // reschedules napi
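
The reschedule comes from virtqueue_napi_complete() in
drivers/net/virtio_net.c, roughly like this (simplified sketch; the
exact code may differ by kernel version):

static void virtqueue_napi_complete(struct napi_struct *napi,
				    struct virtqueue *vq, int processed)
{
	int opaque;

	opaque = virtqueue_enable_cb_prepare(vq);
	if (napi_complete_done(napi, processed)) {
		/* with a broken vq this kept returning true */
		if (unlikely(virtqueue_poll(vq, opaque)))
			virtqueue_napi_schedule(napi, vq);
	} else {
		virtqueue_disable_cb(vq);
	}
}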

To fix this, return false if vq is broken in virtqueue_poll.

Signed-off-by: Mao Wenan <wenan....@linux.alibaba.com>
---
 v1->v2: fix it in virtqueue_poll as suggested by Michael S. Tsirkin <m...@redhat.com>
 drivers/virtio/virtio_ring.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 58b96ba..4f7c73e 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -1960,6 +1960,9 @@ bool virtqueue_poll(struct virtqueue *_vq, unsigned last_used_idx)
 {
        struct vring_virtqueue *vq = to_vvq(_vq);
 
+       if (unlikely(vq->broken))
+               return false;
+
        virtio_mb(vq->weak_barriers);
        return vq->packed_ring ? virtqueue_poll_packed(_vq, last_used_idx) :
                                 virtqueue_poll_split(_vq, last_used_idx);
-- 
1.8.3.1
