We used to limit the max pending DMAs to prevent the guest from pinning too
many pages. But this limit can be removed since:

- We have the sk_wmem_alloc check in both tun/macvtap that does the same job
- This max pending check was almost useless since it was only done when there
  were no new buffers coming from the guest. The guest could easily exceed the
  limit.
- We already check upend_idx != done_idx and switch to non-zerocopy in that
  case. So even if all vq->heads were used, we could still transmit packets.

So remove this check completely.

Signed-off-by: Jason Wang <[email protected]>
---
 drivers/vhost/net.c |   13 -------------
 1 files changed, 0 insertions(+), 13 deletions(-)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index a035a89..ed3f165 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -38,8 +38,6 @@ MODULE_PARM_DESC(experimental_zcopytx, "Enable Zero Copy TX;"
  * Using this limit prevents one virtqueue from starving others. */
 #define VHOST_NET_WEIGHT 0x80000
 
-/* MAX number of TX used buffers for outstanding zerocopy */
-#define VHOST_MAX_PEND 128
 #define VHOST_GOODCOPY_LEN 256
 
 /*
@@ -372,17 +370,6 @@ static void handle_tx(struct vhost_net *net)
                        break;
                /* Nothing new?  Wait for eventfd to tell us they refilled. */
                if (head == vq->num) {
-                       int num_pends;
-
-                       /* If more outstanding DMAs, queue the work.
-                        * Handle upend_idx wrap around
-                        */
-                       num_pends = likely(nvq->upend_idx >= nvq->done_idx) ?
-                                   (nvq->upend_idx - nvq->done_idx) :
-                                   (nvq->upend_idx + UIO_MAXIOV -
-                                    nvq->done_idx);
-                       if (unlikely(num_pends > VHOST_MAX_PEND))
-                               break;
                        if (unlikely(vhost_enable_notify(&net->dev, vq))) {
                                vhost_disable_notify(&net->dev, vq);
                                continue;
-- 
1.7.1

_______________________________________________
Virtualization mailing list
[email protected]
https://lists.linuxfoundation.org/mailman/listinfo/virtualization