The proposal to work around the problem by using multiple vhost-user
queues per port is not a general solution, because it has two
prerequisites that are not always fulfilled (see the configuration
sketch after the list):

1. The guest application needs to be able to use multiple queues and spread its 
Tx traffic across them.
2. The OpenStack environment must support configuration of vhost multi-queue.
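
For illustration, enabling vhost-user multi-queue touches both the
hypervisor and the guest. A minimal sketch, assuming a vhost-user
socket at /tmp/vhost-user0, 4 queue pairs, and a guest interface named
eth0 (all of these are placeholder values, not taken from this bug):

  # QEMU side: request 4 queue pairs on the vhost-user netdev;
  # the device needs 2*queues + 2 MSI-X vectors.
  qemu-system-x86_64 ... \
      -chardev socket,id=char0,path=/tmp/vhost-user0 \
      -netdev type=vhost-user,id=net0,chardev=char0,queues=4 \
      -device virtio-net-pci,netdev=net0,mq=on,vectors=10

  # Guest side: the extra queue pairs must be enabled explicitly,
  # and the application must spread its Tx traffic across them.
  ethtool -L eth0 combined 4

In an OpenStack deployment both settings have to be driven through the
platform, which is exactly prerequisite 2 above.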

The work-around of increasing the queue length to 1024, in contrast,
is completely transparent to applications and reduces the likelihood
of packet drops for all kinds of sub-millisecond load fluctuations,
whatever their cause.
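
From inside the guest, the ring sizes actually in effect can be
checked with ethtool on reasonably recent guest kernels (the interface
name eth0 is again a placeholder):

  # Prints current and maximum RX/TX ring parameters of the
  # virtio-net device.
  ethtool -g eth0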

In general, we believe the virtio queue size should be dimensioned
roughly in line with the typical queue sizes of physical interfaces
(on the order of 1K packets), so that the virtio queues do not become
the weakest link in the end-to-end data path.

To this end, we support the idea of making the virtio-net queue size
configurable in both directions (Rx and Tx) in upstream QEMU.
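
Newer upstream QEMU releases expose this as device properties;
availability depends on the QEMU version (neither property exists in
qemu 2.5). A sketch of what the workaround then looks like on the
QEMU command line:

  # Both properties take power-of-two values between 256 and 1024.
  -device virtio-net-pci,netdev=net0,rx_queue_size=1024,tx_queue_size=1024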
