On 24 February 2014 16:20, Stefan Hajnoczi <stefa...@gmail.com> wrote:

> Do you want the 1:1 mapping to achieve best performance or just to
> simplify the coding?
>

We want to keep the real-time constraints on the data plane comfortable.

The question I ask myself is: How long can I buffer packets during
processing before something is dropped?

With minimum-size packets, 256 buffers can be consumed in about 17
microseconds on a 10G interface. That's uncomfortably tight for me. I would
like every buffering stage in the data path to be dimensioned for at least
100us of traffic - ideally more like 1ms. That gives us more flexibility for
scheduling work, handling configuration changes, etc. So I'd love for the
guest to know that it should keep us fed with e.g. 32768 buffers at all times.
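
To make the arithmetic explicit, here's a back-of-the-envelope sketch in C
(assuming the worst case of minimum-size Ethernet frames: 64 bytes of frame
plus 20 bytes of preamble/IFG, i.e. 84 bytes on the wire):

  #include <stdio.h>

  /* Rough sketch: how long do N posted buffers last at 10G line rate,
   * assuming one minimum-size packet per buffer? */
  int main(void)
  {
      const double link_bps  = 10e9;     /* 10G link */
      const double wire_bits = 84 * 8;   /* bits per min-size packet on the wire */
      const double pps       = link_bps / wire_bits; /* ~14.88 Mpps */
      const int sizes[] = { 256, 4096, 32768 };

      for (int i = 0; i < 3; i++)
          printf("%5d buffers last ~%.0f us at line rate\n",
                 sizes[i], sizes[i] / pps * 1e6);
      return 0;
  }

So 256 buffers is roughly 17us of headroom, while something like 32768
buffers would give us around 2.2ms.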

Our data plane is batch-oriented and deals with "breaths" of 100+ packets
at a time. So we're a bit more hungry for buffers than a data plane that's
optimized for minimum latency instead.
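
To illustrate what I mean by a "breath" - this is only a sketch, and
receive_packet()/process_batch()/replenish_buffers() are placeholder names
rather than our real API:

  /* Sketch of one batch-oriented "breath". Placeholder declarations;
   * not our actual API. */
  #define BATCH_MAX 256

  struct packet;
  struct packet *receive_packet(void);
  void process_batch(struct packet **batch, int n);
  void replenish_buffers(int n);

  void breathe(void)
  {
      struct packet *batch[BATCH_MAX];
      int n = 0;

      /* Inhale: collect everything that is ready, up to the batch limit. */
      while (n < BATCH_MAX && (batch[n] = receive_packet()) != NULL)
          n++;

      /* Process the whole batch; while this runs, the posted buffers are
       * all that absorbs newly arriving packets. */
      process_batch(batch, n);

      /* Exhale: post fresh buffers so the next breath has room to land. */
      replenish_buffers(n);
  }

The more buffers we have posted, the longer each breath can take without
anything being dropped.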

What do you think? Can I reliably get the buffers I want with
VIRTIO_RING_F_INDIRECT_DESC, or should I increase the vring size?
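
For context, here's my mental model of the indirect descriptor mechanism, as
a sketch based on my reading of the spec and linux/virtio_ring.h (the
fill_indirect() helper is purely illustrative, and the addresses would really
be guest-physical):

  #include <linux/virtio_ring.h>   /* struct vring_desc, VRING_DESC_F_* */
  #include <stdint.h>

  /* Point one main-ring slot at a table of 'count' descriptors, so the
   * buffers described through that slot are limited by the table size
   * rather than by the vring size. Illustrative only. */
  void fill_indirect(struct vring_desc *ring_slot,
                     struct vring_desc *table,
                     const uint64_t *buf_addrs, uint32_t buf_len, int count)
  {
      for (int i = 0; i < count; i++) {
          table[i].addr  = buf_addrs[i];
          table[i].len   = buf_len;
          table[i].flags = VRING_DESC_F_WRITE;       /* device writes (rx) */
          if (i + 1 < count) {
              table[i].flags |= VRING_DESC_F_NEXT;
              table[i].next   = i + 1;
          }
      }
      ring_slot->addr  = (uint64_t)(uintptr_t)table; /* guest-physical in reality */
      ring_slot->len   = count * sizeof(struct vring_desc);
      ring_slot->flags = VRING_DESC_F_INDIRECT;
  }

i.e. one ring slot can describe a chain of several buffer segments, which is
what makes me wonder whether that helps with the total buffer count or only
with scatter-gather within a packet.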

> Since vhost_net does many Gbit/s I doubt the ring size is a limiting
> factor although there are still periodic discussions about tweaking the
> direct vs indirect descriptor heuristic.
>

FWIW the workloads I'm focused on are high rates of small packets, as seen
by switch/router/firewall/etc. devices. I've found that it's possible to
struggle with these workloads even when getting solid performance on e.g.
TCP bulk transfer with TSO. So I'm prepared for the possibility that what
works well for others may not work well for our application.
