Hi,

I still find the way DPDK-related topics are distributed over the various 
documentation files rather confusing:
./Documentation/intro/install/dpdk.rst
./Documentation/howto/dpdk.rst
./Documentation/topics/dpdk/index.rst
./Documentation/topics/dpdk/vhost-user.rst
./Documentation/topics/dpdk/ring.rst

Why does information like this go into intro/install/dpdk.rst rather than 
howto/dpdk.rst?
But cleaning this up is probably another exercise....
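One more remark on the latency estimates in the text: the three load regimes can be sanity-checked with a small script. This is purely illustrative arithmetic, not a measurement; the 32-packet maximum batch size and the formulas are taken straight from the patch text below:

```python
BATCH_SIZE = 32  # maximum packets per send batch, per the patch text


def avg_latency_increase_us(packet_rate_pps, tx_flush_interval_us):
    """Approximate average added latency in microseconds for a given
    packet rate and tx-flush-interval, per the three regimes described
    in the patch."""
    interval_s = tx_flush_interval_us / 1e6
    if packet_rate_pps < 1 / interval_s:
        # Low load: each packet is flushed before the next arrives,
        # so there is no significant added latency.
        return 0.0
    elif packet_rate_pps < BATCH_SIZE / interval_s:
        # Intermediate load: packets wait on average half the interval.
        return tx_flush_interval_us / 2
    else:
        # High load: batches fill up to 32 packets before the timer
        # expires; average wait is 32 / (2 * packet rate).
        return BATCH_SIZE / (2 * packet_rate_pps) * 1e6


# With tx-flush-interval = 50 us:
print(avg_latency_increase_us(10_000, 50))      # low load -> 0.0
print(avg_latency_increase_us(100_000, 50))     # intermediate -> 25.0
print(avg_latency_increase_us(10_000_000, 50))  # high load -> 1.6
```

So at 10 Mpps the worst case adds only ~1.6 us on average, which matches the claim that very high traffic sees a latency increase of ``32 / (2 * packet rate)``.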

Acked-by: Jan Scheurich <[email protected]>

BR, Jan

> -----Original Message-----
> From: Ilya Maximets [mailto:[email protected]]
> Sent: Friday, 12 January, 2018 12:17
> To: [email protected]
> Cc: Heetae Ahn <[email protected]>; Bhanuprakash Bodireddy 
> <[email protected]>; Antonio Fischetti
> <[email protected]>; Eelco Chaudron <[email protected]>; Ciara 
> Loftus <[email protected]>; Kevin Traynor
> <[email protected]>; Jan Scheurich <[email protected]>; Billy 
> O'Mahony <[email protected]>; Ian Stokes
> <[email protected]>; Ilya Maximets <[email protected]>
> Subject: [PATCH v10 4/5] docs: Describe output packet batching in DPDK guide.
> 
> Added information about output packet batching and a way to
> configure 'tx-flush-interval'.
> 
> Signed-off-by: Ilya Maximets <[email protected]>
> Co-authored-by: Jan Scheurich <[email protected]>
> Signed-off-by: Jan Scheurich <[email protected]>
> ---
>  Documentation/intro/install/dpdk.rst | 58 ++++++++++++++++++++++++++++++++++++
>  1 file changed, 58 insertions(+)
> 
> diff --git a/Documentation/intro/install/dpdk.rst b/Documentation/intro/install/dpdk.rst
> index 3fecb5c..040e62e 100644
> --- a/Documentation/intro/install/dpdk.rst
> +++ b/Documentation/intro/install/dpdk.rst
> @@ -568,6 +568,64 @@ not needed i.e. jumbo frames are not needed, it can be forced off by adding
>  chains of descriptors it will make more individual virtio descriptors available
>  for rx to the guest using dpdkvhost ports and this can improve performance.
> 
> +Output Packet Batching
> +~~~~~~~~~~~~~~~~~~~~~~
> +
> +To take advantage of batched transmit functions, OVS collects packets in
> +intermediate queues before sending when processing a batch of received packets.
> +Even if packets are matched by different flows, OVS uses a single send
> +operation for all packets destined to the same output port.
> +
> +Furthermore, OVS is able to buffer packets in these intermediate queues for a
> +configurable amount of time to reduce the frequency of send bursts at medium
> +load levels when the packet receive rate is high, but the receive batch size is
> +still very small. This is particularly beneficial for packets transmitted to
> +VMs using an interrupt-driven virtio driver, where the interrupt overhead is
> +significant for the OVS PMD, the host operating system and the guest driver.
> +
> +The ``tx-flush-interval`` parameter can be used to specify the time in
> +microseconds OVS should wait between two send bursts to a given port (default
> +is ``0``). When the intermediate queue fills up before that time is over, the
> +buffered packet batch is sent immediately::
> +
> +    $ ovs-vsctl set Open_vSwitch . other_config:tx-flush-interval=50
> +
> +This parameter influences both throughput and latency, depending on the traffic
> +load on the port. In general, lower values decrease latency while higher values
> +may be useful to achieve higher throughput.
> +
> +Low traffic (``packet rate < 1 / tx-flush-interval``) should not experience
> +any significant latency or throughput increase as packets are forwarded
> +immediately.
> +
> +At intermediate load levels
> +(``1 / tx-flush-interval < packet rate < 32 / tx-flush-interval``) traffic
> +should experience an average latency increase of up to
> +``1 / 2 * tx-flush-interval`` and a possible throughput improvement.
> +
> +Very high traffic (``packet rate >> 32 / tx-flush-interval``) should experience
> +an average latency increase of ``32 / (2 * packet rate)``. Most send batches in
> +this case will contain the maximum number of packets (``32``).
> +
> +A ``tx-flush-interval`` value of ``50`` microseconds has been shown to provide
> +a good performance increase in a ``PHY-VM-PHY`` scenario on an ``x86`` system
> +for interrupt-driven guests while keeping the latency increase at a reasonable
> +level:
> +
> +  https://mail.openvswitch.org/pipermail/ovs-dev/2017-December/341628.html
> +
> +.. note::
> +  The throughput impact of this option depends significantly on the scenario
> +  and the traffic patterns. For example, a ``tx-flush-interval`` value of
> +  ``50`` microseconds showed performance degradation in a ``PHY-VM-PHY``
> +  scenario with bonded PHY while testing with ``256 - 1024`` packet flows:
> +
> +    https://mail.openvswitch.org/pipermail/ovs-dev/2017-December/341700.html
> +
> +The average number of packets per output batch can be checked in PMD stats::
> +
> +    $ ovs-appctl dpif-netdev/pmd-stats-show
> +
>  Limitations
>  ------------
> 
> --
> 2.7.4

_______________________________________________
dev mailing list
[email protected]
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
