Signed-off-by: Bruce Richardson <bruce.richard...@intel.com>
---
 doc/guides/faq/faq.rst | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/doc/guides/faq/faq.rst b/doc/guides/faq/faq.rst
index f21aff4c5..dac805099 100644
--- a/doc/guides/faq/faq.rst
+++ b/doc/guides/faq/faq.rst
@@ -115,16 +115,16 @@ but the end-to-end latency of an average packet typically increases as a result.
 Similarly, the application can be tuned to have, on average, a low end-to-end latency at the cost of lower throughput.
 
 To achieve higher throughput, the DPDK attempts to aggregate the cost of processing each packet individually by processing packets in bursts.
-Using the testpmd application as an example, the "burst" size can be set on the command line to a value of 16 (also the default value).
-This allows the application to request 16 packets at a time from the PMD.
-The testpmd application then immediately attempts to transmit all the packets that were received, in this case, all 16 packets.
+Using the testpmd application as an example, the "burst" size can be set on the command line to a value of 32 (also the default value).
+This allows the application to request 32 packets at a time from the PMD.
+The testpmd application then immediately attempts to transmit all the packets that were received, in this case, all 32 packets.
 The packets are not transmitted until the tail pointer is updated on the corresponding TX queue of the network port.
 This behavior is desirable when tuning for high throughput because the cost of tail pointer updates to both the RX and TX queues
-can be spread across 16 packets, effectively hiding the relatively slow MMIO cost of writing to the PCIe* device.
+can be spread across 32 packets, effectively hiding the relatively slow MMIO cost of writing to the PCIe* device.
 
-However, this is not very desirable when tuning for low latency, because the first packet that was received must also wait for the other 15 packets to be received.
-It cannot be transmitted until the other 15 packets have also been processed because the NIC will not know to transmit the packets until the TX tail pointer has been updated,
-which is not done until all 16 packets have been processed for transmission.
+However, this is not very desirable when tuning for low latency, because the first packet that was received must also wait for the other 31 packets to be received.
+It cannot be transmitted until the other 31 packets have also been processed because the NIC will not know to transmit the packets until the TX tail pointer has been updated,
+which is not done until all 32 packets have been processed for transmission.
 
 To consistently achieve low latency even under heavy system load, the application developer should avoid processing packets in bunches.
 The testpmd application can be configured from the command line to use a burst value of 1.
-- 
2.13.3
