On 08/01/2023 03:55, Cheng Li wrote:
In my test, if one logical core is pinned to a PMD thread while the
other logical core (of the same physical core) is not, the PMD
performance is affected by the load on the not-pinned logical core.
This makes it difficult to estimate the loads during a dry-run.

Signed-off-by: Cheng Li <lic...@chinatelecom.cn>
---
  Documentation/topics/dpdk/pmd.rst | 4 ++++
  1 file changed, 4 insertions(+)

diff --git a/Documentation/topics/dpdk/pmd.rst b/Documentation/topics/dpdk/pmd.rst
index 9006fd4..b220199 100644
--- a/Documentation/topics/dpdk/pmd.rst
+++ b/Documentation/topics/dpdk/pmd.rst
@@ -312,6 +312,10 @@ If not set, the default variance improvement threshold is 25%.
      when all PMD threads are running on cores from a single NUMA node. In this
      case cross-NUMA datapaths will not change after reassignment.
+    For the same reason, please ensure that the PMD threads are pinned to SMT
+    siblings if HyperThreading is enabled. Otherwise, PMDs within a NUMA node
+    may not have the same performance.
+
  The minimum time between 2 consecutive PMD auto load balancing iterations can
  also be configured by::

I don't think it's a hard requirement, as siblings should not have as much of an impact as cross-NUMA placement might, but it's probably good advice in general.
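
For anyone who wants to try it, a minimal sketch of the pinning: assuming
(hypothetically) that cores 4/20 and 5/21 are the SMT sibling pairs on the
host (check /sys/devices/system/cpu/cpuN/topology/thread_siblings_list),
you would include both siblings of each physical core in the PMD mask,
e.g.::

    # Hypothetical topology: cores 4<->20 and 5<->21 are SMT sibling pairs.
    # Pin PMD threads to both siblings of each physical core so that no
    # other workload shares a physical core with a PMD thread.
    # Mask with bits 4, 5, 20 and 21 set => 0x300030.
    $ ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x300030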

Acked-by: Kevin Traynor <ktray...@redhat.com>
