On 2022-06-10 17:28, Stephen Hemminger wrote:
Need to warn users of DPDK spinlocks from non-pinned threads.
This is similar wording to Linux documentation in pthread_spin_init.
Signed-off-by: Stephen Hemminger <step...@networkplumber.org>
---
doc/guides/prog_guide/env_abstraction_layer.rst | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst b/doc/guides/prog_guide/env_abstraction_layer.rst
index 5f0748fba1c0..45d3de8d84f6 100644
--- a/doc/guides/prog_guide/env_abstraction_layer.rst
+++ b/doc/guides/prog_guide/env_abstraction_layer.rst
@@ -797,6 +797,16 @@ Known Issues
The debug statistics of rte_ring, rte_mempool and rte_timer are not supported in an unregistered non-EAL pthread.
++ locking
+
Isn't this problem more general than locks? The use of any
non-preemption-safe data structure potentially causes such delays.
Regular DPDK rings, for sure. The lock-less stack? The hash library?
Both actual and open-coded spinlocks internal to the APIs are also very
common.
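
For example (a rough sketch, not from the patch; the ring and function
names below are made up), a preemptible control pthread sharing a
default multi-producer ring with a pinned lcore:

/* Rough sketch: a preemptible control pthread and a pinned lcore both
 * enqueue to the same default (multi-producer) ring.  MP enqueue
 * reserves a slot with a CAS and then bumps the producer tail, so a
 * producer preempted between those two steps leaves every other
 * producer spinning on the tail update. */
#include <rte_ring.h>
#include <rte_lcore.h>
#include <rte_pause.h>

static struct rte_ring *shared_ring;  /* hypothetical shared ring */

static int setup_ring(void)
{
	/* Default flags: multi-producer/multi-consumer, not preemption safe. */
	shared_ring = rte_ring_create("shared", 1024, rte_socket_id(), 0);
	return shared_ring == NULL ? -1 : 0;
}

/* Ordinary pthread; the kernel may schedule it out at any point,
 * including in the middle of rte_ring_enqueue(). */
static void *ctrl_producer(void *arg)
{
	for (;;) {
		if (rte_ring_enqueue(shared_ring, arg) != 0)
			rte_pause();  /* ring full, retry */
	}
	return NULL;
}

/* Pinned lcore; stalls inside rte_ring_enqueue() while a preempted
 * producer has a slot reserved but has not yet moved the tail. */
static int lcore_producer(void *arg)
{
	for (;;) {
		if (rte_ring_enqueue(shared_ring, arg) != 0)
			rte_pause();
	}
	return 0;
}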
+ If a pthread that is not pinned to an lcore acquires a lock such as a
+ DPDK-based lock (rte_spinlock, rte_rwlock, rte_ticketlock, rte_mcslock),
+ then there is a possibility of large application delays.
Pinning or not doesn't matter. What matters is whether the thread is
preempted and thus prevented from making progress for a long time.
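
Something like this sketch (again made-up identifiers, not from the
patch) shows the actual failure mode:

/* Rough sketch of lock-holder preemption: if ctrl_update() is
 * scheduled off the CPU while holding table_lock, every lcore in
 * lcore_reader() spins until the holder runs again. */
#include <stdint.h>
#include <rte_spinlock.h>

static rte_spinlock_t table_lock = RTE_SPINLOCK_INITIALIZER;
static uint64_t table_generation;  /* hypothetical shared state */

/* Ordinary pthread: preemptible, may lose the CPU inside the
 * critical section. */
static void *ctrl_update(void *arg)
{
	(void)arg;
	rte_spinlock_lock(&table_lock);
	table_generation++;            /* preemption here blocks all lcores */
	rte_spinlock_unlock(&table_lock);
	return NULL;
}

/* Pinned lcore: spins without making progress for as long as the
 * holder above is preempted. */
static int lcore_reader(void *arg)
{
	uint64_t gen;

	(void)arg;
	rte_spinlock_lock(&table_lock);
	gen = table_generation;
	rte_spinlock_unlock(&table_lock);
	return (int)gen;
}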
+ The problem is that if a thread is scheduled off the CPU while it holds
+ a lock, then other threads will waste time spinning on the lock until
+ the lock holder is once more rescheduled and releases the lock.
+
+
cgroup control
~~~~~~~~~~~~~~