>From my understanding it doesn't matter if your node is slow. The only reason the 
>queue overflows is that the target worker processes data slower than the source 
>worker. So what is important is that each worker completes its node graph in 
>the same amount of time. I would say this is the only rule to avoid drops caused 
>by a full queue.

Regarding 20 workers - writing to and reading from the queues does get slower with 
more workers, but since each worker pays the same constant price for handoff queue 
management, it should have no effect on the rule above. In your case each worker 
has buffering of 2048/20 ≈ 102 vectors, which is a lot.
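
For concreteness (assuming, as I read your numbers, that 2048 is the total queue 
size shared across the 20 workers, and the default VLIB_FRAME_SIZE of 256 buffers 
per vector):

  2048 slots / 20 workers   ≈ 102 vectors of buffering per worker
  102 vectors * 256 buffers ≈ 26000 buffers of headroom per worker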

Hm, actually the paragraph above only holds if you call 
vlib_buffer_enqueue_to_thread no more than once per worker loop iteration, because 
each vlib_buffer_enqueue_to_thread call uses a separate vlib_frame_queue_elt_t 
from the queue for every worker it targets.
So if you call vlib_buffer_enqueue_to_thread multiple times per worker loop for 
the same handoff queue, the required queue size is multiplied accordingly. I think 
such a situation can happen, for example, if you have multiple saturated 
interfaces, since each interface queue will produce a full vector's worth of data.
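
To make that concrete, here is a rough back-of-the-envelope model (plain C, not 
VPP code; the function name and the assumption that source and target loops run at 
roughly the same rate are mine):

#include <stdio.h>

/*
 * Rough model: each vlib_buffer_enqueue_to_thread call can consume one
 * queue element (vlib_frame_queue_elt_t) per targeted worker.  If every
 * source worker makes `calls_per_loop` such calls per loop iteration and
 * the target only drains one loop iteration's worth of elements per its
 * own iteration, the queue toward a single target must absorb roughly
 * n_source_workers * calls_per_loop elements.  With a fixed total queue
 * size, more calls per loop means earlier congestion drops.
 */
static unsigned
frame_queue_elts_needed (unsigned n_source_workers, unsigned calls_per_loop)
{
  return n_source_workers * calls_per_loop;
}

int
main (void)
{
  unsigned n_workers = 20;

  for (unsigned calls = 1; calls <= 4; calls++)
    printf ("calls/loop = %u -> ~%u queue elements per target worker\n",
            calls, frame_queue_elts_needed (n_workers, calls));
  return 0;
}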

To check this I recommend adding a metric that shows how many times 
vlib_buffer_enqueue_to_thread is called per worker loop iteration.
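
A minimal sketch of such a metric, in plain C rather than against the real VPP 
counter/elog machinery (the helper names and the simulated main loop are 
hypothetical, just to show where the counting hooks would sit):

#include <stdio.h>

/* Hypothetical per-worker statistics: count handoff calls per loop
 * iteration and remember the maximum seen so far. */
static __thread unsigned long enqueue_calls_this_loop;
static __thread unsigned long enqueue_calls_max;

/* Place this next to every vlib_buffer_enqueue_to_thread call site. */
static inline void
count_enqueue_call (void)
{
  enqueue_calls_this_loop++;
}

/* Call this once at the end of each worker main-loop iteration. */
static inline void
end_of_loop_accounting (void)
{
  if (enqueue_calls_this_loop > enqueue_calls_max)
    {
      enqueue_calls_max = enqueue_calls_this_loop;
      /* Printing is only for the sketch; in VPP this would be exposed
       * through a counter or a debug CLI command instead. */
      printf ("new max: %lu handoff calls in one loop iteration\n",
              enqueue_calls_max);
    }
  enqueue_calls_this_loop = 0;
}

int
main (void)
{
  /* Simulate three loop iterations with 1, 3 and 2 handoff calls each. */
  int calls_per_iter[] = { 1, 3, 2 };
  for (int i = 0; i < 3; i++)
    {
      for (int j = 0; j < calls_per_iter[i]; j++)
        count_enqueue_call ();
      end_of_loop_accounting ();
    }
  return 0;
}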
If it indeed happens multiple times, then it would make more sense to do some 
batching and binning based on the target thread index (or a hash of it) before 
calling vlib_buffer_enqueue_to_thread.
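
Roughly, the batching could look like this (again plain C with made-up types; the 
pending-batch container and the flush point are my assumptions - the point is 
simply that all interfaces feed one accumulator and the handoff happens once per 
loop):

#include <stdint.h>
#include <stdio.h>

typedef uint32_t u32;
typedef uint16_t u16;

#define PENDING_MAX 4096

/* Hypothetical per-worker accumulator: instead of handing off each input
 * frame separately, collect buffer indices and their destination thread
 * indices from all interfaces first, then hand them off in one go. */
typedef struct
{
  u32 buffers[PENDING_MAX];
  u16 threads[PENDING_MAX];
  u32 n_pending;
} handoff_batch_t;

static void
batch_add (handoff_batch_t *b, u32 buffer_index, u16 thread_index)
{
  /* A real implementation would flush when the batch is full instead of
   * dropping; omitted to keep the sketch short. */
  if (b->n_pending < PENDING_MAX)
    {
      b->buffers[b->n_pending] = buffer_index;
      b->threads[b->n_pending] = thread_index;
      b->n_pending++;
    }
}

static void
batch_flush (handoff_batch_t *b)
{
  if (b->n_pending == 0)
    return;

  /* This is where the single per-loop handoff would go, e.g. one
   * vlib_buffer_enqueue_to_thread () call handing over b->buffers,
   * b->threads and b->n_pending (exact signature depends on the VPP
   * version in use). */
  printf ("flushing %u buffers in one handoff call\n", b->n_pending);
  b->n_pending = 0;
}

int
main (void)
{
  handoff_batch_t batch = { .n_pending = 0 };

  /* Pretend two saturated interfaces each produced a full vector destined
   * for various target workers; everything goes into one batch. */
  for (u32 i = 0; i < 512; i++)
    batch_add (&batch, i, (u16) (i % 20));

  batch_flush (&batch); /* one handoff call per loop instead of one per interface */
  return 0;
}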

In any case I recommend running perf record -F 4000 --sample-cpu before changing 
anything, since it almost always provides more insight into this kind of issue.