Hello VPP folks,

I have several questions/comments on memif and libmemif.

1) I believe "mq[x].last_tail = 0" is missing from 
memif_init_regions_and_queues().
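To make sure we're talking about the same spot, here's a sketch (from memory,
not the actual source) of the per-queue reset I'd expect to see there:

  /* sketch only -- per-queue state reset in memif_init_regions_and_queues() */
  mq->last_head = 0;
  mq->last_tail = 0;   /* <-- I believe this line is missing */

Without the reset, a reconnect would seem to leave last_tail holding a stale
value from the previous session.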

2) I have a libmemif app that connects to two different memif sockets, and I
noticed that if the app fails to connect to the first socket, it never
attempts the second.  This is due to the error bailout in
memif_control_fd_handler(), line 921.  Is this behavior intended?
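For context, my setup is roughly this (sketch; args_a/args_b and the
callbacks are my app's own names, error handling trimmed):

  /* two connections over two different control sockets */
  memif_conn_handle_t conn_a = NULL, conn_b = NULL;

  err = memif_create (&conn_a, &args_a, on_connect, on_disconnect,
                      on_interrupt, NULL);
  err = memif_create (&conn_b, &args_b, on_connect, on_disconnect,
                      on_interrupt, NULL);

  /* if the master behind args_a's socket isn't up yet, the bailout
   * means conn_b is never attempted */

I'd have expected a failure on one socket to be reported without blocking the
other connection.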

3) memif ring size

3a) I see that both the memif plugin and libmemif set the max ring size as:
    #define  MEMIF_MAX_LOG2_RING_SIZE        14
   
However, src/plugins/memif/cli.c has the following check:

  if (ring_size > 32768)
     return clib_error_return (0, "maximum ring size is 32768");

Which is correct?  Since 1 << 14 is 16384, the two limits disagree: the
#define caps the ring at 16384 while the CLI accepts up to 32768.  For what
it's worth, I modified the #define to allow a ring size of 32768 and it
appeared to work, but I'm not certain something bad (but non-fatal) wasn't
happening behind the scenes.
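For reference, the modification I tried was just raising the log2 cap by one
(2^15 == 32768, matching the CLI check):

  #define  MEMIF_MAX_LOG2_RING_SIZE        15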

3b) What is the source of the 32k limit?  Is this a technical limit or a "you 
shouldn't need more than this" limit?

3c) What ring size was used for the benchmarks shown at KubeCon EU? 
(https://wiki.fd.io/view/File:Fdio-memif-at-kubecon-eu-180430.pptx)

4) Is it still true that libmemif cannot act as master?  
(https://lists.fd.io/g/vpp-dev/message/9408)

5) Suppose that after calling memif_buffer_alloc(), while populating a tx
buffer, you decide you no longer want to transmit it.  How do you "give back"
that tx buffer to the ring?

6) memif_refill_queue

6a) Debugging some performance issues in my libmemif app, I benchmarked 
memif_refill_queue() and found that with a headroom of 128 the function was 
taking almost 2600 clocks per call on average:

libmemif_refill: 4971105 samples, 12799435227 clocks (avg: 2574)

But with a headroom of 0, only 22 clocks per call on average:

libmemif_refill: 4968243 samples, 111850110 clocks (avg: 22)

Is this expected?
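For reference, this is roughly how the numbers were taken (TSC around the
call; samples/clocks are just my accumulators):

  #include <x86intrin.h>        /* __rdtsc */

  uint64_t t0 = __rdtsc ();
  err = memif_refill_queue (conn, qid, rx, 128);   /* headroom 128 vs 0 */
  clocks += __rdtsc () - t0;
  samples++;
  /* avg = clocks / samples, printed as above */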

6b) How should memif_refill_queue() be used in a scenario where the received
packets are processed at different times?  For example, suppose I receive 16
packets from memif_rx_burst(), p0-p15.  I want to hold on to p4, p9, and p11
temporarily and process+transmit the others via memif_tx_burst().  Since
refill appears to advance the ring sequentially, refilling just 13 would also
hand back the slots still holding p4, p9, and p11 -- so there's no way to call
memif_refill_queue() for only the 13 processed packets, correct?  Would I need
to make copies of p4, p9, and p11 and then refill all 16?
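In other words, is the expected pattern something like this (sketch; held[]
is my app's own storage)?

  memif_buffer_t bufs[16];
  uint16_t rx = 0;

  memif_rx_burst (conn, qid, bufs, 16, &rx);

  /* copy out the packets I need to keep past this iteration */
  memcpy (held[0], bufs[4].data, bufs[4].len);
  memcpy (held[1], bufs[9].data, bufs[9].len);
  memcpy (held[2], bufs[11].data, bufs[11].len);

  /* ... process + memif_tx_burst() the other 13 ... */

  /* then hand all 16 slots back in one shot */
  memif_refill_queue (conn, qid, rx, headroom);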

7) I'm looking for more information on the memif_buffer_enqueue_tx() function.
I see it used only in the zero-copy-slave example code.  Is the combination of
memif_buffer_enqueue_tx() + memif_tx_burst() the way to achieve zero-copy tx?
If not, how/where should memif_buffer_enqueue_tx() be used?  And is zero-copy
only possible in the slave->master direction?
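Concretely, this is the flow I pieced together from the zero-copy-slave
example (sketch from memory; MAX_BURST and headroom are placeholders, and I
may have the ordering wrong) -- is this the intended zero-copy tx path?

  memif_buffer_t bufs[MAX_BURST];
  uint16_t rx = 0, n_enq = 0, n_tx = 0;

  /* receive; buffers point into the shared memory regions */
  memif_rx_burst (conn, qid, bufs, MAX_BURST, &rx);

  /* ... modify packets in place ... */

  /* swap the rx buffers into the tx ring instead of alloc + copy */
  memif_buffer_enqueue_tx (conn, qid, bufs, rx, &n_enq);
  memif_refill_queue (conn, qid, rx, headroom);
  memif_tx_burst (conn, qid, bufs, n_enq, &n_tx);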

Thanks,
Jeff