> On 13 Aug 2018, at 04:43, Jeff <jct...@mykolab.com> wrote:
>
> Hello VPP folks,
Hello,

>
> I have several questions/comments on memif and libmemif.
>
> 1) I believe "mq[x].last_tail = 0" is missing from
> memif_init_regions_and_queues().

Are you referring to VPP or libmemif? We have the same function name in both
places. In VPP it is fine, as mq is a vector element and vector elements are
zeroed on alloc... (If it really is missing on the libmemif side, see the
first sketch at the end of this message.)

> 2) I have a libmemif app connecting to two different memif sockets and I
> noticed that if my app fails to connect to the first socket it will not
> attempt the second. This is due to the error bailout in
> memif_control_fd_handler(), line 921. Is this behavior intended?

I'm not familiar with the libmemif details, so I will leave that to Jakub to
comment on....

> 3) memif ring size
>
> 3a) I see both memif plugin and libmemif set the max ring size as:
>
> #define MEMIF_MAX_LOG2_RING_SIZE 14
>
> However, src/plugins/memif/cli.c has the following check:
>
> if (ring_size > 32768)
>   return clib_error_return (0, "maximum ring size is 32768");
>
> Which is correct? For what it's worth, I modified the #define to allow a
> ring size of 32768 and it appeared to work, but I'm not certain there
> wasn't something bad (but non-fatal) happening behind the scenes.

15 should also be fine. I don't remember why I put 14... Feel free to submit
a patch which changes it to 15; the second sketch at the end of this message
shows how the two limits line up.

> 3b) What is the source of the 32k limit? Is this a technical limit or a
> "you shouldn't need more than this" limit?

Ring indices are u16 minus 1 bit, i.e. 2^15 = 32768. Do you really need more?

> 3c) What ring size was used for the benchmarks shown at KubeCon EU?
> (https://wiki.fd.io/view/File:Fdio-memif-at-kubecon-eu-180430.pptx)

The default.

> 4) Is it still true that libmemif cannot act as master?
> (https://lists.fd.io/g/vpp-dev/message/9408)
>
> 5) Suppose that after calling memif_buffer_alloc(), while populating the tx
> buffer, you decide you no longer want to transmit the buffer. How do you
> "give back" the tx buffer to the ring?
>
> 6) memif_refill_queue
>
> 6a) Debugging some performance issues in my libmemif app, I benchmarked
> memif_refill_queue() and found that with a headroom of 128 the function was
> taking almost 2600 clocks per call on average:
>
> libmemif_refill: 4971105 samples, 12799435227 clocks (avg: 2574)
>
> But with a headroom of 0, only 22 clocks per call on average:
>
> libmemif_refill: 4968243 samples, 111850110 clocks (avg: 22)
>
> Is this expected?

I will leave it to Jakub to comment on the last 3 questions....

> 6b) How should memif_refill_queue() be used in a scenario where the
> received packets are processed at different times? For example, suppose I
> receive 16 packets from memif_rx_burst(), p0-p15. I want to hold on to p4,
> p9, p11 temporarily and process+transmit the others via memif_tx_burst().
> There's no way to call memif_refill_queue() for just the 13 processed
> packets, correct? Would I need to make copies of p4, p9, p11 and then
> refill all 16?

I guess we don't have that supported, but it makes sense to have it. For now
you would indeed have to copy the held buffers and then refill the whole
burst; a rough sketch is at the end of this message.

> 7) Looking for more information on the memif_buffer_enqueue_tx() function.

I cannot find that function in the latest master...

> I see it used only in the zero-copy-slave example code. Is the combination
> of memif_buffer_enqueue_tx() + memif_tx_burst() the way to achieve
> zero-copy tx? If not, how/where should memif_buffer_enqueue_tx() be used?
> Is zero-copy only possible from slave->master?

The idea of zero-copy in memif is to avoid a memcpy into shared memory by
exposing buffer memory to the peer. One of the basic rules is that a memif
master never exposes its memory, so it cannot do zero-copy.
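
A few sketches to go with the answers above.

On 1), for the libmemif side: if last_tail really is left uninitialized, the
fix is just to zero it together with the other per-queue cursors when the
queues are set up. A minimal sketch only; the loop bound (num_rings) and the
last_head field are placeholder names from my side, not necessarily what the
code in your tree uses:

  /* zero the per-queue cursors explicitly; malloc'd memory is not
     guaranteed to be zero-filled */
  for (x = 0; x < num_rings; x++)      /* num_rings: placeholder name */
    {
      mq[x].last_head = 0;             /* if present in your version */
      mq[x].last_tail = 0;             /* the field from your report */
    }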
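
On 3a) and 3b), the two limits line up once the define goes to 15: ring
sizes are powers of two, the indices are u16 and one bit is lost, so
2^15 = 32768 is the ceiling. A sketch of what such a patch could look like;
deriving the cli.c limit from the macro is my suggestion, not existing code:

  /* log2 of the largest supported ring size; 1 << 15 == 32768,
     which matches the check in src/plugins/memif/cli.c */
  #define MEMIF_MAX_LOG2_RING_SIZE 15

  /* cli.c could then derive its limit from the same macro: */
  if (ring_size > (1 << MEMIF_MAX_LOG2_RING_SIZE))
    return clib_error_return (0, "maximum ring size is %u",
                              1 << MEMIF_MAX_LOG2_RING_SIZE);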
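
On 6b), until a partial refill exists, the workaround is to copy the buffers
you want to hold into memory you own and then hand the whole burst back in a
single call. A rough sketch only; the memif_buffer_t field names (data, len),
the MEMIF_ERR_SUCCESS check and the exact signatures are from memory of the
libmemif headers, so please double-check them against your version:

  #include <string.h>
  #include <stdint.h>
  #include <libmemif.h>

  /* Receive up to 16 packets, privately copy p4/p9/p11, process the rest,
     then return all 16 ring slots with one memif_refill_queue() call. */
  static void
  rx_hold_some (memif_conn_handle_t conn, uint16_t qid, uint16_t headroom)
  {
    memif_buffer_t bufs[16];
    uint16_t rx = 0;

    if (memif_rx_burst (conn, qid, bufs, 16, &rx) != MEMIF_ERR_SUCCESS)
      return;

    /* copy the packets we want to keep into memory we own */
    struct { uint8_t data[2048]; uint32_t len; } held[3];
    const uint16_t keep[3] = { 4, 9, 11 };
    for (int i = 0; i < 3; i++)
      if (keep[i] < rx && bufs[keep[i]].len <= sizeof (held[i].data))
        {
          held[i].len = bufs[keep[i]].len;
          memcpy (held[i].data, bufs[keep[i]].data, bufs[keep[i]].len);
        }

    /* ... process + memif_tx_burst() the other 13 packets here ... */

    /* all received slots can now go back to the ring in one shot */
    memif_refill_queue (conn, qid, rx, headroom);
  }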