
Hi,

We are seeing a strange issue when using memif to transfer messages from a 
client to VPP.
Each message that we want to transfer from the client to VPP is around 64 KB.

When the client sends messages (each of size 64 KB) to VPP in quick 
succession, we see that the buffer contents get overwritten, and hence VPP's 
handling of the message goes haywire.

A couple of questions in this regard:

1) When the client sends a message to VPP using memif, does the vlib_buffer_t 
corresponding to it still point into the shared memory, or does the 
memif-input node copy the data into a separate vlib_buffer_t? If it points 
into the shared memory, then if the client modifies the shared memory in the 
meantime, the vlib_buffer will be affected, right?
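
To make the concern concrete, here is a minimal, self-contained sketch of the 
race we suspect (plain C, not actual VPP or memif code; the single 64 KB slot 
and all names are made up to stand in for one descriptor's buffer). A consumer 
that keeps a pointer into the shared slot sees the producer's overwrite, while 
a consumer that copied the payload does not:

#include <stdio.h>
#include <string.h>
#include <stdlib.h>

/* Hypothetical 64 KB slot standing in for one memif buffer in the
 * shared region (name and layout invented for illustration). */
#define SLOT_SIZE (64 * 1024)
static char shared_slot[SLOT_SIZE];

int main (void)
{
  /* Client writes message A into the shared slot. */
  memset (shared_slot, 'A', SLOT_SIZE);

  /* Zero-copy consumer: keeps a pointer into shared memory. */
  const char *zero_copy_view = shared_slot;

  /* Copying consumer: snapshots the payload, as a copy-mode
   * memif-input would conceptually do. */
  char *copied_view = malloc (SLOT_SIZE);
  memcpy (copied_view, shared_slot, SLOT_SIZE);

  /* Client reuses the slot for message B before the consumer is done. */
  memset (shared_slot, 'B', SLOT_SIZE);

  printf ("zero-copy view now sees: %c (overwritten)\n", zero_copy_view[0]);
  printf ("copied view still sees:  %c (intact)\n", copied_view[0]);
  free (copied_view);
  return 0;
}

If the buffers really do alias the shared region, then presumably the client 
must not recycle a descriptor until VPP advances the ring, which is what we 
want to confirm.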

2) I am also not sure whether our ring sizes are adequate for messages of 
size 64 KB. Here is the show memif output, with my rough arithmetic on it 
after the dump. Can you please let us know if the sizes look fine?

interface memif0/1
remote-name "Client"
remote-interface "memif_conn"
socket-id 0 id 1 mode ip
flags admin-up connected
listener-fd 40 conn-fd 41
num-s2m-rings 1 num-m2s-rings 1 buffer-size 0 num-regions 2
region 0 size 65792 fd 44
region 1 size 264241152 fd 47
master-to-slave ring 0:
region 0 offset 32896 ring-size 2048 int-fd 53
head 5349 tail 3301 flags 0x0000 interrupts 3098
slave-to-master ring 0:
region 0 offset 0 ring-size 2048 int-fd 50
head 6602 tail 6602 flags 0x0001 interrupts 0
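
My rough arithmetic on the output above, assuming region 1 holds the packet 
buffers and both rings draw from it:

  2 rings x 2048 descriptors    = 4096 slots
  264241152 bytes / 4096 slots  = 64512 bytes per slot
  64512 bytes                   < 65536 bytes (64 KiB)

If that reading is right, a full 64 KB message does not fit in a single 
descriptor and has to be chained across buffers, but I may be misreading how 
the region is carved up.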

3) One more important point on this issue: we see it only when we hand off 
the received message from one worker thread to another.
When we hand off a message received from memif to another worker, do we need 
to clone the buffer first (see the sketch below)? Basically, is there 
anything in this path that is localized per worker?
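
To be concrete about what we would do if cloning is required, this is the 
sort of copy-before-handoff we have in mind (a rough sketch against the vlib 
buffer API as we understand it; the helper name and error handling are ours, 
and we are not certain vlib_buffer_copy is the right call here):

#include <vlib/vlib.h>
#include <vlib/buffer_funcs.h>

/* Sketch: detach the payload from memif-backed memory before handing
 * the packet to another worker. bi is the buffer index produced by
 * memif-input; returns the index of a private copy, or ~0 on failure. */
static u32
copy_before_handoff (vlib_main_t * vm, u32 bi)
{
  vlib_buffer_t *b = vlib_get_buffer (vm, bi);

  /* vlib_buffer_copy allocates fresh buffers and duplicates the
   * (possibly chained) data, so the copy no longer aliases the
   * memif shared region. */
  vlib_buffer_t *c = vlib_buffer_copy (vm, b);
  if (c == 0)
    return ~0;

  /* Free the original so the memif descriptor can be recycled. */
  vlib_buffer_free_one (vm, bi);
  return vlib_get_buffer_index (vm, c);
}

Of course, if the answer to 1) is that memif-input already copies, this extra 
copy is wasted work and the handoff itself should be safe without it.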

Any inputs on this would really help us.


Thanks & Regards,