I see vlib_buffer_reset(), but haven’t used it yet nor looked at any code that uses it.
https://docs.fd.io/vpp/17.07/d0/d83/vlib_2buffer_8h.html#a2db4de69d8fc1ff619d0ad7a45ac08fe

Using vlib_buffer_advance() when the packet's node traversal is known is fine. I know which arc my node is using – l2-input, device-input, mpls-input if there is one, and IP tunnels (GRE, GTPU, etc.).

Hemant

From: [email protected] <[email protected]> On Behalf Of David Gohberg
Sent: Monday, April 12, 2021 6:51 AM
To: [email protected]
Subject: Re: [vpp-dev] vlib_buffer_clone behavior when trying to send to two interfaces

Damjan,

After looking at the vlib_buffer_clone_256 function, I realize that it modifies the original buffer pointer, as you said. My packets come in down a custom node path (originating from an ASIC data plane), so they will always have the L2 header. The node that performs the cloning is the last stop before packets are sent to the hardware interface. Is there an "elegant" way to always get a buffer that points to the start of the packet data, regardless of VLAN tags and other encapsulations?
