Hi,
sorry, you are right. I have found something terribly wrong with host1.
Your patch works correctly. Many thanks. Problem solved.
PS: Host1 was a MikroTik device running the current version of RouterOS. This
device avoids the second issue. If I change host1 to Debian 10, everything is
fine.
I can't reproduce that issue.
When you ping from host2 to host1, the hop limit on the echo reply packets
would be set by host1. Is host1 using Linux kernel networking? You have
specifically identified that host2 uses "VPP+LinuxCP" and omitted that
designation from host1, so I presume host1 is not
Hi Matt,
thank you - tested, but not fully solved.
IPv4 ICMP works fine.
IPv6 ICMP:
[2a01:500:10::1/64 host1]=>ether=>[host2-VPP+LinuxCP 2a01:500:10::2/64]
host2=>host1: a new issue occurs, the hop limit changes to 255 after a few
ICMPv6 echoes, example below
host1=>host2: now works fine
Regards
Petr
64 bytes from 2a01:500:
Hi Petr,
I don't think it is related to patch 31122; this seems to happen whether or
not you are using that patch. Both ip4-icmp-echo-request and
ip6-icmp-echo-request set outbound echo replies to have a TTL/hop limit of
64. The IPv4 node sets the VNET_BUFFER_F_LOCALLY_ORIGINATED flag on the
packet
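For anyone following along, here is a minimal standalone sketch of that
asymmetry (this is not VPP's actual code; pkt_t, PKT_F_LOCALLY_ORIGINATED and
the function names are made up for illustration). Both reply paths set the
TTL/hop limit to 64, but only a packet marked as locally originated escapes
the decrement in the output path:

#include <stdio.h>
#include <stdint.h>

/* Stand-in for VNET_BUFFER_F_LOCALLY_ORIGINATED. */
#define PKT_F_LOCALLY_ORIGINATED (1 << 0)

typedef struct
{
  uint8_t ttl;    /* TTL (IPv4) or hop limit (IPv6) */
  uint32_t flags;
} pkt_t;

/* Echo-reply generation: both address families start at 64. */
static void
make_echo_reply (pkt_t *p, int is_ip4)
{
  p->ttl = 64;
  if (is_ip4)
    p->flags |= PKT_F_LOCALLY_ORIGINATED; /* only the IPv4 node sets this */
}

/* Output/rewrite: decrement unless the packet is locally originated. */
static void
rewrite_and_send (pkt_t *p)
{
  if (!(p->flags & PKT_F_LOCALLY_ORIGINATED))
    p->ttl -= 1;
}

int
main (void)
{
  pkt_t v4 = { 0 }, v6 = { 0 };

  make_echo_reply (&v4, 1 /* is_ip4 */);
  make_echo_reply (&v6, 0);
  rewrite_and_send (&v4);
  rewrite_and_send (&v6);

  printf ("IPv4 echo reply leaves with TTL %u\n", (unsigned) v4.ttl);       /* 64 */
  printf ("IPv6 echo reply leaves with hop limit %u\n", (unsigned) v6.ttl); /* 63 */
  return 0;
}

That reproduces exactly the TTL 64 vs hop limit 63 pattern in the report
below.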
Hi,
I have found an issue with the linux-cp patch 31122.
[192.168.15.1/24 host1]=>ether=>[host2-VPP+LinuxCP 192.168.15.2/24]
IPv4 ICMP TTL:
host2=>host1: TTL 64
host1=>host2: TTL 64
IPv6 ICMP TTL:
host2=>host1: hop limit 64
host1=>host2: hop limit 63 (the hop limit should behave the same way as the
TTL does in IPv4)
tl;dr: VPP's docs: no more Doxygen, only Sphinx, automated deploys & a better
fd.io site; check the preview at [0]
Hi everyone,
We spent some time during the past weeks improving VPP's documentation with
Dave Wallace & Andrew Yourtchenko. The main goals of the exercise were to make
the documentation ea
VPP uses its own buffer allocator under the hood, so you should monitor the
output of 'show buffers' instead.
If you still see buffer leaks, you can turn on buffer tracing with 'set buffer
traces' and share the output of 'show buffer traces'.
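As a toy illustration only (this is not VPP's buffer allocator; pool_t, the
leak rate and the constants are invented for the example), here is why "free"
dropping to 0 usually means buffers are allocated and never returned, rather
than the pool being too small:

#include <stdio.h>

typedef struct
{
  unsigned capacity;  /* total buffers configured for the pool */
  unsigned allocated; /* buffers currently handed out */
} pool_t;

static int
pool_alloc (pool_t *p)
{
  if (p->allocated == p->capacity)
    return -1; /* pool exhausted: "free" is 0 */
  p->allocated++;
  return 0;
}

static void
pool_free (pool_t *p)
{
  if (p->allocated)
    p->allocated--;
}

int
main (void)
{
  pool_t pool = { .capacity = 128000, .allocated = 0 };
  unsigned i;

  /* Simulate a leaky forwarding path: 1 buffer in 1000 is never freed. */
  for (i = 0; i < 200000000; i++)
    {
      if (pool_alloc (&pool) < 0)
        {
          printf ("pool exhausted after %u packets\n", i);
          break;
        }
      if (i % 1000) /* the leaked buffer skips this free */
        pool_free (&pool);
    }
  printf ("allocated=%u free=%u\n", pool.allocated,
          pool.capacity - pool.allocated);
  return 0;
}

Even a tiny leak rate exhausts a 128000-buffer pool after roughly 128 million
packets, only minutes of 1G traffic, so growing buffers-per-numa just delays
the symptom.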
Best
ben
Akash,
> I have an important query about "show dpdk buffers" allocation. I have only 1
> NUMA node on my PC, and buffers-per-numa was increased from 16800 (default) to
> 128000. I am sending 1G traffic, and after some minutes the "show dpdk
> buffer" output shows that free becomes 0 and allocated becomes