Re: [vpp-dev] linux-cp patches 31122 ipv6 mtu TTL issue

2021-10-11 Thread Petr Boltík
Hi,

sorry, you are right. I found something terribly wrong with host1. Your
patch works correctly. Many thanks. Problem solved.

PS: Host1 was a MikroTik device running the current ROS version, and that
device appears to trigger the second issue. If I change host1 to Debian 10,
everything is fine. The ICMPv6 test from Debian 10 to ROS 6.48.1 failed
after some time, as mentioned before.

Regards Petr
Have a nice day


On Tue, Oct 12, 2021 at 12:00 AM Matthew Smith wrote:

>
> I can't reproduce that issue.
>
> When you ping from host2 to host1, the hop limit on the echo reply packets
> would be set by host1. Is host1 using linux kernel networking? You have
> specifically identified that host2 uses "Vpp+LinuxCP" and omitted that
> designation from host1, so I presume host1 is not running VPP. If host1 is
> using linux kernel networking, you could run tcpdump or wireshark to
> inspect outbound packets from 2a01:500:10::1 to 2a01:500:10::2 to confirm
> what hop limit is being set on the echo reply packets sent by host1 to
> host2.
>
> -Matt
>
>
> On Mon, Oct 11, 2021 at 4:13 PM Petr Boltík  wrote:
>
>> Hi Matt,
>> thank you - tested, but not fully solved.
>> IPv4 ICMP works fine.
>> IPv6 ICMP:
>>
>> [2a01:500:10::1/64 host1]=>ether=>[host2-Vpp+LinuxCP 2a01:500:10::2/64]
>> host2=>host1: a new issue occurs, the TTL changes to 255 after a few
>> ICMPv6 replies; example below
>> host1=>host2: now works fine
>>
>> Regards
>> Petr
>>
>> 64 bytes from 2a01:500:10::1: icmp_seq=12 ttl=64 time=0.269 ms
>>> 64 bytes from 2a01:500:10::1: icmp_seq=13 ttl=64 time=0.213 ms
>>> 64 bytes from 2a01:500:10::1: icmp_seq=14 ttl=64 time=0.239 ms
>>> 64 bytes from 2a01:500:10::1: icmp_seq=15 ttl=64 time=0.210 ms
>>> 64 bytes from 2a01:500:10::1: icmp_seq=16 ttl=64 time=5.28 ms
>>> 64 bytes from 2a01:500:10::1: icmp_seq=17 ttl=64 time=0.240 ms
>>> 64 bytes from 2a01:500:10::1: icmp_seq=18 ttl=255 time=0.268 ms
>>> 64 bytes from 2a01:500:10::1: icmp_seq=19 ttl=255 time=0.210 ms
>>> 64 bytes from 2a01:500:10::1: icmp_seq=20 ttl=255 time=0.276 ms
>>> 64 bytes from 2a01:500:10::1: icmp_seq=21 ttl=255 time=0.484 ms
>>> 64 bytes from 2a01:500:10::1: icmp_seq=22 ttl=255 time=3.87 ms
>>> 64 bytes from 2a01:500:10::1: icmp_seq=23 ttl=255 time=9.34 ms
>>>
>>
>> On Mon, Oct 11, 2021 at 9:54 PM Matthew Smith wrote:
>>
>>>
>>> Hi Petr,
>>>
>>> I don't think it is related to patch 31122; this seems to happen whether
>>> you are using that patch or not. Both ip4-icmp-echo-request and
>>> ip6-icmp-echo-request set outbound echo replies to have a TTL/hop-limit of
>>> 64. The IPv4 node sets the VNET_BUFFER_F_LOCALLY_ORIGINATED flag on the
>>> packets it sends but the IPv6 node neglects to do this. Setting that flag
>>> will prevent a rewrite node from decrementing the TTL/hop-limit later on.
>>>
>>> Here's a patch which sets the flag for outbound IPv6 echo replies -
>>> https://gerrit.fd.io/r/c/vpp/+/34040. Can you please test it and report
>>> whether it solves the problem?
>>>
>>> Thanks,
>>> -Matt
>>>
>>>
>>> On Mon, Oct 11, 2021 at 1:03 PM Petr Boltík 
>>> wrote:
>>>
 Hi,
 I have found an issue with linux-cp patch 31122.
 [192.168.15.1/24 host1]=>ether=>[host2-Vpp+LinuxCP 192.168.15.2/24]
 IPv4 ICMP TTL:
 host2=>host1 TTL64
 host1=>host2 TTL64

 IPv6 ICMP TTL:
 host2=>host1 TTL64
 host1=>host2 TTL63 (IPv6 should use the same TTL-preserving mechanism as
 IPv4, so the reply's TTL is not decremented)

 Thanks for getting this report into the right hands.
 Regards Petr

 



-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20314): https://lists.fd.io/g/vpp-dev/message/20314
Mute This Topic: https://lists.fd.io/mt/86243713/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] linux-cp patches 31122 ipv6 mtu TTL issue

2021-10-11 Thread Matthew Smith via lists.fd.io
I can't reproduce that issue.

When you ping from host2 to host1, the hop limit on the echo reply packets
would be set by host1. Is host1 using linux kernel networking? You have
specifically identified that host2 uses "Vpp+LinuxCP" and omitted that
designation from host1, so I presume host1 is not running VPP. If host1 is
using linux kernel networking, you could run tcpdump or wireshark to
inspect outbound packets from 2a01:500:10::1 to 2a01:500:10::2 to confirm
what hop limit is being set on the echo reply packets sent by host1 to
host2.
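
For example, something like this on host1 (the interface name here is just
a placeholder):

  tcpdump -ni eth0 -v 'icmp6 and src host 2a01:500:10::1 and dst host 2a01:500:10::2'

With -v, tcpdump prints the IPv6 header's hop limit ("hlim") for each
packet, so you can see the value the replies leave host1 with.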

-Matt


On Mon, Oct 11, 2021 at 4:13 PM Petr Boltík  wrote:

> Hi Matt,
> thank you - tested, but not fully solved.
> IPv4 ICMP works fine.
> IPv6 ICMP:
>
> [2a01:500:10::1/64 host1]=>ether=>[host2-Vpp+LinuxCP 2a01:500:10::2/64]
> host2=>host1: a new issue occurs, the TTL changes to 255 after a few
> ICMPv6 replies; example below
>
> Regards
> Petr
>
> 64 bytes from 2a01:500:10::1: icmp_seq=12 ttl=64 time=0.269 ms
>> 64 bytes from 2a01:500:10::1: icmp_seq=13 ttl=64 time=0.213 ms
>> 64 bytes from 2a01:500:10::1: icmp_seq=14 ttl=64 time=0.239 ms
>> 64 bytes from 2a01:500:10::1: icmp_seq=15 ttl=64 time=0.210 ms
>> 64 bytes from 2a01:500:10::1: icmp_seq=16 ttl=64 time=5.28 ms
>> 64 bytes from 2a01:500:10::1: icmp_seq=17 ttl=64 time=0.240 ms
>> 64 bytes from 2a01:500:10::1: icmp_seq=18 ttl=255 time=0.268 ms
>> 64 bytes from 2a01:500:10::1: icmp_seq=19 ttl=255 time=0.210 ms
>> 64 bytes from 2a01:500:10::1: icmp_seq=20 ttl=255 time=0.276 ms
>> 64 bytes from 2a01:500:10::1: icmp_seq=21 ttl=255 time=0.484 ms
>> 64 bytes from 2a01:500:10::1: icmp_seq=22 ttl=255 time=3.87 ms
>> 64 bytes from 2a01:500:10::1: icmp_seq=23 ttl=255 time=9.34 ms
>>
>
> On Mon, Oct 11, 2021 at 9:54 PM Matthew Smith wrote:
>
>>
>> Hi Petr,
>>
>> I don't think it is related to patch 31122; this seems to happen whether
>> you are using that patch or not. Both ip4-icmp-echo-request and
>> ip6-icmp-echo-request set outbound echo replies to have a TTL/hop-limit of
>> 64. The IPv4 node sets the VNET_BUFFER_F_LOCALLY_ORIGINATED flag on the
>> packets it sends but the IPv6 node neglects to do this. Setting that flag
>> will prevent a rewrite node from decrementing the TTL/hop-limit later on.
>>
>> Here's a patch which sets the flag for outbound IPv6 echo replies -
>> https://gerrit.fd.io/r/c/vpp/+/34040. Can you please test it and report
>> whether it solves the problem?
>>
>> Thanks,
>> -Matt
>>
>>
>> On Mon, Oct 11, 2021 at 1:03 PM Petr Boltík 
>> wrote:
>>
>>> Hi,
>>> I have found an issue with linux-cp patch 31122.
>>> [192.168.15.1/24 host1]=>ether=>[host2-Vpp+LinuxCP 192.168.15.2/24]
>>> IPv4 ICMP TTL:
>>> host2=>host1 TTL64
>>> host1=>host2 TTL64
>>>
>>> IPv6 ICMP TTL:
>>> host2=>host1 TTL64
>>> host1=>host2 TTL63 (IPv6 should use the same TTL-preserving mechanism as
>>> IPv4, so the reply's TTL is not decremented)
>>>
>>> Thanks for getting this report into the right hands.
>>> Regards Petr
>>>
>>> 
>>>
>>>

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20313): https://lists.fd.io/g/vpp-dev/message/20313
Mute This Topic: https://lists.fd.io/mt/86243713/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] linux-cp patches 31122 ipv6 mtu TTL issue

2021-10-11 Thread Petr Boltík
Hi Matt,
thank you - tested, but not fully solved.
IPv4 ICMP works fine.
IPv6 ICMP:

[2a01:500:10::1/64 host1]=>ether=>[host2-Vpp+LinuxCP 2a01:500:10::2/64]
host2=>host1: a new issue occurs, the TTL changes to 255 after a few
ICMPv6 replies; example below
host1=>host2: now works fine

Regards
Petr

64 bytes from 2a01:500:10::1: icmp_seq=12 ttl=64 time=0.269 ms
> 64 bytes from 2a01:500:10::1: icmp_seq=13 ttl=64 time=0.213 ms
> 64 bytes from 2a01:500:10::1: icmp_seq=14 ttl=64 time=0.239 ms
> 64 bytes from 2a01:500:10::1: icmp_seq=15 ttl=64 time=0.210 ms
> 64 bytes from 2a01:500:10::1: icmp_seq=16 ttl=64 time=5.28 ms
> 64 bytes from 2a01:500:10::1: icmp_seq=17 ttl=64 time=0.240 ms
> 64 bytes from 2a01:500:10::1: icmp_seq=18 ttl=255 time=0.268 ms
> 64 bytes from 2a01:500:10::1: icmp_seq=19 ttl=255 time=0.210 ms
> 64 bytes from 2a01:500:10::1: icmp_seq=20 ttl=255 time=0.276 ms
> 64 bytes from 2a01:500:10::1: icmp_seq=21 ttl=255 time=0.484 ms
> 64 bytes from 2a01:500:10::1: icmp_seq=22 ttl=255 time=3.87 ms
> 64 bytes from 2a01:500:10::1: icmp_seq=23 ttl=255 time=9.34 ms
>

On Mon, Oct 11, 2021 at 9:54 PM Matthew Smith wrote:

>
> Hi Petr,
>
> I don't think it is related to patch 31122; this seems to happen whether
> you are using that patch or not. Both ip4-icmp-echo-request and
> ip6-icmp-echo-request set outbound echo replies to have a TTL/hop-limit of
> 64. The IPv4 node sets the VNET_BUFFER_F_LOCALLY_ORIGINATED flag on the
> packets it sends but the IPv6 node neglects to do this. Setting that flag
> will prevent a rewrite node from decrementing the TTL/hop-limit later on.
>
> Here's a patch which sets the flag for outbound IPv6 echo replies -
> https://gerrit.fd.io/r/c/vpp/+/34040. Can you please test it and report
> whether it solves the problem?
>
> Thanks,
> -Matt
>
>
> On Mon, Oct 11, 2021 at 1:03 PM Petr Boltík  wrote:
>
>> Hi,
>> I have found an issue with linux-cp patch 31122.
>> [192.168.15.1/24 host1]=>ether=>[host2-Vpp+LinuxCP 192.168.15.2/24]
>> IPv4 ICMP TTL:
>> host2=>host1 TTL64
>> host1=>host2 TTL64
>>
>> IPv6 ICMP TTL:
>> host2=>host1 TTL64
>> host1=>host2 TTL63 (IPv6 should use the same TTL-preserving mechanism as
>> IPv4, so the reply's TTL is not decremented)
>>
>> Thanks for getting this report into the right hands.
>> Regards Petr
>>
>> 
>>
>>

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20312): https://lists.fd.io/g/vpp-dev/message/20312
Mute This Topic: https://lists.fd.io/mt/86243713/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] linux-cp patches 31122 ipv6 mtu TTL issue

2021-10-11 Thread Matthew Smith via lists.fd.io
Hi Petr,

I don't think it is related to patch 31122; this seems to happen whether
you are using that patch or not. Both ip4-icmp-echo-request and
ip6-icmp-echo-request set outbound echo replies to have a TTL/hop-limit of
64. The IPv4 node sets the VNET_BUFFER_F_LOCALLY_ORIGINATED flag on the
packets it sends but the IPv6 node neglects to do this. Setting that flag
will prevent a rewrite node from decrementing the TTL/hop-limit later on.

Here's a patch which sets the flag for outbound IPv6 echo replies -
https://gerrit.fd.io/r/c/vpp/+/34040. Can you please test it and report
whether it solves the problem?
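
In sketch form, the idea is just to set that flag on each reply buffer
before it reaches the rewrite node (this is an illustration, not the
gerrit change itself, and the helper name is invented):

  #include <vlib/vlib.h>
  #include <vnet/buffer.h>

  /* Marking a reply buffer as locally originated tells the rewrite
   * node to leave the hop limit set by the echo node (64) untouched. */
  static_always_inline void
  mark_reply_locally_originated (vlib_buffer_t *b)
  {
    b->flags |= VNET_BUFFER_F_LOCALLY_ORIGINATED;
  }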

Thanks,
-Matt


On Mon, Oct 11, 2021 at 1:03 PM Petr Boltík  wrote:

> Hi,
> I have found an issue with linux-cp patch 31122.
> [192.168.15.1/24 host1]=>ether=>[host2-Vpp+LinuxCP 192.168.15.2/24]
> IPv4 ICMP TTL:
> host2=>host1 TTL64
> host1=>host2 TTL64
>
> IPv6 ICMP TTL:
> host2=>host1 TTL64
> host1=>host2 TTL63 (IPv6 should use the same TTL-preserving mechanism as
> IPv4, so the reply's TTL is not decremented)
>
> Thanks for getting this report into the right hands.
> Regards Petr
>
> 
>
>

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20311): https://lists.fd.io/g/vpp-dev/message/20311
Mute This Topic: https://lists.fd.io/mt/86243713/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[vpp-dev] linux-cp patches 31122 ipv6 mtu TTL issue

2021-10-11 Thread Petr Boltík
Hi,
I have found an issue with linux-cp patch 31122.
[192.168.15.1/24 host1]=>ether=>[host2-Vpp+LinuxCP 192.168.15.2/24]
IPv4 ICMP TTL:
host2=>host1 TTL64
host1=>host2 TTL64

IPv6 ICMP TTL:
host2=>host1 TTL64
host1=>host2 TTL63 (IPv6 should use the same TTL-preserving mechanism as
IPv4, so the reply's TTL is not decremented)

Thanks for getting this report into the right hands.
Regards Petr

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20310): https://lists.fd.io/g/vpp-dev/message/20310
Mute This Topic: https://lists.fd.io/mt/86243713/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[vpp-dev] About VPP's documentation

2021-10-11 Thread Nathan Skrzypczak
tl;dr: VPP's docs - no more doxygen, only sphinx, automated deploys & a
better fd.io site; check the preview at [0]

Hi everyone,

We spent some time during the past weeks improving VPP's documentation with
Dave Wallace & Andrew Yourtchenko. The main goals of the exercise were to
make the documentation easier to consume for VPP's users, friendlier to
update for contributors, and smoother from a CI standpoint. Getting
documentation published should be as easy as:
* Creating a .rst file in your code folder
* Symlinking it at the right place in the docs/ folder
* Merging the patch (see the sketch below)
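
A minimal sketch of that workflow, assuming a hypothetical plugin named
"foo" (the exact target path under docs/ depends on the layout in [1]):

  # write the doc next to the code
  $EDITOR src/plugins/foo/foo_doc.rst
  # symlink it into the docs tree
  ln -s ../../src/plugins/foo/foo_doc.rst docs/plugins/foo_doc.rst
  # check everything is wired up before pushing the patch
  ./extras/scripts/check_documentation.sh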

This resulted in a first preview [0] sitting on top of a revamped fd.io
site. The main patches are:
- the main documentation patch [1], which now contains an
`extras/scripts/check_documentation.sh` script. It sits on top of a few
patches translating existing markdown to reStructuredText and nit-fixing
things
- an evolution of the fd.io website [2]
This work also relies on evolutions to the CI [3] (thanks a lot Dave &
Andrew for this!) and to the infra hosting the docs, in order to make
docs-checking & docs-publishing fast & runnable on every patch.

We're planning on discussing this during the next Community meeting, but
feel free to share comments & feedback in replies to this email.

Cheers
-Nathan

[0] https://deploy-preview-102--fdio.netlify.app/docs/vpp/master/index.html
[1] https://gerrit.fd.io/r/c/vpp/+/33545
[2] https://github.com/FDio/site/pull/104
[3] https://gerrit.fd.io/r/c/ci-management/+/33992

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20309): https://lists.fd.io/g/vpp-dev/message/20309
Mute This Topic: https://lists.fd.io/mt/86234608/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] show dpdk buffers are fully allocated slowly while sending 1G traffic after some minutes

2021-10-11 Thread Benoit Ganne (bganne) via lists.fd.io
VPP uses its own buffer allocator under the hood; you should monitor the
output of 'show buffers' instead.
If you still see buffer leaks, you can turn on buffer tracing with 'set buffer 
traces' and share the output of 'show buffer traces'.
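
For example, from a shell on the host (assuming vppctl can reach VPP's CLI
socket), watch whether the used count keeps growing while traffic runs:

  watch -n 1 vppctl show buffers

and, once tracing is on, capture the trace output the same way:

  vppctl show buffer traces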

Best
ben

> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Akash S R
> Sent: Monday, October 11, 2021 08:27
> To: vpp-dev 
> Subject: [vpp-dev] show dpdk buffers are fully allocated slowly while
> sending 1G traffic after some minutes
> 
> Hello Mates/OLE/Dave,
> 
> I have an important query on "show dpdk buffers" allocation. I have only 1
> NUMA node on my PC, and buffers-per-numa is increased from 16800 (the
> default) to 128000. I am sending 1G of traffic, and after some minutes
> "show dpdk buffers" reports that free buffers reach 0 and all buffers are
> allocated. I have seen the allocated count grow slowly over time.
> Why are the buffers never refreshed (freed), and why do they all slowly
> end up allocated? Please advise. This is a serious issue with a
> performance impact, and I need to find a solution, so please help.
> 
> /Akash

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20308): https://lists.fd.io/g/vpp-dev/message/20308
Mute This Topic: https://lists.fd.io/mt/86230891/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] show dpdk buffers are fully allocated slowly while sending 1G traffic after some minutes

2021-10-11 Thread Ole Troan
Akash,

> I have an important query on "show dpdk buffers" allocation. I have only 1
> NUMA node on my PC, and buffers-per-numa is increased from 16800 (the
> default) to 128000. I am sending 1G of traffic, and after some minutes
> "show dpdk buffers" reports that free buffers reach 0 and all buffers are
> allocated. I have seen the allocated count grow slowly over time.
> Why are the buffers never refreshed (freed), and why do they all slowly
> end up allocated? Please advise. This is a serious issue with a
> performance impact, and I need to find a solution, so please help.

sounds like you are leaking buffers and you need to figure out why.
that is, buffers are being allocated somewhere and never freed.

on a scale from 0 to 10 how well would you rate yourself on the "provided 
enough information about the problem" axis? :-D

O.




-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20307): https://lists.fd.io/g/vpp-dev/message/20307
Mute This Topic: https://lists.fd.io/mt/86230891/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[vpp-dev] show dpdk buffers are fully allocated slowly while sending 1G traffic after some minutes

2021-10-11 Thread Akash S R
Hello Mates/OLE/Dave,

I have an important query on "show dpdk buffers" allocation. I have only 1
NUMA node on my PC, and buffers-per-numa is increased from 16800 (the
default) to 128000. I am sending 1G of traffic, and after some minutes
"show dpdk buffers" reports that free buffers reach 0 and all buffers are
allocated. I have seen the allocated count grow slowly over time.
Why are the buffers never refreshed (freed), and why do they all slowly end
up allocated? Please advise. This is a serious issue with a performance
impact, and I need to find a solution, so please help.
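
For reference, this is the relevant stanza in my /etc/vpp/startup.conf (the
path may differ per install):

  buffers {
    # increased from the 16800 default
    buffers-per-numa 128000
  }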

/Akash

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#20306): https://lists.fd.io/g/vpp-dev/message/20306
Mute This Topic: https://lists.fd.io/mt/86230891/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-