[vpp-dev] Another fix to avoid assertion failure related to vlib_time_now()

2020-05-26 Thread Elias Rudberg
Hello again,

Here is another fix to avoid an assertion failure due to vlib_time_now()
being called with a vm corresponding to a different thread, this time in
nat_ipfix_logging.c:

https://gerrit.fd.io/r/c/vpp/+/27281

Please have a look and merge if it seems okay. Maybe it could be done
more elegantly; this approach required changes in several places to
pass the thread_index value along.

Best regards,
Elias



[vpp-dev] Fix in set_ipfix_exporter_command_fn() to avoid segmentation fault crash

2020-05-26 Thread Elias Rudberg
Hello VPP experts,

When testing the current master branch for NAT with ipfix logging
enabled, we encountered a segmentation fault crash. It seems this was
caused by a bug in set_ipfix_exporter_command_fn() in
vnet/ipfix-export/flow_report.c, where the variable collector_port is
declared as u16:

u16 collector_port = UDP_DST_PORT_ipfix;

and then a few lines later the address of that variable is passed as
an argument to unformat() with %u, like this:

else if (unformat (input, "port %u", &collector_port))

I think that is wrong because %u corresponds to a 32-bit variable, so
passing the address of a 16-bit variable can corrupt the data next to
it. In our case, the "fib_index" variable that happened to be nearby on
the stack got corrupted, leading to a crash later on.
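A sketch of one safe pattern (an illustration, not necessarily the
exact merged fix): since "%u" writes 32 bits, parse into a u32
temporary and range-check before narrowing to the u16 field:

u16 collector_port = UDP_DST_PORT_ipfix;
u32 port_tmp = 0;

if (unformat (input, "port %u", &port_tmp))
  {
    /* "%u" stored a full 32-bit value into port_tmp; validate it
       before assigning to the 16-bit collector_port. */
    if (port_tmp > 65535)
      return clib_error_return (0, "invalid port %u", port_tmp);
    collector_port = (u16) port_tmp;
  }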

The problem only appears in release builds and not in debug builds,
perhaps because compiler optimization affects how variables are laid
out on the stack. The compiler (clang or gcc) may also matter, which
could explain why the problem was not seen earlier.

Here is a fix, please check it and merge if you agree:
https://gerrit.fd.io/r/c/vpp/+/27280

Best regards,
Elias


Re: [vpp-dev] VPP fails to start - error message EAL: FATAL: Cannot get hugepage information.

2020-05-26 Thread Manoj Iyer
$ lscpu
Architecture: aarch64
Byte Order:   Little Endian
CPU(s):   8
On-line CPU(s) list:  0
Off-line CPU(s) list: 1-7
Thread(s) per core:   1
Core(s) per socket:   1
Socket(s):1
NUMA node(s): 1
Vendor ID:ARM
Model:3
Model name:   Cortex-A72
Stepping: r0p3
BogoMIPS: 250.00
L1d cache:unknown size
L1i cache:unknown size
L2 cache: unknown size
NUMA node0 CPU(s):0
Flags:fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid

$ grep .  /sys/kernel/mm/hugepages/hugepages-*/*
/sys/kernel/mm/hugepages/hugepages-1048576kB/free_hugepages:0
/sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages:0
/sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages_mempolicy:0
/sys/kernel/mm/hugepages/hugepages-1048576kB/nr_overcommit_hugepages:0
/sys/kernel/mm/hugepages/hugepages-1048576kB/resv_hugepages:0
/sys/kernel/mm/hugepages/hugepages-1048576kB/surplus_hugepages:0
/sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages:1024
/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages:1024
/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages_mempolicy:1024
/sys/kernel/mm/hugepages/hugepages-2048kB/nr_overcommit_hugepages:0
/sys/kernel/mm/hugepages/hugepages-2048kB/resv_hugepages:0
/sys/kernel/mm/hugepages/hugepages-2048kB/surplus_hugepages:0
/sys/kernel/mm/hugepages/hugepages-32768kB/free_hugepages:0
/sys/kernel/mm/hugepages/hugepages-32768kB/nr_hugepages:0
/sys/kernel/mm/hugepages/hugepages-32768kB/nr_hugepages_mempolicy:0
/sys/kernel/mm/hugepages/hugepages-32768kB/nr_overcommit_hugepages:0
/sys/kernel/mm/hugepages/hugepages-32768kB/resv_hugepages:0
/sys/kernel/mm/hugepages/hugepages-32768kB/surplus_hugepages:0
/sys/kernel/mm/hugepages/hugepages-64kB/free_hugepages:0
/sys/kernel/mm/hugepages/hugepages-64kB/nr_hugepages:0
/sys/kernel/mm/hugepages/hugepages-64kB/nr_hugepages_mempolicy:0
/sys/kernel/mm/hugepages/hugepages-64kB/nr_overcommit_hugepages:0
/sys/kernel/mm/hugepages/hugepages-64kB/resv_hugepages:0
/sys/kernel/mm/hugepages/hugepages-64kB/surplus_hugepages:0


$ grep . /sys/devices/system/node/node*/hugepages/hugepages-*/*
/sys/devices/system/node/node0/hugepages/hugepages-1048576kB/free_hugepages:0
/sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages:0
/sys/devices/system/node/node0/hugepages/hugepages-1048576kB/surplus_hugepages:
/sys/devices/system/node/node0/hugepages/hugepages-2048kB/free_hugepages:1024
/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages:1024
/sys/devices/system/node/node0/hugepages/hugepages-2048kB/surplus_hugepages:0
/sys/devices/system/node/node0/hugepages/hugepages-32768kB/free_hugepages:0
/sys/devices/system/node/node0/hugepages/hugepages-32768kB/nr_hugepages:0
/sys/devices/system/node/node0/hugepages/hugepages-32768kB/surplus_hugepages:0
/sys/devices/system/node/node0/hugepages/hugepages-64kB/free_hugepages:0
/sys/devices/system/node/node0/hugepages/hugepages-64kB/nr_hugepages:0
/sys/devices/system/node/node0/hugepages/hugepages-64kB/surplus_hugepages:0
ubuntu@sst100:~$

From: Damjan Marion 
Sent: Tuesday, May 26, 2020 6:01 PM
To: Manoj Iyer 
Cc: bga...@cisco.com ; vpp-dev@lists.fd.io 
; Rodney Schmidt ; Kshitij Sudan 

Subject: Re: [vpp-dev] VPP fails to start - error message EAL: FATAL: Cannot 
get hugepage information.

Can you capture:

lscpu

grep . /sys/kernel/mm/hugepages/hugepages-*/*

grep . /sys/devices/system/node/node*/hugepages/hugepages-*/*

—
Damjan

> On 27 May 2020, at 00:50, Manoj Iyer  wrote:
>
> But the issue is VPP's dpdk plugin fails and the VPP service is not started as a
> result.
> From: Damjan Marion 
> Sent: Tuesday, May 26, 2020 5:44 PM
> To: bga...@cisco.com 
> Cc: Manoj Iyer ; vpp-dev@lists.fd.io 
> ; Rodney Schmidt ; Kshitij Sudan 
> 
> Subject: Re: [vpp-dev] VPP fails to start - error message EAL: FATAL: Cannot 
> get hugepage information.
>
>
> VPP passes --in-memory so there should be no hugepage files created in the
> filesystem.
>
>
> > On 26 May 2020, at 18:42, Benoit Ganne (bganne) via lists.fd.io 
> >  wrote:
> >
> > Hi Manoj,
> >
> > The issue is because of DPDK initialization. In the working conf, you 
> > disable DPDK plugin so DPDK is not initialized and everything is fine.
> > Can you check whether /mnt/huge is full of dpdk-created stale files? I saw
> > this happen from time to time. The fix is simple, just remove the stale
> > files.
> >
> > Best
> > ben
> >
> >> -----Original Message-----
> >> From: vpp-dev@lists.fd.io  On Behalf Of Manoj Iyer
> >> Sent: mardi 26 mai 2020 18:20
> >> To: vpp-dev@lists.fd.io
> >> Cc: Rodney Schmidt ; Kshitij Sudan
> >> 
> >> Subject: [vpp-dev] VPP fails to start - error message EAL: FATAL: Cannot
> >> get hugepage information.
> >>
> >> Hello,
> >>
> >>
> >> I am trying to start the VPP (19.04) service on an ARM64 system. VPP fails
> >> with the message:
> >>
> >>
> >> 

Re: [vpp-dev] VPP fails to start - error message EAL: FATAL: Cannot get hugepage information.

2020-05-26 Thread Damjan Marion via lists.fd.io
Can you capture:

lscpu

grep . /sys/kernel/mm/hugepages/hugepages-*/*

grep . /sys/devices/system/node/node*/hugepages/hugepages-*/*

— 
Damjan

> On 27 May 2020, at 00:50, Manoj Iyer  wrote:
> 
> But the issue is VPP's dpdk plugin fails and the VPP service is not started as a
> result.
> From: Damjan Marion 
> Sent: Tuesday, May 26, 2020 5:44 PM
> To: bga...@cisco.com 
> Cc: Manoj Iyer ; vpp-dev@lists.fd.io 
> ; Rodney Schmidt ; Kshitij Sudan 
> 
> Subject: Re: [vpp-dev] VPP fails to start - error message EAL: FATAL: Cannot 
> get hugepage information.
>
> 
> VPP passes --in-memory so there should be no hugepage files created in the
> filesystem.
> 
> 
> > On 26 May 2020, at 18:42, Benoit Ganne (bganne) via lists.fd.io 
> >  wrote:
> > 
> > Hi Manoj,
> > 
> > The issue is because of DPDK initialization. In the working conf, you 
> > disable DPDK plugin so DPDK is not initialized and everything is fine.
> > Can you check whether /mnt/huge is full of dpdk-created stale files? I saw
> > this happen from time to time. The fix is simple, just remove the stale
> > files.
> > 
> > Best
> > ben
> > 
> >> -----Original Message-----
> >> From: vpp-dev@lists.fd.io  On Behalf Of Manoj Iyer
> >> Sent: mardi 26 mai 2020 18:20
> >> To: vpp-dev@lists.fd.io
> >> Cc: Rodney Schmidt ; Kshitij Sudan
> >> 
> >> Subject: [vpp-dev] VPP fails to start - error message EAL: FATAL: Cannot
> >> get hugepage information.
> >> 
> >> Hello,
> >> 
> >> 
> >> I am trying to start the VPP (19.04) service on an ARM64 system. VPP fails
> >> with the message:
> >> 
> >> 
> >> /usr/bin/vpp[1252]: dpdk: EAL init args: -c 1 -n 4 --in-memory --file-
> >> prefix vpp -w 0008:01:00.0 --master-lcore 0
> >> 
> >> EAL: FATAL: Cannot get hugepage information.
> >> 
> >> vpp[1252]: dpdk_config: rte_eal_init returned -1
> >> 
> >> 
> >> But when I start vpp as no-daemon manually, I can start VPP and use vppctl
> >> to get a console prompt. Could you please help me figure out why my
> >> service fails with "Cannot get hugepage information" ?
> >> 
> >> 
> >> Here is my service setup, although I am starting the service from command
> >> line, the exact setup in systemd service fails the same way:
> >> 
> >> 
> >> $ cat /proc/meminfo | grep -i huge
> >> 
> >> AnonHugePages: 0 kB
> >> 
> >> ShmemHugePages:0 kB
> >> 
> >> FileHugePages: 0 kB
> >> 
> >> HugePages_Total:1024
> >> 
> >> HugePages_Free: 1024
> >> 
> >> HugePages_Rsvd:0
> >> 
> >> HugePages_Surp:0
> >> 
> >> Hugepagesize:   2048 kB
> >> 
> >> Hugetlb: 2097152 kB
> >> 
> >> 
> >> $ mount | grep hugetlbfs
> >> 
> >> hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
> >> 
> >> nodev on /mnt/huge type hugetlbfs (rw,relatime,pagesize=2M)
> >> 
> >> 
> >> # modprobe igb_uio
> >> 
> >> # dpdk-devbind -u 0008:01:00.0
> >> 
> >> # dpdk-devbind --bind=igb_uio 0008:01:00.0
> >> 
> >> # mkdir -p /run/vpp/
> >> 
> >> # vpp -c /usr/share/vpp/vpp.conf
> >> 
> >> vlib_plugin_early_init:361: plugin path /usr/lib/aarch64-linux-
> >> gnu/vpp_plugins/
> >> 
> >> load_one_plugin:117: Plugin disabled (default): abf_plugin.so
> >> 
> >> load_one_plugin:117: Plugin disabled (default): acl_plugin.so
> >> 
> >> load_one_plugin:117: Plugin disabled (default): avf_plugin.so
> >> 
> >> load_one_plugin:117: Plugin disabled (default): cdp_plugin.so
> >> 
> >> load_one_plugin:117: Plugin disabled (default): crypto_openssl_plugin.so
> >> 
> >> load_one_plugin:117: Plugin disabled (default): ct6_plugin.so
> >> 
> >> load_one_plugin:189: Loaded plugin: dpdk_plugin.so (Data Plane Development
> >> Kit (DPDK))
> >> 
> >> load_one_plugin:117: Plugin disabled (default): flowprobe_plugin.so
> >> 
> >> load_one_plugin:117: Plugin disabled (default): gbp_plugin.so
> >> 
> >> load_one_plugin:117: Plugin disabled (default): gtpu_plugin.so
> >> 
> >> load_one_plugin:117: Plugin disabled (default): igmp_plugin.so
> >> 
> >> load_one_plugin:117: Plugin disabled (default): ikev2_plugin.so
> >> 
> >> load_one_plugin:117: Plugin disabled (default): ila_plugin.so
> >> 
> >> load_one_plugin:117: Plugin disabled (default): ioam_plugin.so
> >> 
> >> load_one_plugin:117: Plugin disabled (default): ixge_plugin.so
> >> 
> >> load_one_plugin:117: Plugin disabled (default): l2e_plugin.so
> >> 
> >> load_one_plugin:117: Plugin disabled (default): lacp_plugin.so
> >> 
> >> load_one_plugin:117: Plugin disabled (default): lb_plugin.so
> >> 
> >> load_one_plugin:117: Plugin disabled (default): mactime_plugin.so
> >> 
> >> load_one_plugin:117: Plugin disabled (default): map_plugin.so
> >> 
> >> load_one_plugin:117: Plugin disabled (default): memif_plugin.so
> >> 
> >> load_one_plugin:117: Plugin disabled (default): nat_plugin.so
> >> 
> >> load_one_plugin:117: Plugin disabled (default): nsh_plugin.so
> >> 
> >> load_one_plugin:117: Plugin disabled (default): nsim_plugin.so
> >> 
> >> load_one_plugin:117: Plugin disabled (default): perfmon_plugin.so
> >> 
> >> 

Re: [vpp-dev] VPP fails to start - error message EAL: FATAL: Cannot get hugepage information.

2020-05-26 Thread Manoj Iyer
But the issue is that VPP's dpdk plugin fails and the VPP service is not
started as a result.

From: Damjan Marion 
Sent: Tuesday, May 26, 2020 5:44 PM
To: bga...@cisco.com 
Cc: Manoj Iyer ; vpp-dev@lists.fd.io ; 
Rodney Schmidt ; Kshitij Sudan 
Subject: Re: [vpp-dev] VPP fails to start - error message EAL: FATAL: Cannot 
get hugepage information.


VPP passes --in-memory so there should be no hugepage files created in the
filesystem.


> On 26 May 2020, at 18:42, Benoit Ganne (bganne) via lists.fd.io 
>  wrote:
>
> Hi Manoj,
>
> The issue is because of DPDK initialization. In the working conf, you disable 
> DPDK plugin so DPDK is not initialized and everything is fine.
> Can you check whether /mnt/huge is full of dpdk-created stale files? I saw
> this happen from time to time. The fix is simple, just remove the stale
> files.
>
> Best
> ben
>
>> -----Original Message-----
>> From: vpp-dev@lists.fd.io  On Behalf Of Manoj Iyer
>> Sent: mardi 26 mai 2020 18:20
>> To: vpp-dev@lists.fd.io
>> Cc: Rodney Schmidt ; Kshitij Sudan
>> 
>> Subject: [vpp-dev] VPP fails to start - error message EAL: FATAL: Cannot
>> get hugepage information.
>>
>> Hello,
>>
>>
>> I am trying to start the VPP (19.04) service on an ARM64 system. VPP fails
>> with the message:
>>
>>
>> /usr/bin/vpp[1252]: dpdk: EAL init args: -c 1 -n 4 --in-memory --file-
>> prefix vpp -w 0008:01:00.0 --master-lcore 0
>>
>> EAL: FATAL: Cannot get hugepage information.
>>
>> vpp[1252]: dpdk_config: rte_eal_init returned -1
>>
>>
>> But when I start vpp as no-daemon manually, I can start VPP and use vppctl
>> to get a console prompt. Could you please help me figure out why my
>> service fails with "Cannot get hugepage information" ?
>>
>>
>> Here is my service setup, although I am starting the service from command
>> line, the exact setup in systemd service fails the same way:
>>
>>
>> $ cat /proc/meminfo | grep -i huge
>>
>> AnonHugePages: 0 kB
>>
>> ShmemHugePages:0 kB
>>
>> FileHugePages: 0 kB
>>
>> HugePages_Total:1024
>>
>> HugePages_Free: 1024
>>
>> HugePages_Rsvd:0
>>
>> HugePages_Surp:0
>>
>> Hugepagesize:   2048 kB
>>
>> Hugetlb: 2097152 kB
>>
>>
>> $ mount | grep hugetlbfs
>>
>> hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
>>
>> nodev on /mnt/huge type hugetlbfs (rw,relatime,pagesize=2M)
>>
>>
>> # modprobe igb_uio
>>
>> # dpdk-devbind -u 0008:01:00.0
>>
>> # dpdk-devbind --bind=igb_uio 0008:01:00.0
>>
>> # mkdir -p /run/vpp/
>>
>> # vpp -c /usr/share/vpp/vpp.conf
>>
>> vlib_plugin_early_init:361: plugin path /usr/lib/aarch64-linux-
>> gnu/vpp_plugins/
>>
>> load_one_plugin:117: Plugin disabled (default): abf_plugin.so
>>
>> load_one_plugin:117: Plugin disabled (default): acl_plugin.so
>>
>> load_one_plugin:117: Plugin disabled (default): avf_plugin.so
>>
>> load_one_plugin:117: Plugin disabled (default): cdp_plugin.so
>>
>> load_one_plugin:117: Plugin disabled (default): crypto_openssl_plugin.so
>>
>> load_one_plugin:117: Plugin disabled (default): ct6_plugin.so
>>
>> load_one_plugin:189: Loaded plugin: dpdk_plugin.so (Data Plane Development
>> Kit (DPDK))
>>
>> load_one_plugin:117: Plugin disabled (default): flowprobe_plugin.so
>>
>> load_one_plugin:117: Plugin disabled (default): gbp_plugin.so
>>
>> load_one_plugin:117: Plugin disabled (default): gtpu_plugin.so
>>
>> load_one_plugin:117: Plugin disabled (default): igmp_plugin.so
>>
>> load_one_plugin:117: Plugin disabled (default): ikev2_plugin.so
>>
>> load_one_plugin:117: Plugin disabled (default): ila_plugin.so
>>
>> load_one_plugin:117: Plugin disabled (default): ioam_plugin.so
>>
>> load_one_plugin:117: Plugin disabled (default): ixge_plugin.so
>>
>> load_one_plugin:117: Plugin disabled (default): l2e_plugin.so
>>
>> load_one_plugin:117: Plugin disabled (default): lacp_plugin.so
>>
>> load_one_plugin:117: Plugin disabled (default): lb_plugin.so
>>
>> load_one_plugin:117: Plugin disabled (default): mactime_plugin.so
>>
>> load_one_plugin:117: Plugin disabled (default): map_plugin.so
>>
>> load_one_plugin:117: Plugin disabled (default): memif_plugin.so
>>
>> load_one_plugin:117: Plugin disabled (default): nat_plugin.so
>>
>> load_one_plugin:117: Plugin disabled (default): nsh_plugin.so
>>
>> load_one_plugin:117: Plugin disabled (default): nsim_plugin.so
>>
>> load_one_plugin:117: Plugin disabled (default): perfmon_plugin.so
>>
>> load_one_plugin:117: Plugin disabled (default): pppoe_plugin.so
>>
>> load_one_plugin:117: Plugin disabled (default): quic_plugin.so
>>
>> load_one_plugin:117: Plugin disabled (default): rdma_plugin.so
>>
>> load_one_plugin:117: Plugin disabled (default): srv6ad_plugin.so
>>
>> load_one_plugin:117: Plugin disabled (default): srv6am_plugin.so
>>
>> load_one_plugin:117: Plugin disabled (default): srv6as_plugin.so
>>
>> load_one_plugin:117: Plugin disabled (default): stn_plugin.so
>>
>> load_one_plugin:117: Plugin disabled (default): svs_plugin.so
>>
>> 

Re: [vpp-dev] VPP fails to start - error message EAL: FATAL: Cannot get hugepage information.

2020-05-26 Thread Damjan Marion via lists.fd.io

VPP passes --in-memory so there should be no hugepage files created in the
filesystem.


> On 26 May 2020, at 18:42, Benoit Ganne (bganne) via lists.fd.io 
>  wrote:
> 
> Hi Manoj,
> 
> The issue is because of DPDK initialization. In the working conf, you disable 
> DPDK plugin so DPDK is not initialized and everything is fine.
> Can you check whether /mnt/huge is full of dpdk-created stale files? I saw
> this happen from time to time. The fix is simple, just remove the stale
> files.
> 
> Best
> ben
> 
>> -----Original Message-----
>> From: vpp-dev@lists.fd.io  On Behalf Of Manoj Iyer
>> Sent: mardi 26 mai 2020 18:20
>> To: vpp-dev@lists.fd.io
>> Cc: Rodney Schmidt ; Kshitij Sudan
>> 
>> Subject: [vpp-dev] VPP fails to start - error message EAL: FATAL: Cannot
>> get hugepage information.
>> 
>> Hello,
>> 
>> 
>> I am trying to start the VPP (19.04) service on an ARM64 system. VPP fails
>> with the message:
>> 
>> 
>> /usr/bin/vpp[1252]: dpdk: EAL init args: -c 1 -n 4 --in-memory --file-
>> prefix vpp -w 0008:01:00.0 --master-lcore 0
>> 
>> EAL: FATAL: Cannot get hugepage information.
>> 
>> vpp[1252]: dpdk_config: rte_eal_init returned -1
>> 
>> 
>> But when I start vpp as no-daemon manually, I can start VPP and use vppctl
>> to get a console prompt. Could you please help me figure out why my
>> service fails with "Cannot get hugepage information" ?
>> 
>> 
>> Here is my service setup, although I am starting the service from command
>> line, the exact setup in systemd service fails the same way:
>> 
>> 
>> $ cat /proc/meminfo | grep -i huge
>> 
>> AnonHugePages: 0 kB
>> 
>> ShmemHugePages:0 kB
>> 
>> FileHugePages: 0 kB
>> 
>> HugePages_Total:1024
>> 
>> HugePages_Free: 1024
>> 
>> HugePages_Rsvd:0
>> 
>> HugePages_Surp:0
>> 
>> Hugepagesize:   2048 kB
>> 
>> Hugetlb: 2097152 kB
>> 
>> 
>> $ mount | grep hugetlbfs
>> 
>> hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
>> 
>> nodev on /mnt/huge type hugetlbfs (rw,relatime,pagesize=2M)
>> 
>> 
>> # modprobe igb_uio
>> 
>> # dpdk-devbind -u 0008:01:00.0
>> 
>> # dpdk-devbind --bind=igb_uio 0008:01:00.0
>> 
>> # mkdir -p /run/vpp/
>> 
>> # vpp -c /usr/share/vpp/vpp.conf
>> 
>> vlib_plugin_early_init:361: plugin path /usr/lib/aarch64-linux-
>> gnu/vpp_plugins/
>> 
>> load_one_plugin:117: Plugin disabled (default): abf_plugin.so
>> 
>> load_one_plugin:117: Plugin disabled (default): acl_plugin.so
>> 
>> load_one_plugin:117: Plugin disabled (default): avf_plugin.so
>> 
>> load_one_plugin:117: Plugin disabled (default): cdp_plugin.so
>> 
>> load_one_plugin:117: Plugin disabled (default): crypto_openssl_plugin.so
>> 
>> load_one_plugin:117: Plugin disabled (default): ct6_plugin.so
>> 
>> load_one_plugin:189: Loaded plugin: dpdk_plugin.so (Data Plane Development
>> Kit (DPDK))
>> 
>> load_one_plugin:117: Plugin disabled (default): flowprobe_plugin.so
>> 
>> load_one_plugin:117: Plugin disabled (default): gbp_plugin.so
>> 
>> load_one_plugin:117: Plugin disabled (default): gtpu_plugin.so
>> 
>> load_one_plugin:117: Plugin disabled (default): igmp_plugin.so
>> 
>> load_one_plugin:117: Plugin disabled (default): ikev2_plugin.so
>> 
>> load_one_plugin:117: Plugin disabled (default): ila_plugin.so
>> 
>> load_one_plugin:117: Plugin disabled (default): ioam_plugin.so
>> 
>> load_one_plugin:117: Plugin disabled (default): ixge_plugin.so
>> 
>> load_one_plugin:117: Plugin disabled (default): l2e_plugin.so
>> 
>> load_one_plugin:117: Plugin disabled (default): lacp_plugin.so
>> 
>> load_one_plugin:117: Plugin disabled (default): lb_plugin.so
>> 
>> load_one_plugin:117: Plugin disabled (default): mactime_plugin.so
>> 
>> load_one_plugin:117: Plugin disabled (default): map_plugin.so
>> 
>> load_one_plugin:117: Plugin disabled (default): memif_plugin.so
>> 
>> load_one_plugin:117: Plugin disabled (default): nat_plugin.so
>> 
>> load_one_plugin:117: Plugin disabled (default): nsh_plugin.so
>> 
>> load_one_plugin:117: Plugin disabled (default): nsim_plugin.so
>> 
>> load_one_plugin:117: Plugin disabled (default): perfmon_plugin.so
>> 
>> load_one_plugin:117: Plugin disabled (default): pppoe_plugin.so
>> 
>> load_one_plugin:117: Plugin disabled (default): quic_plugin.so
>> 
>> load_one_plugin:117: Plugin disabled (default): rdma_plugin.so
>> 
>> load_one_plugin:117: Plugin disabled (default): srv6ad_plugin.so
>> 
>> load_one_plugin:117: Plugin disabled (default): srv6am_plugin.so
>> 
>> load_one_plugin:117: Plugin disabled (default): srv6as_plugin.so
>> 
>> load_one_plugin:117: Plugin disabled (default): stn_plugin.so
>> 
>> load_one_plugin:117: Plugin disabled (default): svs_plugin.so
>> 
>> load_one_plugin:117: Plugin disabled (default): tlsmbedtls_plugin.so
>> 
>> load_one_plugin:117: Plugin disabled (default): tlsopenssl_plugin.so
>> 
>> load_one_plugin:117: Plugin disabled (default): unittest_plugin.so
>> 
>> load_one_plugin:117: Plugin disabled (default): vmxnet3_plugin.so
>> 
>> 

Re: [vpp-dev] VPP fails to start - error message EAL: FATAL: Cannot get hugepage information.

2020-05-26 Thread Manoj Iyer
I used the same arguments that vpp dpdk-config uses and passed them to
dpdk-testpmd, with hugepages set to 2M.

$ cat /proc/meminfo | grep -i huge
AnonHugePages: 0 kB
ShmemHugePages:0 kB
FileHugePages: 0 kB
HugePages_Total:1024
HugePages_Free: 1024
HugePages_Rsvd:0
HugePages_Surp:0
Hugepagesize:   2048 kB
Hugetlb: 2097152 kB


$ sudo dpdk-testpmd -c 1 -n 4 --in-memory --huge-dir /mnt/huge --file-prefix 
sst100 -w 0008:01:00.0 --master-lcore 0
EAL: Detected 1 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Selected IOVA mode 'PA'
EAL: No available hugepages reported in hugepages-32768kB
EAL: No free hugepages reported in hugepages-32768kB
EAL: No available hugepages reported in hugepages-32768kB
EAL: No available hugepages reported in hugepages-64kB
EAL: No free hugepages reported in hugepages-64kB
EAL: No available hugepages reported in hugepages-64kB
EAL: No available hugepages reported in hugepages-1048576kB
EAL: No free hugepages reported in hugepages-1048576kB
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: PCI device 0008:01:00.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 14e4:16f0 net_bnxt
EAL: Error - exiting with code: 1
  Cause: No cores defined for forwarding
Check the core mask argument


From: vpp-dev@lists.fd.io  on behalf of Manoj Iyer via 
lists.fd.io 
Sent: Tuesday, May 26, 2020 5:18 PM
To: Shmuel H ; vpp-dev@lists.fd.io ; 
bga...@cisco.com ; Manoj Iyer 
Cc: Rodney Schmidt ; Kshitij Sudan 

Subject: Re: [vpp-dev] VPP fails to start - error message EAL: FATAL: Cannot 
get hugepage information.

I also tried the nuclear options.

Set the kernel command line to

== 1G hugepages ==
default_hugepagesz=1G hugepagesz=1G hugepages=4 hugepagesz=2M hugepages=1024

$ sudo mount -t hugetlbfs -o pagesize=1G nodev /mnt/hugepages1G

$ sudo modprobe igb_uio
$ sudo dpdk-devbind.py --bind=igb_uio 0008:01:00.0
$ sudo vpp -c /usr/share/vpp/vpp.conf

and got:
vpp[882]: dpdk: EAL init args: -c 1 -n 4 --in-memory --huge-dir 
/mnt/hugepages1G --file-prefix sst100 -w 0008:01:00.0 --master-lcore 0
EAL: FATAL: Cannot get hugepage information.
vpp[882]: dpdk_config: rte_eal_init returned -1

== 2M huge pages ==
default_hugepagesz=2M hugepagesz=2M hugepages=1024

$ sudo rmmod bnxt_en bnxt_re
$ sudo modprobe igb_uio
$ sudo dpdk-devbind.py --bind=igb_uio 0008:01:00.0
$ sudo mount -t hugetlbfs  nodev /mnt/huge
$ sudo vpp -c /usr/share/vpp/vpp.conf

and got:
vpp[805]: dpdk: EAL init args: -c 1 -n 4 --in-memory --huge-dir /mnt/huge 
--file-prefix sst100 -w 0008:01:00.0 --master-lcore 0
EAL: FATAL: Cannot get hugepage information.
vpp[805]: dpdk_config: rte_eal_init returned -1


From: vpp-dev@lists.fd.io  on behalf of Manoj Iyer via 
lists.fd.io 
Sent: Tuesday, May 26, 2020 3:32 PM
To: Shmuel H ; vpp-dev@lists.fd.io ; 
bga...@cisco.com 
Cc: Rodney Schmidt ; Kshitij Sudan 

Subject: Re: [vpp-dev] VPP fails to start - error message EAL: FATAL: Cannot 
get hugepage information.

Shmuel,

When I run that command I got the following:

$ sudo dpdk-testpmd
EAL: Detected 1 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No available hugepages reported in hugepages-32768kB
EAL: No available hugepages reported in hugepages-64kB
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: PCI device 0008:00:00.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 14e4:16f0 net_bnxt
EAL: PCI device 0008:01:00.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 14e4:16f0 net_bnxt
EAL: PCI device 0008:01:00.1 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 14e4:16f0 net_bnxt
EAL: PCI device 0008:01:00.2 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 14e4:16f0 net_bnxt
EAL: PCI device 0008:01:00.3 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 14e4:16f0 net_bnxt
testpmd: No probed ethernet devices
EAL: Error - exiting with code: 1
  Cause: No cores defined for forwarding
Check the core mask argument

I set the hugepage counts it was complaining about to 4 and 1024 pages,
unmounted and remounted /mnt/huge, and reran the command:

$ sudo dpdk-testpmd
EAL: Detected 1 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: 4 hugepages of size 33554432 reserved, but no mounted hugetlbfs found for 
that size
EAL: 1024 hugepages of size 65536 reserved, but no mounted hugetlbfs found for 
that size
EAL: 4 hugepages of size 1073741824 reserved, but no mounted hugetlbfs found 
for that size
EAL: Probing VFIO support...
EAL: PCI device 0008:00:00.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe 

Re: [vpp-dev] VPP fails to start - error message EAL: FATAL: Cannot get hugepage information.

2020-05-26 Thread Manoj Iyer
I also tried the nuclear options.

Set the kernel command line to

== 1G hugepages ==
default_hugepagesz=1G hugepagesz=1G hugepages=4 hugepagesz=2M hugepages=1024

$ sudo mount -t hugetlbfs -o pagesize=1G nodev /mnt/hugepages1G

$ sudo modprobe igb_uio
$ sudo dpdk-devbind.py --bind=igb_uio 0008:01:00.0
$ sudo vpp -c /usr/share/vpp/vpp.conf

and got:
vpp[882]: dpdk: EAL init args: -c 1 -n 4 --in-memory --huge-dir 
/mnt/hugepages1G --file-prefix sst100 -w 0008:01:00.0 --master-lcore 0
EAL: FATAL: Cannot get hugepage information.
vpp[882]: dpdk_config: rte_eal_init returned -1

== 2M huge pages ==
default_hugepagesz=2M hugepagesz=2M hugepages=1024

$ sudo rmmod bnxt_en bnxt_re
$ sudo modprobe igb_uio
$ sudo dpdk-devbind.py --bind=igb_uio 0008:01:00.0
$ sudo mount -t hugetlbfs  nodev /mnt/huge
$ sudo vpp -c /usr/share/vpp/vpp.conf

and got:
vpp[805]: dpdk: EAL init args: -c 1 -n 4 --in-memory --huge-dir /mnt/huge 
--file-prefix sst100 -w 0008:01:00.0 --master-lcore 0
EAL: FATAL: Cannot get hugepage information.
vpp[805]: dpdk_config: rte_eal_init returned -1


From: vpp-dev@lists.fd.io  on behalf of Manoj Iyer via 
lists.fd.io 
Sent: Tuesday, May 26, 2020 3:32 PM
To: Shmuel H ; vpp-dev@lists.fd.io ; 
bga...@cisco.com 
Cc: Rodney Schmidt ; Kshitij Sudan 

Subject: Re: [vpp-dev] VPP fails to start - error message EAL: FATAL: Cannot 
get hugepage information.

Shmuel,

When I run that command I got the following:

$ sudo dpdk-testpmd
EAL: Detected 1 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No available hugepages reported in hugepages-32768kB
EAL: No available hugepages reported in hugepages-64kB
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: PCI device 0008:00:00.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 14e4:16f0 net_bnxt
EAL: PCI device 0008:01:00.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 14e4:16f0 net_bnxt
EAL: PCI device 0008:01:00.1 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 14e4:16f0 net_bnxt
EAL: PCI device 0008:01:00.2 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 14e4:16f0 net_bnxt
EAL: PCI device 0008:01:00.3 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 14e4:16f0 net_bnxt
testpmd: No probed ethernet devices
EAL: Error - exiting with code: 1
  Cause: No cores defined for forwarding
Check the core mask argument

I set the hugepage counts it was complaining about to 4 and 1024 pages,
unmounted and remounted /mnt/huge, and reran the command:

$ sudo dpdk-testpmd
EAL: Detected 1 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: 4 hugepages of size 33554432 reserved, but no mounted hugetlbfs found for 
that size
EAL: 1024 hugepages of size 65536 reserved, but no mounted hugetlbfs found for 
that size
EAL: 4 hugepages of size 1073741824 reserved, but no mounted hugetlbfs found 
for that size
EAL: Probing VFIO support...
EAL: PCI device 0008:00:00.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 14e4:16f0 net_bnxt
EAL: PCI device 0008:01:00.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 14e4:16f0 net_bnxt
EAL: PCI device 0008:01:00.1 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 14e4:16f0 net_bnxt
EAL: PCI device 0008:01:00.2 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 14e4:16f0 net_bnxt
EAL: PCI device 0008:01:00.3 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 14e4:16f0 net_bnxt
testpmd: No probed ethernet devices
EAL: Error - exiting with code: 1
  Cause: No cores defined for forwarding
Check the core mask argument


Re-starting the vpp service I get the same error:

/usr/bin/vpp[7774]: dpdk: EAL init args: -c 1 -n 4 --in-memory --huge-dir 
/mnt/huge --file-prefix sst100 -w 0008:01:00.0 --master-lcore 0
vpp[7774]: EAL: FATAL: Cannot get hugepage information.
vpp[7774]: dpdk_config: rte_eal_init returned -1

From: Shmuel H 
Sent: Tuesday, May 26, 2020 3:03 PM
To: Manoj Iyer ; vpp-dev@lists.fd.io ; 
bga...@cisco.com 
Cc: Rodney Schmidt ; Kshitij Sudan 

Subject: Re: [vpp-dev] VPP fails to start - error message EAL: FATAL: Cannot 
get hugepage information.

Strange.

I usually unmount and then mount the hugepages mount point before using it, 
just to be sure.

Maybe the two different mount points confuse DPDK somehow.

By the way, if you have dpdk-testpmd, what is its output? Does it return the 
same error message?

- Shmuel Hazan

mailto:s...@tkos.co.il | tel:+972-523-746-435 | http://tkos.co.il


On 26 May 2020 20:57:30 "Manoj Iyer"  wrote:



From: 

[vpp-dev] Getting to 40G encrypted container networking with Calico/VPP on commodity hardware

2020-05-26 Thread Jerome Tollet via lists.fd.io
Dear VPP community,
This article may be of interest to you:
https://medium.com/fd-io-vpp/getting-to-40g-encrypted-container-networking-with-calico-vpp-on-commodity-hardware-d7144e52659a
Regards,
Jerome


Re: [vpp-dev] GTPu Question

2020-05-26 Thread Andreas Schultz
Hi Govind,

I'm NOT one of the GTPu maintainers, but I do work on a UPF implementation
based on VPP.

The GTPu plugin is IMHO mostly useless and broken by design in its current
state. It assumes that TEIDs are symmetric, which they never are in real
3GPP settings.
There is an outdated change in gerrit (https://gerrit.fd.io/r/c/vpp/+/13134)
that corrects that and other deficits.

AFAIK there is no GTP-C control interface that can control the GTP-U plugin
through VPP's binary API. Only static tunnels through the CLI are doable
(I have not tested that myself). A real PGW/GGSN is therefore not doable.

The plugin also has no support for 3GPP-compliant charging and QoS. This
would be a major problem if you want to evaluate performance, as those
areas are the ones that introduce the highest complexity. Performance tests
on pure GTP encap/decap are IMHO useless for real-world GTP use cases.

Regards,
Andreas

Am Di., 26. Mai 2020 um 05:18 Uhr schrieb Govindarajan Mohandoss <
govindarajan.mohand...@arm.com>:

> Dear GTPu Owners,
>
>I need some help in creating GTPu Origination and Termination config in
> DUT (Running VPP) as described below.
>
>
>
> GTPu Origination:
>
>
>
>
>
> GTPu Termination:
>
>
>
> Whether GTPu plugin has the support to do such mapping (or) I need to
> write a test plugin to do such mapping ?
>
>
>
> I found some information in the below link explaining about GTPu
> Tunneling.
>
> https://wiki.fd.io/view/VPP/Per-feature_Notes#VRF.2C_Table_Ids_and_Routing
>
> As per the example, the following are the VPP CLI commands to create a GTPu
> tunnel. But I don’t follow the commands. Please see inline.
>
>
>
> “
>
> loop create
>
> set int state loop0 up
>
> set int ip addr loop0 1.1.1.1/32   << Can the IP address be created on a
> physical interface connecting the next node in GTPu Origination Topology
> mentioned above ?
>
> create gtpu tunnel src 1.1.1.1 dst 10.6.6.6 teid  decap-next ip4 <<
> What does “decap-next” mean ?
>
> set int ip addr gtpu_tunnel0 1.1.1.2/31 << Why is the IP address assigned
> to the GTPu tunnel interface ?
>
> “
>
>
>
> For the GTPu origination case:
>
> How can I associate the incoming Ethernet traffic with the GTPu tunnel config ?
>
>
>
> It will be great if you can share some document / CLI config (or) test
> case which is similar to Origination & Termination topology.
>
>
>
> Thanks
>
> Govind
> 
>


-- 

Andreas Schultz


Re: [vpp-dev] VPP fails to start - error message EAL: FATAL: Cannot get hugepage information.

2020-05-26 Thread Manoj Iyer
Shmuel,

When I run that command I got the following:

$ sudo dpdk-testpmd
EAL: Detected 1 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No available hugepages reported in hugepages-32768kB
EAL: No available hugepages reported in hugepages-64kB
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: PCI device 0008:00:00.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 14e4:16f0 net_bnxt
EAL: PCI device 0008:01:00.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 14e4:16f0 net_bnxt
EAL: PCI device 0008:01:00.1 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 14e4:16f0 net_bnxt
EAL: PCI device 0008:01:00.2 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 14e4:16f0 net_bnxt
EAL: PCI device 0008:01:00.3 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 14e4:16f0 net_bnxt
testpmd: No probed ethernet devices
EAL: Error - exiting with code: 1
  Cause: No cores defined for forwarding
Check the core mask argument

I set the hugepage counts it was complaining about to 4 and 1024 pages,
unmounted and remounted /mnt/huge, and reran the command:

$ sudo dpdk-testpmd
EAL: Detected 1 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: 4 hugepages of size 33554432 reserved, but no mounted hugetlbfs found for 
that size
EAL: 1024 hugepages of size 65536 reserved, but no mounted hugetlbfs found for 
that size
EAL: 4 hugepages of size 1073741824 reserved, but no mounted hugetlbfs found 
for that size
EAL: Probing VFIO support...
EAL: PCI device 0008:00:00.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 14e4:16f0 net_bnxt
EAL: PCI device 0008:01:00.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 14e4:16f0 net_bnxt
EAL: PCI device 0008:01:00.1 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 14e4:16f0 net_bnxt
EAL: PCI device 0008:01:00.2 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 14e4:16f0 net_bnxt
EAL: PCI device 0008:01:00.3 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 14e4:16f0 net_bnxt
testpmd: No probed ethernet devices
EAL: Error - exiting with code: 1
  Cause: No cores defined for forwarding
Check the core mask argument


Re-starting the vpp service I get the same error:

/usr/bin/vpp[7774]: dpdk: EAL init args: -c 1 -n 4 --in-memory --huge-dir 
/mnt/huge --file-prefix sst100 -w 0008:01:00.0 --master-lcore 0
vpp[7774]: EAL: FATAL: Cannot get hugepage information.
vpp[7774]: dpdk_config: rte_eal_init returned -1

From: Shmuel H 
Sent: Tuesday, May 26, 2020 3:03 PM
To: Manoj Iyer ; vpp-dev@lists.fd.io ; 
bga...@cisco.com 
Cc: Rodney Schmidt ; Kshitij Sudan 

Subject: Re: [vpp-dev] VPP fails to start - error message EAL: FATAL: Cannot 
get hugepage information.

Strange.

I usually unmount and then mount the hugepages mount point before using it, 
just to be sure.

Maybe the two different mount points confuse DPDK somehow.

By the way, if you have dpdk-testpmd, what is its output? Does it return the 
same error message?

- Shmuel Hazan

mailto:s...@tkos.co.il | tel:+972-523-746-435 | http://tkos.co.il


On 26 May 2020 20:57:30 "Manoj Iyer"  wrote:



From: Benoit Ganne (bganne) 
Sent: Tuesday, May 26, 2020 12:31 PM
To: Manoj Iyer ; vpp-dev@lists.fd.io 
Cc: Rodney Schmidt ; Kshitij Sudan 

Subject: RE: [vpp-dev] VPP fails to start - error message EAL: FATAL: Cannot 
get hugepage information.

> I removed the plugin and it starts the service!
[...]
> At this point, I think I should be able to use the VPP console to set up
> the interfaces etc. Is there any other gotcha I need to be aware of ?

Which interfaces are you going to use to rx/tx packets? VPP natively supports 
some Intel, Mellanox and virtual NICs but otherwise you'll need to use DPDK. 
And for that you'll need the DPDK plugin.


That is exactly the problem now. I need to use DPDK, and how do I get the 
plugin to work? Do we know the root cause of the issue why the plugin fails? 
Any pointers on this would be really appreciated.

Thanks
Manoj Iyer

Re: [vpp-dev] VPP fails to start - error message EAL: FATAL: Cannot get hugepage information.

2020-05-26 Thread Shmuel H.

Strange.

I usually unmount and then mount the hugepages mount point before using it, 
just to be sure.
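For reference, with the mount point used earlier in this thread, that would
be something like:

$ sudo umount /mnt/huge
$ sudo mount -t hugetlbfs nodev /mnt/huge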


Maybe the two different mount points confuse DPDK somehow.

By the way, if you have dpdk-testpmd, what is its output? Does it return 
the same error message?


- Shmuel Hazan

mailto:s...@tkos.co.il | tel:+972-523-746-435 | http://tkos.co.il
On 26 May 2020 20:57:30 "Manoj Iyer"  wrote:





From: Benoit Ganne (bganne) 
Sent: Tuesday, May 26, 2020 12:31 PM
To: Manoj Iyer ; vpp-dev@lists.fd.io 
Cc: Rodney Schmidt ; Kshitij Sudan 

Subject: RE: [vpp-dev] VPP fails to start - error message EAL: FATAL: 
Cannot get hugepage information.



I removed the plugin and it starts the service!

[...]

At this point, I think I should be able to use the VPP console to set up
the interfaces etc. Is there any other gotcha I need to be aware of ?


Which interfaces are you going to use to rx/tx packets? VPP natively 
supports some Intel, Mellanox and virtual NICs but otherwise you'll need to 
use DPDK. And for that you'll need the DPDK plugin.





That is exactly the problem now. I need to use DPDK, and how do I get the 
plugin to work? Do we know the root cause of the issue why the plugin 
fails? Any pointers on this would be really appreciated.


Thanks
Manoj Iyer






Re: [vpp-dev] VPP fails to start - error message EAL: FATAL: Cannot get hugepage information.

2020-05-26 Thread Manoj Iyer
Oops, I should have mentioned that in my reply. Sorry, no, I did not find any
files in /mnt/huge:

:/mnt/huge$ ls -lah
total 4.0K
drwxr-xr-x 2 root root0 May 26 15:57 .
drwxr-xr-x 3 root root 4.0K May 26 15:57 ..



From: vpp-dev@lists.fd.io  on behalf of Benoit Ganne 
(bganne) via lists.fd.io 
Sent: Tuesday, May 26, 2020 1:03 PM
To: Manoj Iyer ; vpp-dev@lists.fd.io 
Cc: Rodney Schmidt ; Kshitij Sudan 

Subject: Re: [vpp-dev] VPP fails to start - error message EAL: FATAL: Cannot 
get hugepage information.

> That is exactly the problem now. I need to use DPDK, and how do I get the
> plugin to work? Do we know the root cause of the issue why the plugin
> fails? Any pointers on this would be really appreciated.

As mentioned in my 1st email, can you check whether /mnt/huge is full of
dpdk-created stale files? I saw this happen from time to time. The fix is
simple, just remove the stale files.

ben


Re: [vpp-dev] VPP fails to start - error message EAL: FATAL: Cannot get hugepage information.

2020-05-26 Thread Benoit Ganne (bganne) via lists.fd.io
> That is exactly the problem now. I need to use DPDK, and how do I get the
> plugin to work? Do we know the root cause of the issue why the plugin
> fails? Any pointers on this would be really appreciated.

As mentioned in my 1st email, can you check whether /mnt/huge is full of
dpdk-created stale files? I saw this happen from time to time. The fix is
simple, just remove the stale files.

ben


Re: [vpp-dev] VPP fails to start - error message EAL: FATAL: Cannot get hugepage information.

2020-05-26 Thread Manoj Iyer



From: Benoit Ganne (bganne) 
Sent: Tuesday, May 26, 2020 12:31 PM
To: Manoj Iyer ; vpp-dev@lists.fd.io 
Cc: Rodney Schmidt ; Kshitij Sudan 

Subject: RE: [vpp-dev] VPP fails to start - error message EAL: FATAL: Cannot 
get hugepage information.

> I removed the plugin and it starts the service!
[...]
> At this point, I think I should be able to use the VPP console to set up
> the interfaces etc. Is there any other gotcha I need to be aware of ?

Which interfaces are you going to use to rx/tx packets? VPP natively supports 
some Intel, Mellanox and virtual NICs but otherwise you'll need to use DPDK. 
And for that you'll need the DPDK plugin.


That is exactly the problem now. I need to use DPDK, and how do I get the 
plugin to work? Do we know the root cause of the issue why the plugin fails? 
Any pointers on this would be really appreciated.

Thanks
Manoj Iyer


Re: [vpp-dev] VPP fails to start - error message EAL: FATAL: Cannot get hugepage information.

2020-05-26 Thread Benoit Ganne (bganne) via lists.fd.io
> I removed the plugin and it starts the service!
[...]
> At this point, I think I should be able to use the VPP console to set up
> the interfaces etc. Is there any other gotcha I need to be aware of ?

Which interfaces are you going to use to rx/tx packets? VPP natively supports 
some Intel, Mellanox and virtual NICs but otherwise you'll need to use DPDK. 
And for that you'll need the DPDK plugin.

ben


Re: [vpp-dev] VPP fails to start - error message EAL: FATAL: Cannot get hugepage information.

2020-05-26 Thread Manoj Iyer
Ben,

I removed the plugin and it starts the service!

ubuntu@sst100:~$ sudo systemctl status vpp
● vpp.service - Vector Packet Processing Process
   Loaded: loaded (/lib/systemd/system/vpp.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2020-05-26 16:45:22 UTC; 4s ago
  Process: 1507 ExecStopPost=/sbin/modprobe -r igb_uio (code=exited, status=0/SUCCESS)
  Process: 1498 ExecStopPost=/sbin/dpdk-devbind -u 0008:01:00.0 (code=exited, status=0/SUCCESS)
  Process: 1696 ExecStartPost=/bin/sleep 5 (code=exited, status=0/SUCCESS)
  Process: 1694 ExecStartPre=/bin/mkdir -p /run/vpp/ (code=exited, status=0/SUCCESS)
  Process: 1685 ExecStartPre=/sbin/dpdk-devbind --bind=igb_uio 0008:01:00.0 (code=exited, status=0/SUCCESS)
  Process: 1676 ExecStartPre=/sbin/dpdk-devbind -u 0008:01:00.0 (code=exited, status=0/SUCCESS)
  Process: 1675 ExecStartPre=/sbin/modprobe igb_uio (code=exited, status=0/SUCCESS)
  Process: 1674 ExecStartPre=/bin/rm -f /dev/shm/db /dev/shm/global_vm /dev/shm/vpe-api (code=exited, status=0/SUCCESS)
 Main PID: 1695 (vpp_main)
    Tasks: 1 (limit: 4915)
   CGroup: /system.slice/vpp.service
           └─1695 /usr/bin/vpp -c /usr/share/vpp/vpp.conf

At this point, I think I should be able to use the VPP console to set up the 
interfaces etc. Is there any other gotcha I need to be aware of ?

Thanks a million for your help.

Regards
Manoj Iyer



From: vpp-dev@lists.fd.io  on behalf of Benoit Ganne 
(bganne) via lists.fd.io 
Sent: Tuesday, May 26, 2020 11:42 AM
To: Manoj Iyer ; vpp-dev@lists.fd.io 
Cc: Rodney Schmidt ; Kshitij Sudan 

Subject: Re: [vpp-dev] VPP fails to start - error message EAL: FATAL: Cannot 
get hugepage information.

Hi Manoj,

The issue is because of DPDK initialization. In the working conf, you disable 
DPDK plugin so DPDK is not initialized and everything is fine.
Can you check whether /mnt/huge is full of dpdk-created stale files? I saw
this happen from time to time. The fix is simple, just remove the stale files.

Best
ben

> -----Original Message-----
> From: vpp-dev@lists.fd.io  On Behalf Of Manoj Iyer
> Sent: mardi 26 mai 2020 18:20
> To: vpp-dev@lists.fd.io
> Cc: Rodney Schmidt ; Kshitij Sudan
> 
> Subject: [vpp-dev] VPP fails to start - error message EAL: FATAL: Cannot
> get hugepage information.
>
> Hello,
>
>
> I am trying to start the VPP (19.04) service on an ARM64 system. VPP fails
> with the message:
>
>
> /usr/bin/vpp[1252]: dpdk: EAL init args: -c 1 -n 4 --in-memory --file-
> prefix vpp -w 0008:01:00.0 --master-lcore 0
>
> EAL: FATAL: Cannot get hugepage information.
>
> vpp[1252]: dpdk_config: rte_eal_init returned -1
>
>
> But when I start vpp as no-daemon manually, I can start VPP and use vppctl
> to get a console prompt. Could you please help me figure out why my
> service fails with "Cannot get hugepage information" ?
>
>
> Here is my service setup, although I am starting the service from command
> line, the exact setup in systemd service fails the same way:
>
>
> $ cat /proc/meminfo | grep -i huge
>
> AnonHugePages: 0 kB
>
> ShmemHugePages:0 kB
>
> FileHugePages: 0 kB
>
> HugePages_Total:1024
>
> HugePages_Free: 1024
>
> HugePages_Rsvd:0
>
> HugePages_Surp:0
>
> Hugepagesize:   2048 kB
>
> Hugetlb: 2097152 kB
>
>
> $ mount | grep hugetlbfs
>
> hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
>
> nodev on /mnt/huge type hugetlbfs (rw,relatime,pagesize=2M)
>
>
> # modprobe igb_uio
>
> # dpdk-devbind -u 0008:01:00.0
>
> # dpdk-devbind --bind=igb_uio 0008:01:00.0
>
> # mkdir -p /run/vpp/
>
> # vpp -c /usr/share/vpp/vpp.conf
>
> vlib_plugin_early_init:361: plugin path /usr/lib/aarch64-linux-
> gnu/vpp_plugins/
>
> load_one_plugin:117: Plugin disabled (default): abf_plugin.so
>
> load_one_plugin:117: Plugin disabled (default): acl_plugin.so
>
> load_one_plugin:117: Plugin disabled (default): avf_plugin.so
>
> load_one_plugin:117: Plugin disabled (default): cdp_plugin.so
>
> load_one_plugin:117: Plugin disabled (default): crypto_openssl_plugin.so
>
> load_one_plugin:117: Plugin disabled (default): ct6_plugin.so
>
> load_one_plugin:189: Loaded plugin: dpdk_plugin.so (Data Plane Development
> Kit (DPDK))
>
> load_one_plugin:117: Plugin disabled (default): flowprobe_plugin.so
>
> load_one_plugin:117: Plugin disabled (default): gbp_plugin.so
>
> load_one_plugin:117: Plugin disabled (default): gtpu_plugin.so
>
> load_one_plugin:117: Plugin disabled (default): igmp_plugin.so
>
> load_one_plugin:117: Plugin disabled (default): ikev2_plugin.so
>
> load_one_plugin:117: Plugin disabled (default): ila_plugin.so
>
> load_one_plugin:117: Plugin disabled (default): ioam_plugin.so
>
> load_one_plugin:117: Plugin disabled (default): ixge_plugin.so
>
> load_one_plugin:117: Plugin disabled (default): l2e_plugin.so
>
> load_one_plugin:117: Plugin disabled (default): lacp_plugin.so
>
> load_one_plugin:117: Plugin disabled (default): 

Re: [vpp-dev] VPP fails to start - error message EAL: FATAL: Cannot get hugepage information.

2020-05-26 Thread Benoit Ganne (bganne) via lists.fd.io
Hi Manoj,

The issue is because of DPDK initialization. In the working conf, you disable 
DPDK plugin so DPDK is not initialized and everything is fine.
Can you check whether /mnt/huge is full of dpdk-created stale files? I saw
this happen from time to time. The fix is simple, just remove the stale files.
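
For example (assuming the /mnt/huge mount point from your config), something
like the following should clear them out:

$ ls /mnt/huge
$ sudo rm -f /mnt/huge/*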

Best
ben

> -----Original Message-----
> From: vpp-dev@lists.fd.io  On Behalf Of Manoj Iyer
> Sent: mardi 26 mai 2020 18:20
> To: vpp-dev@lists.fd.io
> Cc: Rodney Schmidt ; Kshitij Sudan
> 
> Subject: [vpp-dev] VPP fails to start - error message EAL: FATAL: Cannot
> get hugepage information.
> 
> Hello,
> 
> 
> I am trying to start the VPP (19.04) service on an ARM64 system. VPP fails
> with the message:
> 
> 
> /usr/bin/vpp[1252]: dpdk: EAL init args: -c 1 -n 4 --in-memory --file-
> prefix vpp -w 0008:01:00.0 --master-lcore 0
> 
> EAL: FATAL: Cannot get hugepage information.
> 
> vpp[1252]: dpdk_config: rte_eal_init returned -1
> 
> 
> But when I start vpp as no-daemon manually, I can start VPP and use vppctl
> to get a console prompt. Could you please help me figure out why my
> service fails with "Cannot get hugepage information" ?
> 
> 
> Here is my service setup, although I am starting the service from command
> line, the exact setup in systemd service fails the same way:
> 
> 
> [...]

[vpp-dev] VPP fails to start - error message EAL: FATAL: Cannot get hugepage information.

2020-05-26 Thread Manoj Iyer
Hello,

I am trying to start the VPP (19.04) service on an ARM64 system. VPP fails with 
the message:

/usr/bin/vpp[1252]: dpdk: EAL init args: -c 1 -n 4 --in-memory --file-prefix 
vpp -w 0008:01:00.0 --master-lcore 0
EAL: FATAL: Cannot get hugepage information.
vpp[1252]: dpdk_config: rte_eal_init returned -1

But when I start vpp manually in the foreground (no daemon), it starts fine 
and I can use vppctl to get a console prompt. Could you please help me figure 
out why my service fails with "Cannot get hugepage information"?

Here is my service setup; although I am starting the service from the command 
line here, the exact setup in the systemd service fails the same way:

$ cat /proc/meminfo | grep -i huge
AnonHugePages: 0 kB
ShmemHugePages:0 kB
FileHugePages: 0 kB
HugePages_Total:1024
HugePages_Free: 1024
HugePages_Rsvd:0
HugePages_Surp:0
Hugepagesize:   2048 kB
Hugetlb: 2097152 kB

$ mount | grep hugetlbfs
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
nodev on /mnt/huge type hugetlbfs (rw,relatime,pagesize=2M)

# modprobe igb_uio
# dpdk-devbind -u 0008:01:00.0
# dpdk-devbind --bind=igb_uio 0008:01:00.0
# mkdir -p /run/vpp/
# vpp -c /usr/share/vpp/vpp.conf
vlib_plugin_early_init:361: plugin path /usr/lib/aarch64-linux-gnu/vpp_plugins/
load_one_plugin:117: Plugin disabled (default): abf_plugin.so
load_one_plugin:117: Plugin disabled (default): acl_plugin.so
load_one_plugin:117: Plugin disabled (default): avf_plugin.so
load_one_plugin:117: Plugin disabled (default): cdp_plugin.so
load_one_plugin:117: Plugin disabled (default): crypto_openssl_plugin.so
load_one_plugin:117: Plugin disabled (default): ct6_plugin.so
load_one_plugin:189: Loaded plugin: dpdk_plugin.so (Data Plane Development Kit 
(DPDK))
load_one_plugin:117: Plugin disabled (default): flowprobe_plugin.so
load_one_plugin:117: Plugin disabled (default): gbp_plugin.so
load_one_plugin:117: Plugin disabled (default): gtpu_plugin.so
load_one_plugin:117: Plugin disabled (default): igmp_plugin.so
load_one_plugin:117: Plugin disabled (default): ikev2_plugin.so
load_one_plugin:117: Plugin disabled (default): ila_plugin.so
load_one_plugin:117: Plugin disabled (default): ioam_plugin.so
load_one_plugin:117: Plugin disabled (default): ixge_plugin.so
load_one_plugin:117: Plugin disabled (default): l2e_plugin.so
load_one_plugin:117: Plugin disabled (default): lacp_plugin.so
load_one_plugin:117: Plugin disabled (default): lb_plugin.so
load_one_plugin:117: Plugin disabled (default): mactime_plugin.so
load_one_plugin:117: Plugin disabled (default): map_plugin.so
load_one_plugin:117: Plugin disabled (default): memif_plugin.so
load_one_plugin:117: Plugin disabled (default): nat_plugin.so
load_one_plugin:117: Plugin disabled (default): nsh_plugin.so
load_one_plugin:117: Plugin disabled (default): nsim_plugin.so
load_one_plugin:117: Plugin disabled (default): perfmon_plugin.so
load_one_plugin:117: Plugin disabled (default): pppoe_plugin.so
load_one_plugin:117: Plugin disabled (default): quic_plugin.so
load_one_plugin:117: Plugin disabled (default): rdma_plugin.so
load_one_plugin:117: Plugin disabled (default): srv6ad_plugin.so
load_one_plugin:117: Plugin disabled (default): srv6am_plugin.so
load_one_plugin:117: Plugin disabled (default): srv6as_plugin.so
load_one_plugin:117: Plugin disabled (default): stn_plugin.so
load_one_plugin:117: Plugin disabled (default): svs_plugin.so
load_one_plugin:117: Plugin disabled (default): tlsmbedtls_plugin.so
load_one_plugin:117: Plugin disabled (default): tlsopenssl_plugin.so
load_one_plugin:117: Plugin disabled (default): unittest_plugin.so
load_one_plugin:117: Plugin disabled (default): vmxnet3_plugin.so
vpp[1554]: clib_elf_parse_file: open `linux-vdso.so.1': No such file or 
directory
vpp[1554]: load_one_vat_plugin:67: Loaded plugin: mactime_test_plugin.so
vpp[1554]: load_one_vat_plugin:67: Loaded plugin: nat_test_plugin.so
vpp[1554]: load_one_vat_plugin:67: Loaded plugin: cdp_test_plugin.so
vpp[1554]: load_one_vat_plugin:67: Loaded plugin: ct6_test_plugin.so
vpp[1554]: load_one_vat_plugin:67: Loaded plugin: nsh_test_plugin.so
vpp[1554]: load_one_vat_plugin:67: Loaded plugin: stn_test_plugin.so
vpp[1554]: load_one_vat_plugin:67: Loaded plugin: lb_test_plugin.so
vpp[1554]: load_one_vat_plugin:67: Loaded plugin: avf_test_plugin.so
vpp[1554]: load_one_vat_plugin:67: Loaded plugin: flowprobe_test_plugin.so
vpp[1554]: load_one_vat_plugin:67: Loaded plugin: ikev2_test_plugin.so
vpp[1554]: load_one_vat_plugin:67: Loaded plugin: acl_test_plugin.so
vpp[1554]: load_one_vat_plugin:67: Loaded plugin: vmxnet3_test_plugin.so
vpp[1554]: load_one_vat_plugin:67: Loaded plugin: lacp_test_plugin.so
vpp[1554]: load_one_vat_plugin:67: Loaded plugin: pppoe_test_plugin.so
vpp[1554]: load_one_vat_plugin:67: Loaded plugin: memif_test_plugin.so
vpp[1554]: load_one_vat_plugin:67: Loaded plugin: gtpu_test_plugin.so
vpp[1554]: load_one_vat_plugin:67: Loaded plugin: ioam_test_plugin.so

Re: [vpp-dev] Some inconsistencies in API installed header files #vpp #vapi #vnet

2020-05-26 Thread pashinho1990
Hi,

That might actually be the fix :). It so happens that I was on top of 
"stable/2001", which doesn't have this commit, but I see "master" and 
"stable/2005" do have it, dammit.
However, I'll give it a try and let you know if you want.

Thank you
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#16501): https://lists.fd.io/g/vpp-dev/message/16501
Mute This Topic: https://lists.fd.io/mt/74479551/21656
Mute #vpp: https://lists.fd.io/mk?hashtag=vpp=1480452
Mute #vnet: https://lists.fd.io/mk?hashtag=vnet=1480452
Mute #vapi: https://lists.fd.io/mk?hashtag=vapi=1480452
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Some inconsistencies in API installed header files #vpp #vapi #vnet

2020-05-26 Thread Ole Troan
Are you sure this isn't fixed with:
commit b5c0d35f9445d4d99f2c5c7bd3175e68721a8ee5
Author: Neale Ranns 
Date:   Wed Apr 22 16:06:45 2020 +

vapi: packed enum type generation

Type: fix

if the .api/.json specifies that an enum should be u8/u16, then the
generated C enum needs to be packed.

Cheers,
Ole


> On 26 May 2020, at 17:17, pashinho1...@gmail.com wrote:
> 
> Hi all,
> 
> I don't know if this is the right place to ask and if the "issue" here is 
> really an expected one, or if I'm just doing something really stupid. So, 
> someone please do let me know.
> 
> I found some inconsistencies in some API header files installed under 
> "/usr/include/", in my case specifically 
> "/usr/include/vnet/ip/ip_types.api_types.h" and 
> "/usr/include/vapi/ip_types.api.vapi.h". For example, "vl_api_ip_dscp_t" is 
> defined in "/usr/include/vnet/ip/ip_types.api_types.h" as a packed enum, 
> while "vapi_enum_ip_dscp" (the one used by e.g. a C++ client) is defined in 
> "/usr/include/vapi/ip_types.api.vapi.h" as a non-packed enum. This is a 
> problem because, in my case, the C++ client sends a big message that happens 
> to have a vapi_enum_ip_dscp field in the middle, which is 4 bytes long, while 
> VPP parses the same dscp field as the packed vl_api_ip_dscp_t, which is 1 
> byte long; so there you have it, I leave the consequences to your imagination.
> 
> I may apply some workarounds, but I really wanted to know if this was 
> expected (although I wouldn't know the sense of it).
> 
> Thank you 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#16500): https://lists.fd.io/g/vpp-dev/message/16500
Mute This Topic: https://lists.fd.io/mt/74479551/21656
Mute #vpp: https://lists.fd.io/mk?hashtag=vpp=1480452
Mute #vnet: https://lists.fd.io/mk?hashtag=vnet=1480452
Mute #vapi: https://lists.fd.io/mk?hashtag=vapi=1480452
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] Some inconsistencies in API installed header files #vpp #vapi #vnet

2020-05-26 Thread pashinho1990
Hi all,

I don't know if this is the right place to ask and if the "issue" here is 
really an expected one, or if I'm just doing something really stupid. So, 
someone please do let me know.

I found some inconsistencies in some API header files installed under 
"/usr/include/", in my case specifically 
"/usr/include/vnet/ip/ip_types.api_types.h" and 
"/usr/include/vapi/ip_types.api.vapi.h". For example, "vl_api_ip_dscp_t" is 
defined in "/usr/include/vnet/ip/ip_types.api_types.h" as a packed enum, while 
"vapi_enum_ip_dscp" (the one used by e.g. a C++ client) is defined in 
"/usr/include/vapi/ip_types.api.vapi.h" as a non-packed enum. This is a problem 
because, in my case, the C++ client sends a big message that happens to have a 
vapi_enum_ip_dscp field in the middle, which is 4 bytes long, while VPP parses 
the same dscp field as the packed vl_api_ip_dscp_t, which is 1 byte long; so 
there you have it, I leave the consequences to your imagination.

I may apply some workarounds, but I really wanted to know if this was expected 
(although I wouldn't know the sense of it).

Thank you
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#16499): https://lists.fd.io/g/vpp-dev/message/16499
Mute This Topic: https://lists.fd.io/mt/74479551/21656
Mute #vpp: https://lists.fd.io/mk?hashtag=vpp=1480452
Mute #vapi: https://lists.fd.io/mk?hashtag=vapi=1480452
Mute #vnet: https://lists.fd.io/mk?hashtag=vnet=1480452
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] vpp crashes on configuring ip6 route

2020-05-26 Thread Neale Ranns via lists.fd.io

Hi,

Thanks for the bug report. Here’s the patch:
  https://gerrit.fd.io/r/c/vpp/+/27270

/neale

From:  on behalf of "chu.penghong" 
Date: Monday 25 May 2020 at 05:06
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] vpp crashes on configuring ip6 route

Hello Everyone!
   When I add/delete an ip6 route through the CLI while ipv6 packets are being 
forwarded, the crash below may occur:

(gdb) bt
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
#1  0x7fdfe0b16801 in __GI_abort () at abort.c:79
#2  0x564c45de83ee in os_panic () at 
/home/pml/vpp/vpp_new/src/vpp/vnet/main.c:366
#3  0x7fdfe0ef9940 in debugger () at 
/home/pml/vpp/vpp_new/src/vppinfra/error.c:84
#4  0x7fdfe0ef9d15 in _clib_error (how_to_die=2, function_name=0x0, 
line_number=0, fmt=0x7fdfe26a6740 "%s:%d (%s) assertion `%s' fails") at 
/home/pml/vpp/vpp_new/src/vppinfra/error.c:143
#5  0x7fdfe1a81ba6 in ip6_fib_table_fwding_lookup (fib_index=0, 
dst=0x1002725d66) at /home/pml/vpp/vpp_new/src/vnet/fib/ip6_fib.h:100
#6  0x7fdfe1a82751 in ip6_lookup_inline (vm=0x7fdfa2bbf680, 
node=0x7fdfa3c07e40, frame=0x7fdfa39b3e00) at 
/home/pml/vpp/vpp_new/src/vnet/ip/ip6_forward.h:238
#7  0x7fdfe1a8502b in ip6_lookup_node_fn_avx2 (vm=0x7fdfa2bbf680, 
node=0x7fdfa3c07e40, frame=0x7fdfa39b3e00) at 
/home/pml/vpp/vpp_new/src/vnet/ip/ip6_forward.c:725
#8  0x7fdfe145abbb in dispatch_node (vm=0x7fdfa2bbf680, 
node=0x7fdfa3c07e40, type=VLIB_NODE_TYPE_INTERNAL, 
dispatch_state=VLIB_NODE_STATE_POLLING, frame=0x7fdfa39b3e00, 
last_time_stamp=2587675900683492)
at /home/pml/vpp/vpp_new/src/vlib/main.c:1238
#9  0x7fdfe145b37c in dispatch_pending_node (vm=0x7fdfa2bbf680, 
pending_frame_index=2, last_time_stamp=2587675900683492) at 
/home/pml/vpp/vpp_new/src/vlib/main.c:1406
#10 0x7fdfe145d01a in vlib_main_or_worker_loop (vm=0x7fdfa2bbf680, 
is_main=0) at /home/pml/vpp/vpp_new/src/vlib/main.c:1865
#11 0x7fdfe145da71 in vlib_worker_loop (vm=0x7fdfa2bbf680) at 
/home/pml/vpp/vpp_new/src/vlib/main.c:1999
#12 0x7fdfe149d13b in vlib_worker_thread_fn (arg=0x7fdf9fdec800) at 
/home/pml/vpp/vpp_new/src/vlib/threads.c:1799
#13 0x7fdfe0f18334 in clib_calljmp () at 
/home/pml/vpp/vpp_new/src/vppinfra/longjmp.S:123
#14 0x7fde27ffece0 in ?? ()
#15 0x7fdfe1497363 in vlib_worker_thread_bootstrap_fn (arg=0x7fdf9fdec800) 
at /home/pml/vpp/vpp_new/src/vlib/threads.c:588
Backtrace stopped: previous frame inner to this frame (corrupt stack?)

I read the code and found that the vector 'prefix_lengths_in_search_order' is 
modified in function ip6_fib_table_fwding_dpo_update() when an ip6 route is 
configured in the main thread, but it is read by ip6_fib_table_fwding_lookup() 
in the worker threads while ipv6 packets are being forwarded, with no lock to 
protect it. That may be the reason for the crash.
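For context, the usual VPP remedy for this class of bug (a sketch only, not 
necessarily what the patch above does) is to mutate shared forwarding state 
only while the worker threads are held at the barrier:

/* main thread, around the update of prefix_lengths_in_search_order */
vlib_worker_thread_barrier_sync (vm);
/* ... rebuild the vector ... */
vlib_worker_thread_barrier_release (vm);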
Thanks

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#16498): https://lists.fd.io/g/vpp-dev/message/16498
Mute This Topic: https://lists.fd.io/mt/74450068/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] GTPu Question

2020-05-26 Thread pashinho1990
Hi,

I had more or less the same question something like 2 years ago, and guess 
what, to date still no reply whatsoever. I'd think the plugin's developer is 
untraceable and/or "out of duty".
Now back to the question: if I haven't misunderstood this GTPU plugin's 
semantics (by reading the code), it doesn't operate as you'd expect from a 
mobile 3GPP point of view; it actually operates very similarly to how VXLAN 
does.
What this plugin mainly does is implement a *VTEP (Virtual Tunnel Endpoint)*, 
so its tunnel interface (i.e. gtpu_tunnelX) needs an IP address on the same 
subnet as the actual destination of the encapped packet (i.e. the T-PDU).
I know, it's somewhat confusing, but I studied and tried it, and I can confirm 
that it works this way. If you still want to really understand and use it, I'd 
suggest you learn how VXLAN works and its terminology (if you don't know it 
yet); it'll be a lot easier for you to grasp this plugin's operation.
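As a minimal sketch (addresses and TEID purely illustrative), the VTEP-style 
setup looks like this:

vpp# create gtpu tunnel src 10.0.0.1 dst 10.0.0.2 teid 100
vpp# set interface ip address gtpu_tunnel0 192.168.1.1/24
vpp# set interface state gtpu_tunnel0 up

i.e. you address the tunnel interface itself, exactly as you would a VXLAN 
VTEP.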
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#16497): https://lists.fd.io/g/vpp-dev/message/16497
Mute This Topic: https://lists.fd.io/mt/74471469/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [EXTERNAL] Re: [vpp-dev] VPP sequential policy command execution giving error

2020-05-26 Thread Chris Luke
Actually, I missed one detail: if you tell vppctl to run a command using its 
command line arguments, it will always assume non-interactive.

The only way you'll see the banner, then, is if the 1s session startup timer 
expires and VPP makes assumptions. In the cases where you do see the banner, 
does the script you run stall for a second?
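For example (illustrative):

$ vppctl show version      # one-shot command: always treated as non-interactive
$ vppctl                   # interactive session: banner and vpp# prompt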

Chris.

From: vpp-dev@lists.fd.io  On Behalf Of Chris Luke
Sent: Tuesday, May 26, 2020 09:40
To: Chinmaya Aggarwal ; vpp-dev@lists.fd.io
Subject: Re: [EXTERNAL] Re: [vpp-dev] VPP sequential policy command execution 
giving error

You’ll get the banner if it thinks it’s an interactive session.

Roughly vppctl does this (I am paraphrasing):

is_interactive = isatty(STDIN);
…
if (is_interactive) TERM = “vppctl”;
…
open telnet session to the CLI socket; pass the terminal information

Then in VPP:

is_interactive = (strcmp(TERM, “vppctl”) != 0);
…
if (is_interactive) emit_banner();

The passing of the terminal options in the TELNET protocol is bounded by a 
timer (1 second); if for whatever reason that information arrives late, VPP 
will make assumptions; typically that the terminal is of the ‘dumb’ type rather 
than non-interactive.

Does this correlate to what you are observing? Does the output you see make use 
of ANSI color? That would be another clue, the ‘dumb’ terminal doesn’t use 
color.

Chris.


From: vpp-dev@lists.fd.io 
mailto:vpp-dev@lists.fd.io>> On Behalf Of Chinmaya Aggarwal
Sent: Tuesday, May 26, 2020 08:19
To: vpp-dev@lists.fd.io
Subject: [EXTERNAL] Re: [vpp-dev] VPP sequential policy command execution 
giving error

Further adding to our observations regarding the above issue, we have created a 
script that runs only vpp commands sequentially, with a constant delay between 
the execution of each command. On running the script, we again see the VPP 
shell banner as output, as shown below:
    _______    _        _   _____  ___
 __/ __/ _ \  (_)__    | | / / _ \/ _ \
 _/ _// // / / / _ \   | |/ / ___/ ___/
 /_/ /_(_)_/ /_/\___/  |___/_/  /_/
We executed the script with delays of 10ms, 50ms and 100ms. In most cases we 
get no output from the vppctl command, but at random times we get the vpp 
shell banner as the output of the command.

Please suggest: in what scenarios does vpp return this kind of output?
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#16496): https://lists.fd.io/g/vpp-dev/message/16496
Mute This Topic: https://lists.fd.io/mt/74477804/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] Host Requirement for running VPP

2020-05-26 Thread Chinmaya Aggarwal
Hi,
We want to run VPP on both a VM and a host. For the VM case, we found a link 
https://wiki.fd.io/view/VPP/How_To_Optimize_Performance_(System_Tuning) that 
states the recommended system configuration (in the section "In a VM: 
Configure KVM Parameters") is:
RAM = 8 GB
Cores = 4
SMP = 2
Threads = 2

What is the recommended system configuration for running VPP on a host machine?
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#16495): https://lists.fd.io/g/vpp-dev/message/16495
Mute This Topic: https://lists.fd.io/mt/74477473/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [EXTERNAL] Re: [vpp-dev] VPP sequential policy command execution giving error

2020-05-26 Thread Chris Luke
You’ll get the banner if it thinks it’s an interactive session.

Roughly vppctl does this (I am paraphrasing):

is_interactive = isatty(STDIN);
…
if (is_interactive) TERM = “vppctl”;
…
open telnet session to the CLI socket; pass the terminal information

Then in VPP:

is_interactive = (strcmp(TERM, “vppctl”) != 0);
…
if (is_interactive) emit_banner();
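In plain C, the client-side half is roughly this (a sketch under the same 
caveat, not the actual vppctl source):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main (void)
{
  /* vppctl decides interactivity from stdin, then advertises TERM
   * to VPP over the TELNET session it opens */
  int is_interactive = isatty (STDIN_FILENO);
  if (is_interactive)
    setenv ("TERM", "vppctl", 1);
  printf ("interactive: %d\n", is_interactive);
  return 0;
}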

The passing of the terminal options in the TELNET protocol is bounded by a 
timer (1 second); if for whatever reason that information arrives late, VPP 
will make assumptions; typically that the terminal is of the ‘dumb’ type rather 
than non-interactive.

Does this correlate to what you are observing? Does the output you see make use 
of ANSI color? That would be another clue, the ‘dumb’ terminal doesn’t use 
color.

Chris.


From: vpp-dev@lists.fd.io  On Behalf Of Chinmaya Aggarwal
Sent: Tuesday, May 26, 2020 08:19
To: vpp-dev@lists.fd.io
Subject: [EXTERNAL] Re: [vpp-dev] VPP sequential policy command execution 
giving error

Further adding to our observations regarding the above issue, we have created a 
script that runs only vpp commands sequentially, with a constant delay between 
the execution of each command. On running the script, we again see the VPP 
shell banner as output, as shown below:
    _______    _        _   _____  ___
 __/ __/ _ \  (_)__    | | / / _ \/ _ \
 _/ _// // / / / _ \   | |/ / ___/ ___/
 /_/ /_(_)_/ /_/\___/  |___/_/  /_/
We executed the script with delays of 10ms, 50ms and 100ms. In most cases we 
get no output from the vppctl command, but at random times we get the vpp 
shell banner as the output of the command.

Please suggest: in what scenarios does vpp return this kind of output?
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#16494): https://lists.fd.io/g/vpp-dev/message/16494
Mute This Topic: https://lists.fd.io/mt/74477335/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VPP sequential policy command execution giving error

2020-05-26 Thread Chinmaya Aggarwal
Further adding to our observations regarding the above issue, we have created a 
script that runs only vpp commands sequentially, with a constant delay between 
the execution of each command. On running the script, we again see the VPP 
shell banner as output, as shown below:
    _______    _        _   _____  ___
 __/ __/ _ \  (_)__    | | / / _ \/ _ \
 _/ _// // / / / _ \   | |/ / ___/ ___/
 /_/ /_(_)_/ /_/\___/  |___/_/  /_/
We executed the script with delays of 10ms, 50ms and 100ms. In most cases we 
get no output from the vppctl command, but at random times we get the vpp 
shell banner as the output of the command.

Please suggest: in what scenarios does vpp return this kind of output?
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#16493): https://lists.fd.io/g/vpp-dev/message/16493
Mute This Topic: https://lists.fd.io/mt/74439681/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] NAT44 does not work with fragmented ICMP packets

2020-05-26 Thread Klement Sekera via lists.fd.io
Thanks! I’ll push a patch.

Regards,
Klement

> On 26 May 2020, at 12:33, Miklos Tirpak  wrote:
> 
> Yes, it works with ip0:
> 
> vnet_buffer (b0)->ip.reass.is_non_first_fragment =
>    ! !ip4_get_fragment_offset (ip0);
> 
> Thanks,
> Miklos
> From: Klement Sekera -X (ksekera - PANTHEON TECH SRO at Cisco) 
> 
> Sent: Tuesday, May 26, 2020 12:14 PM
> To: Miklós Tirpák 
> Cc: vpp-dev@lists.fd.io 
> Subject: Re: [vpp-dev] NAT44 does not work with fragmented ICMP packets
>  
> 
> I think it’s enough if instead of vlib_buffer_get_current(b0) we just use ip0 
> (that already takes save_rewrite_length into consideration). Can you please 
> test with this modification?
> 
> Thanks,
> Klement
> 
> > On 26 May 2020, at 11:51, Miklos Tirpak  wrote:
> >
> > Hi,
> >
> > I think there is a problem in ip4_sv_reass_inline(), it does not consider 
> > ip.save_rewrite_length when it calculates is_non_first_fragment at line 619 
> > (master):
> >vnet_buffer (b0)->ip.reass.is_non_first_fragment =
> >   ! !ip4_get_fragment_offset (vlib_buffer_get_current (b0));
> >
> > Let me open a pull request to fix this.
> >
> > Thanks,
> > Miklos
> > From: vpp-dev@lists.fd.io  on behalf of Miklos Tirpak 
> > via lists.fd.io 
> > Sent: Tuesday, May 26, 2020 9:25 AM
> > To: vpp-dev@lists.fd.io 
> > Subject: [vpp-dev] NAT44 does not work with fragmented ICMP packets
> >
> > Hi,
> >
> > we have a scenario where an ICMP packet arrives fragmented over a GTP-U 
> > tunnel. The outer IP packets are not fragmented, only the inner ones are. 
> > After GTP-U decapsulation, the packets are routed via an interface where 
> > NAT44 output-feature is configured.
> >
> > In the outgoing packets, the source IP is correctly NATed but the ICMP 
> > identifier (port) is not changed. Hence, the NAT session cannot be found 
> > for the ICMP reply. This works correctly with smaller packets, the problem 
> > is only with fragmented ones.
> >
> > I could reproduce this with both VPP 20.01 and master, and could see that 
> > ip.reass.is_non_first_fragment is true for every packet. Therefore, 
> > icmp_in2out() does not update the ICMP header I think.
> >
> > 712  if (!vnet_buffer (b0)->ip.reass.is_non_first_fragment)
> > (gdb) p ((vnet_buffer_opaque_t *) (b0)->opaque)->ip.reass
> > $17 = {{{next_index = 1056456440, error_next_index = 0}, 
> > {owner_thread_index = 270}}, {{{l4_src_port = 16120,
> > l4_dst_port = 16120, tcp_ack_number = 0, save_rewrite_length = 14 
> > '\016', ip_proto = 1 '\001',
> > icmp_type_or_tcp_flags = 8 '\b', is_non_first_fragment = 1 '\001', 
> > tcp_seq_number = 0}, {estimated_mtu = 16120}}}, {
> > fragment_first = 16120, fragment_last = 16120, range_first = 0, 
> > range_last = 0, next_range_bi = 17301774,
> > ip6_frag_hdr_offset = 0}}
> >
> > The node trace seems to be fine:
> >   ... ip4-lookup -> ip4-rewrite -> ip4-sv-reassembly-output-feature -> 
> > nat44-in2out-output -> nat44-in2out-output-slowpath
> >
> > The NAT session is also correct, it includes the new port:
> >
> > DBGvpp# sh nat44 sessions detail
> > NAT44 sessions:
> >  thread 0 vpp_main: 0 sessions 
> >  thread 1 vpp_wk_0: 1 sessions 
> >   100.64.100.1: 1 dynamic translations, 0 static translations
> > i2o 100.64.100.1 proto icmp port 63550 fib 1
> > o2i 172.16.17.2 proto icmp port 16253 fib 0
> >index 0
> >last heard 44.16
> >total pkts 80, total bytes 63040
> >dynamic translation
> >
> > Do you know if this is a configuration issue or a possible bug? Thank you!
> >
> > Thanks,
> > Miklos
> > 
> 
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#16492): https://lists.fd.io/g/vpp-dev/message/16492
Mute This Topic: https://lists.fd.io/mt/74473306/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] NAT44 does not work with fragmented ICMP packets

2020-05-26 Thread Miklos Tirpak
Yes, it works with ip0:

vnet_buffer (b0)->ip.reass.is_non_first_fragment =
   ! !ip4_get_fragment_offset (ip0);

Thanks,
Miklos

From: Klement Sekera -X (ksekera - PANTHEON TECH SRO at Cisco) 

Sent: Tuesday, May 26, 2020 12:14 PM
To: Miklós Tirpák 
Cc: vpp-dev@lists.fd.io 
Subject: Re: [vpp-dev] NAT44 does not work with fragmented ICMP packets

I think it’s enough if instead of vlib_buffer_get_current(b0) we just use ip0 
(that already takes save_rewrite_length into consideration). Can you please 
test with this modification?

Thanks,
Klement

> On 26 May 2020, at 11:51, Miklos Tirpak  wrote:
>
> Hi,
>
> I think there is a problem in ip4_sv_reass_inline(), it does not consider 
> ip.save_rewrite_length when it calculates is_non_first_fragment at line 619 
> (master):
>vnet_buffer (b0)->ip.reass.is_non_first_fragment =
>   ! !ip4_get_fragment_offset (vlib_buffer_get_current (b0));
>
> Let me open a pull request to fix this.
>
> Thanks,
> Miklos
> From: vpp-dev@lists.fd.io  on behalf of Miklos Tirpak 
> via lists.fd.io 
> Sent: Tuesday, May 26, 2020 9:25 AM
> To: vpp-dev@lists.fd.io 
> Subject: [vpp-dev] NAT44 does not work with fragmented ICMP packets
>
> Hi,
>
> we have a scenario where an ICMP packet arrives fragmented over a GTP-U 
> tunnel. The outer IP packets are not fragmented, only the inner ones are. 
> After GTP-U decapsulation, the packets are routed via an interface where 
> NAT44 output-feature is configured.
>
> In the outgoing packets, the source IP is correctly NATed but the ICMP 
> identifier (port) is not changed. Hence, the NAT session cannot be found for 
> the ICMP reply. This works correctly with smaller packets, the problem is 
> only with fragmented ones.
>
> I could reproduce this with both VPP 20.01 and master, and could see that 
> ip.reass.is_non_first_fragment is true for every packet. Therefore, 
> icmp_in2out() does not update the ICMP header I think.
>
> 712  if (!vnet_buffer (b0)->ip.reass.is_non_first_fragment)
> (gdb) p ((vnet_buffer_opaque_t *) (b0)->opaque)->ip.reass
> $17 = {{{next_index = 1056456440, error_next_index = 0}, {owner_thread_index 
> = 270}}, {{{l4_src_port = 16120,
> l4_dst_port = 16120, tcp_ack_number = 0, save_rewrite_length = 14 
> '\016', ip_proto = 1 '\001',
> icmp_type_or_tcp_flags = 8 '\b', is_non_first_fragment = 1 '\001', 
> tcp_seq_number = 0}, {estimated_mtu = 16120}}}, {
> fragment_first = 16120, fragment_last = 16120, range_first = 0, 
> range_last = 0, next_range_bi = 17301774,
> ip6_frag_hdr_offset = 0}}
>
> The node trace seems to be fine:
>   ... ip4-lookup -> ip4-rewrite -> ip4-sv-reassembly-output-feature -> 
> nat44-in2out-output -> nat44-in2out-output-slowpath
>
> The NAT session is also correct, it includes the new port:
>
> DBGvpp# sh nat44 sessions detail
> NAT44 sessions:
>  thread 0 vpp_main: 0 sessions 
>  thread 1 vpp_wk_0: 1 sessions 
>   100.64.100.1: 1 dynamic translations, 0 static translations
> i2o 100.64.100.1 proto icmp port 63550 fib 1
> o2i 172.16.17.2 proto icmp port 16253 fib 0
>index 0
>last heard 44.16
>total pkts 80, total bytes 63040
>dynamic translation
>
> Do you know if this is a configuration issue or a possible bug? Thank you!
>
> Thanks,
> Miklos
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#16491): https://lists.fd.io/g/vpp-dev/message/16491
Mute This Topic: https://lists.fd.io/mt/74473306/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] NAT44 does not work with fragmented ICMP packets

2020-05-26 Thread Klement Sekera via lists.fd.io
I think it’s enough if instead of vlib_buffer_get_current(b0) we just use ip0 
(that already takes save_rewrite_length into consideration). Can you please 
test with this modification?

Thanks,
Klement

> On 26 May 2020, at 11:51, Miklos Tirpak  wrote:
> 
> Hi,
> 
> I think there is a problem in ip4_sv_reass_inline(), it does not consider 
> ip.save_rewrite_length when it calculates is_non_first_fragment at line 619 
> (master):
>vnet_buffer (b0)->ip.reass.is_non_first_fragment =
>   ! !ip4_get_fragment_offset (vlib_buffer_get_current (b0));
> 
> Let me open a pull request to fix this.
> 
> Thanks,
> Miklos
> From: vpp-dev@lists.fd.io  on behalf of Miklos Tirpak 
> via lists.fd.io 
> Sent: Tuesday, May 26, 2020 9:25 AM
> To: vpp-dev@lists.fd.io 
> Subject: [vpp-dev] NAT44 does not work with fragmented ICMP packets
>  
> Hi,
> 
> we have a scenario where an ICMP packet arrives fragmented over a GTP-U 
> tunnel. The outer IP packets are not fragmented, only the inner ones are. 
> After GTP-U decapsulation, the packets are routed via an interface where 
> NAT44 output-feature is configured.
> 
> In the outgoing packets, the source IP is correctly NATed but the ICMP 
> identifier (port) is not changed. Hence, the NAT session cannot be found for 
> the ICMP reply. This works correctly with smaller packets, the problem is 
> only with fragmented ones.
> 
> I could reproduce this with both VPP 20.01 and master, and could see that 
> ip.reass.is_non_first_fragment is true for every packet. Therefore, 
> icmp_in2out() does not update the ICMP header I think.
> 
> 712  if (!vnet_buffer (b0)->ip.reass.is_non_first_fragment)
> (gdb) p ((vnet_buffer_opaque_t *) (b0)->opaque)->ip.reass
> $17 = {{{next_index = 1056456440, error_next_index = 0}, {owner_thread_index 
> = 270}}, {{{l4_src_port = 16120, 
> l4_dst_port = 16120, tcp_ack_number = 0, save_rewrite_length = 14 
> '\016', ip_proto = 1 '\001', 
> icmp_type_or_tcp_flags = 8 '\b', is_non_first_fragment = 1 '\001', 
> tcp_seq_number = 0}, {estimated_mtu = 16120}}}, {
> fragment_first = 16120, fragment_last = 16120, range_first = 0, 
> range_last = 0, next_range_bi = 17301774, 
> ip6_frag_hdr_offset = 0}}
> 
> The node trace seems to be fine:
>   ... ip4-lookup -> ip4-rewrite -> ip4-sv-reassembly-output-feature -> 
> nat44-in2out-output -> nat44-in2out-output-slowpath
> 
> The NAT session is also correct, it includes the new port:
> 
> DBGvpp# sh nat44 sessions detail
> NAT44 sessions:
>  thread 0 vpp_main: 0 sessions 
>  thread 1 vpp_wk_0: 1 sessions 
>   100.64.100.1: 1 dynamic translations, 0 static translations
> i2o 100.64.100.1 proto icmp port 63550 fib 1
> o2i 172.16.17.2 proto icmp port 16253 fib 0
>index 0
>last heard 44.16
>total pkts 80, total bytes 63040
>dynamic translation
> 
> Do you know if this is a configuration issue or a possible bug? Thank you!
> 
> Thanks,
> Miklos
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#16490): https://lists.fd.io/g/vpp-dev/message/16490
Mute This Topic: https://lists.fd.io/mt/74473306/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] NAT44 does not work with fragmented ICMP packets

2020-05-26 Thread Miklos Tirpak
Hi Klement,

thank you for your response.

ip.reass.is_non_first_fragment is set to 1 even for the first fragment.

(gdb) p ((vnet_buffer_opaque_t *) (b0)->opaque)->ip.reass.is_non_first_fragment
$8 = 1 '\001'

I think the fragment offset is wrongly calculated from the buffer:
(gdb) p ip4_get_fragment_offset (vlib_buffer_get_current (b0))
$9 = 766

(gdb) p ((vnet_buffer_opaque_t *) (b0)->opaque)->ip.save_rewrite_length
$1 = 14 '\016'

At the beginning of the function, the header pointer is set from the buffer 
with the rewrite length shifted:

  ip4_header_t *ip0 =
(ip4_header_t *) u8_ptr_add (vlib_buffer_get_current (b0),
 is_output_feature *
 vnet_buffer (b0)->
 ip.save_rewrite_length);

(gdb) p fragment_first
$2 = 0

Later, this rewrite length is not considered.
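A tiny self-contained demo (not VPP code) of what goes wrong when the IPv4 
header is parsed at the un-shifted buffer pointer, i.e. 14 bytes too early:

#include <stdint.h>
#include <stdio.h>

/* flags/fragment-offset live in bytes 6..7 of the IPv4 header */
static uint16_t frag_offset (const uint8_t *ip4_hdr)
{
  uint16_t v = (uint16_t) ((ip4_hdr[6] << 8) | ip4_hdr[7]);
  return v & 0x1fff;            /* strip the MF/DF/reserved flag bits */
}

int main (void)
{
  uint8_t pkt[34] = { 0 };      /* 14B Ethernet rewrite + 20B IPv4 */
  pkt[6] = 0xaa; pkt[7] = 0xbb; /* arbitrary source-MAC bytes */
  uint8_t *ip = pkt + 14;       /* where the real IPv4 header starts */
  ip[0] = 0x45;                 /* IPv4, IHL=5 */
  ip[6] = 0x20;                 /* MF set, fragment offset 0 */
  printf ("parsed at ip:  %u\n", frag_offset (ip));  /* 0: first fragment */
  printf ("parsed at pkt: %u\n", frag_offset (pkt)); /* junk, non-zero */
  return 0;
}

With the un-shifted pointer, random L2 bytes land in the offset field, so even 
the first fragment looks like a non-first fragment, which is exactly what the 
gdb output above shows.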

Thanks,
Miklos


From: Klement Sekera -X (ksekera - PANTHEON TECH SRO at Cisco) 

Sent: Tuesday, May 26, 2020 11:22 AM
To: Miklós Tirpák 
Cc: vpp-dev@lists.fd.io 
Subject: Re: [vpp-dev] NAT44 does not work with fragmented ICMP packets

Hi Miklos,

thanks for your message. If is_non_first_fragment is set to true then rewrite 
will not happen. Can you take a look at what happens in ip4_sv_reass_inline for 
the first packet/fragment?

Setting that flag should be pretty fool-proof

   498   const u32 fragment_first = ip4_get_fragment_offset_bytes (ip0);
...
   549   vnet_buffer (b0)->ip.reass.is_non_first_fragment =
   550     ! !fragment_first;
...
   619     vnet_buffer (b0)->ip.reass.is_non_first_fragment =
   620       ! !ip4_get_fragment_offset (vlib_buffer_get_current (b0));

Thanks,
Klement

> On 26 May 2020, at 09:25, Miklos Tirpak  wrote:
>
> Hi,
>
> we have a scenario where an ICMP packet arrives fragmented over a GTP-U 
> tunnel. The outer IP packets are not fragmented, only the inner ones are. 
> After GTP-U decapsulation, the packets are routed via an interface where 
> NAT44 output-feature is configured.
>
> In the outgoing packets, the source IP is correctly NATed but the ICMP 
> identifier (port) is not changed. Hence, the NAT session cannot be found for 
> the ICMP reply. This works correctly with smaller packets, the problem is 
> only with fragmented ones.
>
> I could reproduce this with both VPP 20.01 and master, and could see that 
> ip.reass.is_non_first_fragment is true for every packet. Therefore, 
> icmp_in2out() does not update the ICMP header I think.
>
> 712  if (!vnet_buffer (b0)->ip.reass.is_non_first_fragment)
> (gdb) p ((vnet_buffer_opaque_t *) (b0)->opaque)->ip.reass
> $17 = {{{next_index = 1056456440, error_next_index = 0}, {owner_thread_index 
> = 270}}, {{{l4_src_port = 16120,
> l4_dst_port = 16120, tcp_ack_number = 0, save_rewrite_length = 14 
> '\016', ip_proto = 1 '\001',
> icmp_type_or_tcp_flags = 8 '\b', is_non_first_fragment = 1 '\001', 
> tcp_seq_number = 0}, {estimated_mtu = 16120}}}, {
> fragment_first = 16120, fragment_last = 16120, range_first = 0, 
> range_last = 0, next_range_bi = 17301774,
> ip6_frag_hdr_offset = 0}}
>
> The node trace seems to be fine:
>   ... ip4-lookup -> ip4-rewrite -> ip4-sv-reassembly-output-feature -> 
> nat44-in2out-output -> nat44-in2out-output-slowpath
>
> The NAT session is also correct, it includes the new port:
>
> DBGvpp# sh nat44 sessions detail
> NAT44 sessions:
>  thread 0 vpp_main: 0 sessions 
>  thread 1 vpp_wk_0: 1 sessions 
>   100.64.100.1: 1 dynamic translations, 0 static translations
> i2o 100.64.100.1 proto icmp port 63550 fib 1
> o2i 172.16.17.2 proto icmp port 16253 fib 0
>index 0
>last heard 44.16
>total pkts 80, total bytes 63040
>dynamic translation
>
> Do you know if this is a configuration issue or a possible bug? Thank you!
>
> Thanks,
> Miklos
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#16489): https://lists.fd.io/g/vpp-dev/message/16489
Mute This Topic: https://lists.fd.io/mt/74473306/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] NAT44 does not work with fragmented ICMP packets

2020-05-26 Thread Miklos Tirpak
Hi,

I think there is a problem in ip4_sv_reass_inline(), it does not consider 
ip.save_rewrite_length when it calculates is_non_first_fragment at line 619 
(master):
   vnet_buffer (b0)->ip.reass.is_non_first_fragment =
  ! !ip4_get_fragment_offset (vlib_buffer_get_current (b0));

Let me open a pull request to fix this.

Thanks,
Miklos

From: vpp-dev@lists.fd.io  on behalf of Miklos Tirpak via 
lists.fd.io 
Sent: Tuesday, May 26, 2020 9:25 AM
To: vpp-dev@lists.fd.io 
Subject: [vpp-dev] NAT44 does not work with fragmented ICMP packets

Hi,

we have a scenario where an ICMP packet arrives fragmented over a GTP-U tunnel. 
The outer IP packets are not fragmented, only the inner ones are. After GTP-U 
decapsulation, the packets are routed via an interface where NAT44 
output-feature is configured.

In the outgoing packets, the source IP is correctly NATed but the ICMP 
identifier (port) is not changed. Hence, the NAT session cannot be found for 
the ICMP reply. This works correctly with smaller packets, the problem is only 
with fragmented ones.

I could reproduce this with both VPP 20.01 and master, and could see that 
ip.reass.is_non_first_fragment is true for every packet. Therefore, 
icmp_in2out() does not update the ICMP header I think.

712  if (!vnet_buffer (b0)->ip.reass.is_non_first_fragment)
(gdb) p ((vnet_buffer_opaque_t *) (b0)->opaque)->ip.reass
$17 = {{{next_index = 1056456440, error_next_index = 0}, {owner_thread_index = 
270}}, {{{l4_src_port = 16120,
l4_dst_port = 16120, tcp_ack_number = 0, save_rewrite_length = 14 
'\016', ip_proto = 1 '\001',
icmp_type_or_tcp_flags = 8 '\b', is_non_first_fragment = 1 '\001', 
tcp_seq_number = 0}, {estimated_mtu = 16120}}}, {
fragment_first = 16120, fragment_last = 16120, range_first = 0, range_last 
= 0, next_range_bi = 17301774,
ip6_frag_hdr_offset = 0}}

The node trace seems to be fine:
  ... ip4-lookup -> ip4-rewrite -> ip4-sv-reassembly-output-feature -> 
nat44-in2out-output -> nat44-in2out-output-slowpath

The NAT session is also correct, it includes the new port:

DBGvpp# sh nat44 sessions detail
NAT44 sessions:
 thread 0 vpp_main: 0 sessions 
 thread 1 vpp_wk_0: 1 sessions 
  100.64.100.1: 1 dynamic translations, 0 static translations
i2o 100.64.100.1 proto icmp port 63550 fib 1
o2i 172.16.17.2 proto icmp port 16253 fib 0
   index 0
   last heard 44.16
   total pkts 80, total bytes 63040
   dynamic translation

Do you know if this is a configuration issue or a possible bug? Thank you!

Thanks,
Miklos
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#16488): https://lists.fd.io/g/vpp-dev/message/16488
Mute This Topic: https://lists.fd.io/mt/74473306/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] NAT44 does not work with fragmented ICMP packets

2020-05-26 Thread Klement Sekera via lists.fd.io
Hi Miklos,

thanks for your message. If is_non_first_fragment is set to true then rewrite 
will not happen. Can you take a look at what happens in ip4_sv_reass_inline for 
the first packet/fragment?

Setting that flag should be pretty fool-proof

   498   const u32 fragment_first = ip4_get_fragment_offset_bytes (ip0);
...
   549   vnet_buffer (b0)->ip.reass.is_non_first_fragment =
   550     ! !fragment_first;
...
   619     vnet_buffer (b0)->ip.reass.is_non_first_fragment =
   620       ! !ip4_get_fragment_offset (vlib_buffer_get_current (b0));

Thanks,
Klement

> On 26 May 2020, at 09:25, Miklos Tirpak  wrote:
> 
> Hi,
> 
> we have a scenario where an ICMP packet arrives fragmented over a GTP-U 
> tunnel. The outer IP packets are not fragmented, only the inner ones are. 
> After GTP-U decapsulation, the packets are routed via an interface where 
> NAT44 output-feature is configured.
> 
> In the outgoing packets, the source IP is correctly NATed but the ICMP 
> identifier (port) is not changed. Hence, the NAT session cannot be found for 
> the ICMP reply. This works correctly with smaller packets, the problem is 
> only with fragmented ones.
> 
> I could reproduce this with both VPP 20.01 and master, and could see that 
> ip.reass.is_non_first_fragment is true for every packet. Therefore, 
> icmp_in2out() does not update the ICMP header I think.
> 
> 712  if (!vnet_buffer (b0)->ip.reass.is_non_first_fragment)
> (gdb) p ((vnet_buffer_opaque_t *) (b0)->opaque)->ip.reass
> $17 = {{{next_index = 1056456440, error_next_index = 0}, {owner_thread_index 
> = 270}}, {{{l4_src_port = 16120, 
> l4_dst_port = 16120, tcp_ack_number = 0, save_rewrite_length = 14 
> '\016', ip_proto = 1 '\001', 
> icmp_type_or_tcp_flags = 8 '\b', is_non_first_fragment = 1 '\001', 
> tcp_seq_number = 0}, {estimated_mtu = 16120}}}, {
> fragment_first = 16120, fragment_last = 16120, range_first = 0, 
> range_last = 0, next_range_bi = 17301774, 
> ip6_frag_hdr_offset = 0}}
> 
> The node trace seems to be fine:
>   ... ip4-lookup -> ip4-rewrite -> ip4-sv-reassembly-output-feature -> 
> nat44-in2out-output -> nat44-in2out-output-slowpath
> 
> The NAT session is also correct, it includes the new port:
> 
> DBGvpp# sh nat44 sessions detail
> NAT44 sessions:
>  thread 0 vpp_main: 0 sessions 
>  thread 1 vpp_wk_0: 1 sessions 
>   100.64.100.1: 1 dynamic translations, 0 static translations
> i2o 100.64.100.1 proto icmp port 63550 fib 1
> o2i 172.16.17.2 proto icmp port 16253 fib 0
>index 0
>last heard 44.16
>total pkts 80, total bytes 63040
>dynamic translation
> 
> Do you know if this is a configuration issue or a possible bug? Thank you!
> 
> Thanks,
> Miklos
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#16487): https://lists.fd.io/g/vpp-dev/message/16487
Mute This Topic: https://lists.fd.io/mt/74473306/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] NAT44 does not work with fragmented ICMP packets

2020-05-26 Thread Miklos Tirpak
Hi,

we have a scenario where an ICMP packet arrives fragmented over a GTP-U tunnel. 
The outer IP packets are not fragmented, only the inner ones are. After GTP-U 
decapsulation, the packets are routed via an interface where NAT44 
output-feature is configured.

In the outgoing packets, the source IP is correctly NATed but the ICMP 
identifier (port) is not changed. Hence, the NAT session cannot be found for 
the ICMP reply. This works correctly with smaller packets, the problem is only 
with fragmented ones.

I could reproduce this with both VPP 20.01 and master, and could see that 
ip.reass.is_non_first_fragment is true for every packet. Therefore, 
icmp_in2out() does not update the ICMP header I think.

712  if (!vnet_buffer (b0)->ip.reass.is_non_first_fragment)
(gdb) p ((vnet_buffer_opaque_t *) (b0)->opaque)->ip.reass
$17 = {{{next_index = 1056456440, error_next_index = 0}, {owner_thread_index = 
270}}, {{{l4_src_port = 16120,
l4_dst_port = 16120, tcp_ack_number = 0, save_rewrite_length = 14 
'\016', ip_proto = 1 '\001',
icmp_type_or_tcp_flags = 8 '\b', is_non_first_fragment = 1 '\001', 
tcp_seq_number = 0}, {estimated_mtu = 16120}}}, {
fragment_first = 16120, fragment_last = 16120, range_first = 0, range_last 
= 0, next_range_bi = 17301774,
ip6_frag_hdr_offset = 0}}

The node trace seems to be fine:
  ... ip4-lookup -> ip4-rewrite -> ip4-sv-reassembly-output-feature -> 
nat44-in2out-output -> nat44-in2out-output-slowpath

The NAT session is also correct, it includes the new port:

DBGvpp# sh nat44 sessions detail
NAT44 sessions:
 thread 0 vpp_main: 0 sessions 
 thread 1 vpp_wk_0: 1 sessions 
  100.64.100.1: 1 dynamic translations, 0 static translations
i2o 100.64.100.1 proto icmp port 63550 fib 1
o2i 172.16.17.2 proto icmp port 16253 fib 0
   index 0
   last heard 44.16
   total pkts 80, total bytes 63040
   dynamic translation

Do you know if this is a configuration issue or a possible bug? Thank you!

Thanks,
Miklos
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#16486): https://lists.fd.io/g/vpp-dev/message/16486
Mute This Topic: https://lists.fd.io/mt/74473306/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-