[vpp-dev] Big number VNIs across same VxLAN tunnel

2022-06-08 Thread zhang . gefan
Hi All,

I am a new user of VPP. While studying VxLAN tunnel support in VPP, I found
that a VNI is required for every VxLAN tunnel creation. This means I cannot
create a VxLAN tunnel with only a source IP and destination IP; I have to
create a tunnel for each VNI even when the tunnel source and destination IPs do
not change. If the number of VNIs is small, this is fine, but if there are tons
of VNIs to handle, it becomes a scaling issue for tunnel maintenance. For
example, with 16K VNIs across the same tunnel (same src IP and same dest IP), I
have to call the VxLAN tunnel creation API 16000 times and create 16000 tunnel
entries. The potential issues with this approach are: 1) unnecessary memory
usage for a huge number of tunnel entries that are almost identical except for
the VNI; 2) the complexity of managing such a large number of tunnels.
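
To make the scale concrete, the provisioning today looks roughly like this (a
sketch only; the addresses are placeholders):

for vni in $(seq 1 16000); do
    vppctl create vxlan tunnel src 192.0.2.1 dst 192.0.2.2 vni $vni
done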

I want to check whether there is a simpler way to handle such a case. Some HW 
switch vendors allow the user to create a single VxLAN tunnel with only a src IP 
and dest IP, and then extend the VNI(s) they want to serve onto that tunnel. 
With this approach, I would only need to create one tunnel. So I am wondering 
whether this is possible with VPP, or whether a similar method exists?

Thanks!

-Gefan




Re: [vpp-dev] Support of "qos egress map" with GTP tunnels

2022-06-08 Thread Neale Ranns

Hi Martin,

QoS marking takes the buffer's pre-classified QoS source:value and translates it 
through the egress map into the value at the layer at which the marking is 
applied. If we take your map id=0 as an example, you are mapping a 
classification of IP:0 to a value of 30, and since the marking is an IP feature, 
this means an IP DSCP of 30.
However, you don't have a pre-classification, so the marking node doesn't know 
what to map from (hence the used:no in the trace). To classify you need either 
the QoS record or the QoS store node. Since your map indicates you want to map 
from IP, my guess is that you want the record node as an IP input feature on the 
interface[s] from which packets are received before being transmitted into the 
GTPU tunnel.
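
Something along these lines should provide that classification (a sketch only; 
<rx-interface> stands for whichever interface the to-be-tunnelled packets 
arrive on, and the record CLI syntax is from memory):

vppctl qos record ip <rx-interface>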

/neale

From: vpp-dev@lists.fd.io  on behalf of Martin Dzuris via 
lists.fd.io 
Date: Wednesday, 8 June 2022 at 20:42
To: vpp-dev@lists.fd.io 
Subject: [vpp-dev] Support of "qos egress map" with GTP tunnels
Hi

We need to update the DSCP value on GTP output traffic (both inside and outside 
of the tunnel).

I'm creating a GTP tunnel:

...
vppctl set in state GigabitEthernet0/9/0  up
vppctl create sub-interfaces GigabitEthernet0/9/0 5 dot1q 100 exact-match
vppctl set interface state GigabitEthernet0/9/0.5 up
vppctl set int ip address GigabitEthernet0/9/0.5 22.22.2.2/24
vppctl create gtpu tunnel src 22.22.2.2 dst 22.22.2.4 teid 13 tteid 14 
encap-vrf-id 0 decap-next ip4
vppctl set interface ip table gtpu_tunnel0 22
vppctl set in state gtpu_tunnel0 up
vppctl set int ip address gtpu_tunnel0 50.50.50.1/24


and I tried to mark packets on the GTP interface:

vppctl qos egress map id 0 [ip][0]=30
vppctl qos egress map id 1 [ip][1]=31
vppctl qos mark ext gtpu_tunnel0 id 0
vppctl qos mark ext GigabitEthernet0/9/0.5 id 1


Example from Trace :

04:16:56:227608: ip4-qos-mark
  source:ext qos:30 used:no
04:16:56:227611: gtpu4-encap
  GTPU encap to gtpu_tunnel0 tteid 14


The result is that packets are not updated. Is the QoS mark feature supported on 
GTP interfaces and on sub-interfaces? Can you advise me how to do it?

Martin




[vpp-dev] FDIO Maintenance - 2022-06-09 15:30 UTC to 1930 UTC

2022-06-08 Thread Vanessa Valderrama

*This maintenance is pending TSC approval tomorrow.*

*What:*

 * Jenkins production
 o OS and security updates
 o Jenkins upgrade
 o Plugin upgrades
 o JDK upgrade
 o Jenkins performance tuning

 * Gerrit
 o OS and security updates
 o Gerrit upgrade
 o JDK upgrade

*When:* 2022-06-09 15:30 UTC to 19:30 UTC

*Why:*

We normally do not perform maintenance during a release cycle, but we're 
seeing intermittent timeouts on VPP/CSIT jobs. I discussed this with 
Dave and we agreed we'd like to try to resolve these issues before the 
start of RC2.



*Impact:*

Jenkins will be placed in shutdown mode at 14:30 UTC. All running jobs 
will be terminated at 15:30 UTC.


The following services will be unavailable during the maintenance window:

 * Jenkins sandbox and production
 * Gerrit




[vpp-dev] 1 week to VPP 22.06 RC2 milestone

2022-06-08 Thread Andrew Yourtchenko
Hello all,

Just a kind reminder: the VPP 22.06 RC2 milestone will happen one week from 
now, on 15 June 2022 at 12:00 UTC.

After that, only critical fixes will be accepted into the stable/2206 
branch in preparation for the release.

--a /* your friendly 22.06 release manager */



Re: [vpp-dev] Crash in VPP 21.06 #vnet

2022-06-08 Thread Benoit Ganne (bganne) via lists.fd.io
Looks like the crash is happening in the host stack...
Can you try to reproduce it in debug mode, and maybe even with Address 
Sanitizer:
https://s3-docs.fd.io/vpp/22.06/gettingstarted/troubleshooting/sanitizer.html#id2
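
For reference, building a debug image (and optionally ASan) from a source tree
is roughly the following - a sketch from memory, so please double-check the
exact cmake switch against the doc above:

make build        # debug image
make rebuild VPP_EXTRA_CMAKE_ARGS=-DVPP_ENABLE_SANITIZE_ADDR=ON   # debug + ASan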

best
ben

> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Raj Kumar
> Sent: Wednesday, June 8, 2022 15:47
> To: vpp-dev@lists.fd.io
> Subject: [vpp-dev] Crash in VPP 21.06 #vnet
> 
> Hi,
> I am observing some infrequent crashes in VPP. I am using VCL for receiving
> UDP packets in my application. We compiled VPP with DPDK and use a
> Mellanox NIC to receive UDP packets (the MTU is set to 9000). Attached is
> the startup.conf file.
> 
> 
> #0  0x7efd06b8837f in raise () from /lib64/libc.so.6
> #1  0x7efd06b72db5 in abort () from /lib64/libc.so.6
> #2  0x55ffa1ed1134 in os_exit (code=code@entry=1) at
> /usr/src/debug/vpp-21.06.0-
> 4~g2a4131173_dirty.x86_64/src/vpp/vnet/main.c:431
> #3  0x7efd07de215a in unix_signal_handler (signum=11, si=<optimized out>, uc=<optimized out>)
> at /usr/src/debug/vpp-21.06.0-
> 4~g2a4131173_dirty.x86_64/src/vlib/unix/main.c:187
> #4  <signal handler called>
> #5  0x7efd07b4cbc6 in _mm_storeu_si128 (__B=..., __P=<optimized out>)
> at /usr/lib/gcc/x86_64-redhat-linux/8/include/emmintrin.h:721
> #6  clib_mov16 (src=0x7efcb99f8c00 "`", dst=<optimized out>) at
> /usr/src/debug/vpp-21.06.0-
> 4~g2a4131173_dirty.x86_64/src/vppinfra/memcpy_avx2.h:65
> #7  clib_memcpy_fast_avx2 (n=45, src=0x7efcb99f8c00, dst=<optimized out>)
> at /usr/src/debug/vpp-21.06.0-
> 4~g2a4131173_dirty.x86_64/src/vppinfra/memcpy_avx2.h:166
> #8  clib_memcpy_fast (n=45, src=0x7efcb99f8c00, dst=<optimized out>) at
> /usr/src/debug/vpp-21.06.0-
> 4~g2a4131173_dirty.x86_64/src/vppinfra/string.h:97
> #9  svm_fifo_copy_to_chunk_ma_skx (f=0x7efccb716040, c=0x7ef44e1e2cb0,
> tail_idx=<optimized out>, src=0x7efcb99f8c00 "`", len=45,
> last=0x7ef44e000c08)
> at /usr/src/debug/vpp-21.06.0-
> 4~g2a4131173_dirty.x86_64/src/svm/svm_fifo.c:53
> #10 0x7efd07b49b73 in svm_fifo_copy_to_chunk (last=<optimized out>,
> len=<optimized out>, src=<optimized out>, tail_idx=<optimized out>,
> c=<optimized out>, f=0x7efccb716040)
> at /usr/src/debug/vpp-21.06.0-
> 4~g2a4131173_dirty.x86_64/src/svm/svm_fifo.c:96
> #11 svm_fifo_enqueue_segments (f=0x7efccb716040,
> segs=segs@entry=0x7efcb99f7fb0, n_segs=n_segs@entry=2,
> allow_partial=allow_partial@entry=0 '\000')
> at /usr/src/debug/vpp-21.06.0-
> 4~g2a4131173_dirty.x86_64/src/svm/svm_fifo.c:1006
> #12 0x7efd0883cd25 in session_enqueue_dgram_connection
> (s=s@entry=0x7efccb6d3580, hdr=0x7efcb99f8c00, b=0x1005415480,
> proto=proto@entry=1 '\001',
> queue_event=queue_event@entry=1 '\001') at /usr/src/debug/vpp-21.06.0-
> 4~g2a4131173_dirty.x86_64/src/vnet/session/session.c:637
> #13 0x7efd0869e089 in udp_connection_enqueue (uc0=0x7efccb671e40,
> s0=0x7efccb6d3580, hdr0=hdr0@entry=0x7efcb99f8c00,
> thread_index=thread_index@entry=4,
> b=b@entry=0x1005415480, queue_event=queue_event@entry=1 '\001',
> error0=0x7efcb99f80d4) at /usr/src/debug/vpp-21.06.0-
> 4~g2a4131173_dirty.x86_64/src/vnet/udp/udp_input.c:159
> #14 0x7efd0869e4d7 in udp46_input_inline (is_ip4=0 '\000',
> frame=<optimized out>, node=<optimized out>, vm=<optimized out>)
> at /usr/src/debug/vpp-21.06.0-
> 4~g2a4131173_dirty.x86_64/src/vnet/udp/udp_input.c:311
> #15 udp6_input (vm=<optimized out>, node=<optimized out>, frame=<optimized out>) at /usr/src/debug/vpp-21.06.0-
> 4~g2a4131173_dirty.x86_64/src/vnet/udp/udp_input.c:368
> #16 0x7efd07d8e172 in dispatch_pending_node (vm=0x7efcca76a840,
> pending_frame_index=<optimized out>, last_time_stamp=<optimized out>)
> at /usr/src/debug/vpp-21.06.0-
> 4~g2a4131173_dirty.x86_64/src/vlib/main.c:1024
> #17 0x7efd07d8fc6f in vlib_worker_loop (vm=vm@entry=0x7efcca76a840) at
> /usr/src/debug/vpp-21.06.0-4~g2a4131173_dirty.x86_64/src/vlib/main.c:1649
> #18 0x7efd07dc8738 in vlib_worker_thread_fn (arg=<optimized out>) at
> /usr/src/debug/vpp-21.06.0-
> 4~g2a4131173_dirty.x86_64/src/vlib/threads.c:1560
> #19 0x7efd072efee8 in clib_calljmp () at /usr/src/debug/vpp-21.06.0-
> 4~g2a4131173_dirty.x86_64/src/vppinfra/longjmp.S:123
> #20 0x7efcba7fbd20 in ?? ()
> #21 0x7efcc4ff41c5 in eal_thread_loop.cold () from
> /usr/lib/vpp_plugins/dpdk_plugin.so
> 




Re: [vpp-dev] how BIRD routes integrate into vpp

2022-06-08 Thread Petr Boltík
Hi,
1. Instead of "ExecStart=ip netns exec "  you should use the systemd
option to run the service in a specific netns:
NetworkNamespacePath=/var/run/netns/yourNamespace
(see the unit drop-in sketch below)
2. a) IMHO the LCP plugin (vpp 22.06) configuration option "lcp default netns"
is useless here. This setting only specifies where LCP should create interfaces.
If VPP runs in a specific namespace (systemd NetworkNamespacePath...), LCP
works inside this namespace.
2. b) The VPP linux_nl_plugin syncs routes with the namespace where VPP runs,
so VPP should run in a specific namespace (systemd NetworkNamespacePath...).
In my test, linux_nl_plugin ignores "lcp default netns".
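
A minimal sketch of that for the vpp service itself (assuming a namespace named
"dataplane"; the drop-in path is only an example):

# /etc/systemd/system/vpp.service.d/netns.conf
[Service]
NetworkNamespacePath=/var/run/netns/dataplane

Then run "systemctl daemon-reload" and restart vpp.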

Petr B.

On Wed, 8 Jun 2022 at 11:22, Pim van Pelt  wrote:

> Hoi,
>
> Please take a look at the following - it describes how to add Bird into a
> dedicated *network namespace* (typically 'dataplane') precisely so that
> the default kernel table is not impacted. From there, the LinuxCP plugin
> listens to that 'dataplane' namespace instead and installs all routes.
> https://ipng.ch/s/articles/2021/09/02/vpp-5.html
>
> To take an Ubuntu server and walk through the entire install, so that it
> is built/configured as a full router, take a look at this article:
> https://ipng.ch/s/articles/2021/09/21/vpp-7.html
>
> The thing you're missing, and may not be obvious from the articles, is
> that you will run bird (and other things, like sshd or snmpd, etc) in this
> new namespace, like so (note the bold part in ExecStart):
>
> pim@nlams0:~$ cat << EOF | sudo tee
> /usr/lib/systemd/system/bird-dataplane.service
> [Unit]
> Description=BIRD Internet Routing Daemon
> After=network.target netns-dataplane.service
>
> [Service]
> EnvironmentFile=/etc/bird/envvars
> ExecStartPre=/usr/lib/bird/prepare-environment
> ExecStartPre=/usr/sbin/bird -p
> ExecReload=/usr/sbin/birdc configure
> ExecStart=*/sbin/ip netns exec dataplane */usr/sbin/bird -f -u
> $BIRD_RUN_USER -g $BIRD_RUN_GROUP $BIRD_ARGS
> Restart=on-abort
>
> [Install]
> WantedBy=multi-user.target
> EOF
>
> After that, you can stop, disable and mask the 'bird' systemd unit, and
> enable the 'bird-dataplane' unit, after creating the network namespace (
> https://ipng.ch/s/articles/2021/09/21/vpp-7.html describes this in
> detail).
>
> groet,
> Pim
>
> On Wed, Jun 8, 2022 at 11:11 AM 李海艳  wrote:
>
>> Thanks very much, now my vpp and bird could work together.
>>
>>  we hope some routes that BGP received  not be added to kernel route
>> table, so that the default kernel table won't be impacted, meanwhile we
>> hope these routes to be added to vpp.
>>
>> we have tried several methods, none could achieve this goal, do you have
>> any suggestions?
>>
>>
>> --
>> haiyan...@ilinkall.cn
>>
>>
>> *From:* Pim van Pelt 
>> *Date:* 2022-04-24 16:56
>> *To:* haiyan...@ilinkall.cn
>> *CC:* vpp-dev 
>> *Subject:* Re: [vpp-dev] how BIRD routes integrate into vpp
>> Hoi,
>>
>> 20.01 is somewhat old. Synchronizing routes from bird or frr is possible,
>> although you'll want to run Linux Control Plane plugin (pertinent gerrit:
>> https://gerrit.fd.io/r/c/vpp/+/31122) which was merged in time for the
>> last release 21.10. I wrote a set of articles on the Linux CP plugin here:
>> https://ipng.ch/s/articles/  (see the 7 posts marked "VPP Linux CP" for
>> lots of background). You could try to compile the plugin on 20.01 but I'm
>> certain you'll have to make some changes on your own, as the codebase has
>> evolved quite a bit since your 20.01 release from 2020.
>>
>> You can also take a look at a screencast I made that shows the plugin
>> (that reads exactly from Bird as you wish):
>> https://asciinema.org/a/432943
>>
>> groet,
>> Pim
>>
>> On Sun, Apr 24, 2022 at 9:32 AM haiyan...@ilinkall.cn <
>> haiyan...@ilinkall.cn> wrote:
>>
>>> Dear All:
>>>
>>>
>>> I'm going to use BIRD(BGP) working together with VPP, but how can I
>>> synchronize BIRD routes to VPP, any suggestions or reference?
>>>
>>> VPP version is 20.01,  try not to update VPP to new version as possible
>>> as we can .
>>>
>>>
>>>
>>>
>>
>> --
>> Pim van Pelt 
>> PBVP1-RIPE - http://www.ipng.nl/
>>
>>
>
> --
> Pim van Pelt 
> PBVP1-RIPE - http://www.ipng.nl/
>
> 
>
>




[vpp-dev] Crash in VPP 21.06 #vnet

2022-06-08 Thread Raj Kumar
Hi,
I am observing some infrequent crashes in VPP. I am using VCL for receiving UDP 
packets in my application. We compiled VPP with DPDK and use a Mellanox NIC to 
receive UDP packets (the MTU is set to 9000). Attached is the startup.conf 
file.

#0  0x7efd06b8837f in raise () from /lib64/libc.so.6
#1  0x7efd06b72db5 in abort () from /lib64/libc.so.6
#2  0x55ffa1ed1134 in os_exit (code=code@entry=1) at 
/usr/src/debug/vpp-21.06.0-4~g2a4131173_dirty.x86_64/src/vpp/vnet/main.c:431
#3  0x7efd07de215a in unix_signal_handler (signum=11, si=<optimized out>, 
uc=<optimized out>)
at /usr/src/debug/vpp-21.06.0-4~g2a4131173_dirty.x86_64/src/vlib/unix/main.c:187
#4  <signal handler called>
#5  0x7efd07b4cbc6 in _mm_storeu_si128 (__B=..., __P=<optimized out>) at 
/usr/lib/gcc/x86_64-redhat-linux/8/include/emmintrin.h:721
#6  clib_mov16 (src=0x7efcb99f8c00 "`", dst=<optimized out>) at 
/usr/src/debug/vpp-21.06.0-4~g2a4131173_dirty.x86_64/src/vppinfra/memcpy_avx2.h:65
#7  clib_memcpy_fast_avx2 (n=45, src=0x7efcb99f8c00, dst=<optimized out>) at 
/usr/src/debug/vpp-21.06.0-4~g2a4131173_dirty.x86_64/src/vppinfra/memcpy_avx2.h:166
#8  clib_memcpy_fast (n=45, src=0x7efcb99f8c00, dst=<optimized out>) at 
/usr/src/debug/vpp-21.06.0-4~g2a4131173_dirty.x86_64/src/vppinfra/string.h:97
#9  svm_fifo_copy_to_chunk_ma_skx (f=0x7efccb716040, c=0x7ef44e1e2cb0, 
tail_idx=<optimized out>, src=0x7efcb99f8c00 "`", len=45, last=0x7ef44e000c08)
at /usr/src/debug/vpp-21.06.0-4~g2a4131173_dirty.x86_64/src/svm/svm_fifo.c:53
#10 0x7efd07b49b73 in svm_fifo_copy_to_chunk (last=<optimized out>, 
len=<optimized out>, src=<optimized out>, tail_idx=<optimized out>, 
c=<optimized out>, f=0x7efccb716040)
at /usr/src/debug/vpp-21.06.0-4~g2a4131173_dirty.x86_64/src/svm/svm_fifo.c:96
#11 svm_fifo_enqueue_segments (f=0x7efccb716040, 
segs=segs@entry=0x7efcb99f7fb0, n_segs=n_segs@entry=2, 
allow_partial=allow_partial@entry=0 '\000')
at /usr/src/debug/vpp-21.06.0-4~g2a4131173_dirty.x86_64/src/svm/svm_fifo.c:1006
#12 0x7efd0883cd25 in session_enqueue_dgram_connection 
(s=s@entry=0x7efccb6d3580, hdr=0x7efcb99f8c00, b=0x1005415480, 
proto=proto@entry=1 '\001',
queue_event=queue_event@entry=1 '\001') at 
/usr/src/debug/vpp-21.06.0-4~g2a4131173_dirty.x86_64/src/vnet/session/session.c:637
#13 0x7efd0869e089 in udp_connection_enqueue (uc0=0x7efccb671e40, 
s0=0x7efccb6d3580, hdr0=hdr0@entry=0x7efcb99f8c00, 
thread_index=thread_index@entry=4,
b=b@entry=0x1005415480, queue_event=queue_event@entry=1 '\001', 
error0=0x7efcb99f80d4) at 
/usr/src/debug/vpp-21.06.0-4~g2a4131173_dirty.x86_64/src/vnet/udp/udp_input.c:159
#14 0x7efd0869e4d7 in udp46_input_inline (is_ip4=0 '\000', frame=<optimized out>, node=<optimized out>, vm=<optimized out>)
at 
/usr/src/debug/vpp-21.06.0-4~g2a4131173_dirty.x86_64/src/vnet/udp/udp_input.c:311
#15 udp6_input (vm=<optimized out>, node=<optimized out>, frame=<optimized out>) at 
/usr/src/debug/vpp-21.06.0-4~g2a4131173_dirty.x86_64/src/vnet/udp/udp_input.c:368
#16 0x7efd07d8e172 in dispatch_pending_node (vm=0x7efcca76a840, 
pending_frame_index=<optimized out>, last_time_stamp=<optimized out>)
at /usr/src/debug/vpp-21.06.0-4~g2a4131173_dirty.x86_64/src/vlib/main.c:1024
#17 0x7efd07d8fc6f in vlib_worker_loop (vm=vm@entry=0x7efcca76a840) at 
/usr/src/debug/vpp-21.06.0-4~g2a4131173_dirty.x86_64/src/vlib/main.c:1649
#18 0x7efd07dc8738 in vlib_worker_thread_fn (arg=<optimized out>) at 
/usr/src/debug/vpp-21.06.0-4~g2a4131173_dirty.x86_64/src/vlib/threads.c:1560
#19 0x7efd072efee8 in clib_calljmp () at 
/usr/src/debug/vpp-21.06.0-4~g2a4131173_dirty.x86_64/src/vppinfra/longjmp.S:123
#20 0x7efcba7fbd20 in ?? ()
#21 0x7efcc4ff41c5 in eal_thread_loop.cold () from 
/usr/lib/vpp_plugins/dpdk_plugin.so


startup.conf
Description: Binary data




Re: [vpp-dev] Support of "qos egress map" with GTP tunnels

2022-06-08 Thread Ray Kinsella

"Martin Dzuris"  writes:

> Hi
>
>  
>
> We need to update DSCP value on gtp output traffic (inside and outside of the 
> tunnel too ). 
>
>  
>
> I'm creating GTP tunnel : 
>
>  
>
> ...
>
> vppctl set in state GigabitEthernet0/9/0  up
>
> vppctl create sub-interfaces GigabitEthernet0/9/0 5 dot1q 100 exact-match
>
> vppctl set interface state GigabitEthernet0/9/0.5 up   
>
> vppctl set int ip address GigabitEthernet0/9/0.5  22.22.2.2/24
>
> vppctl create gtpu tunnel src 22.22.2.2 dst 22.22.2.4 teid 13 tteid 14 
> encap-vrf-id 0 decap-next ip4
>
> vppctl set interface ip table gtpu_tunnel0 22
>
> vppctl set in state gtpu_tunnel0 up
>
> vppctl set int ip address gtpu_tunnel0 50.50.50.1/24
>
>  
>
>  
>
> and I tried to mark packet on gtp interface : 
>
>  
>
> vppctl qos egress map id 0 [ip][0]=30
>
> vppctl qos egress map id 1 [ip][1]=31
>
> vppctl qos mark ext gtpu_tunnel0 id 0
>
> vppctl qos mark ext GigabitEthernet0/9/0.5 id 1
>
>

Does `show error` reveal anything useful?
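
For instance (clear the counters first so the output only reflects the test
traffic):

vppctl clear errors
vppctl show errors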

>
>  
>
> Example from Trace : 
>
>  
>
> 04:16:56:227608: ip4-qos-mark
>
>   source:ext qos:30 used:no
>
> 04:16:56:227611: gtpu4-encap
>
>   GTPU encap to gtpu_tunnel0 tteid 14
>
>  
>
>  
>
> Result is that packets are not updated . Is QOS mark feature supported on GTP 
> interface and on sub-interfaces ? Can you advise me how to do it?
>
> Martin 
>
>
> 


-- 
Regards, Ray K




[vpp-dev] Support of "qos egress map" with GTP tunnels

2022-06-08 Thread Martin Dzuris
Hi



We need to update the DSCP value on GTP output traffic (both inside and outside
of the tunnel).



I'm creating a GTP tunnel:



...

vppctl set in state GigabitEthernet0/9/0  up

vppctl create sub-interfaces GigabitEthernet0/9/0 5 dot1q 100 exact-match

vppctl set interface state GigabitEthernet0/9/0.5 up

vppctl set int ip address GigabitEthernet0/9/0.5  22.22.2.2/24

vppctl create gtpu tunnel src 22.22.2.2 dst 22.22.2.4 teid 13 tteid 14
encap-vrf-id 0 decap-next ip4

vppctl set interface ip table gtpu_tunnel0 22

vppctl set in state gtpu_tunnel0 up

vppctl set int ip address gtpu_tunnel0 50.50.50.1/24





and I tried to mark packets on the GTP interface:



vppctl qos egress map id 0 [ip][0]=30

vppctl qos egress map id 1 [ip][1]=31

vppctl qos mark ext gtpu_tunnel0 id 0

vppctl qos mark ext GigabitEthernet0/9/0.5 id 1





Example from Trace :



04:16:56:227608: ip4-qos-mark

  source:ext qos:30 used:no

04:16:56:227611: gtpu4-encap

  GTPU encap to gtpu_tunnel0 tteid 14





The result is that packets are not updated. Is the QoS mark feature supported on
GTP interfaces and on sub-interfaces? Can you advise me how to do it?

Martin




Re: [vpp-dev] how BIRD routes integrate into vpp

2022-06-08 Thread Pim van Pelt
Hoi,

Please take a look at the following - it describes how to add Bird into a
dedicated *network namespace* (typically 'dataplane') precisely so that the
default kernel table is not impacted. From there, the LinuxCP plugin
listens to that 'dataplane' namespace instead and installs all routes.
https://ipng.ch/s/articles/2021/09/02/vpp-5.html
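
As a concrete example, once that namespace exists, each VPP interface gets its
Linux counterpart in it with something like the following (a sketch; the
interface and host-if names are placeholders, and the lcp CLI form is from
memory):

vppctl lcp create GigabitEthernet0/0/0 host-if e0 netns dataplane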

To take an Ubuntu server and walk through the entire install, so that it is
built/configured as a full router, take a look at this article:
https://ipng.ch/s/articles/2021/09/21/vpp-7.html

The thing you're missing, and may not be obvious from the articles, is that
you will run bird (and other things, like sshd or snmpd, etc) in this new
namespace, like so (note the bold part in ExecStart):

pim@nlams0:~$ cat << EOF | sudo tee
/usr/lib/systemd/system/bird-dataplane.service
[Unit]
Description=BIRD Internet Routing Daemon
After=network.target netns-dataplane.service

[Service]
EnvironmentFile=/etc/bird/envvars
ExecStartPre=/usr/lib/bird/prepare-environment
ExecStartPre=/usr/sbin/bird -p
ExecReload=/usr/sbin/birdc configure
ExecStart=*/sbin/ip netns exec dataplane */usr/sbin/bird -f -u
$BIRD_RUN_USER -g $BIRD_RUN_GROUP $BIRD_ARGS
Restart=on-abort

[Install]
WantedBy=multi-user.target
EOF

After that, you can stop, disable and mask the 'bird' systemd unit, and
enable the 'bird-dataplane' unit, after creating the network namespace (
https://ipng.ch/s/articles/2021/09/21/vpp-7.html describes this in detail).

groet,
Pim

On Wed, Jun 8, 2022 at 11:11 AM 李海艳  wrote:

> Thanks very much, now my vpp and bird could work together.
>
>  we hope some routes that BGP received  not be added to kernel route
> table, so that the default kernel table won't be impacted, meanwhile we
> hope these routes to be added to vpp.
>
> we have tried several methods, none could achieve this goal, do you have
> any suggestions?
>
>
> --
> haiyan...@ilinkall.cn
>
>
> *From:* Pim van Pelt 
> *Date:* 2022-04-24 16:56
> *To:* haiyan...@ilinkall.cn
> *CC:* vpp-dev 
> *Subject:* Re: [vpp-dev] how BIRD routes integrate into vpp
> Hoi,
>
> 20.01 is somewhat old. Synchronizing routes from bird or frr is possible,
> although you'll want to run Linux Control Plane plugin (pertinent gerrit:
> https://gerrit.fd.io/r/c/vpp/+/31122) which was merged in time for the
> last release 21.10. I wrote a set of articles on the Linux CP plugin here:
> https://ipng.ch/s/articles/  (see the 7 posts marked "VPP Linux CP" for
> lots of background). You could try to compile the plugin on 20.01 but I'm
> certain you'll have to make some changes on your own, as the codebase has
> evolved quite a bit since your 20.01 release from 2020.
>
> You can also take a look at a screencast I made that shows the plugin
> (that reads exactly from Bird as you wish): https://asciinema.org/a/432943
>
> groet,
> Pim
>
> On Sun, Apr 24, 2022 at 9:32 AM haiyan...@ilinkall.cn <
> haiyan...@ilinkall.cn> wrote:
>
>> Dear All:
>>
>>
>> I'm going to use BIRD(BGP) working together with VPP, but how can I
>> synchronize BIRD routes to VPP, any suggestions or reference?
>>
>> VPP version is 20.01,  try not to update VPP to new version as possible
>> as we can .
>>
>> 
>>
>>
>
> --
> Pim van Pelt 
> PBVP1-RIPE - http://www.ipng.nl/
>
>

-- 
Pim van Pelt 
PBVP1-RIPE - http://www.ipng.nl/




Re: [vpp-dev] how BIRD routes integrate into vpp

2022-06-08 Thread haiyan...@ilinkall.cn
Thanks very much, now my vpp and bird work together.

We hope that some of the routes received via BGP are not added to the kernel
route table, so that the default kernel table won't be impacted, while these
routes are still added to vpp.

We have tried several methods, but none could achieve this goal. Do you have
any suggestions?




haiyan...@ilinkall.cn
 
From: Pim van Pelt
Date: 2022-04-24 16:56
To: haiyan...@ilinkall.cn
CC: vpp-dev
Subject: Re: [vpp-dev] how BIRD routes integrate into vpp
Hoi,

20.01 is somewhat old. Synchronizing routes from bird or frr is possible, 
although you'll want to run Linux Control Plane plugin (pertinent gerrit: 
https://gerrit.fd.io/r/c/vpp/+/31122) which was merged in time for the last 
release 21.10. I wrote a set of articles on the Linux CP plugin here: 
https://ipng.ch/s/articles/  (see the 7 posts marked "VPP Linux CP" for lots of 
background). You could try to compile the plugin on 20.01 but I'm certain 
you'll have to make some changes on your own, as the codebase has evolved quite 
a bit since your 20.01 release from 2020.

You can also take a look at a screencast I made that shows the plugin (that 
reads exactly from Bird as you wish): https://asciinema.org/a/432943

groet,
Pim

On Sun, Apr 24, 2022 at 9:32 AM haiyan...@ilinkall.cn  
wrote:
Dear All:
 
I'm going to use BIRD (BGP) together with VPP, but how can I synchronize
BIRD routes to VPP? Any suggestions or references?

VPP version is 20.01; we would prefer not to update VPP to a newer version if
we can avoid it.





-- 
Pim van Pelt  
PBVP1-RIPE - http://www.ipng.nl/
