Re: [vpp-dev] unformat_vnet_uri not implemented following RFC 3986

2021-05-26 Thread Florin Coras
Hi, 

That unformat function and the associated session layer APIs (e.g., 
vnet_connect_uri) are mainly used for testing and their production use is 
discouraged. Provided that functionality is not lost, if anybody wants to do 
the work, I don’t see why we wouldn’t want to make the unformat function RFC 
compliant. At this point I can’t remember why we settled on the use of “/”, but 
I suspect it may have to do with easier parsing of IPv6 addresses. 

Regards,
Florin

> On May 26, 2021, at 8:04 PM, jiangxiaom...@outlook.com wrote:
> 
> Hi Florin:
> Currently unformat_vnet_uri is not implemented following RFC 3986: the 
> syntax `tcp://10.0.0.1/500` should be `tcp://10.0.0.1:500` per RFC 3986.
> I noticed there is a comment for `unformat_vnet_uri` in 
> `src/vnet/session/application_interface.c`:
> ```
> /**
>  * unformat a vnet URI
>  *
>  * transport-proto://[hostname]ip46-addr:port
>  * eg.  tcp://ip46-addr:port
>  *  tls://[testtsl.fd.io]ip46-addr:port
>  *
>  ...
> ```
> Does it mean `unformat_vnet_uri` will be refactored to follow the RFC in the future?





[vpp-dev] unformat_vnet_uri not implemented following RFC 3986

2021-05-26 Thread jiangxiaoming
Hi Florin:
Currently unformat_vnet_uri is not implemented following RFC 3986: the syntax 
`tcp://10.0.0.1/500` should be `tcp://10.0.0.1:500` per RFC 3986.
I noticed there is a comment for `unformat_vnet_uri` in 
`src/vnet/session/application_interface.c`:
```
/**
* unformat a vnet URI
*
* transport-proto://[hostname]ip46-addr:port
* eg. tcp://ip46-addr:port
* tls://[testtsl.fd.io]ip46-addr:port
*
...
```
Does it mean `unformat_vnet_uri` will be refactored to follow the RFC in the future?
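
For concreteness, here is a minimal sketch of what RFC 3986-style parsing
involves (plain C, not VPP's actual unformat machinery; all names are
illustrative). It also shows the bracketed-IPv6 form the RFC mandates, which
is likely the ambiguity Florin alludes to above: with an unbracketed IPv6
address, ':' cannot double as the port separator.

```c
/* Illustrative only: plain C, not VPP's unformat machinery. */
#include <stdio.h>
#include <string.h>

/* Parse "proto://host:port" per RFC 3986. IPv6 literals must be
 * bracketed ("tcp://[2001:db8::1]:500"); otherwise the colons in the
 * address would be ambiguous with the port separator. */
static int
parse_uri (const char *uri, char *proto, char *host, int *port)
{
  const char *p = strstr (uri, "://");
  if (!p || (size_t) (p - uri) >= 16)
    return -1;
  memcpy (proto, uri, p - uri);
  proto[p - uri] = 0;
  p += 3;

  if (*p == '[')                        /* RFC 3986 bracketed IPv6 literal */
    {
      const char *end = strchr (++p, ']');
      if (!end || end[1] != ':' || (size_t) (end - p) >= 64)
        return -1;
      memcpy (host, p, end - p);
      host[end - p] = 0;
      return sscanf (end + 2, "%d", port) == 1 ? 0 : -1;
    }

  /* IPv4 address or hostname: the last ':' starts the port. */
  const char *colon = strrchr (p, ':');
  if (!colon || (size_t) (colon - p) >= 64)
    return -1;
  memcpy (host, p, colon - p);
  host[colon - p] = 0;
  return sscanf (colon + 1, "%d", port) == 1 ? 0 : -1;
}

int
main (void)
{
  char proto[16], host[64];
  int port;
  if (!parse_uri ("tcp://[2001:db8::1]:500", proto, host, &port))
    printf ("%s | %s | %d\n", proto, host, port); /* tcp | 2001:db8::1 | 500 */
  return 0;
}
```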




Re: [vpp-dev] IPsec crash with async crypto

2021-05-26 Thread Florin Coras
Hi Matt, 

Did you try checking if the quic plugin is loaded, just to see if there’s a 
connection there?

Regards,
Florin

> On May 26, 2021, at 3:19 PM, Matthew Smith via lists.fd.io 
>  wrote:
> 
> Hi,
> 
> I saw VPP crash several times during some tests that were running to evaluate 
> IPsec performance. The last upstream commit on my build of VPP is 'fd77f8c00 
> quic: remove cmake --target'. The tests ran on a C3000 with an onboard QAT. 
> The tests were repeated with the QAT removed from the device whitelist in 
> startup.conf (using async crypto with sw_scheduler) and the same thing 
> happened.
> 
> The relevant part of the stack trace looks like this:
> 
> #8  0x7fdbb4006459 in os_out_of_memory () at 
> /usr/src/debug/vpp-21.01-568~g67ff5da46.el8.x86_64/src/vppinfra/unix-misc.c:221
> #9  0x7fdbb400d1fb in clib_mem_alloc_aligned_at_offset 
> (size=2305843009213692256, align=8, align_offset=8, 
> os_out_of_memory_on_failure=1) at 
> /usr/src/debug/vpp-21.01-568~g67ff5da46.el8.x86_64/src/vppinfra/mem.h:243
> #10 vec_resize_allocate_memory (v=0x7fdb36a9b7f0, 
> length_increment=288230376151711515, data_bytes=2305843009213692256, 
> header_bytes=8, data_align=8, numa_id=255) at 
> /usr/src/debug/vpp-21.01-568~g67ff5da46.el8.x86_64/src/vppinfra/vec.c:111
> #11 0x7fdbb60efe01 in _vec_resize_inline (v=0x7fdb36a9b7f0, 
> length_increment=288230376151711515, data_bytes=2305843009213692248, 
> header_bytes=0, data_align=8, numa_id=255) at 
> /usr/src/debug/vpp-21.01-568~g67ff5da46.el8.x86_64/src/vppinfra/vec.h:170
> #12 clib_bitmap_ori_notrim (ai=0x7fdb36a9b7f0, i=18446744073709537927) at 
> /usr/src/debug/vpp-21.01-568~g67ff5da46.el8.x86_64/src/vppinfra/bitmap.h:643
> #13 vnet_crypto_async_free_frame (vm=0x7fdb356f7a80, frame=0x7fdb3461c280) at 
> /usr/src/debug/vpp-21.01-568~g67ff5da46.el8.x86_64/src/vnet/crypto/crypto.h:585
> #14 crypto_dequeue_frame (vm=0x7fdb356f7a80, node=0x7fdb36bbd280, 
> ct=0x7fdb33537f80, hdl=0x7fdb2bc32810 , n_cache=1, 
> n_total=0x7fdb145053dc) at 
> /usr/src/debug/vpp-21.01-568~g67ff5da46.el8.x86_64/src/vnet/crypto/node.c:135
> #15 crypto_dispatch_node_fn (vm=0x7fdb356f7a80, node=0x7fdb36bbd280, 
> frame=0x0) at 
> /usr/src/debug/vpp-21.01-568~g67ff5da46.el8.x86_64/src/vnet/crypto/node.c:166
> #16 0x7fdbb4b789e5 in dispatch_node (vm=0x7fdb356f7a80, 
> node=0x7fdb36bbd280, type=VLIB_NODE_TYPE_INPUT, 
> dispatch_state=VLIB_NODE_STATE_POLLING, frame=0x0, 
> last_time_stamp=207016971809128) at 
> /usr/src/debug/vpp-21.01-568~g67ff5da46.el8.x86_64/src/vlib/main.c:1024
> #17 vlib_main_or_worker_loop (vm=0x7fdb356f7a80, is_main=0) at 
> /usr/src/debug/vpp-21.01-568~g67ff5da46.el8.x86_64/src/vlib/main.c:1618
> 
> In vnet_crypto_async_free_frame() it appears that a call to pool_put() is 
> trying to return a pointer to a pool that it is not a member of:
> 
> (gdb) frame 13
> #13 vnet_crypto_async_free_frame (vm=0x7fdb356f7a80, frame=0x7fdb3461c280) at 
> /usr/src/debug/vpp-21.01-568~g67ff5da46.el8.x86_64/src/vnet/crypto/crypto.h:585
> 585  pool_put (ct->frame_pool, frame);
> (gdb) p frame - ct->frame_pool
> $1 = -13689
> 
> It seems like maybe a pointer to a vnet_crypto_async_frame_t was stored by 
> the crypto engine and, before it could be dequeued, the pool filled and had to 
> be reallocated. The per-thread frame_pools are allocated with room for 1024 
> entries initially, and ct->frame_pool had a vector length of 1025 when the 
> crash occurred.
> 
> Can anyone with knowledge of the async crypto code confirm or refute that 
> theory? Anyone have suggestions on the best way to fix this?
> 
> Thanks,
> -Matt





[vpp-dev] IPsec crash with async crypto

2021-05-26 Thread Matthew Smith via lists.fd.io
Hi,

I saw VPP crash several times during some tests that were running to
evaluate IPsec performance. The last upstream commit on my build of VPP is
'fd77f8c00 quic: remove cmake --target'. The tests ran on a C3000 with an
onboard QAT. The tests were repeated with the QAT removed from the device
whitelist in startup.conf (using async crypto with sw_scheduler) and the
same thing happened.

The relevant part of the stack trace looks like this:

#8  0x7fdbb4006459 in os_out_of_memory () at
/usr/src/debug/vpp-21.01-568~g67ff5da46.el8.x86_64/src/vppinfra/unix-misc.c:221
#9  0x7fdbb400d1fb in clib_mem_alloc_aligned_at_offset
(size=2305843009213692256, align=8, align_offset=8,
os_out_of_memory_on_failure=1) at
/usr/src/debug/vpp-21.01-568~g67ff5da46.el8.x86_64/src/vppinfra/mem.h:243
#10 vec_resize_allocate_memory (v=0x7fdb36a9b7f0,
length_increment=288230376151711515, data_bytes=2305843009213692256,
header_bytes=8, data_align=8, numa_id=255) at
/usr/src/debug/vpp-21.01-568~g67ff5da46.el8.x86_64/src/vppinfra/vec.c:111
#11 0x7fdbb60efe01 in _vec_resize_inline (v=0x7fdb36a9b7f0,
length_increment=288230376151711515, data_bytes=2305843009213692248,
header_bytes=0, data_align=8, numa_id=255) at
/usr/src/debug/vpp-21.01-568~g67ff5da46.el8.x86_64/src/vppinfra/vec.h:170
#12 clib_bitmap_ori_notrim (ai=0x7fdb36a9b7f0, i=18446744073709537927) at
/usr/src/debug/vpp-21.01-568~g67ff5da46.el8.x86_64/src/vppinfra/bitmap.h:643
#13 vnet_crypto_async_free_frame (vm=0x7fdb356f7a80, frame=0x7fdb3461c280)
at
/usr/src/debug/vpp-21.01-568~g67ff5da46.el8.x86_64/src/vnet/crypto/crypto.h:585
#14 crypto_dequeue_frame (vm=0x7fdb356f7a80, node=0x7fdb36bbd280,
ct=0x7fdb33537f80, hdl=0x7fdb2bc32810 , n_cache=1,
n_total=0x7fdb145053dc) at
/usr/src/debug/vpp-21.01-568~g67ff5da46.el8.x86_64/src/vnet/crypto/node.c:135
#15 crypto_dispatch_node_fn (vm=0x7fdb356f7a80, node=0x7fdb36bbd280,
frame=0x0) at
/usr/src/debug/vpp-21.01-568~g67ff5da46.el8.x86_64/src/vnet/crypto/node.c:166
#16 0x7fdbb4b789e5 in dispatch_node (vm=0x7fdb356f7a80,
node=0x7fdb36bbd280, type=VLIB_NODE_TYPE_INPUT,
dispatch_state=VLIB_NODE_STATE_POLLING, frame=0x0,
last_time_stamp=207016971809128) at
/usr/src/debug/vpp-21.01-568~g67ff5da46.el8.x86_64/src/vlib/main.c:1024
#17 vlib_main_or_worker_loop (vm=0x7fdb356f7a80, is_main=0) at
/usr/src/debug/vpp-21.01-568~g67ff5da46.el8.x86_64/src/vlib/main.c:1618

In vnet_crypto_async_free_frame() it appears that a call to pool_put() is
trying to return a pointer to a pool that it is not a member of:

(gdb) frame 13
#13 vnet_crypto_async_free_frame (vm=0x7fdb356f7a80, frame=0x7fdb3461c280)
at
/usr/src/debug/vpp-21.01-568~g67ff5da46.el8.x86_64/src/vnet/crypto/crypto.h:585
585  pool_put (ct->frame_pool, frame);
(gdb) p frame - ct->frame_pool
$1 = -13689

It seems like maybe a pointer to a vnet_crypto_async_frame_t was stored by
the crypto engine and, before it could be dequeued, the pool filled and had
to be reallocated. The per-thread frame_pools are allocated with room for
1024 entries initially, and ct->frame_pool had a vector length of 1025 when
the crash occurred.

Can anyone with knowledge of the async crypto code confirm or refute that
theory? Anyone have suggestions on the best way to fix this?

Thanks,
-Matt
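
For illustration, the suspected failure mode reduces to the following (plain C,
with realloc standing in for the vppinfra pool/vec machinery; all names are
made up). A raw element pointer held across a pool growth dangles, while an
element index survives the move, which suggests one possible direction for a
fix:

```c
/* Plain-C illustration of the suspected bug; not vppinfra code. */
#include <stdio.h>
#include <stdlib.h>

int
main (void)
{
  size_t cap = 1024;               /* initial frame_pool capacity */
  int *pool = malloc (cap * sizeof (int));
  pool[10] = 42;

  int *stored_ptr = &pool[10];     /* what the engine effectively keeps */
  size_t stored_index = 10;        /* what it could keep instead */

  /* Element 1025 arrives: the pool grows and may move in memory,
   * like the vec length going from 1024 to 1025 in the crash above. */
  cap *= 2;
  pool = realloc (pool, cap * sizeof (int));

  /* stored_ptr now points into the old allocation; the offset arithmetic
   * (cf. "p frame - ct->frame_pool" giving -13689) is garbage... */
  printf ("stale offset: %td\n", stored_ptr - pool);
  /* ...while the index still finds the element after the move. */
  printf ("pool[stored_index] = %d\n", pool[stored_index]);

  free (pool);
  return 0;
}
```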




Re: [vpp-dev] #plugin #vpp Linux-cp multicast issue

2021-05-26 Thread Petr Boltík
Awesome, it works.

Great job, many thanks to you.

Regards
Petr

On Wed, May 26, 2021 at 22:02, Matthew Smith wrote:

>
> Hi Petr,
>
> My earlier statement that multicast should work with no additional
> configuration required was wrong. That's only true if you have the netlink
> listener patch (https://gerrit.fd.io/r/c/vpp/+/31122) applied. I have
> that applied to my local builds and I incorrectly thought it was part of
> the linux-cp code that had already been merged to master. Sorry for the
> confusion.
>
> To punt multicast packets arriving on GigabitEthernet3/0/0 to the host, I
> think you can run these commands:
>
> vppctl ip mroute add 224.0.0.0/24 via local Forward
> vppctl ip mroute add 224.0.0.0/24 via GigabitEthernet3/0/0 Accept
>
> The explanation of the purpose of those commands that I received a while
> back was "In mfib you need to specify both where the traffic can come from
> (via an Accept path) so it passes the RPF check and where it's going to
> (via a Forward path)".
>
> -Matt
>
>
> On Wed, May 26, 2021 at 1:40 PM Petr Boltík  wrote:
>
>> Hi,
>> thank you . answers are inline below.
>>
>> Preconditions
>> The host is an apu4d4 + Debian 10.9 + BIRD 1.6.6 (old, but in the repo),
>> clean install + VPP 21.06-rc1~0-ge82d59f38~b1 (10.10.50.1/24). The opposite
>> side is a MikroTik RB4011 (10.10.50.2/24). No plugins are disabled;
>> only linux-cp is enabled.
>>
>> On Wed, May 26, 2021 at 19:08, Matthew Smith wrote:
>>
>>> Hi Petr,
>>>
>>> Responses are inline...
>>>
>>> On Wed, May 26, 2021 at 10:23 AM  wrote:
>>>
 Hello,

 I'm sorry for the beginner question, but I was unable to find the
 answer. I tested the linux-cp feature in VPP 21.10-rc0. Nice job, but I
 cannot get OSPF multicast to work; probably I'm doing something wrong, or
 should it work?

>>>
>>>
>>> Multicast should work without any special configuration.
>>>
>>> What is the output of 'vppctl show ip mfib 224.0.0.0/24' after you
>>> apply your linux-cp and interface configurations?
>>>
>>
>> Here is the first interesting point: there is no mfib route for 224.0.0.0/24
>>
>> vpp# show ip mfib 224.0.0.0/24
>> ipv4-VRF:0, fib_index:0 flags:none
>> (*, 0.0.0.0/0):  flags:Drop,
>>  fib:0 index:0 locks:1
>>   src:Default Route flags:none locks:1:  flags:Drop,
>> Extensions:
>> Interface-Forwarding:
>>   Interfaces:
>>   multicast-ip4-chain
>>   [@0]: dpo-drop ip4
>>
>>
>>
>>>
>>>

 BGP communication works fine (tcp), OSPF NBMA works fine.
 Communication from host to physical interface already passes
 (224.0.0.5).
 Communication from the physical interface to the host did not pass
 (224.0.0.5).


>>>
>>> What did you observe while determining that inbound multicast was not
>>> being passed to the host? Did you run a packet trace and confirm that the
>>> packets are arriving on the VPP hardware interface? If so, can you please
>>> send trace output for 1 or 2 of the inbound multicast packets? If not, you
>>> can run a trace via a sequence of commands like:
>>>
>>> vppctl clear trace
>>> vppctl trace filter include ip4-mfib-forward-lookup 100
>>> vppctl trace add dpdk-input 100
>>> sleep 10
>>> vppctl show trace
>>>
>>
>> Packets successfully arrive from the physical interface; ip4-input reports
>> several problems:
>> Step 1:
>>
>> Packet 3
>> 00:11:11:019194: dpdk-input
>>   GigabitEthernet3/0/0 rx queue 0
>>   buffer 0x9c0c2: current data 0, length 82, buffer-pool 0, ref-count 1,
>> totlen-nifb 0, trace handle 0x2
>>   ext-hdr-valid
>>   l4-cksum-computed l4-cksum-correct
>>   PKT MBUF: port 1, nb_segs 1, pkt_len 82
>> buf_len 2176, data_len 82, ol_flags 0x180, data_off 128, phys_addr
>> 0x72103100
>> packet_type 0x11 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
>> rss 0x0 fdir.hi 0x0 fdir.lo 0x0
>> Packet Offload Flags
>>   PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid
>>   PKT_RX_L4_CKSUM_GOOD (0x0100) L4 cksum of RX pkt. is valid
>> Packet Types
>>   RTE_PTYPE_L2_ETHER (0x0001) Ethernet packet
>>   RTE_PTYPE_L3_IPV4 (0x0010) IPv4 packet without extension headers
>>   IP4: b8:69:f4:99:85:40 -> 01:00:5e:00:00:05
>>   OSPF: 10.10.50.2 -> 224.0.0.5
>> tos 0xc0, ttl 1, length 68, checksum 0x1214 dscp CS6 ecn NON_ECN
>> fragment id 0x8a7c
>> 00:11:11:019235: ethernet-input
>>   frame: flags 0x3, hw-if-index 2, sw-if-index 2
>>   IP4: b8:69:f4:99:85:40 -> 01:00:5e:00:00:05
>> 00:11:11:019250: ip4-input-no-checksum
>>   OSPF: 10.10.50.2 -> 224.0.0.5
>> tos 0xc0, ttl 1, length 68, checksum 0x1214 dscp CS6 ecn NON_ECN
>> fragment id 0x8a7c
>> 00:11:11:019258: ip4-mfib-forward-lookup
>>   fib 0 entry 0
>> 00:11:11:019268: ip4-mfib-forward-rpf
>>   entry 0 itf -1 flags
>> 00:11:11:019273: ip4-drop
>> OSPF: 10.10.50.2 -> 224.0.0.5
>>   tos 0xc0, ttl 1, length 68, checksum 0x1214 dscp CS6 ecn NON_ECN
>>   fragment id 0x8a7c
>> 00:11:11:019276: error-drop

Re: [vpp-dev] #plugin #vpp Linux-cp multicast issue

2021-05-26 Thread Matthew Smith via lists.fd.io
Hi Petr,

My earlier statement that multicast should work with no additional
configuration required was wrong. That's only true if you have the netlink
listener patch (https://gerrit.fd.io/r/c/vpp/+/31122) applied. I have that
applied to my local builds and I incorrectly thought it was part of the
linux-cp code that had already been merged to master. Sorry for the
confusion.

To punt multicast packets arriving on GigabitEthernet3/0/0 to the host, I
think you can run these commands:

vppctl ip mroute add 224.0.0.0/24 via local Forward
vppctl ip mroute add 224.0.0.0/24 via GigabitEthernet3/0/0 Accept

The explanation of the purpose of those commands that I received a while
back was "In mfib you need to specify both where the traffic can come from
(via an Accept path) so it passes the RPF check and where it's going to
(via a Forward path)".

-Matt


On Wed, May 26, 2021 at 1:40 PM Petr Boltík  wrote:

> Hi,
> Thank you. Answers are inline below.
>
> Preconditions
> The host is an apu4d4 + Debian 10.9 + BIRD 1.6.6 (old, but in the repo),
> clean install + VPP 21.06-rc1~0-ge82d59f38~b1 (10.10.50.1/24). The opposite
> side is a MikroTik RB4011 (10.10.50.2/24). No plugins are disabled;
> only linux-cp is enabled.
>
> On Wed, May 26, 2021 at 19:08, Matthew Smith wrote:
>
>> Hi Petr,
>>
>> Responses are inline...
>>
>> On Wed, May 26, 2021 at 10:23 AM  wrote:
>>
>>> Hello,
>>>
>>> I'm sorry for the beginner question, but I was unable to find the
>>> answer. I tested the linux-cp feature in VPP 21.10-rc0. Nice job, but I
>>> cannot get OSPF multicast to work; probably I'm doing something wrong, or
>>> should it work?
>>>
>>
>>
>> Multicast should work without any special configuration.
>>
>> What is the output of 'vppctl show ip mfib 224.0.0.0/24' after you apply
>> your linux-cp and interface configurations?
>>
>
> Here is the first interesting point: there is no mfib route for 224.0.0.0/24
>
> vpp# show ip mfib 224.0.0.0/24
> ipv4-VRF:0, fib_index:0 flags:none
> (*, 0.0.0.0/0):  flags:Drop,
>  fib:0 index:0 locks:1
>   src:Default Route flags:none locks:1:  flags:Drop,
> Extensions:
> Interface-Forwarding:
>   Interfaces:
>   multicast-ip4-chain
>   [@0]: dpo-drop ip4
>
>
>
>>
>>
>>>
>>> BGP communication works fine (tcp), OSPF NBMA works fine.
>>> Communication from host to physical interface already passes (224.0.0.5).
>>> Communication from the physical interface to the host did not pass
>>> (224.0.0.5).
>>>
>>>
>>
>> What did you observe while determining that inbound multicast was not
>> being passed to the host? Did you run a packet trace and confirm that the
>> packets are arriving on the VPP hardware interface? If so, can you please
>> send trace output for 1 or 2 of the inbound multicast packets? If not, you
>> can run a trace via a sequence of commands like:
>>
>> vppctl clear trace
>> vppctl trace filter include ip4-mfib-forward-lookup 100
>> vppctl trace add dpdk-input 100
>> sleep 10
>> vppctl show trace
>>
>
> Packets successfully arrive from the physical interface; ip4-input reports
> several problems:
> Step 1:
>
> Packet 3
> 00:11:11:019194: dpdk-input
>   GigabitEthernet3/0/0 rx queue 0
>   buffer 0x9c0c2: current data 0, length 82, buffer-pool 0, ref-count 1,
> totlen-nifb 0, trace handle 0x2
>   ext-hdr-valid
>   l4-cksum-computed l4-cksum-correct
>   PKT MBUF: port 1, nb_segs 1, pkt_len 82
> buf_len 2176, data_len 82, ol_flags 0x180, data_off 128, phys_addr
> 0x72103100
> packet_type 0x11 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
> rss 0x0 fdir.hi 0x0 fdir.lo 0x0
> Packet Offload Flags
>   PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid
>   PKT_RX_L4_CKSUM_GOOD (0x0100) L4 cksum of RX pkt. is valid
> Packet Types
>   RTE_PTYPE_L2_ETHER (0x0001) Ethernet packet
>   RTE_PTYPE_L3_IPV4 (0x0010) IPv4 packet without extension headers
>   IP4: b8:69:f4:99:85:40 -> 01:00:5e:00:00:05
>   OSPF: 10.10.50.2 -> 224.0.0.5
> tos 0xc0, ttl 1, length 68, checksum 0x1214 dscp CS6 ecn NON_ECN
> fragment id 0x8a7c
> 00:11:11:019235: ethernet-input
>   frame: flags 0x3, hw-if-index 2, sw-if-index 2
>   IP4: b8:69:f4:99:85:40 -> 01:00:5e:00:00:05
> 00:11:11:019250: ip4-input-no-checksum
>   OSPF: 10.10.50.2 -> 224.0.0.5
> tos 0xc0, ttl 1, length 68, checksum 0x1214 dscp CS6 ecn NON_ECN
> fragment id 0x8a7c
> 00:11:11:019258: ip4-mfib-forward-lookup
>   fib 0 entry 0
> 00:11:11:019268: ip4-mfib-forward-rpf
>   entry 0 itf -1 flags
> 00:11:11:019273: ip4-drop
> OSPF: 10.10.50.2 -> 224.0.0.5
>   tos 0xc0, ttl 1, length 68, checksum 0x1214 dscp CS6 ecn NON_ECN
>   fragment id 0x8a7c
> 00:11:11:019276: error-drop
>   rx:GigabitEthernet3/0/0
> 00:11:11:019281: drop
>   ip4-input: *Multicast RPF check failed*
>
>
> *==> vppctl ip mroute add 0.0.0.0/0  AA*
> Step 2:
>
> Packet 2
>
> 00:15:51:052464: dpdk-input
>   GigabitEthernet3/0/0 rx queue 0
>   buffer 0x9ccf2: current data 0, 

Re: [vpp-dev] #plugin #vpp Linux-cp multicast issue

2021-05-26 Thread Petr Boltík
Hi,
Thank you. Answers are inline below.

Preconditions
The host is an apu4d4 + Debian 10.9 + BIRD 1.6.6 (old, but in the repo), clean
install + VPP 21.06-rc1~0-ge82d59f38~b1 (10.10.50.1/24). The opposite side
is a MikroTik RB4011 (10.10.50.2/24). No plugins are disabled; only
linux-cp is enabled.

On Wed, May 26, 2021 at 19:08, Matthew Smith wrote:

> Hi Petr,
>
> Responses are inline...
>
> On Wed, May 26, 2021 at 10:23 AM  wrote:
>
>> Hello,
>>
>> I'm sorry for the beginner question, but I was unable to find the answer.
>> I tested the linux-cp feature in VPP 21.10-rc0. Nice job, but I cannot get
>> OSPF multicast to work; probably I'm doing something wrong, or should it
>> work?
>>
>
>
> Multicast should work without any special configuration.
>
> What is the output of 'vppctl show ip mfib 224.0.0.0/24' after you apply
> your linux-cp and interface configurations?
>

Here is the first interesting point: there is no mfib route for 224.0.0.0/24

vpp# show ip mfib 224.0.0.0/24
ipv4-VRF:0, fib_index:0 flags:none
(*, 0.0.0.0/0):  flags:Drop,
 fib:0 index:0 locks:1
  src:Default Route flags:none locks:1:  flags:Drop,
Extensions:
Interface-Forwarding:
  Interfaces:
  multicast-ip4-chain
  [@0]: dpo-drop ip4



>
>
>>
>> BGP communication works fine (tcp), OSPF NBMA works fine.
>> Communication from host to physical interface already passes (224.0.0.5).
>> Communication from the physical interface to the host did not pass
>> (224.0.0.5).
>>
>>
>
> What did you observe while determining that inbound multicast was not
> being passed to the host? Did you run a packet trace and confirm that the
> packets are arriving on the VPP hardware interface? If so, can you please
> send trace output for 1 or 2 of the inbound multicast packets? If not, you
> can run a trace via a sequence of commands like:
>
> vppctl clear trace
> vppctl trace filter include ip4-mfib-forward-lookup 100
> vppctl trace add dpdk-input 100
> sleep 10
> vppctl show trace
>

Packets successfully arrive from the physical interface; ip4-input reports
several problems:
Step 1:

Packet 3
00:11:11:019194: dpdk-input
  GigabitEthernet3/0/0 rx queue 0
  buffer 0x9c0c2: current data 0, length 82, buffer-pool 0, ref-count 1,
totlen-nifb 0, trace handle 0x2
  ext-hdr-valid
  l4-cksum-computed l4-cksum-correct
  PKT MBUF: port 1, nb_segs 1, pkt_len 82
buf_len 2176, data_len 82, ol_flags 0x180, data_off 128, phys_addr
0x72103100
packet_type 0x11 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
rss 0x0 fdir.hi 0x0 fdir.lo 0x0
Packet Offload Flags
  PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid
  PKT_RX_L4_CKSUM_GOOD (0x0100) L4 cksum of RX pkt. is valid
Packet Types
  RTE_PTYPE_L2_ETHER (0x0001) Ethernet packet
  RTE_PTYPE_L3_IPV4 (0x0010) IPv4 packet without extension headers
  IP4: b8:69:f4:99:85:40 -> 01:00:5e:00:00:05
  OSPF: 10.10.50.2 -> 224.0.0.5
tos 0xc0, ttl 1, length 68, checksum 0x1214 dscp CS6 ecn NON_ECN
fragment id 0x8a7c
00:11:11:019235: ethernet-input
  frame: flags 0x3, hw-if-index 2, sw-if-index 2
  IP4: b8:69:f4:99:85:40 -> 01:00:5e:00:00:05
00:11:11:019250: ip4-input-no-checksum
  OSPF: 10.10.50.2 -> 224.0.0.5
tos 0xc0, ttl 1, length 68, checksum 0x1214 dscp CS6 ecn NON_ECN
fragment id 0x8a7c
00:11:11:019258: ip4-mfib-forward-lookup
  fib 0 entry 0
00:11:11:019268: ip4-mfib-forward-rpf
  entry 0 itf -1 flags
00:11:11:019273: ip4-drop
OSPF: 10.10.50.2 -> 224.0.0.5
  tos 0xc0, ttl 1, length 68, checksum 0x1214 dscp CS6 ecn NON_ECN
  fragment id 0x8a7c
00:11:11:019276: error-drop
  rx:GigabitEthernet3/0/0
00:11:11:019281: drop
  ip4-input: *Multicast RPF check failed*


*==> vppctl ip mroute add 0.0.0.0/0  AA*
Step 2:

Packet 2

00:15:51:052464: dpdk-input
  GigabitEthernet3/0/0 rx queue 0
  buffer 0x9ccf2: current data 0, length 82, buffer-pool 0, ref-count 1,
totlen-nifb 0, trace handle 0x1
  ext-hdr-valid
  l4-cksum-computed l4-cksum-correct
  PKT MBUF: port 1, nb_segs 1, pkt_len 82
buf_len 2176, data_len 82, ol_flags 0x180, data_off 128, phys_addr
0x72133d00
packet_type 0x11 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
rss 0x0 fdir.hi 0x0 fdir.lo 0x0
Packet Offload Flags
  PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid
  PKT_RX_L4_CKSUM_GOOD (0x0100) L4 cksum of RX pkt. is valid
Packet Types
  RTE_PTYPE_L2_ETHER (0x0001) Ethernet packet
  RTE_PTYPE_L3_IPV4 (0x0010) IPv4 packet without extension headers
  IP4: b8:69:f4:99:85:40 -> 01:00:5e:00:00:05
  OSPF: 10.10.50.2 -> 224.0.0.5
tos 0xc0, ttl 1, length 68, checksum 0x11f8 dscp CS6 ecn NON_ECN
fragment id 0x8a98
00:15:51:052515: ethernet-input
  frame: flags 0x3, hw-if-index 2, sw-if-index 2
  IP4: b8:69:f4:99:85:40 -> 01:00:5e:00:00:05
00:15:51:052533: ip4-input-no-checksum
  OSPF: 10.10.50.2 -> 224.0.0.5
tos 0xc0, ttl 1, length 68, checksum 0x11f8 dscp CS6 ecn 

[vpp-dev] VPP 21.06 RC1 is complete! RC2 on 2021-06-16

2021-05-26 Thread Andrew Yourtchenko
Hi all,

The VPP 21.06 RC1 milestone is complete!

The artifacts are available at https://packagecloud.io/fdio/2106/  [0]

The stable/2106 branch has been pulled and is ready for your
release-related bugfixes.

For all the fixes - please merge them to master first, then
cherry-pick to stable/2106 and other relevant branches.

Speaking of bugfixes: you may already find the draft release notes for
21.06 at https://gerrit.fd.io/r/c/vpp/+/32454 [1]

You will notice that in addition to the usual data, there is a section
with the per-component open coverity defects snapshot at the time of
RC1.

I would encourage the component maintainers to take a look, but anyone
is welcome to review them and submit fixes that address them too. If
anyone is looking to get more familiar with a given component
of VPP - this is a good way! :-)

As per the release plan [2] - our next milestone is RC2, which will be in
three weeks' time, on 16th June 2021.

--a # your friendly 21.06 release manager

[0] installability tested with "PACKAGECLOUD_REPO=fdio/2106
VPP_CHECK_VERSION=21.06-rc1 ./run-docker-test" from
https://github.com/ayourtch/vpp-relops/tree/master/docker-tests

[1] I still need to craft the "release highlights", so far it is just
three bullet points. But I did not want it to get in the way of the
review of the other parts of it.

[2] https://wiki.fd.io/view/Projects/vpp/Release_Plans/Release_Plan_21.06




Re: [vpp-dev] #plugin #vpp Linux-cp multicast issue

2021-05-26 Thread Matthew Smith via lists.fd.io
Hi Petr,

Responses are inline...

On Wed, May 26, 2021 at 10:23 AM  wrote:

> Hello,
>
> I'm sorry for the beginner question, but I was unable to find the answer.
> I tested the linux-cp feature in VPP 21.10-rc0. Nice job, but I cannot get
> OSPF multicast to work; probably I'm doing something wrong, or should it
> work?
>


Multicast should work without any special configuration.

What is the output of 'vppctl show ip mfib 224.0.0.0/24' after you apply
your linux-cp and interface configurations?


>
> BGP communication works fine (tcp), OSPF NBMA works fine.
> Communication from host to physical interface already passes (224.0.0.5).
> Communication from the physical interface to the host did not pass
> (224.0.0.5).
>
>

What did you observe while determining that inbound multicast was not being
passed to the host? Did you run a packet trace and confirm that the packets
are arriving on the VPP hardware interface? If so, can you please send
trace output for 1 or 2 of the inbound multicast packets? If not, you can
run a trace via a sequence of commands like:

vppctl clear trace
vppctl trace filter include ip4-mfib-forward-lookup 100
vppctl trace add dpdk-input 100
sleep 10
vppctl show trace

Several possible outcomes based on the trace output:

1. No multicast packets arrive on the hardware interface. That would
obviously be a problem with the external network rather than VPP or
linux-cp.
2. Multicast packets arrive on the hardware interface and the trace shows
them being dropped during some phase of processing. Hopefully there would
be some indication of why they are being dropped in the trace output or in
the output of 'vppctl show errors'.
3. Multicast packets arrive on the hardware interface and the trace shows
them being transmitted on the host tap interface. If that is the case, does
'tcpdump -i ge300' on the host show any packets arriving? Are you running
any kernel-based packet filter like iptables or nftables on the host?

Thanks,
-Matt


> Is there any way to configure a multicast redirect? I have tried a lot of
> configurations (ip mroute, ...) with no success.
> Many thanks for your answers. Regards, Petr
>
> example configuration (enable linux-cp plugin)
> # vpp
> set int ip address GigabitEthernet3/0/0 10.10.50.1/24
> set int state GigabitEthernet3/0/0 up
> lcp create GigabitEthernet3/0/0 host-if ge300
> # linux host
> ip addr add 10.10.50.1/24 dev ge300
> ip link set dev ge300 mtu 1500
> ip link set dev ge300 up




[vpp-dev] #plugin #vpp Linux-cp multicast issue

2021-05-26 Thread petr . boltik
Hello,

I'm sorry for the beginner question, but I was unable to find the answer. I 
tested the linux-cp feature in VPP 21.10-rc0. Nice job, but I cannot get
OSPF multicast to work; probably I'm doing something wrong, or should it work?

BGP communication works fine (tcp), OSPF NBMA works fine.
Communication from host to physical interface already passes (224.0.0.5).
Communication from the physical interface to the host did not pass (224.0.0.5).

Is there any way to configure a multicast redirect? I have tried a lot of
configurations (ip mroute, ...) with no success.
Many thanks for your answers. Regards, Petr

example configuration (enable linux-cp plugin)
# vpp
set int ip address GigabitEthernet3/0/0 10.10.50.1/24
set int state GigabitEthernet3/0/0 up
lcp create GigabitEthernet3/0/0 host-if ge300
# linux host
ip addr add 10.10.50.1/24 dev ge300
ip link set dev ge300 mtu 1500
ip link set dev ge300 up




Re: [vpp-dev] IPv6 in IPv6 Encapsulation

2021-05-26 Thread Mohsin Kazmi via lists.fd.io
Hello Jerome,

You can disable checksum offload on the veth pair in Linux:

sudo ethtool -K veth0 tx off

sudo ethtool -K veth0 rx off



But it will not resolve the actual issue if an interface used in the future
has offloads enabled.

You need to compute the checksums in your custom node before encapsulating the
packets in the IPv6 header. The VXLAN encap node has an example of this today.
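
For concreteness, the arithmetic such a node ultimately has to redo is the
RFC 1071 ones'-complement checksum over the pseudo-header and payload. A
self-contained sketch follows (illustrative only; inside VPP you would use the
existing checksum helpers rather than this):

```c
/* Self-contained RFC 1071 ones'-complement checksum; illustrative,
 * not VPP's own checksum helpers. Seed with the pseudo-header sum
 * when checksumming TCP/UDP. */
#include <stddef.h>
#include <stdint.h>

uint16_t
cksum_rfc1071 (const uint8_t *data, size_t len, uint32_t seed)
{
  uint32_t sum = seed;

  while (len > 1)                  /* sum 16-bit words, big-endian */
    {
      sum += (uint32_t) (data[0] << 8 | data[1]);
      data += 2;
      len -= 2;
    }
  if (len)                         /* odd trailing byte, zero-padded */
    sum += (uint32_t) (data[0] << 8);

  while (sum >> 16)                /* fold the carries back in */
    sum = (sum & 0xffff) + (sum >> 16);

  return (uint16_t) ~sum;
}
```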



Best Regards,

Mohsin
From:  on behalf of "jerome.bay...@student.uliege.be" 

Date: Tuesday, May 25, 2021 at 5:31 PM
To: Ole Troan 
Cc: "vpp-dev@lists.fd.io" , Justin Iurman 

Subject: Re: [vpp-dev] IPv6 in IPv6 Encapsulation

Hello Ole,

I implemented the solution you suggested (i.e., chaining the buffers) and it 
seems to work correctly now, so thank you!

However, I had another issue: when some TCP or UDP packets arrive in VPP, VPP 
seems to set their checksums to zero and it also sets the "offload" flag on 
the associated buffer. In the last VPP nodes the packet traverses, the 
checksum is recomputed just before the packet is forwarded and everything is 
fine.

Firstly, I don't really understand why it does that. I create a veth interface 
on my Ubuntu host and then I link this interface to VPP by using a 
"host-interface". Maybe I need to configure something about the interfaces to 
disable this behavior?
Secondly, in a "normal" case, as I said above, VPP is able to recompute 
the checksum at the end of the graph and nothing bad happens. The problem is 
that, in my case, I need to create a buffer chain and when I do so, VPP is not 
able to recompute the checksums (probably because some of the buffer metadata 
it usually uses is invalidated by the buffer chaining?).

Thanks again for your help,

Jérôme


De: "jerome bayaux" 
À: "Ole Troan" 
Cc: vpp-dev@lists.fd.io, "Neale Ranns" , "Justin Iurman" 

Envoyé: Vendredi 21 Mai 2021 18:20:31
Objet: Re: [vpp-dev] IPv6 in IPv6 Encapsulation

Changing the PRE_DATA_SIZE value in src/vlib/CMakeLists.txt does not appear to 
be that easy.

Indeed, it seems to require several other changes, like the value of 
DPDK_RTE_PKTMBUF_HEADROOM that appears in src/plugins/dpdk/CMakeLists.txt, and 
a static assert fails, saying: "save_rewrite_length member must be able to 
hold the max value of rewrite length".

Thus, the best solution is probably the one given by Ole? Could you help 
(guide) me a little by pointing me to files of interest or by redirecting me 
towards some examples, if any exist? For instance, I'm not sure which 
functions I should use to create a new buffer and then chain it to the 
"main" one.

Jérôme


De: "Ole Troan" 
À: "jerome bayaux" 
Cc: vpp-dev@lists.fd.io, "Neale Ranns" , "Justin Iurman" 

Envoyé: Vendredi 21 Mai 2021 17:21:32
Objet: Re: [vpp-dev] IPv6 in IPv6 Encapsulation



On 21 May 2021, at 17:15, Neale Ranns  wrote:
Right, there’s only so much space available. You’ll need to recompile VPP to 
get more space.
Change the PRE_DATA_SIZE value in src/vlib/CMakeLists.txt.

Alternatively use a new buffer for the new IPv6 header and extension header 
chain and chain the buffers together.
You might want to look at the ioam plugin too btw.

Cheers
Ole


/neale
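
For reference, here is a rough sketch of the buffer-chaining approach Ole
suggests above, using the vlib buffer fields believed current at the time
(treat it as a starting point to verify against src/vlib/buffer.h, not a
drop-in node):

```c
/* Sketch only: prepend a freshly allocated head buffer carrying the new
 * IPv6 + extension headers, and chain the original packet behind it.
 * Verify field names and flags against src/vlib/buffer.h. */
#include <vlib/vlib.h>

static u32
prepend_hdrs_in_new_buffer (vlib_main_t *vm, u32 orig_bi,
                            const u8 *hdrs, u16 hdrs_len)
{
  u32 head_bi;
  if (vlib_buffer_alloc (vm, &head_bi, 1) != 1)
    return ~0;                     /* allocation failure */

  vlib_buffer_t *head = vlib_get_buffer (vm, head_bi);
  vlib_buffer_t *orig = vlib_get_buffer (vm, orig_bi);

  /* The new outer IPv6 header + extension headers go in the head buffer,
   * so their size is no longer limited by the PRE_DATA_SIZE headroom. */
  head->current_data = 0;
  head->current_length = hdrs_len;
  clib_memcpy_fast (vlib_buffer_get_current (head), hdrs, hdrs_len);

  /* Chain the original packet behind the new head. */
  head->next_buffer = orig_bi;
  head->flags |= VLIB_BUFFER_NEXT_PRESENT | VLIB_BUFFER_TOTAL_LENGTH_VALID;
  head->total_length_not_including_first_buffer =
    orig->current_length + ((orig->flags & VLIB_BUFFER_NEXT_PRESENT) ?
                            orig->total_length_not_including_first_buffer : 0);

  return head_bi;                  /* forward this index downstream */
}
```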


From: jerome.bay...@student.uliege.be 
Date: Friday, 21 May 2021 at 17:06
To: Neale Ranns 
Cc: vpp-dev@lists.fd.io , Justin Iurman 

Subject: Re: [vpp-dev] IPv6 in IPv6 Encapsulation
I've just run a few tests to be sure:

It's exactly that! As long as the extension header is smaller than or exactly 
equal to 128 bytes, everything is fine.

Once it gets bigger than 128 bytes, it starts to go wrong and funky.

Jérôme


De: "Neale Ranns" 
À: "jerome bayaux" 
Cc: vpp-dev@lists.fd.io, "Justin Iurman" 
Envoyé: Vendredi 21 Mai 2021 16:38:02
Objet: Re: [vpp-dev] IPv6 in IPv6 Encapsulation


Does it all start to go wrong when the extension header gets to about 128 bytes?

/neale


From: jerome.bay...@student.uliege.be 
Date: Friday, 21 May 2021 at 16:04
To: Neale Ranns 
Cc: vpp-dev@lists.fd.io , Justin Iurman 

Subject: Re: [vpp-dev] IPv6 in IPv6 Encapsulation
Hi again Neale,

Here are some additional observations I've noticed that could be useful for 
you to help me:

1) The error only shows up when the Hop-by-Hop extension header I add is big 
enough (I can give you a more accurate definition of "enough" if you need). 
When it is quite small, everything seems fine.

2) The faulty MAC address seems to follow a "pattern": it is always of the 
form "X:00:00:00:e3:6e", where byte X is a number that increases for the 
following packets. Moreover, the bytes "e3:6e" (i.e., the last 16 bits of the 
MAC address) are correct and correspond to the last 16 bits of the expected 
and thus correct destination MAC address.

Thank you for the help,

Jérôme


De: "jerome bayaux" 
À: "Neale Ranns" 
Cc: vpp-dev@lists.fd.io, "Justin Iurman" 
Envoyé: Vendredi 21 Mai 2021 

Re: [vpp-dev]: Unable to run VPP with ASAN enabled

2021-05-26 Thread Benoit Ganne (bganne) via lists.fd.io
Hi Rajith,

> I was able to proceed further after setting LD_PRELOAD to the ASan
> library. After this I get a SIGSEGV crash in ASan. These don't seem to be
> related to our code, as without ASan they have been working perfectly.

I suspect the opposite: ASan detects errors we do not detect in release or 
debug mode, especially out-of-bounds accesses and use-after-free. Look 
carefully at /home/supervisor/libvpp/src/vpp/rtbrick/rtb_vpp_ifp.c:287

Best
ben
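
To make that concrete, here is a minimal use-after-free that a release (or
even debug) build will usually run without complaint, while an
ASan-instrumented build aborts and points at the faulting line (file name
hypothetical):

```c
/* uaf.c -- build: gcc -g -fsanitize=address uaf.c && ./a.out */
#include <stdlib.h>

int
main (void)
{
  int *v = malloc (4 * sizeof (int));
  v[0] = 42;
  free (v);
  return v[0];   /* heap-use-after-free: ASan reports this exact line */
}
```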
