Re: [vpp-dev] BFD sends old remote discriminator in its control packet after session goes to DOWN state #vpp

2020-01-19 Thread sontu mazumdar
Hi Klement,

Thanks for the patch file.
The fix works.
Now I can see that once the BFD session goes DOWN due to the inactivity timer,
the remote discriminator is set to 0 in the BFD control packet.
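
For reference, a minimal standalone C sketch of the RFC 5880 6.8.1 rule being
verified here (toy names, not the actual fix in
https://gerrit.fd.io/r/c/vpp/+/24388): when a Detection Time passes without a
valid packet, bfd.RemoteDiscr must be reset to zero so the next control packet
carries "Your Discriminator" == 0.

#include <stdint.h>
#include <stdio.h>

/* Toy session state, not VPP's bfd_session_t. */
typedef struct
{
  uint32_t local_discr;  /* bfd.LocalDiscr  */
  uint32_t remote_discr; /* bfd.RemoteDiscr */
  int state;             /* 0 = Down, 3 = Up (simplified) */
} demo_bfd_session_t;

/* Called when a Detection Time passes without a valid, authenticated packet. */
void
demo_bfd_detection_timeout (demo_bfd_session_t *bs)
{
  bs->state = 0;        /* session transitions to Down */
  bs->remote_discr = 0; /* RFC 5880: bfd.RemoteDiscr MUST be set to zero */
}

int
main (void)
{
  demo_bfd_session_t bs = { 1, 0x11223344, 3 };
  demo_bfd_detection_timeout (&bs);
  printf ("state %d, \"Your Discriminator\" on next tx = %u\n",
          bs.state, bs.remote_discr);
  return 0;
}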

On Fri, Jan 17, 2020 at 3:37 PM Klement Sekera -X (ksekera - PANTHEON TECH
SRO at Cisco)  wrote:

> Hi,
>
> thank you for your report.
>
> Can you please apply this fix and verify that the behaviour is now correct?
>
> https://gerrit.fd.io/r/c/vpp/+/24388
>
> Thanks,
> Klement
>
> > On 17 Jan 2020, at 07:04, sont...@gmail.com wrote:
> >
> > Hi,
> >
> > I have observed incorrect behavior in the BFD code of VPP.
> > I brought a BFD session UP between VPP and a peer router.
> > Due to an interface shutdown on the peer router, the BFD session on VPP
> goes to the DOWN state.
> > Once it goes to the DOWN state, it continuously sends control packets
> with its old remote discriminator in the control packet's "Your
> Discriminator" field.
> > This is incorrect behavior. The RFC section below says that once BFD goes
> DOWN due to non-receipt of BFD control packets, the "Your Discriminator"
> field should be set to zero.
> >
> > RFC 5880 6.8.1. State Variables
> >
> >
> >bfd.RemoteDiscr
> >
> >   The remote discriminator for this BFD session.  This is the
> >   discriminator chosen by the remote system, and is totally opaque
> >   to the local system.  This MUST be initialized to zero.  If a
> >   period of a Detection Time passes without the receipt of a valid,
> >   authenticated BFD packet from the remote system, this variable
> >   MUST be set to zero.
> >
> > 
>
>


Re: [vpp-dev] #vpp-hoststack - Issue with UDP receiver application using VCL library

2020-01-19 Thread Florin Coras
Hi Raj,

The function used for receiving datagrams is limited to reading at most the
length of one datagram from the rx fifo. UDP datagrams are MTU sized, so your
reads are probably limited to ~1.5 kB. On each epoll rx event, try reading from
the session handle in a while loop until you get a VPPCOM_EWOULDBLOCK. That
might improve performance.
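
As a rough sketch of that drain loop (assumed buffer/handle names, not your
actual application; assumes the session was made non-blocking, e.g. with
vppcom_session_attr and VPPCOM_ATTR_SET_FLAGS/O_NONBLOCK):

#include <vcl/vppcom.h>

static char rx_buf[65536];

static void
handle_rx_event (uint32_t session_handle)
{
  int n;
  /* each vppcom_session_read returns at most one datagram, so keep reading
     until the rx fifo is drained */
  while ((n = vppcom_session_read (session_handle, rx_buf,
                                   sizeof (rx_buf))) > 0)
    {
      /* process n bytes (one datagram) here */
    }
  /* n is VPPCOM_EWOULDBLOCK once there is nothing left to read */
}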

Having said that, UDP is lossy, so unless you implement your own congestion/flow
control algorithms, the data you’ll receive might be full of “holes”. What are
the rx/tx error counters on your interfaces (check with “sh int”)?

Also, with simple tuning like this [1], you should be able to achieve much more
than 15 Gbps with TCP.

Regards,
Florin

[1] https://wiki.fd.io/view/VPP/HostStack/LDP/iperf

Re: [vpp-dev] #vpp-hoststack - Issue with UDP receiver application using VCL library

2020-01-19 Thread Raj Kumar
  Hi Florin,
 Using the VCL library in a UDP receiver application, I am able to receive
only 2 Mbps of traffic. On increasing the traffic, I see the Rx FIFO full error
and the application stops receiving traffic from the session layer.
Whereas, with TCP I can easily achieve 15 Gbps throughput without tuning any
DPDK parameters. UDP tx also looks fine. From a host application I can
send ~5 Gbps without any issue.

I am running VPP (stable/2001 code) on a RHEL8 server using Mellanox 100G
(MLNX5) adapters.
Please advise whether I can use the VCL library to receive high-throughput UDP
traffic (in Gbps). I would be running multiple instances of the host
application to receive data (~50-60 Gbps).

I also tried increasing the Rx FIFO size to 16 MB, but it did not help much.
The host application just discards the received packets; it is not doing
any packet processing.

[root@orc01 vcl_test]# VCL_DEBUG=2 ./udp6_server_vcl
VCL<20201>: configured VCL debug level (2) from VCL_DEBUG!
VCL<20201>: allocated VCL heap = 0x7f39a17ab010, size 268435456 (0x10000000)
VCL<20201>: configured rx_fifo_size 4000000 (0x3d0900)
VCL<20201>: configured tx_fifo_size 4000000 (0x3d0900)
VCL<20201>: configured app_scope_local (1)
VCL<20201>: configured app_scope_global (1)
VCL<20201>: configured api-socket-name (/tmp/vpp-api.sock)
VCL<20201>: completed parsing vppcom config!
vppcom_connect_to_vpp:480: vcl<20201:0>: app (udp6_server) is connected to
VPP!
vppcom_app_create:1104: vcl<20201:0>: sending session enable
vppcom_app_create:1112: vcl<20201:0>: sending app attach
vppcom_app_create:1121: vcl<20201:0>: app_name 'udp6_server',
my_client_index 256 (0x100)
vppcom_epoll_create:2439: vcl<20201:0>: Created vep_idx 0
vppcom_session_create:1179: vcl<20201:0>: created session 1
vppcom_session_bind:1317: vcl<20201:0>: session 1 handle 1: binding to
local IPv6 address fd0d:edc4::2001::203 port 8092, proto UDP
vppcom_session_listen:1349: vcl<20201:0>: session 1: sending vpp listen
request...
vcl_session_bound_handler:604: vcl<20201:0>: session 1 [0x1]: listen
succeeded!
vppcom_epoll_ctl:2541: vcl<20201:0>: EPOLL_CTL_ADD: vep_sh 0, sh 1, events
0x1, data 0x1!
vppcom_session_create:1179: vcl<20201:0>: created session 2
vppcom_session_bind:1317: vcl<20201:0>: session 2 handle 2: binding to
local IPv6 address fd0d:edc4::2001::203 port 8093, proto UDP
vppcom_session_listen:1349: vcl<20201:0>: session 2: sending vpp listen
request...
vcl_session_app_add_segment_handler:765: vcl<20201:0>: mapped new segment
'20190-2' size 134217728
vcl_session_bound_handler:604: vcl<20201:0>: session 2 [0x2]: listen
succeeded!
vppcom_epoll_ctl:2541: vcl<20201:0>: EPOLL_CTL_ADD: vep_sh 0, sh 2, events
0x1, data 0x2!


vpp# sh session verbose 2
[#0][U] fd0d:edc4::2001::203:8092->:::0

 Rx fifo: cursize 3999125 nitems 399 has_event 1
  head 2554045 tail 2553170 segment manager 1
  vpp session 0 thread 0 app session 1 thread 0
  ooo pool 0 active elts newest 4294967295
 Tx fifo: cursize 0 nitems 399 has_event 0
  head 0 tail 0 segment manager 1
  vpp session 0 thread 0 app session 1 thread 0
  ooo pool 0 active elts newest 0
[#0][U] fd0d:edc4::2001::203:8093->:::0

 Rx fifo: cursize 0 nitems 399 has_event 0
  head 0 tail 0 segment manager 2
  vpp session 1 thread 0 app session 2 thread 0
  ooo pool 0 active elts newest 0
 Tx fifo: cursize 0 nitems 399 has_event 0
  head 0 tail 0 segment manager 2
  vpp session 1 thread 0 app session 2 thread 0
  ooo pool 0 active elts newest 0
Thread 0: active sessions 2

[root@orc01 vcl_test]# cat /etc/vpp/vcl.conf
vcl {
  rx-fifo-size 4000000
  tx-fifo-size 4000000
  app-scope-local
  app-scope-global
  api-socket-name /tmp/vpp-api.sock
}
[root@orc01 vcl_test]#

--- Start of thread 0 vpp_main ---
Packet 1

00:09:53:445025: dpdk-input
  HundredGigabitEthernet12/0/0 rx queue 0
  buffer 0x88078: current data 0, length 1516, buffer-pool 0, ref-count 1,
totlen-nifb 0, trace handle 0x0
  ext-hdr-valid
  l4-cksum-computed l4-cksum-correct
  PKT MBUF: port 0, nb_segs 1, pkt_len 1516
buf_len 2176, data_len 1516, ol_flags 0x180, data_off 128, phys_addr
0x75601e80
packet_type 0x2e1 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
rss 0x0 fdir.hi 0x0 fdir.lo 0x0
Packet Offload Flags
  PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid
  PKT_RX_L4_CKSUM_GOOD (0x0100) L4 cksum of RX pkt. is valid
Packet Types
  RTE_PTYPE_L2_ETHER (0x0001) Ethernet packet
  RTE_PTYPE_L3_IPV6_EXT_UNKNOWN (0x00e0) IPv6 packet with or without
extension headers
  RTE_PTYPE_L4_UDP (0x0200) UDP packet
  IP6: b8:83:03:79:9f:e4 -> b8:83:03:79:af:8c 802.1q vlan 2001
  UDP: fd0d:edc4::2001::201 -> fd0d:edc4::2001::203
tos 0x00, flow label 0x0, hop limit 64, payload length 1458
  UDP: 56944 -> 8092
length 1458, checksum 0xb22d
00:09:53:445028: 

Re: [vpp-dev] [VCL] hoststack app crash with invalid memfd segment address

2020-01-19 Thread Florin Coras
Hi Hanlin, 

Thanks for confirming!

Regards,
Florin

> On Jan 18, 2020, at 7:00 PM, wanghanlin  wrote:
> 
> Hi Florin,
> With the latest master code, the problem regarding 3) has been fixed.
> 
> Thanks & Regards,
> Hanlin
> 
> On 12/12/2019 14:53, wanghanlin wrote:
> That's great! 
> I'll apply and check it soon.
> 
> Thanks & Regards,
> Hanlin
> 
> On 12/12/2019 04:15, Florin Coras wrote:
> Hi Hanlin, 
> 
> Thanks to Dave, we can now have per-thread binary API connections to vpp.
> I’ve updated the socket client and vcl to leverage this, so after [1] we have
> per-vcl-worker-thread binary API sockets that are used to exchange fds.
> 
> Let me know if you’re still hitting the issue. 
> 
> Regards,
> Florin
> 
> [1] https://gerrit.fd.io/r/c/vpp/+/23687 
> 
> 
>> On Nov 22, 2019, at 10:30 AM, Florin Coras wrote:
>> 
>> Hi Hanlin, 
>> 
>> Okay, that’s a different issue. The expectation is that each vcl worker has 
>> a different binary api transport into vpp. This assumption holds for 
>> applications with multiple process workers (like nginx) but is not 
>> completely satisfied for applications with thread workers. 
>> 
>> Namely, for each vcl worker we connect over the socket api to vpp and 
>> initialize the shared memory transport (so binary api messages are delivered 
>> over shared memory instead of the socket). However, as you’ve noted, the 
>> socket client is currently not multi-thread capable, consequently we have an 
>> overlap of socket client fds between the workers. The first segment is 
>> assigned properly but the subsequent ones will fail in this scenario. 
>> 
>> I wasn’t aware of this, so we’ll have to either fix the socket binary API
>> client for multi-threaded apps, or change the session layer to use
>> different fds for exchanging memfd fds.
>> 
>> Regards, 
>> Florin
>> 
>>> On Nov 21, 2019, at 11:47 PM, wanghanlin wrote:
>>> 
>>> Hi Florin,
>>> Regarding 3), I think the main problem may be in the function
>>> vl_socket_client_recv_fd_msg, called by vcl_session_app_add_segment_handler.
>>> Multiple worker threads share the same scm->client_socket.fd, so B2 may
>>> receive the segment memfd belonging to A1.
>>> 
>>>
>>> Regards,
>>> Hanlin
>>> 
>>> On 11/22/2019 01:44, Florin Coras wrote:
>>> Hi Hanlin, 
>>> 
>>> As Jon pointed out, you may want to register with gerrit. 
>>> 
>>> Your comments with respect to points 1) and 2) are spot on. I’ve updated the
>>> patch to fix them. 
>>> 
>>> Regarding 3), if I understood your scenario correctly, it should not 
>>> happen. The ssvm infra forces applications to map segments at fixed 
>>> addresses. That is, for the scenario you’re describing below, if B2 is
>>> processed first, ssvm_slave_init_memfd will map the segment at A2. Note how 
>>> we first map the segment to read the shared header (sh) and then use 
>>> sh->ssvm_va (which should be A2) to remap the segment at a fixed virtual 
>>> address (va). 
>>> 
>>> Regards,
>>> Florin
>>> 
 On Nov 21, 2019, at 2:49 AM, wanghanlin wrote:
 
 Hi Florin,
 I have applied the patch and found some problems in my case. I do not have
 rights to post them in Gerrit, so I am posting them here.
 1) evt->event_type should be set to SESSION_CTRL_EVT_APP_DEL_SEGMENT
 rather than SESSION_CTRL_EVT_APP_ADD_SEGMENT. File:
 src/vnet/session/session_api.c, Line: 561, Function: mq_send_del_segment_cb
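
As a standalone illustration of the fd-passing overlap discussed above (plain
libc/pthreads with made-up names, SOCK_SEQPACKET only to keep the demo simple;
this is not VCL code), two threads reading the same unix socket each receive
whichever SCM_RIGHTS message they happen to dequeue first, regardless of which
worker the fd was intended for:

#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

static int sv[2]; /* sv[0]: "vpp" side, sv[1]: socket shared by both workers */

static void
send_fd (int sock, int fd, const char *tag)
{
  char cbuf[CMSG_SPACE (sizeof (int))];
  struct iovec iov = { (void *) tag, strlen (tag) + 1 };
  struct msghdr msg = { 0 };
  struct cmsghdr *cm;

  msg.msg_iov = &iov;
  msg.msg_iovlen = 1;
  msg.msg_control = cbuf;
  msg.msg_controllen = sizeof (cbuf);
  cm = CMSG_FIRSTHDR (&msg);
  cm->cmsg_level = SOL_SOCKET;
  cm->cmsg_type = SCM_RIGHTS;
  cm->cmsg_len = CMSG_LEN (sizeof (int));
  memcpy (CMSG_DATA (cm), &fd, sizeof (int));
  sendmsg (sock, &msg, 0);
}

static void *
worker (void *name)
{
  char tag[64], cbuf[CMSG_SPACE (sizeof (int))];
  struct iovec iov = { tag, sizeof (tag) };
  struct msghdr msg = { 0 };
  struct cmsghdr *cm;
  int fd = -1;

  msg.msg_iov = &iov;
  msg.msg_iovlen = 1;
  msg.msg_control = cbuf;
  msg.msg_controllen = sizeof (cbuf);
  /* both workers receive on the SAME socket, like the shared client fd */
  if (recvmsg (sv[1], &msg, 0) > 0)
    {
      cm = CMSG_FIRSTHDR (&msg);
      if (cm && cm->cmsg_level == SOL_SOCKET && cm->cmsg_type == SCM_RIGHTS)
        memcpy (&fd, CMSG_DATA (cm), sizeof (int));
      printf ("worker %s got '%s' as fd %d\n", (char *) name, tag, fd);
    }
  return 0;
}

int
main (void)
{
  pthread_t a, b;

  socketpair (AF_UNIX, SOCK_SEQPACKET, 0, sv);
  pthread_create (&a, 0, worker, "A");
  pthread_create (&b, 0, worker, "B");
  /* the "vpp" side sends one fd meant for worker A and one meant for worker
     B; which thread actually gets which depends only on recvmsg ordering */
  send_fd (sv[0], open ("/dev/zero", O_RDONLY), "segment-for-A");
  send_fd (sv[0], open ("/dev/zero", O_RDONLY), "segment-for-B");
  pthread_join (a, 0);
  pthread_join (b, 0);
  return 0;
}

A per-worker binary API socket, as in [1] above, avoids this because each vcl
worker then receives its segment fds on its own socket.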
 

[vpp-dev] Coverity run FAILED as of 2020-01-19 14:00:54 UTC

2020-01-19 Thread Noreply Jenkins
Coverity run failed today.

Current number of outstanding issues is 7
Newly detected: 0
Eliminated: 0
More details can be found at  
https://scan.coverity.com/projects/fd-io-vpp/view_defects