Re: [vpp-dev] vpp hangs with bfd configuration along with mpls (inner and outer ctxt)

2022-03-18 Thread Rajith PR via lists.fd.io
Hi Sastry, For a VPNv4 session, a labelled IPvx route (for ingress) and an MPLS route (for egress) need to be set up. Can you check if they are getting programmed correctly into VPP? The labelled route seems to be OK from the output you have pasted. Thanks, Rajith On Fri, Mar 18, 2022 at 10:58 AM

Re: [vpp-dev] vpp hangs with bfd configuration along with mpls (inner and outer ctxt)

2022-03-17 Thread Rajith PR via lists.fd.io
Hi Sastry, In our case we resolved the loop issue with BFD by installing the MPLS PHP route with EOS; earlier we had installed the MPLS PHP route without EOS. Our case was MPLS PHP with IPv4 forwarding. However, from the config you shared, your case seems to be MPLS ingress? Thanks, Rajith On

[vpp-dev]: Question on VPP's hash table usage

2022-03-01 Thread Rajith PR via lists.fd.io
Hi All, We are observing a random crash in code that we have added in VPP. The stack trace indicates an invalid memory access in *hash_get()*. From the hash table code we see that the hash table can auto-resize and shrink based on utilization. So the question is whether we need to take a
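The open question in this preview is whether hash_get() needs protection once another thread can trigger a resize. Below is a minimal sketch of one conservative approach, assuming a dedicated clib_spinlock; the table, lock, and function names (my_hash, my_hash_lock, my_lookup, my_update) are hypothetical, and VPP code often relies on the worker barrier instead of a lock:

```c
/* Sketch only: serialize readers and the (resizing) writer with a
 * clib_spinlock. All names here are hypothetical. */
#include <vppinfra/hash.h>
#include <vppinfra/lock.h>

static uword *my_hash;                /* uword-keyed clib hash */
static clib_spinlock_t my_hash_lock;  /* guards my_hash */

static void
my_table_init (void)
{
  my_hash = hash_create (0, sizeof (uword));
  clib_spinlock_init (&my_hash_lock);
}

static uword
my_lookup (uword key)
{
  uword *p, value = ~0;

  clib_spinlock_lock (&my_hash_lock);
  p = hash_get (my_hash, key);        /* safe: no resize can run concurrently */
  if (p)
    value = p[0];
  clib_spinlock_unlock (&my_hash_lock);
  return value;
}

static void
my_update (uword key, uword value)
{
  clib_spinlock_lock (&my_hash_lock);
  hash_set (my_hash, key, value);     /* may grow or shrink the table */
  clib_spinlock_unlock (&my_hash_lock);
}
```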

[vpp-dev]: Integratiing VPP NAT for MPLS VPN

2022-02-20 Thread Rajith PR via lists.fd.io
Hi All, We are exploring VPP's *NAT plugin* for the PE router in an MPLS VPN deployment. A reference diagram is given below. [image: NAT-PE.png] Private IP addresses are assigned to the hosts by the PE routers (NAT-PE and PE-2). All the hosts in a VPN (Shop or Bank) are assigned unique IP addresses

[vpp-dev]: Patches for Code Review and Merge

2022-02-12 Thread Rajith PR via lists.fd.io
Hi VPP Reviewers, We have been rebasing our downstream VPP version onto the upstream version quite successfully since 2019. But for various reasons we have not been able to upstream the few fixes that we have made in our downstream version. We are submitting the following patches for review. Do

Re: [vpp-dev]: Unable to run make test

2022-02-04 Thread Rajith PR via lists.fd.io
is getting built. Thanks, Rajith On Fri, Feb 4, 2022 at 10:10 PM Dave Wallace wrote: > Rajith, > > What OS are you building on and what VPP branch are you trying to build? > > Ubuntu-20.04/master:HEAD works for me. > > Thanks, > -daw- > > On 2/4/22 10:29 AM, R

Re: [vpp-dev]: Unable to run make test

2022-02-04 Thread Rajith PR via lists.fd.io
Feb 4, 2022 at 6:14 PM Klement Sekera wrote: > Also, running under root is a bad idea and not supported. > > Cheers, > Klement > > On 4 Feb 2022, at 13:27, Ole Troan via lists.fd.io < > otroan=employees@lists.fd.io> wrote: > > Rajith, > > Have yo

[vpp-dev]: Unable to run make test

2022-02-04 Thread Rajith PR via lists.fd.io
Hi All, We are trying to understand the VPP test framework. To get started we ran an example suite (the ip4 test), but it seems that the dependent executable (vpp) is missing. Please find the logs below. *sudo make test TEST=test_ip4 vpp-install* make -C /home/supervisor/libvpp/build-root PLATFORM=vpp

[vpp-dev]: Segmentation fault in mspace_is_heap_object

2022-01-17 Thread Rajith PR via lists.fd.io
Hi All, We are facing a random crash while scaling MPLS tunnels (8000 tunnels). The crash has been observed multiple times and the call stack is the same. At the time of the worker thread crash, the main thread had executed the following lines of code (between the barrier sync and release),
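Several of these reports revolve around work the main thread does inside the barrier window, so here is a minimal sketch of that pattern; the update function name is hypothetical and its body is only a placeholder:

```c
/* General pattern only: the main thread parks the workers at the barrier,
 * mutates shared state, then releases them. A worker holding a pointer
 * into a pool that gets reallocated inside this window can crash once it
 * resumes. */
#include <vlib/vlib.h>

static void
update_shared_state (vlib_main_t *vm)
{
  vlib_worker_thread_barrier_sync (vm);    /* workers spin in their main loop */

  /* ... code executed between sync and release, e.g. growing a pool ... */

  vlib_worker_thread_barrier_release (vm); /* workers resume */
}
```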

Re: [vpp-dev] Unable to configure mixed NAT and non-NAT traffic

2022-01-14 Thread Rajith PR via lists.fd.io
Hi all, Just to add to the query, I have observed that the in-interface configuration is optional for NAT to work. All traffic gets NATed if the out interface is set with output-feature. Thanks, Rajith On Thu, 13 Jan 2022 at 7:06 AM, alekcejk via lists.fd.io wrote: > Hi all, > > I am trying to get

Re: [vpp-dev]: SIGSEV in mpls_tunnel_collect_forwarding

2021-10-21 Thread Rajith PR via lists.fd.io
ldren - it updates all the children with that "popular" flag. And at > that point there are no path extensions yet on the last child. So, I > suppose that it should be okay to add a check something like > > if (NULL == path_ext) > > { > > return (FIB_PATH_LIST_WALK_CON
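The quoted suggestion above is cut off mid-identifier; presumably it continues with FIB_PATH_LIST_WALK_CONTINUE (an assumption on my part). A sketch of how such a guard could look in a path-list walk callback follows; everything except the FIB_PATH_LIST_WALK_* codes and types is a hypothetical stand-in, not the real mpls_tunnel_collect_forwarding():

```c
#include <vnet/fib/fib_path_list.h>
#include <vnet/fib/fib_path_ext.h>

/* Sketch of the suggested check: bail out of the walk when the child has
 * no path extension yet. */
static fib_path_list_walk_rc_t
collect_forwarding_sketch (fib_node_index_t path_list_index,
                           fib_node_index_t path_index, void *ctx)
{
  fib_path_ext_t *path_ext = ctx;   /* placeholder; real code looks this up */

  if (NULL == path_ext)
    {
      return (FIB_PATH_LIST_WALK_CONTINUE);  /* assumed completion of the quote */
    }

  /* ... otherwise contribute this path's forwarding ... */
  return (FIB_PATH_LIST_WALK_CONTINUE);
}
```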

Re: [vpp-dev] assert in pool_elt_at_index

2021-10-13 Thread Rajith PR via lists.fd.io
Hi Stanislav, My guess is you don't have the commit below. commit 8341f76fd1cd4351961cd8161cfed2814fc55103 Author: Dave Barach Date: Wed Jun 3 08:05:15 2020 -0400 fib: add barrier sync, pool/vector expand cases load_balance_alloc_i(...) is not thread safe when the
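For context, a sketch of the pattern that commit describes, paraphrased from memory rather than copied verbatim from load_balance_alloc_i(); the pool_get_aligned_will_expand()/barrier combination is the assumption here:

```c
/* Sketch: if allocating from the shared pool would reallocate it, sync the
 * worker barrier first so no worker dereferences a stale pool base. */
#include <vlib/vlib.h>
#include <vnet/dpo/load_balance.h>

static load_balance_t *
load_balance_alloc_sketch (vlib_main_t *vm)
{
  load_balance_t *lb;
  int will_expand = 0;

  pool_get_aligned_will_expand (load_balance_pool, will_expand,
                                CLIB_CACHE_LINE_BYTES);
  if (will_expand)
    vlib_worker_thread_barrier_sync (vm);

  pool_get_aligned (load_balance_pool, lb, CLIB_CACHE_LINE_BYTES);

  if (will_expand)
    vlib_worker_thread_barrier_release (vm);

  return lb;
}
```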

Re: [vpp-dev]: ASSERT in load_balance_get()

2021-10-13 Thread Rajith PR via lists.fd.io
dbinit > Loading vpp functions... > Load vlLoad pe > Load pifi > Load node_name_from_index > Load vnet_buffer_opaque > Load vnet_buffer_opaque2 > Load bitmap_get > Done loading vpp functions... > (gdb) pifi load_balance_pool 16 > pool_is_free_index (load_balan

Re: [vpp-dev]: Assert in vnet_mpls_tunnel_del()

2021-09-14 Thread Rajith PR via lists.fd.io
, Rajith On Tue, Sep 14, 2021 at 5:27 PM Neale Ranns wrote: > > > Hi Rajiyh, > > > > Maybe there’s something that still resolves through the tunnel when it’s > deleted? > > > > /neale > > > > *From: *vpp-dev@lists.fd.io on behalf of Rajith PR &g

[vpp-dev]: Assert in vnet_mpls_tunnel_del()

2021-09-14 Thread Rajith PR via lists.fd.io
Hi All, We recently started using VPP's MPLS tunnel constructs for our L2 cross-connect application. In certain test scenarios we are seeing a crash in the delete path of the MPLS tunnel. Any pointers to fix the issue would be really helpful. Version: *20.09* Call Stack: Thread 1 (Thread

Re: [vpp-dev] : Worker Thread Deadlock Detected from vl_api_clnt_node

2021-07-09 Thread Rajith PR via lists.fd.io
Hi Satya, We migrated to 20.09 in March 2021, and the crash has not been observed after that. I am not sure if some commit that went in between 20.05 and 20.09 fixed or improved the situation. Thanks, Rajith On Fri, Jul 9, 2021 at 10:19 AM Satya Murthy wrote: > Hi Rajith / Dave, > > We are on

Re: [vpp-dev]: Unable to run VPP with ASAN enabled

2021-05-27 Thread Rajith PR via lists.fd.io
Hi Ben, The problem seems to be due to external libraries that we have linked with VPP. These external libraries have not been compiled with ASAN. I could see that when those external libraries were suppressed through the MyASAN.supp file, VPP started running with ASAN enabled. Thanks, Rajith

Re: [vpp-dev]: Unable to run VPP with ASAN enabled

2021-05-25 Thread Rajith PR via lists.fd.io
Original Message- > > From: vpp-dev@lists.fd.io On Behalf Of Rajith PR > via > > lists.fd.io > > Sent: mardi 25 mai 2021 09:51 > > To: vpp-dev > > Subject: [vpp-dev]: Unable to run VPP with ASAN enabled > > > > Hi All, > > > > I am not able to run

[vpp-dev]: Unable to run VPP with ASAN enabled

2021-05-25 Thread Rajith PR via lists.fd.io
Hi All, I am not able to run VPP with ASAN. Though we have been using VPP for some time, this is the first time we have enabled ASAN in the build. I have followed the steps mentioned in the sanitizer doc; can someone please let me know what is missing here. *Run Time Error (Missing symbol):*

[vpp-dev]: Query: Socket state -1 on 20.09

2021-03-15 Thread Rajith PR via lists.fd.io
Hi All, We did a VPP version upgrade from 19.08 to 20.09. I am seeing that the socket state is -1 in 20.09 on one of the devices. When does this happen? *20.09* DBGvpp# show threads ID Name Type LWP Sched Policy (Priority) lcore Core Socket State 0 vpp_main

Re: [vpp-dev]: Worker Thread Deadlock Detected from vl_api_clnt_node

2020-12-04 Thread Rajith PR via lists.fd.io
h:155 > #19 0x7ffaa0c7da0a in vl_msg_api_alloc_internal (nbytes=73, pool=0, > may_return_null=0) > at /development/libvpp/src/vlibmemory/memory_shared.c:177 > #20 0x7ffaa0c7db6f in vl_msg_api_alloc_as_if_client (nbytes=57) at > /development/libvpp/src/vlibmemory/memory_shared.c:

[vpp-dev]: Worker Thread Deadlock Detected from vl_api_clnt_node

2020-12-03 Thread Rajith PR via lists.fd.io
Hi All, We have hit a VPP worker thread deadlock issue. From the call stacks it looks like the main thread is waiting for the workers to come back to their main loop (i.e. it has taken the barrier lock), while one of the two workers is spinning on a lock to make an RPC to the main thread. I believe this lock
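For reference, a minimal sketch of the worker-to-main RPC mechanism this deadlock involves; the handler and argument struct are hypothetical, and only vl_api_rpc_call_main_thread() is the real entry point:

```c
/* Sketch only: a worker hands work to the main thread via an RPC. The RPC
 * allocates from a shared queue, so the worker can end up spinning if the
 * main thread is itself blocked holding the barrier. */
#include <vlibmemory/api.h>

typedef struct
{
  u32 sw_if_index;                   /* hypothetical payload */
} my_rpc_args_t;

static void
my_rpc_handler (my_rpc_args_t *a)    /* executed later by the main thread */
{
  /* ... mutate main-thread-owned state here ... */
}

static void
worker_post_rpc (u32 sw_if_index)    /* called from a worker thread */
{
  my_rpc_args_t args = { .sw_if_index = sw_if_index };

  vl_api_rpc_call_main_thread (my_rpc_handler, (u8 *) &args, sizeof (args));
}
```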

Re: [vpp-dev]: Crash in memclnt_queue_callback().

2020-11-17 Thread Rajith PR via lists.fd.io
]; > > > >if(*vl_api_queue_cursizes[i]) > > > > > > Capture a coredump. It should be obvious why the reference blows up. If > you can, change your custom signal handler so that the faulting virtual > address is as obvious as possible. > > >

[vpp-dev]: Crash in memclnt_queue_callback().

2020-11-17 Thread Rajith PR via lists.fd.io
Hi All, We are seeing a random crash in *VPP-19.08*. The crash is occurring in memclnt_queue_callback and it is in code that we are not using. Any pointers to fix the crash would be helpful. *Complete Call Stack:* Thread 1 (Thread 0x7fe728f43d00 (LWP 189)): #0 0x7fe728049492 in

[vpp-dev] Lockless queue/ring buffer

2020-09-17 Thread Rajith PR via lists.fd.io
Hi All, We are integrating a *Linux pthread* with a *VPP thread* and are looking for a *lockless queue/ring buffer implementation* that can be used. In vppinfra I could see fifo and ring, but I am not sure if they can be used for enqueue/dequeue from a pthread that VPP is not aware of. Do you have
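One generic option, shown purely as an illustration (this is plain C11, not VPP's clib ring or svm fifo API), is a single-producer/single-consumer ring, which needs no locks as long as exactly one pthread enqueues and one VPP thread dequeues:

```c
/* Minimal SPSC ring sketch using C11 atomics. RING_SIZE must be a power
 * of two; indices grow monotonically and are masked on access. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define RING_SIZE 1024u

typedef struct
{
  void *slots[RING_SIZE];
  _Atomic uint32_t head;   /* written only by the producer */
  _Atomic uint32_t tail;   /* written only by the consumer */
} spsc_ring_t;

static bool
spsc_enqueue (spsc_ring_t *r, void *item)    /* producer: the Linux pthread */
{
  uint32_t head = atomic_load_explicit (&r->head, memory_order_relaxed);
  uint32_t tail = atomic_load_explicit (&r->tail, memory_order_acquire);

  if (head - tail == RING_SIZE)
    return false;                            /* ring is full */
  r->slots[head & (RING_SIZE - 1)] = item;
  atomic_store_explicit (&r->head, head + 1, memory_order_release);
  return true;
}

static bool
spsc_dequeue (spsc_ring_t *r, void **item)   /* consumer: the VPP thread */
{
  uint32_t tail = atomic_load_explicit (&r->tail, memory_order_relaxed);
  uint32_t head = atomic_load_explicit (&r->head, memory_order_acquire);

  if (head == tail)
    return false;                            /* ring is empty */
  *item = r->slots[tail & (RING_SIZE - 1)];
  atomic_store_explicit (&r->tail, tail + 1, memory_order_release);
  return true;
}
```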

Re: [vpp-dev]: Crash in Timer wheel infra

2020-09-09 Thread Rajith PR via lists.fd.io
Thanks, Rajith On Wed, Sep 2, 2020 at 8:15 PM Dave Barach (dbarach) wrote: > It looks like vpp is crashing while expiring timers from the main thread > process timer wheel. That’s not been reported before. > > > > You might want to dust off .../extras/deprecated/vlib/unix/cj.[ch], and &g

Re: [vpp-dev]: Crash in Timer wheel infra

2020-09-02 Thread Rajith PR via lists.fd.io
the timer which > has expired. > > > > If you have > 1 timer per object and you manipulate timer B when timer A > expires, there’s no guarantee that timer B isn’t already on the expired > timer list. That’s almost always good for trouble. > > > > HTH... Dave > &
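To make the quoted advice concrete, a small sketch of deferring the second timer's manipulation until after the expiry loop; every name here (my_obj_t, my_timer_stop) is hypothetical, not the tw_timer API:

```c
/* When timer A's expiry callback runs, timer B may already be sitting on
 * the same expired-timer list, so do not stop/restart it there. Record the
 * intent and apply it once the expiry loop has finished. */
#include <vppinfra/types.h>

typedef struct
{
  u32 timer_a_handle;
  u32 timer_b_handle;
  u8 stop_b_pending;              /* deferred action flag */
} my_obj_t;

static void
on_timer_a_expired (my_obj_t *obj)
{
  /* Wrong (per the advice): stop timer B right here. */
  obj->stop_b_pending = 1;        /* defer instead */
}

static void
after_expiry_loop (my_obj_t *obj)
{
  if (obj->stop_b_pending)
    {
      /* my_timer_stop (obj->timer_b_handle);  -- hypothetical call,
       * now safe because the expired-timer list has been processed */
      obj->stop_b_pending = 0;
    }
}
```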

[vpp-dev]: Crash in Timer wheel infra

2020-09-01 Thread Rajith PR via lists.fd.io
Hi All, We are facing a crash in VPP's timer wheel infra. Please find the details below. Version: *19.08* Configuration: *2 workers and the main thread.* Backtraces: thread apply all bt Thread 1 (Thread 0x7ff41d586d00 (LWP 253)): #0 0x7ff41c696722 in

Re: [vpp-dev]: Trouble shooting low bandwidth of memif interface

2020-07-31 Thread Rajith PR via lists.fd.io
y them. > > Jerome > > > > > > > > *From: * on behalf of "Rajith PR via lists.fd.io" > > *Reply-To: *"raj...@rtbrick.com" > *Date: *Thursday, 30 July 2020 at 08:44 > *To: *vpp-dev > *Subject: *Re: [vpp-dev]: Trouble shooting low bandwidth of memif in

Re: [vpp-dev]: Trouble shooting low bandwidth of memif interface

2020-07-30 Thread Rajith PR via lists.fd.io
Looks like the image is not visible. Resending the topology diagram for reference. [image: iperf_memif.png] On Thu, Jul 30, 2020 at 11:44 AM Rajith PR via lists.fd.io wrote: > Hello Experts, > > I am trying to measure the performance of memif interface and getting a > very l

[vpp-dev]: Trouble shooting low bandwidth of memif interface

2020-07-30 Thread Rajith PR via lists.fd.io
Hello Experts, I am trying to measure the performance of the memif interface and am getting a very low bandwidth (652 KB/s). I am new to performance tuning and any help on troubleshooting the issue would be very helpful. The test topology I am using is as below: Basically, I have two lxc

Re: [vpp-dev]: ASSERT in load_balance_get()

2020-07-17 Thread Rajith PR via lists.fd.io
ks, Rajith On Tue, Jul 7, 2020 at 6:28 PM Rajith PR via lists.fd.io wrote: > Hi Benoit, > > I have all those fixes. I had reported this issue (27407), the others i > found during my tests and added barrier protection in all those places. > This ASSERT seems to be not due to pool expan

Re: [vpp-dev]: ASSERT in load_balance_get()

2020-07-07 Thread Rajith PR via lists.fd.io
al Message- > > From: vpp-dev@lists.fd.io On Behalf Of Rajith PR > via > > lists.fd.io > > Sent: mardi 7 juillet 2020 14:11 > > To: vpp-dev > > Subject: [vpp-dev]: ASSERT in load_balance_get() > > > > Hi All, > > > > During our scale testin

[vpp-dev]: ASSERT in load_balance_get()

2020-07-07 Thread Rajith PR via lists.fd.io
Hi All, During our scale testing of routes we have hit an ASSERT in *load_balance_get()*. From the code it looks like the lb_index (148) referred to has already been returned to the pool by the main thread, causing the ASSERT in the worker. The version is *19.08*. We have two workers and a main
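To see why the ASSERT fires there, this is roughly what the accessor does (paraphrased from memory from src/vnet/dpo/load_balance.h, not a verbatim copy): pool_elt_at_index() asserts in debug images when the index refers to a freed element, which is exactly what a worker sees if the main thread frees lb_index 148 underneath it.

```c
#include <vnet/dpo/load_balance.h>

/* Rough sketch of the accessor: a stale or freed lbi trips the pool's
 * free-index ASSERT in debug builds. */
static inline load_balance_t *
load_balance_get_sketch (index_t lbi)
{
  return (pool_elt_at_index (load_balance_pool, lbi));
}
```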

[vpp-dev] ASSERT in load_balance_get()

2020-07-07 Thread Rajith PR via lists.fd.io
Hi All, During our scale testing of routes we have hit an ASSERT in *load_balance_get()*. From the code it looks like the lb_index (148) referred to has already been returned to the pool by the main thread, causing the ASSERT in the worker. The version is *19.08*. We have two workers and a main

[vpp-dev] ASSERT in arp_mk_reply

2020-06-28 Thread Rajith PR via lists.fd.io
Hi All, We are seeing *ASSERT (vec_len (hw_if0->hw_address) == 6);* being hit in *arp_mk_reply()*. This is happening on *19.08*. We have worker threads and a main thread. As such, hw_if0 appears to be valid (the pointer and content). *But the length of the vector is 15.* I have

Re: [vpp-dev] VPP_Main Thread Gets Stuck

2020-06-19 Thread Rajith PR via lists.fd.io
t; Please refer to > https://fd.io/docs/vpp/master/troubleshooting/reportingissues/reportingissues.html > > > > *From:* vpp-dev@lists.fd.io *On Behalf Of *Rajith > PR via lists.fd.io > *Se

[vpp-dev] VPP_Main Thread Gets Stuck

2020-06-18 Thread Rajith PR via lists.fd.io
Hi All, While running scale tests with large numbers of routes, we occasionally hit a strange issue in our container. The *vpp process became unresponsive*; after attaching the process to gdb we could see that the *vpp_main thread is stuck in a specific function*. Any pointers to debug such issues would

Re: [vpp-dev] SEGMENTATION FAULT in load_balance_get()

2020-06-10 Thread Rajith PR via lists.fd.io
, so we could create a fixed-size load balance pool to > prevent runtime reallocation, but it would waste memory and impose a > maximum size. > > ben > > > -Original Message- > > From: vpp-dev@lists.fd.io On Behalf Of Rajith PR > > via lists.fd.io &

Re: [vpp-dev] SEGMENTATION FAULT in load_balance_get()

2020-06-02 Thread Rajith PR via lists.fd.io
ern – > copying Neale for an opinion. > > > > D. > > > > *From:* vpp-dev@lists.fd.io *On Behalf Of *Rajith > PR via lists.fd.io > *Sent:* Tuesday, June 2, 2020 10:00 AM > *To:* vpp-dev > *Subject:* [vpp-dev] SEGMENTATION FAULT in load_balance_get() > >

[vpp-dev] SEGMENTATION FAULT in load_balance_get()

2020-06-02 Thread Rajith PR via lists.fd.io
Hello All, In *VPP version 19.08* we are seeing a crash while accessing the *load_balance_pool* in the *load_balance_get()* function. This is happening after *enabling worker threads*. The FIB programming is happening in the main thread, and in one of the worker threads we see this crash.

Re: [vpp-dev] How to match a specific packet to the outbound direction of a specified interface #vpp

2020-05-10 Thread Rajith PR via lists.fd.io
Another solution is to redirect the traffic from the punt node to your own feature node. There you can match on packets of interest and send them to the interface-output node. Thanks, Rajith On Sat 9 May, 2020, 3:43 PM Mrityunjay Kumar, wrote: > which vpp version are you heading? If you r using 19.05 or
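A minimal sketch of what such a node registration could look like; the node name "my-punt-redirect", the dispatch function, and the next-index enum are hypothetical, while "error-drop" and "interface-output" are existing VPP graph nodes:

```c
/* Sketch only: a custom node whose next-node table includes
 * "interface-output", so matched packets can be sent straight out an
 * interface. The dispatch body is omitted. */
#include <vlib/vlib.h>

typedef enum
{
  MY_NEXT_DROP,
  MY_NEXT_INTERFACE_OUTPUT,
  MY_N_NEXT,
} my_next_t;

static uword
my_punt_redirect_fn (vlib_main_t *vm, vlib_node_runtime_t *node,
                     vlib_frame_t *frame)
{
  /* ... match packets of interest, set their next index to
   * MY_NEXT_INTERFACE_OUTPUT and vnet_buffer (b)->sw_if_index[VLIB_TX]
   * to the egress interface ... */
  return frame->n_vectors;
}

VLIB_REGISTER_NODE (my_punt_redirect_node) = {
  .function = my_punt_redirect_fn,
  .name = "my-punt-redirect",
  .vector_size = sizeof (u32),
  .n_next_nodes = MY_N_NEXT,
  .next_nodes = {
    [MY_NEXT_DROP] = "error-drop",
    [MY_NEXT_INTERFACE_OUTPUT] = "interface-output",
  },
};
```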