Re: [vpp-dev]: Troubleshooting low bandwidth of memif interface

2020-07-29 Thread Rajith PR via lists.fd.io
Looks like the image is not visible. Resending the topology diagram for reference. [image: iperf_memif.png] On Thu, Jul 30, 2020 at 11:44 AM Rajith PR via lists.fd.io wrote: > Hello Experts, > > I am trying to measure the performance of the memif interface and getting a > very low bandwidth (652Kb

[vpp-dev]: Troubleshooting low bandwidth of memif interface

2020-07-29 Thread Rajith PR via lists.fd.io
Hello Experts, I am trying to measure the performance of the memif interface and am getting a very low bandwidth (652 Kbytes/sec). I am new to performance tuning and any help with troubleshooting the issue would be very helpful. The test topology I am using is as below: Basically, I have two lxc contain

Re: [vpp-dev] VPP 2005 is crashing on stopping the VCL applications #vpp-hoststack

2020-07-29 Thread Florin Coras
Hi Raj, In that case it should work. Just from the trace lower down it’s hard to figure out what exactly happened. Also, keep in mind that vcl is not thread safe, so make sure you’re not trying to share sessions or allow two workers to interact with the message queue(s) at the same time. Regards
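
As a concrete illustration of the "don't share sessions or message queues" advice above: each pthread can register itself as its own VCL worker, so no two threads ever touch the same session or queue. This is only a minimal sketch assuming the standard VCL calls vppcom_app_create/vppcom_worker_register/vppcom_app_destroy; the per-worker session handling is left as a placeholder and is not the poster's actual application.

/* Minimal sketch: one VCL worker per thread, so threads never share
 * sessions or a message queue. Session handling is a placeholder. */
#include <pthread.h>
#include <vcl/vppcom.h>

static void *
worker_thread (void *arg)
{
  (void) arg;
  /* Register this pthread as a separate VCL worker so it gets its own
   * message queue; it must only use sessions it created itself. */
  if (vppcom_worker_register () != VPPCOM_OK)
    return 0;

  /* ... create, poll and close sessions owned by this worker only ... */

  vppcom_worker_unregister ();
  return 0;
}

int
main (void)
{
  pthread_t threads[2];
  int i;

  if (vppcom_app_create ("vcl-worker-demo") != VPPCOM_OK)
    return 1;

  for (i = 0; i < 2; i++)
    pthread_create (&threads[i], 0, worker_thread, 0);
  for (i = 0; i < 2; i++)
    pthread_join (threads[i], 0);

  vppcom_app_destroy ();
  return 0;
}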

Re: [vpp-dev] VPP 2005 is crashing on stopping the VCL applications #vpp-hoststack

2020-07-29 Thread Raj Kumar
Hi Florin, I am using kill to stop the application. But the application has a kill signal handler, and after receiving the signal it exits gracefully. About vppcom_app_exit, I think this function is registered with atexit() inside vppcom_app_create(), so it should be called when the application ex
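
For what it's worth, the usual way to keep that kind of shutdown safe is to have the signal handler only set a flag and do the VCL teardown from normal context, so no mutexes or message queues are touched inside the handler. A minimal sketch, assuming the standard vppcom_app_create/vppcom_app_destroy entry points; the session loop is only a placeholder, not the actual application:

/* Minimal sketch of a graceful VCL shutdown on SIGTERM/SIGINT: the handler
 * only sets a flag; teardown happens in normal context, not in the handler. */
#include <signal.h>
#include <vcl/vppcom.h>

static volatile sig_atomic_t stop_requested;

static void
on_signal (int signo)
{
  (void) signo;
  stop_requested = 1;	/* async-signal-safe: just record the request */
}

int
main (void)
{
  struct sigaction sa = { 0 };
  sa.sa_handler = on_signal;
  sigaction (SIGTERM, &sa, 0);
  sigaction (SIGINT, &sa, 0);

  if (vppcom_app_create ("udp-rx-app") != VPPCOM_OK)
    return 1;

  while (!stop_requested)
    {
      /* poll VCL sessions, receive UDP, forward over memif ... */
    }

  /* Tear down explicitly instead of relying only on the atexit() hook,
   * so the app detaches from VPP before exiting. */
  vppcom_app_destroy ();
  return 0;
}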

Re: [vpp-dev] VPP 2005 is crashing on stopping the VCL applications #vpp-hoststack

2020-07-29 Thread Florin Coras
Hi Raj, Does stopping include a call to vppcom_app_exit or killing the applications? If the latter, the apps might be killed with some mutexes/spinlocks held. For now, we only support the former. Regards, Florin > On Jul 29, 2020, at 1:49 PM, Raj Kumar wrote: > > Hi, > In my UDP applicatio

[vpp-dev] VPP 2005 is crashing on stopping the VCL applications #vpp-hoststack

2020-07-29 Thread Raj Kumar
Hi, In my UDP application, I am using the VPP host stack to receive packets and memif to transmit packets. There are a total of 6 applications connected to VPP. If I stop the application(s), then VPP crashes. In the VPP configuration, 4 worker threads are configured. If there is no worker thread confi

Re: [vpp-dev] TCP timer race and another possible TCP issue

2020-07-29 Thread Florin Coras
Hi Ivan, Inline. > On Jul 29, 2020, at 9:40 AM, Ivan Shvedunov wrote: > > Hi Florin, > > while trying to fix the proxy cleanup issue, I've spotted another problem in > the TCP stack, namely RSTs being ignored in SYN_SENT (half-open) connection > state: > https://gerrit.fd.io/r/c/vpp/+/28103

Re: [vpp-dev] TCP timer race and another possible TCP issue

2020-07-29 Thread Ivan Shvedunov
Hi Florin, while trying to fix the proxy cleanup issue, I've spotted another problem in the TCP stack, namely RSTs being ignored in SYN_SENT (half-open) connection state: https://gerrit.fd.io/r/c/vpp/+/28103 The following fix for handling failed active connections in the proxy has worked for me,
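
For context, RFC 793 requires that a RST carrying an acceptable ACK in SYN-SENT abort the half-open connection ("connection refused") rather than be ignored. The sketch below is only a generic illustration of that rule with made-up types; it is not the VPP TCP stack code nor the change in the gerrit link above.

/* Generic illustration of the RFC 793 rule for RST in SYN-SENT; made-up
 * types, not the VPP TCP stack or the gerrit change referenced above. */
enum toy_tcp_state
{
  TOY_STATE_SYN_SENT,
  TOY_STATE_ESTABLISHED,
  TOY_STATE_CLOSED
};

struct toy_tcp_conn
{
  enum toy_tcp_state state;
  unsigned int snd_nxt;		/* ISS + 1 once the SYN has been sent */
};

/* Returns 1 when the RST aborts the half-open connection, 0 otherwise. */
static int
toy_rst_in_syn_sent (struct toy_tcp_conn *c, int rst_set, int ack_set,
		     unsigned int seg_ack)
{
  if (c->state != TOY_STATE_SYN_SENT || !rst_set)
    return 0;
  /* The RST is acceptable only if its ACK covers our SYN. */
  if (ack_set && seg_ack == c->snd_nxt)
    {
      c->state = TOY_STATE_CLOSED;	/* signal "connection refused" */
      return 1;
    }
  return 0;			/* otherwise ignore the segment */
}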

Re: [vpp-dev] Update to iOAM using latest IETF draft #vpp

2020-07-29 Thread Justin Iurman
Hi Mauricio, CC'ing Shwetha, who implemented the IOAM plugin. Last time I checked, IOAM namespaces were not included, so it is probably based on the -03 version of draft-ietf-ippm-ioam-data. Actually, just to let you know, there is already someone who is going to rebase the implementation on t

[vpp-dev] Update to iOAM using latest IETF draft #vpp

2020-07-29 Thread mauricio.solisjr via lists.fd.io
Hi, I noticed that the current iOAM plugin implementation is using the first IETF drafts, so I'm thinking about trying to update the iOAM implementation in VPP to the latest. I first just want to make sure that this update is not in the immediate VPP release pipeline, since I do not wi

Re: [vpp-dev] Create big tables on huge-page

2020-07-29 Thread Nitin Saxena
Hi Lijian, +1 on the finding. It would be interesting to know how much the performance gain is. Having said that, and correct me if I am wrong, I think the pmalloc module works only with a single hugepage size (pm->def_log2_page_sz), which means either 1G or 2M and not both. Thanks, Nitin From: vpp-dev@lis
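
To make the single-page-size point concrete: the allocator keeps one default log2 page size per heap, so a heap backed by 2M pages (log2 = 21) cannot also hand out 1G pages (log2 = 30); mixing both would need two separate heaps. The snippet below is only a toy illustration of that constraint, not the vppinfra pmalloc code.

/* Toy illustration of a per-heap default page size; not the real
 * vppinfra pmalloc implementation. */
#include <stdint.h>
#include <stdio.h>

#define TOY_LOG2_2M 21		/* 2 MB hugepage = 1 << 21 bytes */
#define TOY_LOG2_1G 30		/* 1 GB hugepage = 1 << 30 bytes */

struct toy_pmalloc_heap
{
  uint32_t def_log2_page_sz;	/* one page size for the whole heap */
};

int
main (void)
{
  struct toy_pmalloc_heap heap = { .def_log2_page_sz = TOY_LOG2_1G };

  /* Every mapping in this heap uses the same page size; 2 MB and 1 GB
   * pages cannot be mixed inside one heap. */
  printf ("heap page size: %llu bytes\n",
	  (unsigned long long) 1ULL << heap.def_log2_page_sz);
  return 0;
}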