Re: [vpp-dev] SIGSEGV in acl using 2 core configuration

2017-11-07 Thread khers
Dear Andrew

Sorry for my delay. I got the latest revision of master (commit:
e695cb4dbdb6f9424ac5a567799e67f791fad328), and the
segfault did not occur with the same environment and test scenario. I will
try to reproduce the potential bug
by running the test for a longer duration and with a more aggressive scenario.

Regards,
Khers

On Wed, Oct 25, 2017 at 1:45 PM, Andrew  Yourtchenko 
wrote:

> Dear Khers,
>
> okay, cool! When testing the debug image, please save the full dump
> and the .debs for all the artefacts, so that if needed I could grab the
> entire set of info and look at it in my environment.
>
> Meantime, I had an idea for another potential failure mode, whereby
> the session would get checked while there is a session being freed,
> potentially resulting in a reallocation of the free bitmap in the
> pool.
>
> So before reproducing this in the debug build, give this small change a shot
> in the release build and see if you can still reproduce the crash with it:
>
> --- a/src/plugins/acl/fa_node.c
> +++ b/src/plugins/acl/fa_node.c
> @@ -609,6 +609,8 @@ acl_fa_verify_init_sessions (acl_main_t * am)
>   for (wk = 0; wk < vec_len (am->per_worker_data); wk++) {
>     acl_fa_per_worker_data_t *pw = &am->per_worker_data[wk];
>     pool_alloc_aligned(pw->fa_sessions_pool,
>                        am->fa_conn_table_max_entries, CLIB_CACHE_LINE_BYTES);
> +  /* preallocate the free bitmap */
> +  clib_bitmap_validate(pool_header(pw->fa_sessions_pool)->free_bitmap,
> +                       am->fa_conn_table_max_entries);
>   }
>
> --a
>
> On 10/24/17, khers  wrote:
> > Dear Andrew
> >
> > I used the latest version of the master branch; I will replay the test with
> > a debug build to gather more debug info ASAP.
> > VPP is running on a Xeon E5-2600 series CPU.
> > I did another test with two rx-queues and two workers, and also with 4
> > rx-queues and 4 workers; I got a segmentation fault in the same function.
> >
> > I will send more info in a few days.
> >
> > Regards,
> > Khers
> >
> > On Oct 24, 2017 6:43 PM, "Andrew  Yourtchenko" 
> > wrote:
> >
> >> Dear Khers,
> >>
> >> Thanks for the info!
> >>
> >> I tried with these configs in my local setup (I tried even to increase
> >> the multi-cpu contention by specifying 4 rx-queues instead of 2), but
> >> it works ok for me on the master. What is the version you are testing
> >> with ? I presume it is also the master, but just wanted to verify.
> >>
> >> To try to get more info about this happening: could you give a shot at
> >> reproducing this on the debug build ? There are a few asserts that
> >> would be handy to verify that they do hold true during your tests -
> >> the location of the crash points to either the pool header being
> >> corrupted by something (the asserts should catch that) or the pool
> >> itself reallocated and memory used by something else (which should not
> >> happen because the memory is preallocated during the initialisation
> >> time - unless you change the max number of sessions after
> >> initialisation).
> >>
> >> Also, could you tell a bit more about the hardware you are testing
> >> with ? (cat /proc/cpuinfo)
> >>
> >> --a
> >>
> >> On 10/24/17, khers  wrote:
> >> > Dear Andrew
> >> >
> >> > Thanks for your attention.
> >> > Trex config file 
> >> > Trex scenario is default sfr.yaml.
> >> > vpp: startup.conf 
> >> > I changed size of acl_mheap to '(uword)2<<32' in acl.c
> >> > vpp config:
> >> > vppctl set interface l2 bridge TenGigabitEthernet86/0/0 1
> >> > vppctl set interface l2 bridge TenGigabitEthernet86/0/1 1
> >> >
> >> > vppctl set int state TenGigabitEthernet86/0/0 up
> >> > vppctl set int state TenGigabitEthernet86/0/1 up
> >> >
> >> > vppctl set acl-plugin session table hash-table-buckets 100
> >> > vppctl set acl-plugin session table hash-table-memory 2147483648
> >> >
> >> > vppctl set acl-plugin session timeout udp idle 5
> >> > vppctl set acl-plugin session timeout tcp idle 10
> >> > vppctl set acl-plugin session timeout tcp transient 5
> >> >
> >> > Regards,
> >> > Khers
> >> >
> >> >
> >> > On Mon, Oct 23, 2017 at 7:52 PM, Andrew  Yourtchenko <
> >> ayour...@gmail.com>
> >> > wrote:
> >> >
> >> >> Hi,
> >> >>
> >> >> could you share the exact TRex and VPP config files, so I could
> >> >> recreate it locally to investigate further ?
> >> >>
> >> >> Thanks a lot!
> >> >>
> >> >> --a
> >> >>
> >> >> On 10/23/17, khers  wrote:
> >> >> > Dear folks
> >> >> >
> >> >> > I have bridged two interfaces and set a permit+reflect ACL on the
> >> >> > input of interface one and a deny rule on the output of the same
> >> >> > interface, as follows:
> >> >> >
> >> >> > acl_add_replace permit+reflect
> >> >> > acl_add_replace deny
> >> >> >
> >> >> > acl_interface_add_del sw_if_index 1 add input acl 0
> >> >> > acl_interface_add_del sw_if_index 1 add output acl 1
> >> >> >
> >> >> >

Re: [vpp-dev] Merge jobs failing

2017-11-07 Thread Lori Jakab
Hello,

Any update on this? We can't get the latest patches via the Ubuntu package
manager and that hurts certain test scenarios.

-Lori

On Mon, Nov 6, 2017 at 6:31 PM, Ed Warnicke  wrote:

> This looks like we are hitting the limit on storage at packagecloud.
>
> I've manually trimmed some old packages, but we really really need to get
> the script going that autotrims for us.
>
> Vanessa,
>
> How are we doing on that?
>
> Ed
>
> On Mon, Nov 6, 2017 at 9:17 AM Luke, Chris  wrote:
>
>> The post-merge jobs are failing with errors like this:
>>
>>
>>
>> *16:04:05* [ERROR] Failed to execute goal 
>> org.apache.maven.plugins:maven-deploy-plugin:2.7:deploy-file (default-cli) 
>> on project standalone-pom: Failed to deploy artifacts: Could not transfer 
>> artifact io.fd.vpp:vpp-dpdk-dev:deb:deb:17.08-vpp2_amd64 from/to 
>> fd.io.master.ubuntu.xenial.main 
>> (https://nexus.fd.io/content/repositories/fd.io.master.ubuntu.xenial.main): 
>> Failed to transfer file: 
>> https://nexus.fd.io/content/repositories/fd.io.master.ubuntu.xenial.main/io/fd/vpp/vpp-dpdk-dev/17.08-vpp2_amd64/vpp-dpdk-dev-17.08-vpp2_amd64-deb.deb.
>>  Return code is: 400, ReasonPhrase: Bad Request. -> [Help 1]
>>
>>
>>
>> (from https://jenkins.fd.io/job/vpp-merge-master-ubuntu1604/3148/
>> consoleFull)
>>
>>
>>
>> This is sufficiently voodoo for me to not know how to proceed. (The Nexus
>> stuff consistently annoys and baffles me, not least the overly-verbose
>> chatter in build logs)
>>
>>
>>
>> Could someone take a look, please?
>>
>>
>>
>> Net result: no packages are being stored from master, and docs are not
>> getting generated. It has been doing this for a few days, as best I can tell.
>>
>>
>>
>> Thanks,
>>
>> Chris.
>> ___
>> vpp-dev mailing list
>> vpp-dev@lists.fd.io
>> https://lists.fd.io/mailman/listinfo/vpp-dev
>
>

Re: [vpp-dev] multi-core multi-threading performance

2017-11-07 Thread Pragash Vijayaragavan
Hi all,

Any help/ideas on how we can get better performance using multiple cores
would be appreciated.

Thanks,

Pragash Vijayaragavan
Grad Student at Rochester Institute of Technology
email : pxv3...@rit.edu
ph : 585 764 4662
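[Editor's note] The rx-queue/worker fan-out discussed in this thread is configured in VPP's startup.conf. A sketch follows; the PCI address and core numbers are placeholders, not taken from the thread:

```
cpu {
  main-core 1
  corelist-workers 2-5          # 4 worker threads
}

dpdk {
  dev 0000:86:00.0 {
    num-rx-queues 4             # one rx queue per worker; RSS spreads flows
  }
}
```

With one rx queue per worker, the NIC's RSS hash distributes flows across queues and each worker polls its own queue; a single-queue port is polled by only one worker, which matches the behavior observed in this thread.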


On Mon, Nov 6, 2017 at 8:10 AM, Pragash Vijayaragavan 
wrote:

> OK, now I provisioned 4 rx queues for 4 worker threads, and yes, all workers
> are processing traffic, but the lookup rate has dropped; I am getting fewer
> packets than when there were 2 workers.
>
> I tried configuring 4 tx queues as well, with the same problem (fewer packets
> received compared to 2 workers).
>
>
>
> Thanks,
>
> Pragash Vijayaragavan
> Grad Student at Rochester Institute of Technology
> email : pxv3...@rit.edu
> ph : 585 764 4662 <(585)%20764-4662>
>
>
> On Mon, Nov 6, 2017 at 8:00 AM, Pragash Vijayaragavan 
> wrote:
>
>> Just 1; let me change it to 2, maybe 3, and get back to you.
>>
>> Thanks,
>>
>> Pragash Vijayaragavan
>> Grad Student at Rochester Institute of Technology
>> email : pxv3...@rit.edu
>> ph : 585 764 4662 <(585)%20764-4662>
>>
>>
>> On Mon, Nov 6, 2017 at 7:48 AM, Dave Barach (dbarach) 
>> wrote:
>>
>>> How many RX queues did you provision? One per worker, or no supper...
>>>
>>>
>>>
>>> Thanks… Dave
>>>
>>>
>>>
>>> *From:* Pragash Vijayaragavan [mailto:pxv3...@rit.edu]
>>> *Sent:* Monday, November 6, 2017 7:36 AM
>>>
>>> *To:* Dave Barach (dbarach) 
>>> *Cc:* vpp-dev@lists.fd.io; John Marshall (jwm) ; Neale
>>> Ranns (nranns) ; Minseok Kwon 
>>> *Subject:* Re: multi-core multi-threading performance
>>>
>>>
>>>
>>> Hi Dave,
>>>
>>>
>>>
>>> As per your suggestion I tried sending different traffic, and I noticed
>>> that one worker acts per port (hardware NIC).
>>>
>>> Is it true that multiple workers cannot work on the same port at the same
>>> time?
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> Thanks,
>>>
>>>
>>>
>>> Pragash Vijayaragavan
>>>
>>> Grad Student at Rochester Institute of Technology
>>>
>>> email : pxv3...@rit.edu
>>>
>>> ph : 585 764 4662 <(585)%20764-4662>
>>>
>>>
>>>
>>>
>>>
>>> On Mon, Nov 6, 2017 at 7:13 AM, Pragash Vijayaragavan 
>>> wrote:
>>>
>>> Thanks Dave,
>>>
>>>
>>>
>>> let me try it out real quick and get back to you.
>>>
>>>
>>> Thanks,
>>>
>>>
>>>
>>> Pragash Vijayaragavan
>>>
>>> Grad Student at Rochester Institute of Technology
>>>
>>> email : pxv3...@rit.edu
>>>
>>> ph : 585 764 4662 <(585)%20764-4662>
>>>
>>>
>>>
>>>
>>>
>>> On Mon, Nov 6, 2017 at 7:11 AM, Dave Barach (dbarach) 
>>> wrote:
>>>
>>> Incrementing / random src/dst addr/port
>>>
>>>
>>>
>>> Thanks… Dave
>>>
>>>
>>>
>>> *From:* Pragash Vijayaragavan [mailto:pxv3...@rit.edu]
>>> *Sent:* Monday, November 6, 2017 7:06 AM
>>> *To:* Dave Barach (dbarach) 
>>> *Cc:* vpp-dev@lists.fd.io; John Marshall (jwm) ; Neale
>>> Ranns (nranns) ; Minseok Kwon 
>>> *Subject:* Re: multi-core multi-threading performance
>>>
>>>
>>>
>>> Hi Dave,
>>>
>>>
>>>
>>> Thanks for the mail
>>>
>>>
>>>
>>> a "show run" command shows dpdk-input process on 2 of the workers but
>>> the ip6-lookup process is running only on 1 worker.
>>>
>>>
>>>
>>> What config should be done to make all threads process traffic.
>>>
>>>
>>>
>>> This is for 4 workers and 1 main core.
>>>
>>>
>>>
>>> Pasted output :
>>>
>>>
>>>
>>>
>>>
>>> vpp# sh run
>>>
>>> Thread 0 vpp_main (lcore 1)
>>>
>>> Time 7.5, average vectors/node 0.00, last 128 main loops 0.00 per node
>>> 0.00
>>>
>>>   vector rates in 0.e0, out 0.e0, drop 0.e0, punt 0.e0
>>>
>>>  Name                            State          Calls    Vectors  Suspends   Clocks  Vectors/Call
>>> acl-plugin-fa-cleaner-process   any wait           0          0        15   4.97e3          0.00
>>> api-rx-from-ring                active             0          0        79   1.07e5          0.00
>>> cdp-process                     any wait           0          0         3   2.65e3          0.00
>>> dpdk-process                    any wait           0          0         2   6.77e7          0.00
>>> fib-walk                        any wait           0          0      7474   6.74e2          0.00
>>> gmon-process                    time wait          0          0         1   4.24e3          0.00
>>> ikev2-manager-process           any wait           0          0         7   7.04e3          0.00
>>> ip6-icmp-neighbor-discovery-ev  any wait           0          0         7   4.67e3          0.00
>>> lisp-retry-service              any wait           0          0         3   7.21e3          0.00
>>> unix-epoll-input                polling     21655148

Re: [vpp-dev] Simple setup, that does not work.

2017-11-07 Thread Dave Barach (dbarach)
Check host interface IP address, basic connectivity [cable on floor?], and so 
on.

Check “show hardware.” If the MIB stats indicate that packets are reaching the 
NIC MAC layer - but not VPP - see if /proc/cmdline contains “intel_iommu=on”. 
If it does, try removing that stanza and rebooting. You can, in fact, run with the 
iommu enabled, but for a 101(a) simple test it’s not worth going there...

HTH… Dave
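[Editor's note] The two checks above can be scripted roughly as follows. This is a sketch: `vppctl` must be on PATH for the first check, and `/etc/default/grub` is the usual (but not universal) place to edit the kernel command line.

```shell
# 1) If frames reach the NIC MAC layer but not VPP, the counters show it here:
command -v vppctl >/dev/null 2>&1 && vppctl show hardware

# 2) Check whether the IOMMU is enabled on the kernel command line:
if grep -qs 'intel_iommu=on' /proc/cmdline; then
  echo "intel_iommu=on is set; consider removing it (e.g. from"
  echo "GRUB_CMDLINE_LINUX in /etc/default/grub), then update-grub and reboot"
else
  echo "intel_iommu=on not set"
fi
```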

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of John Wei
Sent: Monday, November 6, 2017 11:00 PM
To: vpp-dev 
Subject: [vpp-dev] Simple setup, that does not work.



I followed one of the fd.io YouTube demos; it is very simple, but
it does not work for me.



  *   Restart vpp
  *   vppctl set int state GigabitEthernet13/0/0 up
  *   vppctl set int ip address GigabitEthernet13/0/0 192.168.50.166/24
  *   # vppctl show int addr

     *   GigabitEthernet13/0/0 (up):
     *     192.168.50.166/24
     *   GigabitEthernetb/0/0 (dn):
     *   local0 (dn):

  *   on host: ping 192.168.50.166 does not work (it just hangs)

What is missing?
I am running v17.10-release bits.

John
