Re: [vpp-dev] multi-core multi-threading performance

2017-11-07 Thread Pragash Vijayaragavan
Hi all,

Any help or ideas on how we can get better performance out of multiple cores
would be appreciated.

Thanks,

Pragash Vijayaragavan
Grad Student at Rochester Institute of Technology
email : pxv3...@rit.edu
ph : 585 764 4662


On Mon, Nov 6, 2017 at 8:10 AM, Pragash Vijayaragavan <pxv3...@g.rit.edu>
wrote:

> OK, now I provisioned 4 RX queues for 4 worker threads, and yes, all workers
> are processing traffic, but the lookup rate has dropped; I am getting fewer
> packets than when it was 2 workers.
>
> I tried configuring 4 TX queues as well; still the same problem (fewer packets
> received compared to 2 workers).
>
>
>
> Thanks,
>
> Pragash Vijayaragavan
> Grad Student at Rochester Institute of Technology
> email : pxv3...@rit.edu
> ph : 585 764 4662 <(585)%20764-4662>
>
>
> On Mon, Nov 6, 2017 at 8:00 AM, Pragash Vijayaragavan <pxv3...@g.rit.edu>
> wrote:
>
>> Just 1; let me change it to 2, maybe 3, and get back to you.
>>
>> Thanks,
>>
>> Pragash Vijayaragavan
>> Grad Student at Rochester Institute of Technology
>> email : pxv3...@rit.edu
>> ph : 585 764 4662 <(585)%20764-4662>
>>
>>
>> On Mon, Nov 6, 2017 at 7:48 AM, Dave Barach (dbarach) <dbar...@cisco.com>
>> wrote:
>>
>>> How many RX queues did you provision? One per worker, or no supper...
>>>
>>>
>>>
>>> Thanks… Dave
>>>
>>>
>>>
>>> *From:* Pragash Vijayaragavan [mailto:pxv3...@rit.edu]
>>> *Sent:* Monday, November 6, 2017 7:36 AM
>>>
>>> *To:* Dave Barach (dbarach) <dbar...@cisco.com>
>>> *Cc:* vpp-dev@lists.fd.io; John Marshall (jwm) <j...@cisco.com>; Neale
>>> Ranns (nranns) <nra...@cisco.com>; Minseok Kwon <mxk...@rit.edu>
>>> *Subject:* Re: multi-core multi-threading performance
>>>
>>>
>>>
>>> Hi Dave,
>>>
>>>
>>>
>>> As per your suggestion I tried sending different traffic, and I noticed
>>> that only 1 worker is active per port (hardware NIC).
>>>
>>>
>>>
>>> Is it true that multiple workers cannot work on the same port at the same
>>> time?
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> Thanks,
>>>
>>>
>>>
>>> Pragash Vijayaragavan
>>>
>>> Grad Student at Rochester Institute of Technology
>>>
>>> email : pxv3...@rit.edu
>>>
>>> ph : 585 764 4662 <(585)%20764-4662>
>>>
>>>
>>>
>>>
>>>
>>> On Mon, Nov 6, 2017 at 7:13 AM, Pragash Vijayaragavan <pxv3...@g.rit.edu>
>>> wrote:
>>>
>>> Thanks Dave,
>>>
>>>
>>>
>>> let me try it out real quick and get back to you.
>>>
>>>
>>> Thanks,
>>>
>>>
>>>
>>> Pragash Vijayaragavan
>>>
>>> Grad Student at Rochester Institute of Technology
>>>
>>> email : pxv3...@rit.edu
>>>
>>> ph : 585 764 4662 <(585)%20764-4662>
>>>
>>>
>>>
>>>
>>>
>>> On Mon, Nov 6, 2017 at 7:11 AM, Dave Barach (dbarach) <dbar...@cisco.com>
>>> wrote:
>>>
>>> Incrementing / random src/dst addr/port
>>>
>>>
>>>
>>> Thanks… Dave
>>>
>>>
>>>
>>> *From:* Pragash Vijayaragavan [mailto:pxv3...@rit.edu]
>>> *Sent:* Monday, November 6, 2017 7:06 AM
>>> *To:* Dave Barach (dbarach) <dbar...@cisco.com>
>>> *Cc:* vpp-dev@lists.fd.io; John Marshall (jwm) <j...@cisco.com>; Neale
>>> Ranns (nranns) <nra...@cisco.com>; Minseok Kwon <mxk...@rit.edu>
>>> *Subject:* Re: multi-core multi-threading performance
>>>
>>>
>>>
>>> Hi Dave,
>>>
>>>
>>>
>>> Thanks for the mail
>>>
>>>
>>>
>>> a "show run" command shows dpdk-input process on 2 of the workers but
>>> the ip6-lookup process is running only on 1 worker.
>>>
>>>
>>>
>>> What config is needed to make all threads process traffic?
>>>
>>>
>>>
>>> This is for 4 workers and 1 main core.
>>>
>>>
>>>
>>> Pasted output :
>>>
>>>
>>>
>>>
>>>
>>> vpp# sh run
>>>
>>> Thread 0 vpp_main (lcore 1)
>>>
>>> Time 7.5

Re: [vpp-dev] multi-core multi-threading performance

2017-11-06 Thread Pragash Vijayaragavan
OK, now I provisioned 4 RX queues for 4 worker threads, and yes, all workers
are processing traffic, but the lookup rate has dropped; I am getting fewer
packets than when it was 2 workers.

I tried configuring 4 TX queues as well; still the same problem (fewer packets
received compared to 2 workers).
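
Two quick checks that sometimes explain this (standard VPP CLI of that era,
if present in your build; no output shown here because none was posted):
confirm the queues actually landed one per worker, and that no worker shares
an lcore with the main core.

    vpp# show threads
    vpp# show dpdk interface placement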



Thanks,

Pragash Vijayaragavan
Grad Student at Rochester Institute of Technology
email : pxv3...@rit.edu
ph : 585 764 4662


On Mon, Nov 6, 2017 at 8:00 AM, Pragash Vijayaragavan <pxv3...@g.rit.edu>
wrote:

> Just 1; let me change it to 2, maybe 3, and get back to you.
>
> Thanks,
>
> Pragash Vijayaragavan
> Grad Student at Rochester Institute of Technology
> email : pxv3...@rit.edu
> ph : 585 764 4662 <(585)%20764-4662>
>
>
> On Mon, Nov 6, 2017 at 7:48 AM, Dave Barach (dbarach) <dbar...@cisco.com>
> wrote:
>
>> How many RX queues did you provision? One per worker, or no supper...
>>
>>
>>
>> Thanks… Dave
>>
>>
>>
>> *From:* Pragash Vijayaragavan [mailto:pxv3...@rit.edu]
>> *Sent:* Monday, November 6, 2017 7:36 AM
>>
>> *To:* Dave Barach (dbarach) <dbar...@cisco.com>
>> *Cc:* vpp-dev@lists.fd.io; John Marshall (jwm) <j...@cisco.com>; Neale
>> Ranns (nranns) <nra...@cisco.com>; Minseok Kwon <mxk...@rit.edu>
>> *Subject:* Re: multi-core multi-threading performance
>>
>>
>>
>> Hi Dave,
>>
>>
>>
>> As per your suggestion I tried sending different traffic, and I noticed
>> that only 1 worker is active per port (hardware NIC).
>>
>>
>>
>> Is it true that multiple workers cannot work on the same port at the same
>> time?
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> Thanks,
>>
>>
>>
>> Pragash Vijayaragavan
>>
>> Grad Student at Rochester Institute of Technology
>>
>> email : pxv3...@rit.edu
>>
>> ph : 585 764 4662 <(585)%20764-4662>
>>
>>
>>
>>
>>
>> On Mon, Nov 6, 2017 at 7:13 AM, Pragash Vijayaragavan <pxv3...@g.rit.edu>
>> wrote:
>>
>> Thanks Dave,
>>
>>
>>
>> let me try it out real quick and get back to you.
>>
>>
>> Thanks,
>>
>>
>>
>> Pragash Vijayaragavan
>>
>> Grad Student at Rochester Institute of Technology
>>
>> email : pxv3...@rit.edu
>>
>> ph : 585 764 4662 <(585)%20764-4662>
>>
>>
>>
>>
>>
>> On Mon, Nov 6, 2017 at 7:11 AM, Dave Barach (dbarach) <dbar...@cisco.com>
>> wrote:
>>
>> Incrementing / random src/dst addr/port
>>
>>
>>
>> Thanks… Dave
>>
>>
>>
>> *From:* Pragash Vijayaragavan [mailto:pxv3...@rit.edu]
>> *Sent:* Monday, November 6, 2017 7:06 AM
>> *To:* Dave Barach (dbarach) <dbar...@cisco.com>
>> *Cc:* vpp-dev@lists.fd.io; John Marshall (jwm) <j...@cisco.com>; Neale
>> Ranns (nranns) <nra...@cisco.com>; Minseok Kwon <mxk...@rit.edu>
>> *Subject:* Re: multi-core multi-threading performance
>>
>>
>>
>> Hi Dave,
>>
>>
>>
>> Thanks for the mail
>>
>>
>>
>> a "show run" command shows dpdk-input process on 2 of the workers but the
>> ip6-lookup process is running only on 1 worker.
>>
>>
>>
>> What config is needed to make all threads process traffic?
>>
>>
>>
>> This is for 4 workers and 1 main core.
>>
>>
>>
>> Pasted output :
>>
>>
>>
>>
>>
>> vpp# sh run
>>
>> Thread 0 vpp_main (lcore 1)
>>
>> Time 7.5, average vectors/node 0.00, last 128 main loops 0.00 per node
>> 0.00
>>
>>   vector rates in 0.e0, out 0.e0, drop 0.e0, punt 0.e0
>>
>>  Name State Calls  Vectors
>> Suspends Clocks   Vectors/Call
>>
>> acl-plugin-fa-cleaner-process   any wait 0
>>  0  15  4.97e30.00
>>
>> api-rx-from-ring active  0
>>  0  79  1.07e50.00
>>
>> cdp-process any wait 0
>>  0   3  2.65e30.00
>>
>> dpdk-processany wait 0
>>  0   2  6.77e70.00
>>
>> fib-walkany wait 0
>>  07474  6.74e20.00

Re: [vpp-dev] multi-core multi-threading performance

2017-11-06 Thread Pragash Vijayaragavan
Just 1; let me change it to 2, maybe 3, and get back to you.

Thanks,

Pragash Vijayaragavan
Grad Student at Rochester Institute of Technology
email : pxv3...@rit.edu
ph : 585 764 4662


On Mon, Nov 6, 2017 at 7:48 AM, Dave Barach (dbarach) <dbar...@cisco.com>
wrote:

> How many RX queues did you provision? One per worker, or no supper...
>
>
>
> Thanks… Dave
>
>
>
> *From:* Pragash Vijayaragavan [mailto:pxv3...@rit.edu]
> *Sent:* Monday, November 6, 2017 7:36 AM
>
> *To:* Dave Barach (dbarach) <dbar...@cisco.com>
> *Cc:* vpp-dev@lists.fd.io; John Marshall (jwm) <j...@cisco.com>; Neale
> Ranns (nranns) <nra...@cisco.com>; Minseok Kwon <mxk...@rit.edu>
> *Subject:* Re: multi-core multi-threading performance
>
>
>
> Hi Dave,
>
>
>
> As per your suggestion I tried sending different traffic, and I noticed
> that only 1 worker is active per port (hardware NIC).
>
>
>
> Is it true that multiple workers cannot work on the same port at the same time?
>
>
>
>
>
>
>
>
>
>
> Thanks,
>
>
>
> Pragash Vijayaragavan
>
> Grad Student at Rochester Institute of Technology
>
> email : pxv3...@rit.edu
>
> ph : 585 764 4662 <(585)%20764-4662>
>
>
>
>
>
> On Mon, Nov 6, 2017 at 7:13 AM, Pragash Vijayaragavan <pxv3...@g.rit.edu>
> wrote:
>
> Thanks Dave,
>
>
>
> let me try it out real quick and get back to you.
>
>
> Thanks,
>
>
>
> Pragash Vijayaragavan
>
> Grad Student at Rochester Institute of Technology
>
> email : pxv3...@rit.edu
>
> ph : 585 764 4662 <(585)%20764-4662>
>
>
>
>
>
> On Mon, Nov 6, 2017 at 7:11 AM, Dave Barach (dbarach) <dbar...@cisco.com>
> wrote:
>
> Incrementing / random src/dst addr/port
>
>
>
> Thanks… Dave
>
>
>
> *From:* Pragash Vijayaragavan [mailto:pxv3...@rit.edu]
> *Sent:* Monday, November 6, 2017 7:06 AM
> *To:* Dave Barach (dbarach) <dbar...@cisco.com>
> *Cc:* vpp-dev@lists.fd.io; John Marshall (jwm) <j...@cisco.com>; Neale
> Ranns (nranns) <nra...@cisco.com>; Minseok Kwon <mxk...@rit.edu>
> *Subject:* Re: multi-core multi-threading performance
>
>
>
> Hi Dave,
>
>
>
> Thanks for the mail
>
>
>
> a "show run" command shows dpdk-input process on 2 of the workers but the
> ip6-lookup process is running only on 1 worker.
>
>
>
> What config is needed to make all threads process traffic?
>
>
>
> This is for 4 workers and 1 main core.
>
>
>
> Pasted output :
>
>
>
>
>
> vpp# sh run
>
> Thread 0 vpp_main (lcore 1)
>
> Time 7.5, average vectors/node 0.00, last 128 main loops 0.00 per node 0.00
>
>   vector rates in 0.e0, out 0.e0, drop 0.e0, punt 0.e0
>
>  Name State Calls  Vectors
> Suspends Clocks   Vectors/Call
>
> acl-plugin-fa-cleaner-process   any wait 0
>  0  15  4.97e30.00
>
> api-rx-from-ring active  0
>  0  79  1.07e50.00
>
> cdp-process any wait 0
>  0   3  2.65e30.00
>
> dpdk-processany wait 0
>  0   2  6.77e70.00
>
> fib-walkany wait 0
>  07474  6.74e20.00
>
> gmon-processtime wait0
>  0   1  4.24e30.00
>
> ikev2-manager-process   any wait 0
>  0   7  7.04e30.00
>
> ip6-icmp-neighbor-discovery-ev  any wait 0
>  0   7  4.67e30.00
>
> lisp-retry-service  any wait 0
>  0   3  7.21e30.00
>
> unix-epoll-input polling  21655148
>  0   0  5.43e20.00
>
> vpe-oam-process any wait 0
>  0   4  5.28e30.00
>
> ---
>
> Thread 1 vpp_wk_0 (lcore 2)
>
> Time 7.5, average vectors/node 255.99, last 128 main loops 14.00 per node
> 256.00
>
>   vector rates in 4.1903e6, out 4.1903e6, drop 0.e0, punt 0.e0
>
>  Name State Calls  Vectors
> Suspends Clocks   Vectors/Call
>
> FortyGigabitEthernet4/0/0-outp   active 123334
> 31572992   0  6.58e

Re: [vpp-dev] multi-core multi-threading performance

2017-11-06 Thread Pragash Vijayaragavan
Hi Dave,

As per your suggestion I tried sending different traffic, and I noticed
that only 1 worker is active per port (hardware NIC).

Is it true that multiple workers cannot work on the same port at the same time?
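
In broad terms (general DPDK/RSS behavior, not a statement from this thread):
a port with a single RX queue is polled by a single worker, while a port with
N RX queues can be served by N workers. The NIC's RSS hash decides which
queue each packet lands in, roughly as in this standalone C sketch:

    #include <stdio.h>

    /* Illustrative only, not actual NIC/driver code: RSS hashes selected
     * header fields, so one fixed 5-tuple always maps to the same queue,
     * i.e. the same worker. */
    static unsigned
    pick_rx_queue (unsigned hash_of_headers, unsigned n_rx_queues)
    {
      return hash_of_headers % n_rx_queues;
    }

    int
    main (void)
    {
      /* A single flow (constant hash) always lands on one queue ... */
      printf ("flow A -> queue %u\n", pick_rx_queue (0xdeadbeef, 4));
      /* ... so varying src/dst addresses/ports is what spreads the load. */
      printf ("flow B -> queue %u\n", pick_rx_queue (0x12345678, 4));
      return 0;
    }

This is why the earlier advice in the thread was to send incrementing/random
src/dst addresses and ports.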





Thanks,

Pragash Vijayaragavan
Grad Student at Rochester Institute of Technology
email : pxv3...@rit.edu
ph : 585 764 4662


On Mon, Nov 6, 2017 at 7:13 AM, Pragash Vijayaragavan <pxv3...@g.rit.edu>
wrote:

> Thanks Dave,
>
> let me try it out real quick and get back to you.
>
> Thanks,
>
> Pragash Vijayaragavan
> Grad Student at Rochester Institute of Technology
> email : pxv3...@rit.edu
> ph : 585 764 4662 <(585)%20764-4662>
>
>
> On Mon, Nov 6, 2017 at 7:11 AM, Dave Barach (dbarach) <dbar...@cisco.com>
> wrote:
>
>> Incrementing / random src/dst addr/port....
>>
>>
>>
>> Thanks… Dave
>>
>>
>>
>> *From:* Pragash Vijayaragavan [mailto:pxv3...@rit.edu]
>> *Sent:* Monday, November 6, 2017 7:06 AM
>> *To:* Dave Barach (dbarach) <dbar...@cisco.com>
>> *Cc:* vpp-dev@lists.fd.io; John Marshall (jwm) <j...@cisco.com>; Neale
>> Ranns (nranns) <nra...@cisco.com>; Minseok Kwon <mxk...@rit.edu>
>> *Subject:* Re: multi-core multi-threading performance
>>
>>
>>
>> Hi Dave,
>>
>>
>>
>> Thanks for the mail
>>
>>
>>
>> a "show run" command shows dpdk-input process on 2 of the workers but the
>> ip6-lookup process is running only on 1 worker.
>>
>>
>>
>> What config is needed to make all threads process traffic?
>>
>>
>>
>> This is for 4 workers and 1 main core.
>>
>>
>>
>> Pasted output :
>>
>>
>>
>>
>>
>> vpp# sh run
>>
>> Thread 0 vpp_main (lcore 1)
>>
>> Time 7.5, average vectors/node 0.00, last 128 main loops 0.00 per node
>> 0.00
>>
>>   vector rates in 0.e0, out 0.e0, drop 0.e0, punt 0.e0
>>
>>  Name State Calls  Vectors
>> Suspends Clocks   Vectors/Call
>>
>> acl-plugin-fa-cleaner-process   any wait 0
>>  0  15  4.97e30.00
>>
>> api-rx-from-ring active  0
>>  0  79  1.07e50.00
>>
>> cdp-process any wait 0
>>  0   3  2.65e30.00
>>
>> dpdk-processany wait 0
>>  0   2  6.77e70.00
>>
>> fib-walkany wait 0
>>  07474  6.74e20.00
>>
>> gmon-processtime wait0
>>  0   1  4.24e30.00
>>
>> ikev2-manager-process   any wait 0
>>  0   7  7.04e30.00
>>
>> ip6-icmp-neighbor-discovery-ev  any wait 0
>>  0   7  4.67e30.00
>>
>> lisp-retry-service  any wait 0
>>  0   3  7.21e30.00
>>
>> unix-epoll-input polling  21655148
>>  0   0  5.43e20.00
>>
>> vpe-oam-process any wait 0
>>  0   4  5.28e30.00
>>
>> ---
>>
>> Thread 1 vpp_wk_0 (lcore 2)
>>
>> Time 7.5, average vectors/node 255.99, last 128 main loops 14.00 per node
>> 256.00
>>
>>   vector rates in 4.1903e6, out 4.1903e6, drop 0.e0, punt 0.e0
>>
>>  Name State Calls  Vectors
>> Suspends Clocks   Vectors/Call
>>
>> FortyGigabitEthernet4/0/0-outp   active 123334
>> 31572992   0  6.58e0  255.99
>>
>> FortyGigabitEthernet4/0/0-tx active 123334
>> 31572992   0  7.20e1  255.99
>>
>> dpdk-input   polling124347
>> 31572992   0  5.49e1  253.91
>>
>> ip6-inputactive 123334
>> 31572992   0  2.28e1  255.99
>>
>> ip6-load-balance active 123334
>> 31572992   0  1.61e1  255.99
>>
>> ip6-lookup   active   

[vpp-dev] multi-core multi-threading performance

2017-11-05 Thread Pragash Vijayaragavan
Hi ,

We are measuring the performance of ip6 lookup in multi-core, multi-worker
environments, and we don't see performance scale well as we keep increasing
the number of cores/workers.

We are just changing the startup.conf file to create more workers, rx-queues,
socket-mem, etc. Should we do anything else to see an increase in performance?

Is there a limit on performance even if we increase the number of workers?

Is it dependent on the number of hardware NICs we have? We only have 1 NIC
to receive the traffic.
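
For reference, a minimal startup.conf sketch of the kind of change being
discussed in this thread (core numbers, PCI address, and queue counts are
illustrative assumptions, not taken from these mails): give each worker its
own RX queue on the NIC, since a single queue is polled by a single worker.

    cpu {
      main-core 1
      corelist-workers 2-5
    }
    dpdk {
      dev 0000:04:00.0 {
        num-rx-queues 4
        num-tx-queues 4
      }
    }

With RSS spreading packets across the 4 queues, the test traffic also has to
vary addresses/ports for all workers to see load.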


TIA,

Thanks,

Pragash Vijayaragavan
Grad Student at Rochester Institute of Technology
email : pxv3...@rit.edu
ph : 585 764 4662
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] ipv6 lookup rate

2017-09-25 Thread Pragash Vijayaragavan
Any input on this is appreciated.

Thanks,

Pragash Vijayaragavan
Grad Student at Rochester Institute of Technology
email : pxv3...@rit.edu
ph : 585 764 4662


On Thu, Sep 21, 2017 at 4:50 PM, Pragash Vijayaragavan <pxv3...@g.rit.edu>
wrote:

> Hi Guys,
>
> We are working on ipv6 lookup rate in VPP.
>
> We are sending traffic from TRex and receiving on the same machine, with
> VPP as the SUT.
>
> The problem is that the ip6 lookup rate is very low; we are only getting a
> lookup rate of 200 Kpps, and there are a lot of drops (ip4 has a rate of 5 Mpps).
>
> Is there any special configuration/requirement for making ip6 lookup in VPP
> faster? We tried increasing the heapsize and using multiple cores.
>
> Our topology:
>
>    [ TRex port 1 ] <---> [ VPP port 1 ]
>    [ TRex port 2 ] <---> [ VPP port 2 ]
>
> Note: we have default routes in VPP (so that there are no drops).
>
> Thanks,
>
> Pragash Vijayaragavan
> Grad Student at Rochester Institute of Technology
> email : pxv3...@rit.edu
> ph : 585 764 4662 <(585)%20764-4662>
>
>
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] ipv6 lookup rate

2017-09-21 Thread Pragash Vijayaragavan
Hi Guys,

We are working on ipv6 lookup rate in VPP.

We are sending traffic from TRex and receiving on the same machine, with
VPP as the SUT.

The problem is that the ip6 lookup rate is very low; we are only getting a
lookup rate of 200 Kpps, and there are a lot of drops (ip4 has a rate of 5 Mpps).

Is there any special configuration/requirement for making ip6 lookup in VPP
faster? We tried increasing the heapsize and using multiple cores.

Our topology:

   [ TRex port 1 ] <---> [ VPP port 1 ]
   [ TRex port 2 ] <---> [ VPP port 2 ]

Note: we have default routes in VPP (so that there are no drops).
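
A few stock VPP CLI checks that usually localize this kind of gap ("show run"
appears elsewhere in this thread; the others are standard VPP CLI, and the
trace count is arbitrary):

    vpp# show run                 (per-node vector rates and clocks)
    vpp# show errors              (per-node drop counters)
    vpp# trace add dpdk-input 10
    vpp# show trace               (follow a few packets through the graph)

If ip6-lookup dominates the clocks, the FIB lookup itself is typically the
bottleneck; if the drops show up at dpdk-input, the single RX queue is.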

Thanks,

Pragash Vijayaragavan
Grad Student at Rochester Institute of Technology
email : pxv3...@rit.edu
ph : 585 764 4662
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] ip6 routing issue

2017-09-03 Thread Pragash Vijayaragavan
Ignore this thread.

17.10-rc0 does not have this issue.

Thanks,

Pragash Vijayaragavan
Grad Student at Rochester Institute of Technology
email : pxv3...@rit.edu
ph : 585 764 4662


On Wed, Aug 30, 2017 at 6:53 PM, Pragash Vijayaragavan <pxv3...@g.rit.edu>
wrote:

> Hi,
>
> We are sending ip6 traffic as below, but traffic is not forwarded to the
> destination port.
>
>    [ Traffic gen (dst ip = port 2) ] ---> [ (port 1) VPP (port 2) ]
>
> We tried to ping port 2 from port 1: 98 percent packet loss.
>
> Can someone let us know whether we are missing any configuration?
> I can give more outputs if needed.
>
>
> vpp# sh interfaces addr
> FortyGigabitEthernet2/0/0 (up):
>   2002::1/48
>   fe80::3efd:feff:fea6:8ef0/128
> FortyGigabitEthernet4/0/0 (up):
>   9002::1/48
>   fe80::3efd:feff:fea6:8e78/128
> local0 (up):
>
> vpp# ping 9002::1 source FortyGigabitEthernet2/0/0
> 5 bytes from 9002::1: icmp_seq=0 ttl=64 time=20630217.5218 ms
> 5 bytes from 9002::1: icmp_seq=0 ttl=64 time=20631217.5077 ms
> 5 bytes from 9002::1: icmp_seq=0 ttl=64 time=20632217.5011 ms
> 5 bytes from 9002::1: icmp_seq=0 ttl=64 time=20633217.5006 ms
> 5 bytes from 9002::1: icmp_seq=0 ttl=64 time=20634220.4604 ms
> 5 bytes from 9002::1: icmp_seq=0 ttl=64 time=20634220.4636 ms
>
> Statistics: 300 sent, 6 received, 98% packet loss
>
>
>
> Thanks,
>
> Pragash Vijayaragavan
> Grad Student at Rochester Institute of Technology
> email : pxv3...@rit.edu
> ph : 585 764 4662 <(585)%20764-4662>
>
>
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] multiple ips for an interface

2017-08-31 Thread Pragash Vijayaragavan
Hi Ole,

Yes, I have my routes in "show ip6 fib", but still even ping did not work.

I tried to ping the IPs assigned to the local ports on VPP: it did not work.

I tried ip4; it still did not work.

I could not figure out what configuration I am missing here.



Outputs :

vpp# sh interfaces
  Name   Idx   State  Counter
 Count
FortyGigabitEthernet2/0/0 1 up
FortyGigabitEthernet4/0/0 2 up
local00down

vpp# sh ip fib
ipv4-VRF:0, fib_index 0, flow hash: src dst sport dport proto
0.0.0.0/0
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:0 buckets:1 uRPF:0 to:[0:0]]
[0] [@0]: dpo-drop ip4
0.0.0.0/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:1 buckets:1 uRPF:1 to:[0:0]]
[0] [@0]: dpo-drop ip4
10.1.1.0/24
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:8 buckets:1 uRPF:7 to:[0:0]]
[0] [@4]: ipv4-glean: FortyGigabitEthernet2/0/0
10.1.1.1/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:9 buckets:1 uRPF:8 to:[0:0]]
[0] [@2]: dpo-receive: 10.1.1.1 on FortyGigabitEthernet2/0/0
20.1.1.0/24
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:10 buckets:1 uRPF:9 to:[0:0]]
[0] [@4]: ipv4-glean: FortyGigabitEthernet4/0/0
20.1.1.1/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:11 buckets:1 uRPF:10 to:[0:0]]
[0] [@2]: dpo-receive: 20.1.1.1 on FortyGigabitEthernet4/0/0
224.0.0.0/4
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:3 buckets:1 uRPF:3 to:[0:0]]
[0] [@0]: dpo-drop ip4
240.0.0.0/4
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:2 buckets:1 uRPF:2 to:[0:0]]
[0] [@0]: dpo-drop ip4
255.255.255.255/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [index:4 buckets:1 uRPF:4 to:[0:0]]
[0] [@0]: dpo-drop ip4

vpp# ping 20.1.1.1

Statistics: 300 sent, 0 received, 100% packet loss
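
Two stock checks that often narrow this down (standard VPP CLI; outputs
omitted here). Note also that pinging an address owned by VPP itself from the
VPP CLI exercises the local receive path rather than the wire, so it is not a
clean forwarding test.

    vpp# show ip arp             (did ARP resolve the ip4 neighbors?)
    vpp# show ip6 neighbors      (same question for IPv6 ND)
    vpp# show errors             (which node is dropping)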




Thanks,

Pragash Vijayaragavan
Grad Student at Rochester Institute of Technology
email : pxv3...@rit.edu
ph : 585 764 4662


On Thu, Aug 31, 2017 at 7:44 AM, Ole Troan <otr...@employees.org> wrote:

> Pragash,
>
> > I am just trying to send and receive ip6 traffic through the vpp.
> > I configured the ports with ip6 addresses and send traffic as below. But
> it was not forwarded to the destination port 2.
> >
> >    [ Traffic gen (dst ip = port 2) ] ---> [ (port 1) VPP (port 2) ]
>
> Right, that has nothing to do with source address selection on VPP.
> You might want to verify that the FIB is set correctly. "show ip6 fib".
>
> Cheers,
> Ole
>
> >
> >
> >
> > Thanks,
> >
> > Pragash Vijayaragavan
> > Grad Student at Rochester Institute of Technology
> > email : pxv3...@rit.edu
> > ph : 585 764 4662
> >
> >
> > On Thu, Aug 31, 2017 at 5:53 AM, Ole Troan <otr...@employees.org> wrote:
> > Pragash,
> >
> > > Is there any cli command for source address selection in the vpp?
> >
> > No. Source address selection only comes into play for locally originated
> packets. VPP itself has only a few of those, and they are somewhat special
> case. Like ND for example.
> >
> > An application could use ip_address_dump/ip_address_details APIs and
> then follow the algorithm in RFC6724/RFC8028. But before we talk about
> solutions, what problem are you trying to solve?
> >
> > Best regards,
> > Ole
> >
> > >
> > > Thanks,
> > >
> > > Pragash Vijayaragavan
> > > Grad Student at Rochester Institute of Technology
> > > email : pxv3...@rit.edu
> > > ph : 585 764 4662
> > >
> > >
> > > On Wed, Aug 30, 2017 at 4:14 AM, Ole Troan <otr...@employees.org>
> wrote:
> > > Pragash,
> > >
> > > > I have a quick question, i am able to assign multiple ips for an
> interface in vpp. Is this correct behavior.
> > >
> > > Yes. Shouldn't it be?
> > > Note that VPP itself doesn't implement source address selection so the
> local application would have to deal with that.
> > >
> > > Best regards,
> > > Ole
> > >
> > >
> > > >
> > > > pragash@revvit:~/VPP$ sudo vppctl sh interfaces address
> > > > FortyGigabitEthernet2/0/0 (up):
> > > >   10.1.1.1/24
> > > >   10.2.1.1/24
> > > >   2001::1/48
> > > >   fe80::3efd:feff:fea6:8ef0/128
> > > >   2001::2/48
> > > >   2003::2/48
> > > >   2004::2/48
> > > > FortyGigabitEthernet4/0/0 (up):
> > > >   2002::1/48
> > > >   fe80::3efd:feff:fea6:8e78/128
> > > >   9001::2/48
> > > > local0 (dn):
> > > >
> > > >
> > > >
> > > > Thanks,
> > > >
> > > > Pragash Vijayaragavan
> > > > Grad Student at Rochester Institute of Technology
> > > > email : pxv3...@rit.edu
> > > > ph : 585 764 4662
> > > >
> > > > ___
> > > > vpp-dev mailing list
> > > > vpp-dev@lists.fd.io
> > > > https://lists.fd.io/mailman/listinfo/vpp-dev
> > >
> > >
> >
> >
>
>
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] multiple ips for an interface

2017-08-30 Thread Pragash Vijayaragavan
Hi Ole,

Is there any CLI command for source address selection in VPP?

Thanks,

Pragash Vijayaragavan
Grad Student at Rochester Institute of Technology
email : pxv3...@rit.edu
ph : 585 764 4662


On Wed, Aug 30, 2017 at 4:14 AM, Ole Troan <otr...@employees.org> wrote:

> Pragash,
>
> > I have a quick question, i am able to assign multiple ips for an
> interface in vpp. Is this correct behavior.
>
> Yes. Shouldn't it be?
> Note that VPP itself doesn't implement source address selection so the
> local application would have to deal with that.
>
> Best regards,
> Ole
>
>
> >
> > pragash@revvit:~/VPP$ sudo vppctl sh interfaces address
> > FortyGigabitEthernet2/0/0 (up):
> >   10.1.1.1/24
> >   10.2.1.1/24
> >   2001::1/48
> >   fe80::3efd:feff:fea6:8ef0/128
> >   2001::2/48
> >   2003::2/48
> >   2004::2/48
> > FortyGigabitEthernet4/0/0 (up):
> >   2002::1/48
> >   fe80::3efd:feff:fea6:8e78/128
> >   9001::2/48
> > local0 (dn):
> >
> >
> >
> > Thanks,
> >
> > Pragash Vijayaragavan
> > Grad Student at Rochester Institute of Technology
> > email : pxv3...@rit.edu
> > ph : 585 764 4662
> >
> > ___
> > vpp-dev mailing list
> > vpp-dev@lists.fd.io
> > https://lists.fd.io/mailman/listinfo/vpp-dev
>
>
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] ip6 routing issue

2017-08-30 Thread Pragash Vijayaragavan
Hi,

We are sending ip6 traffic as below, but traffic is not forwarded to the
destination port.

   [ Traffic gen (dst ip = port 2) ] ---> [ (port 1) VPP (port 2) ]

We tried to ping port 2 from port 1: 98 percent packet loss.

Can someone let us know whether we are missing any configuration?
I can give more outputs if needed.


vpp# sh interfaces addr
FortyGigabitEthernet2/0/0 (up):
  2002::1/48
  fe80::3efd:feff:fea6:8ef0/128
FortyGigabitEthernet4/0/0 (up):
  9002::1/48
  fe80::3efd:feff:fea6:8e78/128
local0 (up):

vpp# ping 9002::1 source FortyGigabitEthernet2/0/0
5 bytes from 9002::1: icmp_seq=0 ttl=64 time=20630217.5218 ms
5 bytes from 9002::1: icmp_seq=0 ttl=64 time=20631217.5077 ms
5 bytes from 9002::1: icmp_seq=0 ttl=64 time=20632217.5011 ms
5 bytes from 9002::1: icmp_seq=0 ttl=64 time=20633217.5006 ms
5 bytes from 9002::1: icmp_seq=0 ttl=64 time=20634220.4604 ms
5 bytes from 9002::1: icmp_seq=0 ttl=64 time=20634220.4636 ms

Statistics: 300 sent, 6 received, 98% packet loss



Thanks,

Pragash Vijayaragavan
Grad Student at Rochester Institute of Technology
email : pxv3...@rit.edu
ph : 585 764 4662
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] multiple ips for an interface

2017-08-29 Thread Pragash Vijayaragavan
Hi,

I have a quick question: I am able to assign multiple IPs to an interface
in VPP. Is this the correct behavior?

pragash@revvit:~/VPP$ sudo vppctl sh interfaces address
FortyGigabitEthernet2/0/0 (up):
  10.1.1.1/24
  10.2.1.1/24
  2001::1/48
  fe80::3efd:feff:fea6:8ef0/128
  2001::2/48
  2003::2/48
  2004::2/48
FortyGigabitEthernet4/0/0 (up):
  2002::1/48
  fe80::3efd:feff:fea6:8e78/128
  9001::2/48
local0 (dn):



Thanks,

Pragash Vijayaragavan
Grad Student at Rochester Institute of Technology
email : pxv3...@rit.edu
ph : 585 764 4662
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] VPP Performance drop from 17.04 to 17.07

2017-08-15 Thread Pragash Vijayaragavan
Hi,

We recently faced a similar issue: we could not insert more than about 500k
routes into the ip6 FIB table.

https://wiki.fd.io/view/VPP/Command-line_Arguments#.22heapsize.22_parameter

We referred to the above link and made the following changes in the
/etc/vpp/startup.conf file:

ip6 {
  heap-size 4G
}

If you trace back, this parameter makes a call to

/*
 * The size of the hash table
 */
#define L2FIB_NUM_BUCKETS (64 * 1024)
#define L2FIB_MEMORY_SIZE (256 << 20)

and sets the memory size.



HTH





Thanks,

Pragash Vijayaragavan
Grad Student at Rochester Institute of Technology
email : pxv3...@rit.edu
ph : 585 764 4662


On Tue, Aug 15, 2017 at 8:05 AM, John Lo (loj) <l...@cisco.com> wrote:

> Hi Billy,
>
>
>
> The output of “show l2fib” is showing how many MAC entries exist in the
> L2FIB and is not relevant to the size of L2FIB table. The L2FIB table size
> is not configurable. It is a bi-hash table with size set by the following
> #def’s in l2_fib.h and has not changed for quite a while, definitely not
> between 1704, 1707 and current master:
>
> /*
>  * The size of the hash table
>  */
> #define L2FIB_NUM_BUCKETS (64 * 1024)
> #define L2FIB_MEMORY_SIZE (256 << 20)
>
>
>
> It is interesting to note that at the end of the test run, there is
> different number of MAC entries in the L2FIB. I think this may have to do
> with a change in 1707 where an interface up/down would cause MACs learned
> on that interface to be flushed. So when the interface comes back up, the
> MACs need to be learned again.  With 1704, the stale learned MACs from an
> interface will remain in L2FIB even if the interface is down or deleted,
> unless aging is enabled to remove them at the BD aging interval.
>
>
>
> Another improvement added in 1707 was a check in the l2-fwd node so when a
> MAC entry is found in L2FIB, its sequence number is checked to make sure it
> is not stale and subject to flushing (such as MAC learned when this
> interface sw_if_index was up but went down, or if this sw_if_index was
> used, deleted and reused). If the MAC is stale, the packet will be flooded
> instead of making use of the stale MAC entry to forward it.
>
>
>
> I wonder if the performance test script does create/delete
> interfaces or set interfaces to admin up/down states, causing stale MACs to be
> flushed in 1707?  With 1704, it may be using stale MAC entries to forward
> packets rather than flooding to learn the MACs again. This can explain the
> l2-flood and l2-input count ratio difference between 1704 and 1707.
>
>
>
> When measuring l2-bridge forwarding performance, are you set up to measure
> the forwarding rate in the steady forwarding state?  If all the 10K or 1M
> flows are started at the same time for a particular test, there will be an
> initial low-PPS throughput period when all packets need to be flooded and
> MACs learned before it settles down to a higher steady-state PPS forwarding
> rate. If there is an interface flap or other event that causes a MAC flush, the
> MACs will need to be learned again. I wonder if the forwarding performance
> for 10K or 1M flows is measured at the steady forwarding state or not.
>
>
>
> Above are a few generic comments I can think of, without knowing much
> details about how the tests are setup and measured. Hope it can help to
> explain the different behavior observed between 1704 and 1707.
>
>
>
> Regards,
>
> John
>
>
>
> *From:* vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] *On
> Behalf Of *Billy McFall
> *Sent:* Monday, August 14, 2017 6:40 PM
> *To:* vpp-dev@lists.fd.io
> *Subject:* [vpp-dev] VPP Performance drop from 17.04 to 17.07
>
>
>
> In the last VPP call, I reported some internal Red Hat performance testing
> was showing a significant drop in performance between releases 17.04 to
> 17.07. This with l2-bridge testing - PVP - 0.002% Drop Rate:
>
>VPP-17.04: 256 Flow 7.8 MP/s 10k Flow 7.3 MP/s 1m Flow 5.2 MP/s
>
>VPP-17.07: 256 Flow 7.7 MP/s 10k Flow 2.7 MP/s 1m Flow 1.8 MP/s
>
>
>
> The performance team re-ran some of the tests for me with some additional
> data collected. Looks like the size of the L2 FIB table was reduced in
> 17.07. Below are the number of entries in the MAC Table after the tests are
> run:
>
>17.04:
>
>  show l2fib
>
>  408 l2fib entries
>
>17.07:
>
>  show l2fib
>
>  1067053 l2fib entries with 1048576 learned (or non-static) entries
>
>
>
> This caused more packets to be flooded (see output of 'show node counters'
> below). I looked but couldn't find anything. Is the size of the L2 FIB
> table configurable?
>
>
>
> Thanks,
>
> Billy McFal

Re: [vpp-dev] ip6 route add bug

2017-08-11 Thread Pragash Vijayaragavan
Hi Neale,

It's done.

I checked [add|del] [count <n>] as well.
I believe it works fine now.

Please find the output below.

vpp# sh ip6 fib
ipv6-VRF:0, fib_index 0, flow hash:
::/0
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:5 buckets:1 uRPF:5 to:[0:0]]
[0] [@0]: dpo-drop ip6
fe80::/10
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:6 buckets:1 uRPF:6 to:[0:0]]
[0] [@2]: dpo-receive

vpp# ip route add addd::12/128 via 9001::1

vpp# sh ip6 fib
ipv6-VRF:0, fib_index 0, flow hash:
::/0
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:5 buckets:1 uRPF:5 to:[0:0]]
[0] [@0]: dpo-drop ip6
9001::1/128
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:8 buckets:1 uRPF:5 to:[0:0]]
[0] [@0]: dpo-drop ip6
addd::12/128
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:9 buckets:1 uRPF:7 to:[0:0]]
  load-balance-map: index:0 buckets:1
 index:0
   map:0
[0] [@6]: dpo-load-balance: [index:8 buckets:1 uRPF:5 to:[0:0]]
  [0] [@0]: dpo-drop ip6
fe80::/10
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:6 buckets:1 uRPF:6 to:[0:0]]
[0] [@2]: dpo-receive

vpp# ip route add addd::11/128 via 9001::1

vpp# sh ip6 fib
ipv6-VRF:0, fib_index 0, flow hash:
::/0
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:5 buckets:1 uRPF:5 to:[0:0]]
[0] [@0]: dpo-drop ip6
9001::1/128
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:8 buckets:1 uRPF:5 to:[0:0]]
[0] [@0]: dpo-drop ip6
addd::11/128
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:10 buckets:1 uRPF:7 to:[0:0]]
  load-balance-map: index:0 buckets:1
 index:0
   map:0
[0] [@6]: dpo-load-balance: [index:8 buckets:1 uRPF:5 to:[0:0]]
  [0] [@0]: dpo-drop ip6
addd::12/128
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:9 buckets:1 uRPF:7 to:[0:0]]
  load-balance-map: index:0 buckets:1
 index:0
   map:0
[0] [@6]: dpo-load-balance: [index:8 buckets:1 uRPF:5 to:[0:0]]
  [0] [@0]: dpo-drop ip6
fe80::/10
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:6 buckets:1 uRPF:6 to:[0:0]]
[0] [@2]: dpo-receive

vpp# ip route del addd::11/128 via 9001::1

vpp# sh ip6 fib
ipv6-VRF:0, fib_index 0, flow hash:
::/0
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:5 buckets:1 uRPF:5 to:[0:0]]
[0] [@0]: dpo-drop ip6
9001::1/128
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:8 buckets:1 uRPF:5 to:[0:0]]
[0] [@0]: dpo-drop ip6
addd::12/128
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:9 buckets:1 uRPF:7 to:[0:0]]
  load-balance-map: index:0 buckets:1
 index:0
   map:0
[0] [@6]: dpo-load-balance: [index:8 buckets:1 uRPF:5 to:[0:0]]
  [0] [@0]: dpo-drop ip6
fe80::/10
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:6 buckets:1 uRPF:6 to:[0:0]]
[0] [@2]: dpo-receive

vpp# ip route del addd::12/128

vpp# sh ip6 fib
ipv6-VRF:0, fib_index 0, flow hash:
::/0
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:5 buckets:1 uRPF:5 to:[0:0]]
[0] [@0]: dpo-drop ip6
fe80::/10
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:6 buckets:1 uRPF:6 to:[0:0]]
[0] [@2]: dpo-receive

vpp# ip route add count 2 addd::12/128 via 9001::1
5.680738e4 routes/sec

vpp# sh ip6 fib
ipv6-VRF:0, fib_index 0, flow hash:
::/0
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:5 buckets:1 uRPF:5 to:[0:0]]
[0] [@0]: dpo-drop ip6
9001::1/128
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:9 buckets:1 uRPF:5 to:[0:0]]
[0] [@0]: dpo-drop ip6
addd::12/128
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:8 buckets:1 uRPF:7 to:[0:0]]
  load-balance-map: index:0 buckets:1
 index:0
   map:0
[0] [@6]: dpo-load-balance: [index:9 buckets:1 uRPF:5 to:[0:0]]
  [0] [@0]: dpo-drop ip6
addd:0:0:1::12/128
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:10 buckets:1 uRPF:7 to:[0:0]]
  load-balance-map: index:0 buckets:1
 index:0
   map:0
[0] [@6]: dpo-load-balance: [index:9 buckets:1 uRPF:5 to:[0:0]]
  [0] [@0]: dpo-drop ip6
fe80::/10
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:6 buckets:1 uRPF:6 to:[0:0]]
[0] [@2]: dpo-receive

vpp# ip route del count 2 addd::12/128 via 9001::1
1.061138e5 routes/sec

vpp# sh ip6 fib
ipv6-VRF:0, fib_index 0, flow hash:
::/0
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:5 buckets:1 uRPF:5 to:[0:0]]
[0] [@0]: dpo-drop ip6
fe80::/10
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:6 buckets:1 uRPF:6 to:[0:0]]
[0] [@2]: dpo-receive





Thanks,

Pragash Vijayaragavan
Grad Student at Rochester Institute of Technology
email : pxv3...@rit.edu
ph : 585 764 4662


On Fri, Aug 11, 2017 at 1:48 PM, Neale Ranns (nranns) <nra...@cisco.com>
wrote:

> Hi Pragash,
>
>
>
> It’s in vnet_ip_route_cmd() from ip/lookup.c.
>
>
>
> It is a loop over a series of text matching rules. After each rule
> (un

Re: [vpp-dev] ip6 route add bug

2017-08-11 Thread Pragash Vijayaragavan
Hi Neale,

I took a look at the code.

What I believe happens is that parsing goes from left to right. For, say,
"ip route add add::123/128 via 9001::3", each time the parser encounters an
"add" it sets the is_add = 1 flag, so the flag is set two times in this case
and the IP is not parsed properly.

We could simply check whether the is_add flag is already set, preventing it
from being set twice.
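
A minimal standalone illustration of that ordering problem (plain C with
made-up function names, not VPP's actual unformat-based parser): when the
literal keyword "add" is tested before the prefix is parsed, an address such
as "add::123" loses its leading "add"; testing the keyword only as a whole
token (or, as Neale suggested, parsing the address first) avoids it.

    #include <stdio.h>
    #include <string.h>

    /* Buggy order: the keyword test also matches the start of "add::123". */
    static const char *
    keyword_first (const char *s, int *is_add)
    {
      if (strncmp (s, "add", 3) == 0)
        {
          *is_add = 1;           /* fires even for the address itself */
          return s + 3;
        }
      return s;
    }

    /* Fixed: only treat "add" as a keyword when it is a whole token. */
    static const char *
    keyword_as_token (const char *s, int *is_add)
    {
      if (strncmp (s, "add", 3) == 0 && (s[3] == ' ' || s[3] == '\0'))
        {
          *is_add = 1;
          return s + 3 + (s[3] == ' ');
        }
      return s;
    }

    int
    main (void)
    {
      int a = 0, b = 0;
      printf ("buggy: '%s'\n", keyword_first ("add::123/128", &a));
      printf ("fixed: '%s'\n", keyword_as_token ("add::123/128", &b));
      return 0;
    }

The buggy variant prints '::123/128' (the address has been eaten); the fixed
one leaves the address intact.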

I tried to find where I should make this change in the code, but could not
figure out where the CLI input is parsed and handled.

I tried to modify vat/api_format.c, which has the "api_ip_add_del_route
(vat_main_t * vam)" function, but it didn't work.

Could you please give me a hint about how this function is generated?

I checked all the format files and api files. I could not find any file
where the input string is parsed.

Thanks,

Pragash Vijayaragavan
Grad Student at Rochester Institute of Technology
email : pxv3...@rit.edu
ph : 585 764 4662


On Thu, Aug 10, 2017 at 9:56 AM, Neale Ranns (nranns) <nra...@cisco.com>
wrote:

> Hi Pragash,
>
>
>
> Yes that’s a bug.
>
> Could you submit a patch for it – we need to flip the order the ‘add’
> string is parsed from the options so that it comes after parsing the IPv6
> address.
>
>
>
> Thanks,
>
> neale
>
>
>
> *From: *Pragash Vijayaragavan <pxv3...@rit.edu>
> *Reply-To: *"pxv3...@rit.edu" <pxv3...@rit.edu>
> *Date: *Thursday, 10 August 2017 at 14:47
> *To: *"vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
> *Cc: *"Neale Ranns (nranns)" <nra...@cisco.com>, "John Marshall (jwm)" <
> j...@cisco.com>, Minseok Kwon <mxk...@rit.edu>
> *Subject: *ip6 route add bug
>
>
>
> Hi,
>
>
>
> When I add the following ip6 route, whose address starts with "add", the "add" is
> ignored and the rest of the IP is added.
>
>
>
> Is this a bug?
>
>
>
> You can check the outputs below.
>
>
>
> This is in 17.07-rc0.
>
>
>
>
>
> *vpp# ip route add add:9538:44f8::/45 via 9000::1*
>
>
>
> vpp# sh ip6 fib
>
> ipv6-VRF:0, fib_index 0, flow hash:
>
> ::/0
>
>   unicast-ip6-chain
>
>   [@0]: dpo-load-balance: [index:5 buckets:1 uRPF:5 to:[0:0]]
>
> [0] [@0]: dpo-drop ip6
>
> 9000::1/128
>
>   unicast-ip6-chain
>
>   [@0]: dpo-load-balance: [index:8 buckets:1 uRPF:5 to:[0:0]]
>
> [0] [@0]: dpo-drop ip6
>
> *9538:44f8::/45*
>
>   unicast-ip6-chain
>
>   [@0]: dpo-load-balance: [index:9 buckets:1 uRPF:7 to:[0:0]]
>
>   load-balance-map: index:0 buckets:1
>
>  index:0
>
>map:0
>
> [0] [@6]: dpo-load-balance: [index:8 buckets:1 uRPF:5 to:[0:0]]
>
>   [0] [@0]: dpo-drop ip6
>
> fe80::/10
>
>   unicast-ip6-chain
>
>   [@0]: dpo-load-balance: [index:6 buckets:1 uRPF:6 to:[0:0]]
>
> [0] [@2]: dpo-receive
>
>
>
>
>
> Is this fixed in 17.10?
>
>
>
>
>
>
> Thanks,
>
>
>
> Pragash Vijayaragavan
>
> Grad Student at Rochester Institute of Technology
>
> email : pxv3...@rit.edu
>
> ph : 585 764 4662 <(585)%20764-4662>
>
>
>
>
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] ip6 route add bug

2017-08-10 Thread Pragash Vijayaragavan
Hi,

When I add the following ip6 route, whose address starts with "add", the
"add" is ignored and the rest of the IP is added.

Is this a bug?

You can check the outputs below.

This is in 17.07-rc0.


*vpp# ip route add add:9538:44f8::/45 via 9000::1*

vpp# sh ip6 fib
ipv6-VRF:0, fib_index 0, flow hash:
::/0
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:5 buckets:1 uRPF:5 to:[0:0]]
[0] [@0]: dpo-drop ip6
9000::1/128
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:8 buckets:1 uRPF:5 to:[0:0]]
[0] [@0]: dpo-drop ip6
*9538:44f8::/45*
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:9 buckets:1 uRPF:7 to:[0:0]]
  load-balance-map: index:0 buckets:1
 index:0
   map:0
[0] [@6]: dpo-load-balance: [index:8 buckets:1 uRPF:5 to:[0:0]]
  [0] [@0]: dpo-drop ip6
fe80::/10
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:6 buckets:1 uRPF:6 to:[0:0]]
[0] [@2]: dpo-receive


Is this fixed in 17.10?



Thanks,

Pragash Vijayaragavan
Grad Student at Rochester Institute of Technology
email : pxv3...@rit.edu
ph : 585 764 4662
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] vpp ip6 fib

2017-06-13 Thread Pragash Vijayaragavan
Thanks for the help Neale, works fine now.

Thanks,

Pragash Vijayaragavan
Grad Student at Rochester Institute of Technology
email : pxv3...@rit.edu
ph : 585 764 4662


On Tue, Jun 13, 2017 at 3:40 AM, Neale Ranns (nranns) <nra...@cisco.com>
wrote:

> Hi Pragash,
>
>
>
> To update table[FWD] we use the ip6_fib_table_fwding_* functions.
>
> To update table[NON_FWD] we use the ip6_fib_table_entry_* functions.
>
>
>
> In both cases the key is the prefix (a/x in your example). In table[FWD]
> the value is the index of the load-balance object, in table[NON-FWD] the
> value is the index of the fib_entry_t.
>
>
>
> If you add this route;
>
>   Ip route add a/x via b.
>
>
>
> You’ll first see additions to both tables for b/128 and then additions for
> a/x.
>
>
>
> Hth,
>
> neale
>
>
>
> *From: *Pragash Vijayaragavan <pxv3...@rit.edu>
> *Reply-To: *"pxv3...@rit.edu" <pxv3...@rit.edu>
> *Date: *Tuesday, 13 June 2017 at 07:26
> *To: *"Neale Ranns (nranns)" <nra...@cisco.com>, "vpp-dev@lists.fd.io" <
> vpp-dev@lists.fd.io>
> *Cc: *Minseok Kwon <mxk...@rit.edu>, Shailesh Vajpayee <srv6...@rit.edu>
> *Subject: *vpp ip6 fib
>
>
>
> Hi,
>
>
>
> Can someone please help me on below,
>
>
>
> When I insert a route using "ip route add <prefix> via <next-hop>",
>
>
>
> how does the FIB insert this into its tables [FWD, NON_FWD]? Does it call
> different functions for forwarding and non-forwarding ip6 addresses?
>
>
>
> I have inserted a cuckoo_add code on "ip6_fib_table_fwding_dpo_update"
> function,
>
> but only the  ip is getting inserted into my cuckoo filter always.
>
>
>
> Which function is called to insert a/x into the FIB?
>
>
>
> I also tried a cuckoo_add inside "ip6_fib_table_entry_insert",
> but this didn't work either.
>
>
>
> I came across this when I made a CLI to display my cuckoo filter, and I'm
> pretty sure there is nothing wrong with the CLI.
> nothing wrong with the cli.
>
>
>
>
> Thanks,
>
>
>
> Pragash Vijayaragavan
>
> Grad Student at Rochester Institute of Technology
>
> email : pxv3...@rit.edu
>
> ph : 585 764 4662 <(585)%20764-4662>
>
>
>
>
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] DPDK forwarding through line

2017-05-11 Thread Pragash Vijayaragavan
Hi,

We are trying to send traffic over the line from one DUT to another. We
created the traffic using MoonGen, but we are not aware of how to forward
the traffic onto the line using DPDK, since our destination IPs can't change.

Is there any way we can send traffic over the line from one DUT to another?

Can someone point us to any documentation that would be helpful for this?

Topology:

   [ moongen ] ---> [ DUT 1 (DPDK eth1) ] ---> [ DUT 2 (DPDK eth1) ]


Thanks,

Pragash Vijayaragavan
Grad Student at Rochester Institute of Technology
email : pxv3...@rit.edu
ph : 585 764 4662
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Building router plugin

2017-03-24 Thread Pragash Vijayaragavan
Hi,

Dave explained how to include a plugin in the mail below; hope it helps.


Thanks,



Pragash Vijayaragavan

Grad Student at Rochester Institute of Technology

email : pxv3...@rit.edu

ph : 585 764 4662 <(585)%20764-4662>

-- Forwarded message --
From: Dave Barach (dbarach) <dbar...@cisco.com>
Date: Mon, Mar 6, 2017 at 7:49 AM
Subject: RE: configure .ac file missing when creating plugin
To: "pxv3...@rit.edu" <pxv3...@rit.edu>
Cc: Arjun Dhuliya <amd5...@rit.edu>, Shailesh Vajpayee <srv6...@rit.edu>,
"Neale Ranns (nranns)" <nra...@cisco.com>, "Damjan Marion (damarion)" <
damar...@cisco.com>, "John Marshall (jwm)" <j...@cisco.com>, Minseok Kwon <
mxk...@rit.edu>


The plugin “myplug.so” must be installed e.g. in /usr/lib/vpp_plugins, or
you’ll need to set the plugin path via the command line.



If you enable your plugin in src/configure.ac and e.g. “make PLATFORM=vpp
TAG=vpp_debug install-deb”, the vpp-plugin Debian package [or equivalent
RPM] will be happy to install the goods for you.



Thanks… Dave



*From:* Pragash Vijayaragavan [mailto:pxv3...@rit.edu]
*Sent:* Sunday, March 5, 2017 9:43 PM
*To:* Dave Barach (dbarach) <dbar...@cisco.com>
*Cc:* Arjun Dhuliya <amd5...@rit.edu>; Shailesh Vajpayee <srv6...@rit.edu>;
Neale Ranns (nranns) <nra...@cisco.com>; Damjan Marion (damarion) <
damar...@cisco.com>; John Marshall (jwm) <j...@cisco.com>; Minseok Kwon <
mxk...@rit.edu>
*Subject:* Re: configure .ac file missing when creating plugin



Hi Dave,



Thanks for your mail.



I did the steps you mentioned, but I am still not able to see my plugin
running in "vppctl sh run".



After the above steps, i did a



autoreconf -i -f



../configure



sudo restart vpp



Am I missing anything here? Should I do a make to build VPP again with the
plugin?





Thanks,



Pragash Vijayaragavan

Grad Student at Rochester Institute of Technology

email : pxv3...@rit.edu

ph : 585 764 4662 <(585)%20764-4662>





On Thu, Mar 2, 2017 at 12:46 PM, Dave Barach (dbarach) <dbar...@cisco.com>
wrote:

It’s “missing” because it’s not needed...



Add a line to .../src/configure.ac of the form:



PLUGIN_ENABLED(your-plugin)



Add this to .../src/plugins/Makefile.am:



if ENABLE_YOUR_PLUGIN

include your_plugin.am

endif



The makefile.am fragment in “your_plugin.am” should have been generated by
the emacs lisp code.



HTH… Dave



*From:* Pragash Vijayaragavan [mailto:pxv3...@rit.edu]
*Sent:* Thursday, March 2, 2017 12:38 PM
*To:* Dave Barach (dbarach) <dbar...@cisco.com>; Arjun Dhuliya <
amd5...@rit.edu>; Shailesh Vajpayee <srv6...@rit.edu>; Neale Ranns (nranns)
<nra...@cisco.com>; Damjan Marion (damarion) <damar...@cisco.com>
*Cc:* John Marshall (jwm) <j...@cisco.com>; Minseok Kwon <mxk...@rit.edu>
*Subject:* configure .ac file missing when creating plugin



Hi,



We tried to create a plugin using the emacs skeleton files, but when loading
and creating the libraries the configure.ac file was missing, so we could not
proceed further.



What we are trying to do is create a plugin and put our cuckoo filter code
in it, so that ip6_fib will call the functions in the plugin and do the
appropriate operations. Should we create a plugin to add our functions, or
can we add them in the ip6 files themselves?

Is this the right way to do this, or is there another way?




Thanks,



Pragash Vijayaragavan

Grad Student at Rochester Institute of Technology

email : pxv3...@rit.edu

ph : 585 764 4662 <(585)%20764-4662>






Thanks,

Pragash Vijayaragavan
Grad Student at Rochester Institute of Technology
email : pxv3...@rit.edu
ph : 585 764 4662


On Fri, Mar 24, 2017 at 11:14 AM, Łukasz Chrustek <luk...@chrustek.net>
wrote:

> Hi Dave,
>
> but this plugin isn't part of the core vpp code tree; it is from vppsb.
>
> Regards
> Luk
> > Dear Luk,
>
> > The "vpp-install," "install-packags," "install-deb" etc. targets
> > will build the set of plugins configured in src/configure.ac:
>
> > For example:
>
> > $ cd build-root
> > $ make PLATFORM=vpp TAG=vpp vpp-install
>
> > HTH. Dave
>
> > -Original Message-
> > From: vpp-dev-boun...@lists.fd.io
> > [mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of Lukasz Chrustek
> > Sent: Friday, March 24, 2017 4:55 AM
> > To: vpp-dev@lists.fd.io
> > Subject: [vpp-dev] Building router plugin
>
> > Hi,
>
> > Can you advise what the proper way to build the router plugin is now?
>
> > I'm trying to build this plugin as stated in README, but I get:
>
> > # ./bootstrap.sh
>
> > Saving PATH settings in /git/vpp/build-root/path_setup
> > Source this file lat

[vpp-dev] Threads in vpp

2017-02-21 Thread Pragash Vijayaragavan
Hi,

I am having trouble identifying the code responsible for creating the
threads.


So basically I compared the ip4_lookup_inline function in
/vnet/ip/ip4_forward.c with the ip6_lookup_inline function in
/vnet/ip/ip6_forward.c to understand where our filter has to be plugged in
for ip6 lookup.

But as we discussed, we need to create 2 threads for our filter: one for
filling up the filter from the FIB, and the other for filtering the IPs
during lookup.

"ip6_lookup_inline" performs the lookup part; what I really want to know is
which part of the code creates the thread that fills the filter.

So I took a deeper look into ip4_lookup_inline and ip4_mtrie; the call chain
is [ ply_create ] <-- [ ip4_mtrie_init ] <-- [ ip4_create_fib_with_table_id ].

Code:

static ip4_fib_mtrie_leaf_t
ply_create (ip4_fib_mtrie_t * m,
            ip4_fib_mtrie_leaf_t init_leaf, uword prefix_len)
{
  ip4_fib_mtrie_ply_t *p;

  /* Get cache aligned ply. */
  pool_get_aligned (m->ply_pool, p, sizeof (p[0]));

  ply_init (p, init_leaf, prefix_len);
  return ip4_fib_mtrie_leaf_set_next_ply_index (p - m->ply_pool);
}


These functions are used to create the trie, but from what I am seeing there
are no threads involved; please correct me if my understanding is wrong.

Does the FIB itself run in a multi-threaded way, so that we don't have to
explicitly create new threads for the functions it calls? Or is there a
plugin module that takes care of creating threads whenever new features are
plugged in? Say, for example, we create our "cuckoo_filter_create" function
in "cuckoo.c" and just call it with the prefix entries from the FIB table,
build the filter, and return a pointer to it: will this suffice in terms of
threading?

I hope I am clear, and sorry for the long mail.


Thanks,

Pragash Vijayaragavan (pxv3...@rit.edu)
ph : 585 764 4662
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] vpp query

2017-02-19 Thread Pragash Vijayaragavan
Hi,

Thanks for the help; we will look into the suggestions and get back if we
have any concerns.

Thanks
Pragash Vijayragavan
pxv3...@rit.edu
585 764 4662

On Sat, Feb 18, 2017 at 4:12 PM, Dave Barach (dbarach) <dbar...@cisco.com>
wrote:

> A bit more inline, see drb>>>
>
> Thanks… Dave
>
> -Original Message-
> From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On
> Behalf Of Neale Ranns (nranns)
> Sent: Saturday, February 18, 2017 3:52 PM
> To: pxv3...@rit.edu; vpp-dev@lists.fd.io
> Cc: Shailesh Vajpayee <srv6...@rit.edu>; Arjun Dhuliya <amd5...@rit.edu>;
> Minseok Kwon <mxk...@rit.edu>
> Subject: Re: [vpp-dev] vpp query
>
>
> Hi Pragash,
>
> Some answers inline @[NR]
>
>
> From: <vpp-dev-boun...@lists.fd.io> on behalf of Pragash Vijayaragavan <
> pxv3...@rit.edu>
> Reply-To: "pxv3...@rit.edu" <pxv3...@rit.edu>
> Date: Saturday, 18 February 2017 at 03:56
> To: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
> Cc: Shailesh Vajpayee <srv6...@rit.edu>, Arjun Dhuliya <amd5...@rit.edu>,
> Minseok Kwon <mxk...@rit.edu>
> Subject: [vpp-dev] vpp query
>
> Hi,
>
>
> We are working on a bloom Filter algorithm for ipv6 lookup in vpp and we
> have a few queries.
> It would be great if someone could help us with the following queries.
>
> 1. from the code, ip6_forward.c file processes 2 packets at a time from
> the frame.
> how are threads used in this. Is there a thread for each frame to be
> processed or a thread per packet.
>
> [NR] there is one thread per frame. If multiple threads are used, then
> each thread will invoke ip6_lookup_inline with a different frame.
> Processing two packets at once in the same thread is to improve d-cache
> performance.
>
> drb>>> By default, the dual-loop pattern prefetches 2 packets ahead. It
> sometimes helps a bit to change the prefetch stride, e.g. to 3 or 4 ahead.
> Vpp runs with a measured instructions-per-clock > 2. Until the compiler
> runs out of registers, processing multiple packets at once helps the
> hardware exploit fine-grained parallelism. Simple story: packet i uses a
> set of registers to do . Packet i+1 uses a disjoint set of
> registers to do .
>
> Selected routines benefit from quad-looping, processing 4 packets at a
> time. Take a look at dpdk_device_input(...) in .../src/vnet/devices/dpdk/
> node.c.
>
> 2. A problem which we came across was synchronization for the lookup and
> filling of the filter, Dave suggested
> we use 2 filters and swap between them to address this issue
> Is there any limitation on the amount of memory to be used for the filter.
>
> [NR] the only limit is the heap size. This is (1<<30) bytes by default.
> You can change it;
>  https://wiki.fd.io/view/VPP/Command-line_Arguments#.22heapsize.22_parameter
>
> drb>>> Note that if you need a heap > 4gb, special arrangements are
> required. Ping me if you think you need to go there.
>
> 3. Also we are required to handle 2 threads for our filter, one is to fill
> up the filter using the fib entries
> and the other is for lookup
> - the ip6_forward.c -> ip6_lookup_inline function does the lookup part.
> From our understanding this function
> is processing the packets and checking if the destination ip is in the
> fib. But we are required to fill our filter
> with all fib entries dynamically as well, we understand this is also
> similar to the mtrie code, but we are not able to
> get how it works and threading is done here.
>
> [NR]
> there’s a brief into the thread models here:
>   https://wiki.fd.io/view/VPP/Using_VPP_In_A_Multi-thread_Model
>
> In multi-thread mode worker threads invoke ip6_lookup_inline; the main
> thread does the filling. In single thread mode the main thread does both.
>
> The IPv6 FIB is composed of 2 tables (where a table in this context means
> a DB keyed by prefix); forwarding and non-forwarding.
> The non-forwarding table contains ALL the prefixes that VPP is aware of –
> a lookup in this table will result in a fib_entry_t.
> The forwarding table contains the sub-set of prefixes that can be used for
> forwarding – a lookup in this table will result in a load_balance_t.
> I suspect you want to use bloom filters for the forwarding table.
> Additions and removals from this table are via ip6_fib_table_fwding_dpo_update
> and ip6_fib_table_fwding_dpo_remove respectively.
>
> HTH,
> neale
>
> --
> Thanks,
>
> Pragash Vijayaragavan (pxv3...@rit.edu)
> ph : 585 764 4662
>
>
>
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev
>



-- 
Thanks,

Pragash Vijayaragavan (pxv3...@rit.edu)
ph : 585 764 4662
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] vpp query

2017-02-18 Thread Pragash Vijayaragavan
Hi,


We are working on a Bloom filter algorithm for ipv6 lookup in VPP, and we
have a few queries. It would be great if someone could help us with the
following.

1. From the code, ip6_forward.c processes 2 packets at a time from the
frame. How are threads used in this? Is there a thread for each frame to be
processed, or a thread per packet?

2. A problem we came across was synchronization between lookup and filling
of the filter. Dave suggested we use 2 filters and swap between them to
address this issue (see the sketch after this list). Is there any limit on
the amount of memory that can be used for the filter?

3. Also, we need to handle 2 threads for our filter: one to fill the filter
from the FIB entries, and the other for lookup. The ip6_forward.c ->
ip6_lookup_inline function does the lookup part; from our understanding this
function processes packets and checks whether the destination IP is in the
FIB. But we also need to fill our filter with all FIB entries dynamically.
We understand this is similar to the mtrie code, but we are not able to
work out how it works and how the threading is done.
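
On point 2, a minimal sketch of the two-filter swap idea (assumptions: C11
atomics and a placeholder filter_t; this is the generic double-buffering
pattern, not code from VPP): lookups always read through one pointer, the
filler rebuilds the idle copy, and a single release-store publishes it, so
readers never see a half-filled filter.

    #include <stdatomic.h>

    typedef struct
    {
      /* ... filter buckets / fingerprints would live here ... */
      int placeholder;
    } filter_t;

    static filter_t filters[2];
    static _Atomic (filter_t *) active = &filters[0];

    /* Worker threads: grab the currently published filter for lookups. */
    filter_t *
    filter_for_lookup (void)
    {
      return atomic_load_explicit (&active, memory_order_acquire);
    }

    /* Main thread: after rebuilding the idle copy from the FIB,
     * publish it with one atomic store. */
    void
    filter_publish (filter_t * rebuilt)
    {
      atomic_store_explicit (&active, rebuilt, memory_order_release);
    }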

-- 
Thanks,

Pragash Vijayaragavan (pxv3...@rit.edu)
ph : 585 764 4662
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev