Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter

2020-02-17 Thread Nitin Saxena
Hi Chetan,

Your packet trace shows that the packet data is all zeros, and that’s why you are
running into the l3 mac mismatch error.
I am guessing something messed up the IOMMU mappings, due to which DMA translation
is not happening, although the packet length is correct.
You can try out the AVF plugin to narrow down where the problem exists, in the dpdk
plugin or in vlib.
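
For reference, a minimal sketch of what trying the port through the AVF plugin could
look like (the VF PCI address, PF netdev name and resulting interface name below are
placeholders, not taken from this thread; only the PF address :12:00.00 appears in the
output further down):

  # on the host: create a VF on the XL710 PF and give it a MAC (example only)
  echo 1 > /sys/bus/pci/devices/0000:12:00.0/sriov_numvfs
  ip link set dev <pf-netdev> vf 0 mac 3c:fd:fe:b5:5e:40
  # in VPP: drive that VF with the AVF plugin instead of dpdk
  vppctl create interface avf <vf-pci-address>
  vppctl set interface state <new-avf-interface> up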

Thanks,
Nitin

From: chetan bhasin 
Sent: Tuesday, February 18, 2020 12:50 PM
To: me 
Cc: Nitin Saxena ; vpp-dev 
Subject: Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter

Hi,
One more finding related to intel nic and number of buffers (537600)

vpp branch   driver           card        buffers  Traffic      Err
stable/1908  uio_pci_generic  X722(10G)   537600   Working
stable/1908  vfio-pci         XL710(40G)  537600   Not Working  l3 mac mismatch
stable/2001  uio_pci_generic  X722(10G)   537600   Working
stable/2001  vfio-pci         XL710(40G)  537600   Working



Thanks,
Chetan

On Mon, Feb 17, 2020 at 7:17 PM chetan bhasin via Lists.Fd.Io wrote:
Hi Nitin,

https://github.com/FDio/vpp/commits/stable/2001/src/vlib
As per the stable/2001 branch, the given change was checked in around Oct 28, 2019.

df0191ead2cf39611714b6603cdc5bdddc445b57 is previous commit of 
b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897?
Yes (branch vpp 20.01)

Thanks,
Chetan Bhasin

On Mon, Feb 17, 2020 at 5:33 PM Nitin Saxena <nsax...@marvell.com> wrote:
Hi Damjan,

>> if you read Chetan’s email below, you will see that this one is already
>> excluded…
Sorry, I missed that part. After seeing the diffs between stable/1908 and
stable/2001, commit b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897 is the only
visible git commit in the dpdk plugin which is playing with mempool buffers. If it
does not solve the problem then I suspect the problem lies outside the dpdk plugin. I
am guessing DPDK-19.08 is being used here with VPP-19.08

Hi Chetan,
> > 3) I took previous commit of "vlib: don't use vector for keeping buffer
> > indices in the pool" i.e. "df0191ead2cf39611714b6603cdc5bdddc445b57":
> > Everything looks fine with Buffers 537600.
In which branch is commit df0191ead2cf39611714b6603cdc5bdddc445b57 the previous
commit of b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897?

Thanks,
Nitin
> -Original Message-
> From: Damjan Marion <dmar...@me.com>
> Sent: Monday, February 17, 2020 3:47 PM
> To: Nitin Saxena <nsax...@marvell.com>
> Cc: chetan bhasin <chetan.bhasin...@gmail.com>; vpp-dev@lists.fd.io
> Subject: Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter
>
>
> Dear Nitin,
>
> if you read Chetan’s email below, you will see that this one is already
> excluded…
>
> Also, it will not be easy to explain how this patch blows tx function in dpdk
> mlx5 pmd…
>
> —
> Damjan
>
> > On 17 Feb 2020, at 11:12, Nitin Saxena <nsax...@marvell.com> wrote:
> >
> > Hi Prashant/Chetan,
> >
> > I would try following change first to solve the problem in 1908
> >
> > commit b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897b
> > Author: Damjan Marion <damar...@cisco.com>
> > Date:   Tue Mar 12 18:14:15 2019 +0100
> >
> > vlib: don't use vector for keeping buffer indices in
> >
> > Type: refactor
> >
> > Change-Id: I72221b97d7e0bf5c93e20bbda4473ca67bfcdeb4
> > Signed-off-by: Damjan Marion <damar...@cisco.com>
> >
> > You can also try copying src/plugins/dpdk/buffer.c from stable/2001
> branch to stable/1908
> >
> > Thanks,
> > Nitin
> >
> > From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Damjan Marion via Lists.Fd.Io
> > Sent: Monday, February 17, 2020 1:52 PM
> > To: chetan bhasin <chetan.bhasin...@gmail.com>
> > Cc: vpp-dev@lists.fd.io
> > Subject: [EXT] Re: [vpp-dev] Regarding buffers-per-numa parameter
> >
> > External Email
> >
> > On 17 Feb 2020, at 07:37, chetan bhasin <chetan.bhasin...@gmail.com> wrote:
> >
> > Bottom line is stable/1908 does not work with a higher number of buffers
> > but stable/2001 does. Could you please advise which area we can look at,
> > as it would be difficult for us to move to 2001 at this time.
> >
> > I really don’t have an idea what caused this problem to disappear.
> > You may try to use “git bisect” to find out which commit fixed it….
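
(A rough sketch of the bisect being suggested, assuming the v19.08 and v20.01 release
tags as the known-broken and known-working endpoints; the test in the middle is
whatever reproduces the l3 mac mismatch:)

  git bisect start --term-old=broken --term-new=fixed
  git bisect broken v19.08
  git bisect fixed v20.01
  # build vpp, run the traffic test, then mark the result and repeat:
  git bisect fixed     # or: git bisect broken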

Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter

2020-02-17 Thread chetan bhasin
Hi,
One more finding related to intel nic and number of buffers (537600)

vpp branch   driver           card        buffers  Traffic      Err
stable/1908  uio_pci_generic  X722(10G)   537600   Working
stable/1908  vfio-pci         XL710(40G)  537600   Not Working  l3 mac mismatch
stable/2001  uio_pci_generic  X722(10G)   537600   Working
stable/2001  vfio-pci         XL710(40G)  537600   Working
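
(For reference, the 537600-buffer setup above is driven from startup.conf, roughly as
below; a sketch of the buffers-per-numa parameter this thread is about:)

  buffers {
    buffers-per-numa 537600
  }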

Thanks,
Chetan

On Mon, Feb 17, 2020 at 7:17 PM chetan bhasin via Lists.Fd.Io
 wrote:

> Hi Nitin,
>
> https://github.com/FDio/vpp/commits/stable/2001/src/vlib
> As per stable/2001 branch , the given change is checked-in around Oct 28
> 2019.
>
> df0191ead2cf39611714b6603cdc5bdddc445b57 is previous commit of
> b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897?
> Yes (branch vpp 20.01)
>
> Thanks,
> Chetan Bhasin
>
> On Mon, Feb 17, 2020 at 5:33 PM Nitin Saxena  wrote:
>
>> Hi Damjan,
>>
>> >> if you read Chetan’s email bellow, you will see that this one is
>> already excluded…
>> Sorry I missed that part. After seeing diffs between stable/1908 and
>> stable/2001, commit: b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897 is the only
>> visible git commit in dpdk plugin which is playing with mempool buffers. If
>> it does not solve the problem then I suspect problem lies outside dpdk
>> plugin. I am guessing DPDK-19.08 is being used here with VPP-19.08
>>
>> Hi Chetan,
>> > > 3) I took previous commit of  "vlib: don't use vector for keeping
>> buffer
>> > indices in the pool " ie "df0191ead2cf39611714b6603cdc5bdddc445b57" :
>> > Everything looks fine with Buffers 537600.
>> In which branch, Commit: df0191ead2cf39611714b6603cdc5bdddc445b57 is
>> previous commit of b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897?
>>
>> Thanks,
>> Nitin
>> > -Original Message-
>> > From: Damjan Marion 
>> > Sent: Monday, February 17, 2020 3:47 PM
>> > To: Nitin Saxena 
>> > Cc: chetan bhasin ; vpp-dev@lists.fd.io
>> > Subject: Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter
>> >
>> >
>> > Dear Nitin,
>> >
>> > if you read Chetan’s email bellow, you will see that this one is already
>> > excluded…
>> >
>> > Also, it will not be easy to explain how this patch blows tx function
>> in dpdk
>> > mlx5 pmd…
>> >
>> > —
>> > Damjan
>> >
>> > > On 17 Feb 2020, at 11:12, Nitin Saxena  wrote:
>> > >
>> > > Hi Prashant/Chetan,
>> > >
>> > > I would try following change first to solve the problem in 1908
>> > >
>> > > commit b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897b
>> > > Author: Damjan Marion 
>> > > Date:   Tue Mar 12 18:14:15 2019 +0100
>> > >
>> > > vlib: don't use vector for keeping buffer indices in
>> > >
>> > > Type: refactor
>> > >
>> > > Change-Id: I72221b97d7e0bf5c93e20bbda4473ca67bfcdeb4
>> > > Signed-off-by: Damjan Marion damar...@cisco.com
>> > >
>> > > You can also try copying src/plugins/dpdk/buffer.c from stable/2001
>> > branch to stable/1908
>> > >
>> > > Thanks,
>> > > Nitin
>> > >
>> > > From: vpp-dev@lists.fd.io  On Behalf Of Damjan
>> > Marion via Lists.Fd.Io
>> > > Sent: Monday, February 17, 2020 1:52 PM
>> > > To: chetan bhasin 
>> > > Cc: vpp-dev@lists.fd.io
>> > > Subject: [EXT] Re: [vpp-dev] Regarding buffers-per-numa parameter
>> > >
>> > > External Email
>> > >
>> > > On 17 Feb 2020, at 07:37, chetan bhasin 
>> > wrote:
>> > >
>> > > Bottom line is stable/vpp 908 does not work with higher number of
>> buffers
>> > but stable/vpp2001 does. Could you please advise which area we can look
>> at
>> > ,as it would be difficult for us to move to vpp2001 at this time.
>> > >
>> > > I really don’t have idea what caused this problem to disappear.
>> > > You may try to use “git bisect” to find out which commit fixed it….
>> > >
>> > > —
>> > > Damjan
>> > >
>> > >
>> > >
>> > > On Mon, Feb 17, 2020 at 11:01 AM chetan bhasin via Lists.Fd.Io
>> >  wrote:
>> > > Thanks Damjan for the reply!
>> > >
>> > > Following are my observations on Intel X710/XL710 pci-
>> > > 1) I took latest code base from stable/vpp19.08  : Seeing error as "
>> > ethernet-input l3 mac mismatch"
>> > > With Buffers 537600
>> > > vpp# show buffers
>> > |
> > Pool Name          Index NUMA  Size  Data Size  Total   Avail   Cached  Used
> > default-numa-0     0     0     2496  2048       537600  510464  1319    25817
> > default-numa-1     1     1     2496  2048       537600  528896  390     8314
>> > >
>> > > vpp# show hardware-interfaces
>> > >   NameIdx   Link  Hardware
>> > > BondEthernet0  3 up   BondEthernet0
>> > >   Link speed: unknown
>> > >   Ethernet address 3c:fd:fe:b5:5e:40
>> > > FortyGigabitEthernet12/0/0 1 up
>>  FortyGigabitEthernet12/0/0
>> > >   Link speed: 40 Gbps
>> > >   Ethernet address 3c:fd:fe:b5:5e:40
>> > >   Intel X710/XL710 Family
>> > > carrier up full duplex mtu 9206
>> > > flags: admin-up pmd rx-ip4-cksum
>> > > rx: queues 16 (max 320), desc 1024 (min 64 max 4096 align 32)
>> > > tx: queues 16 (max 320), desc 4096 (min 64 max 4096 align 32)
>> 

[vpp-dev] Issue faced while bringing up kernel interfaces

2020-02-17 Thread Chinmaya Aggarwal
Hi,
We have two interfaces (ens8 and ens9). We added the PCI addresses of both
interfaces to /etc/vpp/startup.conf and brought the interfaces down. On
restarting VPP, we can see VPP interfaces corresponding to both of them
(GigabitEthernet0/8/0 and GigabitEthernet0/9/0). We now want to bring up the
kernel interfaces again, but we are facing the error: "ERROR while getting
interface flags: No such device".
We removed the PCI address entries from /etc/vpp/startup.conf and restarted
VPP, but we still get the same error when running "ifconfig ens8 up". Only after
rebooting the machine are we able to see the kernel interfaces in "ifconfig".
Can you suggest how to detach the kernel interfaces from VPP without rebooting
and use them as normal NICs?
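
(In case it helps, a hedged sketch of handing a PCI NIC back to the kernel without a
reboot; the PCI address and kernel driver name below are placeholders, use whatever
driver the NIC was originally bound to, and keep its PCI entry out of startup.conf so
VPP does not grab it again:)

  # find the PCI address and the driver it is currently bound to
  lspci | grep -i ether
  ls -l /sys/bus/pci/devices/0000:00:08.0/driver
  # unbind from the userspace driver and rebind to the kernel driver
  echo 0000:00:08.0 > /sys/bus/pci/devices/0000:00:08.0/driver/unbind
  echo 0000:00:08.0 > /sys/bus/pci/drivers/<kernel-driver>/bind
  # dpdk's usertools/dpdk-devbind.py --bind=<kernel-driver> 0000:00:08.0 does the same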


Re: [vpp-dev] Issue coming while fib lookup in vpp 18.01 between /8 and default route

2020-02-17 Thread chetan bhasin
After taking the patches related to ip4_mtrie.c we are no longer seeing the
issue related to /8 routes and the default gateway.
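
(For anyone hitting the same thing, a quick sanity check is to ask the FIB which entry
a destination resolves to, using the addresses from the report quoted below; a sketch
only:)

  vppctl show ip fib 10.20.1.1
  # should show the 10.0.0.0/8 entry via 3.3.3.3, not the 0.0.0.0/0 default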

Thanks a lot!
Chetan

On Tue, Feb 4, 2020 at 10:55 AM chetan bhasin 
wrote:

> Thanks Neale for response. I will take a look.
>
> On Mon, Feb 3, 2020 at 2:58 PM Neale Ranns (nranns) 
> wrote:
>
>>
>>
>> 18.01 might be missing patches in ip4_mtrie.c.
>>
>>
>>
>> /neale
>>
>>
>>
>> *From: * on behalf of chetan bhasin <
>> chetan.bhasin...@gmail.com>
>> *Date: *Friday 31 January 2020 at 11:47
>> *To: *vpp-dev 
>> *Subject: *[vpp-dev] Issue coming while fib lookup in vpp 18.01 between
>> /8 and default route
>>
>>
>>
>> Hello Everyone,
>>
>>
>>
>> I know that vpp 18.01 is not supported further, but can anybody please
>> provide a direction towards the below issue:
>>
>>
>>
>> We have two routes -
>>
>> 1) default gateway via 2.2.2.2
>>
>> 2) 10.0.0.0/8 via 3.3.3.3
>>
>>
>>
>> Trying to ping 10.20.x.x via VPP but it is going via 1) default gateway
>> but it should go via 2) 3.3.3.3
>>
>>
>>
>> Note: it's working fine if we add the route with a /16 subnet.
>>
>>
>>
>> show ip fib looks like as below :
>>
>> 0.0.0.0/0
>>   unicast-ip4-chain
>>   [@0]: dpo-load-balance: [proto:ip4 index:1 buckets:1 uRPF:43
>> to:[17:1012]]
>> [0] [@5]: ipv4 via 2.2.2.2 device_b/0/0: 0c9ff024000c29b196440800
>> 0.0.0.0/32
>>   unicast-ip4-chain
>>   [@0]: dpo-load-balance: [proto:ip4 index:2 buckets:1 uRPF:1 to:[0:0]]
>> [0] [@0]: dpo-drop ip4
>> 10.0.0.0/8
>>   unicast-ip4-chain
>>   [@0]: dpo-load-balance: [proto:ip4 index:25 buckets:1 uRPF:40 to:[0:0]]
>> [0] [@5]: ipv4 via 3.3.3.3 device_13/0/0: 0c9ff032000c29b1964e0800
>>
>>
>>
>> Thanks,
>>
>> Chetan Bhasin
>>
>


Re: [vpp-dev] VPP ip4-input drops packets due to "ip4 length > l2 length" errors when using rdma with Mellanox mlx5 cards

2020-02-17 Thread Elias Rudberg
Hi Ben,

Thanks for your answer.

Now I think I found the problem; it looks like a bug in
plugins/rdma/input.c related to what happens when the list of input
packets wraps around to the beginning of the ring buffer.
To fix it, the following change is needed:

diff --git a/src/plugins/rdma/input.c b/src/plugins/rdma/input.c
index 30fae83e0..f9979545d 100644
--- a/src/plugins/rdma/input.c
+++ b/src/plugins/rdma/input.c
@@ -318,7 +318,7 @@ rdma_device_input_inline (vlib_main_t * vm, vlib_node_runtime_t * node,
   if (n_tail < n_rx_packets)
     n_rx_bytes +=
-      rdma_device_input_bufs (vm, rd, &to_next[n_tail], &rd->bufs[0], wc,
+      rdma_device_input_bufs (vm, rd, &to_next[n_tail], &rd->bufs[0], &wc[n_tail],
			      n_rx_packets - n_tail, );
   rdma_device_input_ethernet (vm, node, rd, next_index);

At that point in the code, the rdma_device_input_bufs() function is
called twice to handle the n_rx_packets that have arrived. First it is
called for the part up to the end of the buffer, and then a second call
is made to handle the remaining part, starting from the beginning of
the buffer. The problem is that the same "wc" argument is passed both
times, when in fact that pointer needs to be moved forward for the
second call, so we need &wc[n_tail] instead of just wc for the second
call to rdma_device_input_bufs() -- n_tail is the number of packets
that were handled by the first rdma_device_input_bufs() call.
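
(To illustrate the pattern in isolation, here is a small standalone C sketch, not the
actual rdma plugin code; the names are made up:)

  #include <stdint.h>

  /* Process n_rx completions whose buffer slots may wrap around a ring of
   * ring_size entries starting at index 'head'. The completion array 'wc'
   * is linear, one entry per received packet, so the second chunk must be
   * handled starting at &wc[n_tail] -- exactly what the fix above does. */
  static void
  process_rx (uint32_t head, uint32_t ring_size, uint32_t n_rx, const int *wc,
              void (*handle) (const int *wc, uint32_t n_pkts))
  {
    uint32_t n_tail = ring_size - head;     /* slots left before the wrap */
    if (n_tail > n_rx)
      n_tail = n_rx;

    handle (wc, n_tail);                    /* first chunk: up to end of ring */
    if (n_tail < n_rx)
      handle (wc + n_tail, n_rx - n_tail);  /* second chunk: pointer advanced */
  }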

In my tests so far it looks like the above change fixes the problem
completely, after the fix there are no longer any "ip4 length > l2
length" errors.

This explanation fits with what we saw in our tests earlier, that the
problem with erroneous packets became smaller when the buffer size was
increased, since the second call to rdma_device_input_bufs() only comes
into play at the end of the ring buffer, which happens more rarely when
the buffer is larger. (But after the fix above there is no longer any
need to increase the buffer size.)

What do you think, does this seem right?

Best regards,
Elias



On Mon, 2020-02-17 at 15:38 +, Benoit Ganne (bganne) via
Lists.Fd.Io wrote:
> Hi Elias,
> 
> As the problem only arise with VPP rdma driver and not the DPDK
> driver, it is fair to say it is a VPP rdma driver issue.
> I'll try to reproduce the issue on my setup and keep you posted.
> In the meantime I do not see a big issue increasing the rx-queue-size 
> to mitigate it.
> 
> ben
> 
> > -Original Message-
> > From: vpp-dev@lists.fd.io  On Behalf Of Elias
> > Rudberg
> > Sent: vendredi 14 février 2020 16:56
> > To: vpp-dev@lists.fd.io
> > Subject: [vpp-dev] VPP ip4-input drops packets due to "ip4 length >
> > l2
> > length" errors when using rdma with Mellanox mlx5 cards
> > 
> > Hello VPP developers,
> > 
> > We have a problem with VPP used for NAT on Ubuntu 18.04 servers
> > equipped with Mellanox ConnectX-5 network cards (ConnectX-5 EN
> > network
> > interface card; 100GbE dual-port QSFP28; PCIe3.0 x16; tall bracket;
> > ROHS R6).
> > 
> > VPP is dropping packets in the ip4-input node due to "ip4 length >
> > l2
> > length" errors, when we use the RDMA plugin.
> > 
> > The interfaces are configured like this:
> > 
> > create int rdma host-if enp101s0f1 name Interface101 num-rx-queues
> > 1
> > create int rdma host-if enp179s0f1 name Interface179 num-rx-queues
> > 1
> > 
> > (we have set num-rx-queues 1 now to simplify while troubleshooting,
> > in
> > production we use num-rx-queues 4)
> > 
> > We see some packets dropped due to "ip4 length > l2 length" for
> > example
> > in TCP tests with around 100 Mbit/s -- running such a test for a
> > few
> > seconds already gives some errors. More traffic gives more errors
> > and
> > it seems to be unrelated to the contents of the packets, it seems
> > to
> > happen quite randomly and already at such moderate amounts of
> > traffic,
> > very far below what should be the capacity of the hardware.
> > 
> > Only a small fraction of packets are dropped: in tests at 100
> > Mbit/s
> > and packet size 500, for each million packets about 3 or 4 packets
> > get
> > the "ip4 length > l2 length" drop problem. However, the effect
> > appears
> > stronger for larger amounts of traffic and has impacted some of our
> > end
> > users who observe decresed TCP speed as a result of these drops.
> > 
> > The "ip4 length > l2 length" errors can be seen using vppctl "show
> > errors":
> > 
> > 142ip4-input   ip4 length > l2
> > length
> > 
> > To get more info about the "ip4 length > l2 length" error we
> > printed
> > the involved sizes when the error happens (ip_len0 and cur_len0 in
> > src/vnet/ip/ip4_input.h), which shows that the actual packet size
> > is
> > often much smaller than the ip_len0 value which is what the IP
> > packet
> > size should be according to the IP header. For example, when
> > ip_len0=500 as is the case for many of our packets in the test
> > runs,
> > the cur_len0 value is 

Re: [vpp-dev] VCL client connect error

2020-02-17 Thread Florin Coras
Hi Satya, 

Why are you commenting out the group (gid) configuration? The complaint is that 
vcl cannot connect to vpp’s binary api, so that may be part of the problem, if 
the user running vcl cannot read the binary api shared memory segment. 

You could also try connecting using private connections, instead of using shm 
based binary api segments, by configuring vpp to use the sock transport of the 
binary api. For that, add to startup.conf socksvr { socket-name 
/run/vpp-api.sock} and session { evt_qs_memfd_seg }

And then in vcl.conf add "api-socket-name /run/vpp-api.sock”. See [1] for a 
simple config. 
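
(Putting the two pieces together, a minimal sketch of the configuration being
described, with the same paths as above:)

  # startup.conf
  socksvr { socket-name /run/vpp-api.sock }
  session { evt_qs_memfd_seg }

  # vcl.conf
  api-socket-name /run/vpp-api.sock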

Regards, 
Florin

[1] https://wiki.fd.io/view/VPP/HostStack/LDP/iperf

> On Feb 17, 2020, at 7:08 AM, Satya Murthy  wrote:
> 
> Hi,
> 
> We are seeing following error when we try to connect to VPP via VCL test 
> client.
> Is this a known issue? 
> 
> startup file that we are using on VPP:
> 
> 
> unix {
>   nodaemon
>   log /tmp/vpp.log
>   full-coredump
>   cli-listen /run/vpp/cli.sock
> #  gid vpp
> } 
>
> #api-segment {
> #  gid vpp
> #}
> 
> Error:
> ==
> ./vcl_test_client 127.0.0.1 12344
> VCL<1273>: using default heapsize 268435456 (0x1000)
> VCL<1273>: allocated VCL heap = 0x7fe8a141f010, size 268435456 (0x1000)
> VCL<1273>: using default configuration.
> vppcom_connect_to_vpp:577: vcl: VCL<1273>: app (vcl_test_client) 
> connecting to VPP api (/vpe-api)...
> vl_map_shmem:637: region init fail
> connect_to_vlib_internal:410: vl_client_api map rv -2
> vppcom_connect_to_vpp:583: VCL<1273>: app (vcl_test_client) connect failed!
> vppcom_app_create:724: VCL<1273>: ERROR: couldn't connect to VPP!
> ERROR when calling vppcom_app_create(): Connection refused
>
>
> ERROR: vppcom_app_create() failed (errno = 111)!
> 
> Any inputs on this please ?
> 
> -- 
> Thanks & Regards,
> Murthy 



Re: [vpp-dev] VPP packet capture via DPDK #vpp #dpdk #span #pcap

2020-02-17 Thread Dave Barach via Lists.Fd.Io
Pcap trace support lives in two places these days: ethernet-input, and 
interface_output. Unless something is wrong, it should work on any interface 
type.

See 
https://fd.io/docs/vpp/master/gettingstarted/developers/vnet.html#pcap-rx-tx-and-drop-tracing
 

N-tuple trace classification works as Ben described. It is not free.
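
(For example, something along these lines from the debug CLI; this is only a sketch,
see the linked doc for the exact syntax in your release:)

  vppctl pcap trace rx tx max 1000 intfc any file capture.pcap
  ... run some traffic ...
  vppctl pcap trace off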

FWIW... Dave

-Original Message-
From: vpp-dev@lists.fd.io  On Behalf Of Benoit Ganne 
(bganne) via Lists.Fd.Io
Sent: Monday, February 17, 2020 10:14 AM
To: Chris King ; vpp-dev@lists.fd.io
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] VPP packet capture via DPDK #vpp #dpdk #span #pcap

Hi Chris,

> Does a more recent version of VPP rely on either DPDK 19.08 or DPDK 
> 19.11?

VPP 20.01 has just been released and uses DPDK 19.08.

> Does anyone have ideas on how I could use VPP, but capture packets at 
> the DPDK layer (on Azure)?

I do not think we support that today, however you can always use VPP pcap 
support: 
https://fd.io/docs/vpp/master/gettingstarted/developers/vnet.html#pcap-rx-tx-and-drop-tracing
You can even attach filters to only dump packets from specific flows. I do not 
know how much it will impact your performance though, you'll need to experiment.

Best
ben


Re: [vpp-dev] VPP ip4-input drops packets due to "ip4 length > l2 length" errors when using rdma with Mellanox mlx5 cards

2020-02-17 Thread Benoit Ganne (bganne) via Lists.Fd.Io
Hi Elias,

As the problem only arise with VPP rdma driver and not the DPDK driver, it is 
fair to say it is a VPP rdma driver issue.
I'll try to reproduce the issue on my setup and keep you posted.
In the meantime I do not see a big issue increasing the rx-queue-size to 
mitigate it.
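
(That is, something like the following when creating the interface; the queue size
here is just for illustration:)

  create int rdma host-if enp101s0f1 name Interface101 num-rx-queues 1 rx-queue-size 4096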

ben

> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Elias Rudberg
> Sent: vendredi 14 février 2020 16:56
> To: vpp-dev@lists.fd.io
> Subject: [vpp-dev] VPP ip4-input drops packets due to "ip4 length > l2
> length" errors when using rdma with Mellanox mlx5 cards
> 
> Hello VPP developers,
> 
> We have a problem with VPP used for NAT on Ubuntu 18.04 servers
> equipped with Mellanox ConnectX-5 network cards (ConnectX-5 EN network
> interface card; 100GbE dual-port QSFP28; PCIe3.0 x16; tall bracket;
> ROHS R6).
> 
> VPP is dropping packets in the ip4-input node due to "ip4 length > l2
> length" errors, when we use the RDMA plugin.
> 
> The interfaces are configured like this:
> 
> create int rdma host-if enp101s0f1 name Interface101 num-rx-queues 1
> create int rdma host-if enp179s0f1 name Interface179 num-rx-queues 1
> 
> (we have set num-rx-queues 1 now to simplify while troubleshooting, in
> production we use num-rx-queues 4)
> 
> We see some packets dropped due to "ip4 length > l2 length" for example
> in TCP tests with around 100 Mbit/s -- running such a test for a few
> seconds already gives some errors. More traffic gives more errors and
> it seems to be unrelated to the contents of the packets, it seems to
> happen quite randomly and already at such moderate amounts of traffic,
> very far below what should be the capacity of the hardware.
> 
> Only a small fraction of packets are dropped: in tests at 100 Mbit/s
> and packet size 500, for each million packets about 3 or 4 packets get
> the "ip4 length > l2 length" drop problem. However, the effect appears
> stronger for larger amounts of traffic and has impacted some of our end
> users who observe decresed TCP speed as a result of these drops.
> 
> The "ip4 length > l2 length" errors can be seen using vppctl "show
> errors":
> 
> 142ip4-input   ip4 length > l2 length
> 
> To get more info about the "ip4 length > l2 length" error we printed
> the involved sizes when the error happens (ip_len0 and cur_len0 in
> src/vnet/ip/ip4_input.h), which shows that the actual packet size is
> often much smaller than the ip_len0 value which is what the IP packet
> size should be according to the IP header. For example, when
> ip_len0=500 as is the case for many of our packets in the test runs,
> the cur_len0 value is sometimes much smaller. The smallest case we have
> seen was cur_len0 = 59 with ip_len0 = 500 -- the IP header said the IP
> packet size was 500 bytes, but the actual size was only 59 bytes. So it
> seems some data is lost, packets have been truncated, sometimes large
> parts of the packets are missing.
> 
> The problems disappear if we skip using the RDMA plugin and use the
> (old?) dpdk way of handling the interfaces, then there are no "ip4
> length > l2 length" drops at all. That makes us think there is
> something wrong with the rdma plugin, perhaps a bug or something wrong
> with how it is configured.
> 
> We have tested this with both the current master branch and the
> stable/1908 branch, we see the same problem for both.
> 
> We tried updating the Mellanox driver from v4.6 to v4.7 (latest
> version) but that did not help.
> 
> After trying some different values of the rx-queue-size parameter to
> the "create int rdma" command, it seems like the "ip4 length > l2
> length" becomes smaller as the rx-queue-size is increased, perhaps
> indicating the problem has to do with what happens when the end of that
> queue is reached.
> 
> Do you agree that the above points to a problem with the RDMA plugin in
> VPP?
> 
> Are there known bugs or other issues that could explain the "ip4 length
> > l2 length" drops?
> 
> Does it seem like a good idea to set a very large value of the rx-
> queue-size parameter if that alleviates the "ip4 length > l2 length"
> problem, or are there big downsides of using a large rx-queue-size
> value?
> 
> What else could we do to troubleshoot this further, are there
> configuration options to the RDMA plugin that could be used to solve
> this and/or get more information about what is happening?
> 
> Best regards,
> Elias


Re: [vpp-dev] VPP packet capture via DPDK #vpp #dpdk #span #pcap

2020-02-17 Thread Benoit Ganne (bganne) via Lists.Fd.Io
Hi Chris,

> Does a more recent version of VPP rely on
> either DPDK 19.08 or DPDK 19.11?

VPP 20.01 has just been released and uses DPDK 19.08.

> Does anyone have ideas on how I could use VPP, but capture packets at the
> DPDK layer (on Azure)?

I do not think we support that today, however you can always use VPP pcap 
support: 
https://fd.io/docs/vpp/master/gettingstarted/developers/vnet.html#pcap-rx-tx-and-drop-tracing
You can even attach filters to only dump packets from specific flows. I do not 
know how much it will impact your performance though, you'll need to experiment.

Best
ben


[vpp-dev] VCL client connect error

2020-02-17 Thread Satya Murthy
Hi,

We are seeing following error when we try to connect to VPP via VCL test client.
Is this a known issue?

startup file that we are using on VPP:


unix {

nodaemon

log /tmp/vpp.log

full-coredump

cli-listen /run/vpp/cli.sock

#  gid vpp

}

#api-segment {

#  gid vpp

#}

Error:
==

./vcl_test_client 127.0.0.1 12344

VCL<1273>: using default heapsize 268435456 (0x1000)

VCL<1273>: allocated VCL heap = 0x7fe8a141f010, size 268435456 (0x1000)

VCL<1273>: using default configuration.

vppcom_connect_to_vpp:577: vcl: VCL<1273>: app (vcl_test_client) connecting 
to VPP api (/vpe-api)...

vl_map_shmem:637: region init fail

connect_to_vlib_internal:410: vl_client_api map rv -2

vppcom_connect_to_vpp:583: VCL<1273>: app (vcl_test_client) connect failed!

vppcom_app_create:724: VCL<1273>: ERROR: couldn't connect to VPP!

ERROR when calling vppcom_app_create(): Connection refused

ERROR: vppcom_app_create() failed (errno = 111)!

Any inputs on this please ?

--
Thanks & Regards,
Murthy


[vpp-dev] using fib id after ipsec encap

2020-02-17 Thread Liran
I want to create an IPsec tunnel where the tunnel src IP interface is associated
with the default virtual router (VR0), while the IPsec port and the LAN interface are
associated with VR4.

For this purpose, I am trying to use the "tx_table_id - the FIB id used after
packet encap" argument of the vl_api_ipsec_tunnel_if_add_del_t function of the
VPP API (version 19.01.2) to forward the IPsec traffic via the tunnel src
interface. It seems this argument is not used, and the traffic is forwarded
according to the inbound interface's FIB.

For example:
tunnel src interface vpp0, IP 20.20.20.1 - VR0
IPsec port vppIpsec1, IP 60.60.60.1 - VR4
LAN interface vpp4, IP 40.40.40.1 - VR4
tx_table_id = 0
The traffic is not forwarded from interface vpp4 to interface vpp0.
If vpp0 is also associated with VR4, the traffic is forwarded correctly.
What is wrong?


[vpp-dev] vl_ vs vapi?

2020-02-17 Thread Christian Hopps
Hi vpp-dev,

I'm working to get strongswan IKE working with VPP, I've found some code on 
github that was doing this as PoC for vpp 18.10. The code is using vl_msg_* 
calls, and handling various duties to manage this API connection.

I also see there is this vapi code in the current VPP. Would it be recommended 
to switch over to vapi instead of this lower level vl_msg* code?

I was hoping the answer would be obvious from looking at the code, but I'm 
uncertain since I see things in the strongswan code like handling thread 
suspend, read timeout and thread exit conditions that I don't see in the vapi.c 
code.

Thanks,
Chris.


[vpp-dev] Coverity run FAILED as of 2020-02-17 14:00:24 UTC

2020-02-17 Thread Noreply Jenkins
Coverity run failed today.

Current number of outstanding issues is 1
Newly detected: 0
Eliminated: 1
More details can be found at  
https://scan.coverity.com/projects/fd-io-vpp/view_defects


Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter

2020-02-17 Thread chetan bhasin
Hi Nitin,

https://github.com/FDio/vpp/commits/stable/2001/src/vlib
As per the stable/2001 branch, the given change was checked in around Oct 28,
2019.

df0191ead2cf39611714b6603cdc5bdddc445b57 is previous commit of
b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897?
Yes (branch vpp 20.01)

Thanks,
Chetan Bhasin

On Mon, Feb 17, 2020 at 5:33 PM Nitin Saxena  wrote:

> Hi Damjan,
>
> >> if you read Chetan’s email bellow, you will see that this one is
> already excluded…
> Sorry I missed that part. After seeing diffs between stable/1908 and
> stable/2001, commit: b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897 is the only
> visible git commit in dpdk plugin which is playing with mempool buffers. If
> it does not solve the problem then I suspect problem lies outside dpdk
> plugin. I am guessing DPDK-19.08 is being used here with VPP-19.08
>
> Hi Chetan,
> > > 3) I took previous commit of  "vlib: don't use vector for keeping
> buffer
> > indices in the pool " ie "df0191ead2cf39611714b6603cdc5bdddc445b57" :
> > Everything looks fine with Buffers 537600.
> In which branch, Commit: df0191ead2cf39611714b6603cdc5bdddc445b57 is
> previous commit of b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897?
>
> Thanks,
> Nitin
> > -Original Message-
> > From: Damjan Marion 
> > Sent: Monday, February 17, 2020 3:47 PM
> > To: Nitin Saxena 
> > Cc: chetan bhasin ; vpp-dev@lists.fd.io
> > Subject: Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter
> >
> >
> > Dear Nitin,
> >
> > if you read Chetan’s email bellow, you will see that this one is already
> > excluded…
> >
> > Also, it will not be easy to explain how this patch blows tx function in
> dpdk
> > mlx5 pmd…
> >
> > —
> > Damjan
> >
> > > On 17 Feb 2020, at 11:12, Nitin Saxena  wrote:
> > >
> > > Hi Prashant/Chetan,
> > >
> > > I would try following change first to solve the problem in 1908
> > >
> > > commit b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897b
> > > Author: Damjan Marion 
> > > Date:   Tue Mar 12 18:14:15 2019 +0100
> > >
> > > vlib: don't use vector for keeping buffer indices in
> > >
> > > Type: refactor
> > >
> > > Change-Id: I72221b97d7e0bf5c93e20bbda4473ca67bfcdeb4
> > > Signed-off-by: Damjan Marion damar...@cisco.com
> > >
> > > You can also try copying src/plugins/dpdk/buffer.c from stable/2001
> > branch to stable/1908
> > >
> > > Thanks,
> > > Nitin
> > >
> > > From: vpp-dev@lists.fd.io  On Behalf Of Damjan
> > Marion via Lists.Fd.Io
> > > Sent: Monday, February 17, 2020 1:52 PM
> > > To: chetan bhasin 
> > > Cc: vpp-dev@lists.fd.io
> > > Subject: [EXT] Re: [vpp-dev] Regarding buffers-per-numa parameter
> > >
> > > External Email
> > >
> > > On 17 Feb 2020, at 07:37, chetan bhasin 
> > wrote:
> > >
> > > Bottom line is stable/vpp 908 does not work with higher number of
> buffers
> > but stable/vpp2001 does. Could you please advise which area we can look
> at
> > ,as it would be difficult for us to move to vpp2001 at this time.
> > >
> > > I really don’t have idea what caused this problem to disappear.
> > > You may try to use “git bisect” to find out which commit fixed it….
> > >
> > > —
> > > Damjan
> > >
> > >
> > >
> > > On Mon, Feb 17, 2020 at 11:01 AM chetan bhasin via Lists.Fd.Io
> >  wrote:
> > > Thanks Damjan for the reply!
> > >
> > > Following are my observations on Intel X710/XL710 pci-
> > > 1) I took latest code base from stable/vpp19.08  : Seeing error as "
> > ethernet-input l3 mac mismatch"
> > > With Buffers 537600
> > > vpp# show buffers
> > |
> > > Pool NameIndex NUMA  Size  Data Size  Total  Avail
> Cached   Used
> > > default-numa-0 0 0   2496 2048   537600 510464   1319
>   25817
> > > default-numa-1 1 1   2496 2048   537600 528896390
>   8314
> > >
> > > vpp# show hardware-interfaces
> > >   NameIdx   Link  Hardware
> > > BondEthernet0  3 up   BondEthernet0
> > >   Link speed: unknown
> > >   Ethernet address 3c:fd:fe:b5:5e:40
> > > FortyGigabitEthernet12/0/0 1 up
>  FortyGigabitEthernet12/0/0
> > >   Link speed: 40 Gbps
> > >   Ethernet address 3c:fd:fe:b5:5e:40
> > >   Intel X710/XL710 Family
> > > carrier up full duplex mtu 9206
> > > flags: admin-up pmd rx-ip4-cksum
> > > rx: queues 16 (max 320), desc 1024 (min 64 max 4096 align 32)
> > > tx: queues 16 (max 320), desc 4096 (min 64 max 4096 align 32)
> > > pci: device 8086:1583 subsystem 8086:0001 address :12:00.00
> numa
> > 0
> > > max rx packet len: 9728
> > > promiscuous: unicast off all-multicast on
> > > vlan offload: strip off filter off qinq off
> > > rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum
> qinq-strip
> > >outer-ipv4-cksum vlan-filter vlan-extend
> jumbo-frame
> > >scatter keep-crc
> > > rx offload active: ipv4-cksum
> > > tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum
> sctp-cksum
> > >  

Re: [vpp-dev] sh hardware-interfaces extended stats are not showing up

2020-02-17 Thread Christian Hopps
Hi,

Would it be appropriate to open a bug against 19.08 for this?

Thanks,
Chris.

> On Feb 16, 2020, at 5:57 PM, carlito nueno  wrote:
> 
> Hi Damjan,
> 
> Sorry for the late reply. I tested it on v20.01 and this is now working.
> 
> Thanks!
> 
> On Fri, Sep 20, 2019 at 2:07 PM Damjan Marion  wrote:
> 
> AFAIK it is fixed, please try latest master and report back if it doesn't 
> work.
> 
>> On 20 Sep 2019, at 19:53, Devis Reagan  wrote:
>> 
>> Hi David ,
>> 
>> Is there any fix or work around for this extended stats issue 
>> 
>> Thanks 
>> 
>> On Thu, Aug 29, 2019 at 6:58 AM Carlito Nueno  wrote:
>> Hi David,
>> 
>> I tried "vppctl interface collect detailed-stats enable" but it doesn't work.
>> 
>> I will git bisect as Damjan mentioned and try to see what changed.
>> 
>> Thanks
>> 
>> On Wed, Aug 28, 2019 at 8:00 AM Damjan Marion via Lists.Fd.Io 
>>  wrote:
>> 
>> It is not intentional so somebody needs to debug it… "git bisect" might be 
>> good choice here.
>> 
>>> On 28 Aug 2019, at 13:50, Devis Reagan  wrote:
>>> 
>>> Can any one help on this ? Extended stats not shown  in vpp 19.08 via ‘show 
>>> hardware-interfaces’ command 
>>> 
>>> Thanks 
>>> 
>>> On Tue, Aug 27, 2019 at 12:49 PM Devis Reagan via Lists.Fd.Io 
>>>  wrote:
>>> Even I am using vpp 19.08 & don’t see the extended stats which I used to 
>>> see in other vpp release . 
>>> There was not change in the configuration but with vpp 19.08 it’s not 
>>> showing up .
>>> 
>>> When I use dpdk application called testpmd the extended stats just show up 
>>> fine . It’s only the vpp not showing it . 
>>> 
>>> Do we need to configure any thing to get it ? 
>>> 
>>> Note : In the release note of 19.08 I saw some changes gone in for extended 
>>> stats .
>>> 
>>> Thanks
>>> 
>>> 
>>> On Tue, Aug 27, 2019 at 7:12 AM David Cornejo  wrote:
>>> did you make sure that you have detailed stats collection enabled for
>>> the interface?
>>> 
>>> (see vl_api_collect_detailed_interface_stats_t)
>>> 
>>> On Mon, Aug 26, 2019 at 2:24 PM carlito nueno  
>>> wrote:
>>> >
>>> > Hi all,
>>> >
>>> > I am using: vpp v19.08-release built by root on 365637461ad3 at Wed Aug 
>>> > 21 18:20:49 UTC 2019
>>> >
>>> > When I do sh hardware-interfaces or sh hardware-interfaces detail or 
>>> > verbose, extended stats are not showing.
>>> >
>>> > On 19.08 I only see stats like below:
>>> >
>>> > rss active:none
>>> > tx burst function: eth_igb_xmit_pkts
>>> > rx burst function: eth_igb_recv_scattered_pkts
>>> >
>>> > tx frames ok   26115
>>> > tx bytes ok 34203511
>>> > rx frames ok   12853
>>> > rx bytes ok  1337944
>>> >
>>> > On 19.04 I am able to see:
>>> >
>>> > rss active:none
>>> > tx burst function: eth_igb_xmit_pkts
>>> > rx burst function: eth_igb_recv_scattered_pkts
>>> >
>>> > tx frames ok21535933
>>> > tx bytes ok  21806938127
>>> > rx frames ok13773533
>>> > rx bytes ok   3642009224
>>> > extended stats:
>>> >   rx good packets   13773533
>>> >   tx good packets   21535933
>>> >   rx good bytes   3642009224
>>> >   tx good bytes  21806938127
>>> >   rx size 64 packets 1171276
>>> >   rx size 65 to 127 packets  8462547
>>> >   rx size 128 to 255 packets 1506266
>>> >   rx size 256 to 511 packets  606052
>>> >   rx size 512 to 1023 packets 560122
>>> >   rx size 1024 to max packets1467270
>>> >   rx broadcast packets383890
>>> >   rx multicast packets291769
>>> >   rx total packets  13773533
>>> >   tx total packets  21535933
>>> >   rx total bytes  3642009224
>>> >   tx total bytes 21806938127
>>> >   tx size 64 packets  397270
>>> >   tx size 65 to 127 packets  3649953
>>> >   tx size 128 to 255 packets 1817099
>>> >   tx size 256 to 511 packets  976902
>>> >   tx size 512 to 1023 packets 773963
>>> >   tx size 1023 to max packets   13920746
>>> >   tx multicast packets   893
>>> >   tx broadcast packets356966
>>> >   rx sent to host packets 59
>>> >   tx sent by host packets 81

Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter

2020-02-17 Thread Nitin Saxena
>> I am guessing DPDK-19.08 is being used here with VPP-19.08
Typo, dpdk-19.05 and not dpdk-19.08

> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Nitin Saxena
> Sent: Monday, February 17, 2020 5:34 PM
> To: Damjan Marion 
> Cc: chetan bhasin ; vpp-dev@lists.fd.io
> Subject: Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter
> 
> Hi Damjan,
> 
> >> if you read Chetan’s email bellow, you will see that this one is
> >> already excluded…
> Sorry I missed that part. After seeing diffs between stable/1908 and
> stable/2001, commit: b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897 is the
> only visible git commit in dpdk plugin which is playing with mempool buffers.
> If it does not solve the problem then I suspect problem lies outside dpdk
> plugin. I am guessing DPDK-19.08 is being used here with VPP-19.08
> 
> Hi Chetan,
> > > 3) I took previous commit of  "vlib: don't use vector for keeping
> > > buffer
> > indices in the pool " ie "df0191ead2cf39611714b6603cdc5bdddc445b57" :
> > Everything looks fine with Buffers 537600.
> In which branch, Commit: df0191ead2cf39611714b6603cdc5bdddc445b57 is
> previous commit of b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897?
> 
> Thanks,
> Nitin
> > -Original Message-
> > From: Damjan Marion 
> > Sent: Monday, February 17, 2020 3:47 PM
> > To: Nitin Saxena 
> > Cc: chetan bhasin ; vpp-dev@lists.fd.io
> > Subject: Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter
> >
> >
> > Dear Nitin,
> >
> > if you read Chetan’s email bellow, you will see that this one is
> > already excluded…
> >
> > Also, it will not be easy to explain how this patch blows tx function
> > in dpdk
> > mlx5 pmd…
> >
> > —
> > Damjan
> >
> > > On 17 Feb 2020, at 11:12, Nitin Saxena  wrote:
> > >
> > > Hi Prashant/Chetan,
> > >
> > > I would try following change first to solve the problem in 1908
> > >
> > > commit b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897b
> > > Author: Damjan Marion 
> > > Date:   Tue Mar 12 18:14:15 2019 +0100
> > >
> > > vlib: don't use vector for keeping buffer indices in
> > >
> > > Type: refactor
> > >
> > > Change-Id: I72221b97d7e0bf5c93e20bbda4473ca67bfcdeb4
> > > Signed-off-by: Damjan Marion damar...@cisco.com
> > >
> > > You can also try copying src/plugins/dpdk/buffer.c from stable/2001
> > branch to stable/1908
> > >
> > > Thanks,
> > > Nitin
> > >
> > > From: vpp-dev@lists.fd.io  On Behalf Of Damjan
> > Marion via Lists.Fd.Io
> > > Sent: Monday, February 17, 2020 1:52 PM
> > > To: chetan bhasin 
> > > Cc: vpp-dev@lists.fd.io
> > > Subject: [EXT] Re: [vpp-dev] Regarding buffers-per-numa parameter
> > >
> > > External Email
> > >
> > > On 17 Feb 2020, at 07:37, chetan bhasin 
> > wrote:
> > >
> > > Bottom line is stable/vpp 908 does not work with higher number of
> > > buffers
> > but stable/vpp2001 does. Could you please advise which area we can
> > look at ,as it would be difficult for us to move to vpp2001 at this time.
> > >
> > > I really don’t have idea what caused this problem to disappear.
> > > You may try to use “git bisect” to find out which commit fixed it….
> > >
> > > —
> > > Damjan
> > >
> > >
> > >
> > > On Mon, Feb 17, 2020 at 11:01 AM chetan bhasin via Lists.Fd.Io
> >  wrote:
> > > Thanks Damjan for the reply!
> > >
> > > Following are my observations on Intel X710/XL710 pci-
> > > 1) I took latest code base from stable/vpp19.08  : Seeing error as "
> > ethernet-input l3 mac mismatch"
> > > With Buffers 537600 vpp# show buffers
> > |
> > > Pool NameIndex NUMA  Size  Data Size  Total  Avail  Cached   
> > > Used
> > > default-numa-0 0 0   2496 2048   537600 510464   1319
> > > 25817
> > > default-numa-1 1 1   2496 2048   537600 528896390
> > > 8314
> > >
> > > vpp# show hardware-interfaces
> > >   NameIdx   Link  Hardware
> > > BondEthernet0  3 up   BondEthernet0
> > >   Link speed: unknown
> > >   Ethernet address 3c:fd:fe:b5:5e:40
> > > FortyGigabitEthernet12/0/0 1 up   FortyGigabitEthernet12/0/0
> > >   Link speed: 40 Gbps
> > >   Ethernet address 3c:fd:fe:b5:5e:40
> > >   Intel X710/XL710 Family
> > > carrier up full duplex mtu 9206
> > > flags: admin-up pmd rx-ip4-cksum
> > > rx: queues 16 (max 320), desc 1024 (min 64 max 4096 align 32)
> > > tx: queues 16 (max 320), desc 4096 (min 64 max 4096 align 32)
> > > pci: device 8086:1583 subsystem 8086:0001 address :12:00.00
> > > numa
> > 0
> > > max rx packet len: 9728
> > > promiscuous: unicast off all-multicast on
> > > vlan offload: strip off filter off qinq off
> > > rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum 
> > > qinq-strip
> > >outer-ipv4-cksum vlan-filter vlan-extend 
> > > jumbo-frame
> > >scatter keep-crc
> > > rx offload active: ipv4-cksum
> > > tx offload avail:  vlan-insert ipv4-cksum 

Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter

2020-02-17 Thread Nitin Saxena
Hi Damjan,

>> if you read Chetan’s email below, you will see that this one is already
>> excluded…
Sorry, I missed that part. After seeing the diffs between stable/1908 and
stable/2001, commit b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897 is the only
visible git commit in the dpdk plugin which is playing with mempool buffers. If it
does not solve the problem then I suspect the problem lies outside the dpdk plugin. I
am guessing DPDK-19.08 is being used here with VPP-19.08

Hi Chetan,
> > 3) I took previous commit of "vlib: don't use vector for keeping buffer
> > indices in the pool" i.e. "df0191ead2cf39611714b6603cdc5bdddc445b57":
> > Everything looks fine with Buffers 537600.
In which branch is commit df0191ead2cf39611714b6603cdc5bdddc445b57 the previous
commit of b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897?

Thanks,
Nitin
> -Original Message-
> From: Damjan Marion 
> Sent: Monday, February 17, 2020 3:47 PM
> To: Nitin Saxena 
> Cc: chetan bhasin ; vpp-dev@lists.fd.io
> Subject: Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter
> 
> 
> Dear Nitin,
> 
> if you read Chetan’s email bellow, you will see that this one is already
> excluded…
> 
> Also, it will not be easy to explain how this patch blows tx function in dpdk
> mlx5 pmd…
> 
> —
> Damjan
> 
> > On 17 Feb 2020, at 11:12, Nitin Saxena  wrote:
> >
> > Hi Prashant/Chetan,
> >
> > I would try following change first to solve the problem in 1908
> >
> > commit b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897b
> > Author: Damjan Marion 
> > Date:   Tue Mar 12 18:14:15 2019 +0100
> >
> > vlib: don't use vector for keeping buffer indices in
> >
> > Type: refactor
> >
> > Change-Id: I72221b97d7e0bf5c93e20bbda4473ca67bfcdeb4
> > Signed-off-by: Damjan Marion damar...@cisco.com
> >
> > You can also try copying src/plugins/dpdk/buffer.c from stable/2001
> branch to stable/1908
> >
> > Thanks,
> > Nitin
> >
> > From: vpp-dev@lists.fd.io  On Behalf Of Damjan
> Marion via Lists.Fd.Io
> > Sent: Monday, February 17, 2020 1:52 PM
> > To: chetan bhasin 
> > Cc: vpp-dev@lists.fd.io
> > Subject: [EXT] Re: [vpp-dev] Regarding buffers-per-numa parameter
> >
> > External Email
> >
> > On 17 Feb 2020, at 07:37, chetan bhasin 
> wrote:
> >
> > Bottom line is stable/vpp 908 does not work with higher number of buffers
> but stable/vpp2001 does. Could you please advise which area we can look at
> ,as it would be difficult for us to move to vpp2001 at this time.
> >
> > I really don’t have idea what caused this problem to disappear.
> > You may try to use “git bisect” to find out which commit fixed it….
> >
> > —
> > Damjan
> >
> >
> >
> > On Mon, Feb 17, 2020 at 11:01 AM chetan bhasin via Lists.Fd.Io
>  wrote:
> > Thanks Damjan for the reply!
> >
> > Following are my observations on Intel X710/XL710 pci-
> > 1) I took latest code base from stable/vpp19.08  : Seeing error as "
> ethernet-input l3 mac mismatch"
> > With Buffers 537600
> > vpp# show buffers
> |
> > Pool NameIndex NUMA  Size  Data Size  Total  Avail  Cached   
> > Used
> > default-numa-0 0 0   2496 2048   537600 510464   1319
> > 25817
> > default-numa-1 1 1   2496 2048   537600 528896390
> > 8314
> >
> > vpp# show hardware-interfaces
> >   NameIdx   Link  Hardware
> > BondEthernet0  3 up   BondEthernet0
> >   Link speed: unknown
> >   Ethernet address 3c:fd:fe:b5:5e:40
> > FortyGigabitEthernet12/0/0 1 up   FortyGigabitEthernet12/0/0
> >   Link speed: 40 Gbps
> >   Ethernet address 3c:fd:fe:b5:5e:40
> >   Intel X710/XL710 Family
> > carrier up full duplex mtu 9206
> > flags: admin-up pmd rx-ip4-cksum
> > rx: queues 16 (max 320), desc 1024 (min 64 max 4096 align 32)
> > tx: queues 16 (max 320), desc 4096 (min 64 max 4096 align 32)
> > pci: device 8086:1583 subsystem 8086:0001 address :12:00.00 numa
> 0
> > max rx packet len: 9728
> > promiscuous: unicast off all-multicast on
> > vlan offload: strip off filter off qinq off
> > rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum qinq-strip
> >outer-ipv4-cksum vlan-filter vlan-extend jumbo-frame
> >scatter keep-crc
> > rx offload active: ipv4-cksum
> > tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum sctp-cksum
> >tcp-tso outer-ipv4-cksum qinq-insert vxlan-tnl-tso
> >gre-tnl-tso ipip-tnl-tso geneve-tnl-tso multi-segs
> >mbuf-fast-free
> > tx offload active: none
> > rss avail: ipv4-frag ipv4-tcp ipv4-udp ipv4-sctp ipv4-other 
> > ipv6-frag
> >ipv6-tcp ipv6-udp ipv6-sctp ipv6-other l2-payload
> > rss active:ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv6-frag 
> > ipv6-tcp
> >ipv6-udp ipv6-other
> > tx burst function: i40e_xmit_pkts_vec_avx2
> > rx burst 

Fw: [vpp-dev] sub interface after virtual interfaces doesn't work

2020-02-17 Thread abbas ali chezgi via Lists.Fd.Io
Thanks Neale,

In this scenario:
  h1 -- n1 -- n2 -- h2
  n1, n2: vpp routers
  h1, h2: hosts
1- when we remove the gre tunnel [or other virtual interfaces]
2- then add a new sub interface [for connecting n1 to n2]
we can't ping from h1 to h2.

On Monday, February 17, 2020, 02:05:47 PM GMT+3:30, Neale Ranns via Lists.Fd.Io wrote:

Please be more specific about what ‘doesn’t work’.

Your script on n1 does:

  #add gre tunnel
  create gre tunnel src 200.1.2.1 dst 200.1.2.2
  set interface state gre0 up
  set interface ip address gre0 10.10.10.11/32
  ip route add 2.1.1.0/24 via gre0

  #del gre tunnel
  set interface state gre0 down
  create gre tunnel src 200.1.2.1 dst 200.1.2.2 del
  ip route del 2.1.1.0/24 via 200.1.2.2.

that last route delete was not the route you added, so it won’t remove anything. What you need is

  ip route del 2.1.1.0/24 via gre0

and of course you need to do that before you delete the tunnel.

/neale


From: on behalf of "abbas ali chezgi via Lists.Fd.Io"
Reply to: "che...@yahoo.com"
Date: Monday 17 February 2020 at 05:43
To: Vpp-dev
Cc: "vpp-dev@lists.fd.io"
Subject: [vpp-dev] sub interface after virtual interfaces doesn't work

1- create gre tunnel between n1--n2
2- add ip and route . it works
3- delete ip and route from gre and delete gre tunnel
4- create sub interface
5- add ip and route.

it doesn't work

[VPP-1841] sub interface after gre doesn't work - FD.io Jira

where can i look for it
thanks.



Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter

2020-02-17 Thread chetan bhasin
Thanks Damjan and Nikhil for your time.

I also find below logs via dmesg (Intel X710/XL710 )

[root@bfs-dl360g10-25 vpp]# uname -a
Linux bfs-dl360g10-25 3.10.0-957.5.1.el7.x86_64 #1 SMP Wed Dec 19 10:46:58
EST 2018 x86_64 x86_64 x86_64 GNU/Linux
[root@bfs-dl360g10-25 vpp]# uname -r
3.10.0-957.5.1.el7.x86_64


Feb 17 10:38:46 bfs-dl360g10-05 kernel: DMAR: DRHD: handling fault status
reg 400
Feb 17 10:38:46 bfs-dl360g10-05 kernel: DMAR: DRHD: handling fault status
reg 402
Feb 17 10:38:46 bfs-dl360g10-05 kernel: DMAR: [DMA Read] Request device
[12:00.0] fault addr 5ec7f31000 [fault reason 06] PTE Read access is not set
Feb 17 10:38:46 bfs-dl360g10-05 kernel: DMAR: DRHD: handling fault status
reg 502
Feb 17 10:38:46 bfs-dl360g10-05 kernel: DMAR: [DMA Read] Request device
[12:00.0] fault addr 5ec804 [fault reason 06] PTE Read access is not set
Feb 17 10:38:46 bfs-dl360g10-05 kernel: DMAR: [DMA Read] Request device
[12:00.0] fault addr 5ec53be000 [fault reason 06] PTE Read access is not set
Feb 17 10:38:46 bfs-dl360g10-05 kernel: DMAR: DRHD: handling fault status
reg 700
Feb 17 10:38:46 bfs-dl360g10-05 kernel: DMAR: DRHD: handling fault status
reg 702
Feb 17 10:38:46 bfs-dl360g10-05 kernel: DMAR: [DMA Read] Request device
[12:00.0] fault addr 5ec6f24000 [fault reason 06] PTE Read access is not set
Feb 17 10:38:46 bfs-dl360g10-05 kernel: DMAR: [DMA Write] Request device
[12:00.0] fault addr 5ec60eb000 [fault reason 05] PTE Write access is not
set
Feb 17 10:38:46 bfs-dl360g10-05 kernel: DMAR: [DMA Write] Request device
[12:00.0] fault addr 5ec6684000 [fault reason 05] PTE Write access is not
set
Feb 17 10:38:46 bfs-dl360g10-05 kernel: DMAR: [DMA Write] Request device
[12:00.0] fault addr 5ec607d000 [fault reason 05] PTE Write access is not
set
Feb 17 10:38:46 bfs-dl360g10-05 kernel: DMAR: DRHD: handling fault status
reg 300
Feb 17 10:38:46 bfs-dl360g10-05 kernel: DMAR: DRHD: handling fault status
reg 302
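
(Those DMAR faults suggest the IOMMU mappings programmed via vfio do not cover the
addresses the NIC is DMA'ing to. A few hedged things worth checking on a kernel like
this one; the commands are generic, not specific to this setup:)

  cat /proc/cmdline                 # look for intel_iommu=on (iommu=pt is often added)
  dmesg | grep -i -e DMAR -e IOMMU  # full fault history
  ls /sys/kernel/iommu_groups/      # confirm the device's IOMMU group exists
  cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode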

Thanks,
Chetan

On Mon, Feb 17, 2020 at 3:47 PM Damjan Marion  wrote:

>
> Dear Nitin,
>
> if you read Chetan’s email bellow, you will see that this one is already
> excluded…
>
> Also, it will not be easy to explain how this patch blows tx function in
> dpdk mlx5 pmd…
>
> —
> Damjan
>
> > On 17 Feb 2020, at 11:12, Nitin Saxena  wrote:
> >
> > Hi Prashant/Chetan,
> >
> > I would try following change first to solve the problem in 1908
> >
> > commit b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897b
> > Author: Damjan Marion 
> > Date:   Tue Mar 12 18:14:15 2019 +0100
> >
> > vlib: don't use vector for keeping buffer indices in
> >
> > Type: refactor
> >
> > Change-Id: I72221b97d7e0bf5c93e20bbda4473ca67bfcdeb4
> > Signed-off-by: Damjan Marion damar...@cisco.com
> >
> > You can also try copying src/plugins/dpdk/buffer.c from stable/2001
> branch to stable/1908
> >
> > Thanks,
> > Nitin
> >
> > From: vpp-dev@lists.fd.io  On Behalf Of Damjan
> Marion via Lists.Fd.Io
> > Sent: Monday, February 17, 2020 1:52 PM
> > To: chetan bhasin 
> > Cc: vpp-dev@lists.fd.io
> > Subject: [EXT] Re: [vpp-dev] Regarding buffers-per-numa parameter
> >
> > External Email
> >
> > On 17 Feb 2020, at 07:37, chetan bhasin 
> wrote:
> >
> > Bottom line is stable/vpp 908 does not work with higher number of
> buffers but stable/vpp2001 does. Could you please advise which area we can
> look at ,as it would be difficult for us to move to vpp2001 at this time.
> >
> > I really don’t have idea what caused this problem to disappear.
> > You may try to use “git bisect” to find out which commit fixed it….
> >
> > —
> > Damjan
> >
> >
> >
> > On Mon, Feb 17, 2020 at 11:01 AM chetan bhasin via Lists.Fd.Io
>  wrote:
> > Thanks Damjan for the reply!
> >
> > Following are my observations on Intel X710/XL710 pci-
> > 1) I took latest code base from stable/vpp19.08  : Seeing error as "
> ethernet-input l3 mac mismatch"
> > With Buffers 537600
> > vpp# show buffers
>|
> > Pool NameIndex NUMA  Size  Data Size  Total  Avail  Cached
>  Used
> > default-numa-0 0 0   2496 2048   537600 510464   1319
> 25817
> > default-numa-1 1 1   2496 2048   537600 528896390
> 8314
> >
> > vpp# show hardware-interfaces
> >   NameIdx   Link  Hardware
> > BondEthernet0  3 up   BondEthernet0
> >   Link speed: unknown
> >   Ethernet address 3c:fd:fe:b5:5e:40
> > FortyGigabitEthernet12/0/0 1 up   FortyGigabitEthernet12/0/0
> >   Link speed: 40 Gbps
> >   Ethernet address 3c:fd:fe:b5:5e:40
> >   Intel X710/XL710 Family
> > carrier up full duplex mtu 9206
> > flags: admin-up pmd rx-ip4-cksum
> > rx: queues 16 (max 320), desc 1024 (min 64 max 4096 align 32)
> > tx: queues 16 (max 320), desc 4096 (min 64 max 4096 align 32)
> > pci: device 8086:1583 subsystem 8086:0001 address :12:00.00 numa
> 0
> > max rx packet len: 9728
> > promiscuous: 

Re: [vpp-dev] sub interface after virtual interfaces doesn't work

2020-02-17 Thread Neale Ranns via Lists.Fd.Io

Please be more specific about what ‘doesn’t work’.

Your script on n1 does:

  #add gre tunnel
  create gre tunnel src 200.1.2.1 dst 200.1.2.2
  set interface state gre0 up
  set interface ip address gre0 10.10.10.11/32
  ip route add 2.1.1.0/24 via gre0

  #del gre tunnel
  set interface state gre0 down
  create gre tunnel src 200.1.2.1 dst 200.1.2.2 del
  ip route del 2.1.1.0/24 via 200.1.2.2.

that last route delete was not the route you added, so it won’t remove 
anything. What you need is
  ip route del 2.1.1.0/24 via gre0

and of course you need to do that before you delete the tunnel.
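
A teardown sequence that mirrors the setup above (same addresses as in the script; a sketch, not tested on your box) would therefore be:

  #del gre tunnel
  ip route del 2.1.1.0/24 via gre0
  set interface ip address del gre0 10.10.10.11/32
  set interface state gre0 down
  create gre tunnel src 200.1.2.1 dst 200.1.2.2 del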

/neale




From:  on behalf of "abbas ali chezgi via Lists.Fd.Io" 

Reply to: "che...@yahoo.com" 
Date: Monday 17 February 2020 at 05:43
To: Vpp-dev 
Cc: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] sub interface after virtual interfaces doesn't work


1- create gre tunnel between n1--n2
2- add ip and route . it works
3- delete ip and route from gre and delete gre tunnel
4- create sub interface
5- add ip and route.

it doesn't work


[VPP-1841] sub interface after gre doesn't work - FD.io Jira



Where can I look for it?
Thanks.


Re: [EXT] [vpp-dev] Regarding buffers-per-numa parameter

2020-02-17 Thread Damjan Marion via Lists.Fd.Io

Dear Nitin,

if you read Chetan’s email below, you will see that this one is already
excluded…

Also, it will not be easy to explain how this patch would blow up the tx function
in the dpdk mlx5 pmd…

— 
Damjan

> On 17 Feb 2020, at 11:12, Nitin Saxena  wrote:
> 
> Hi Prashant/Chetan,
>
> I would try following change first to solve the problem in 1908
>
> commit b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897b
> Author: Damjan Marion 
> Date:   Tue Mar 12 18:14:15 2019 +0100
>
> vlib: don't use vector for keeping buffer indices in
>
> Type: refactor
>
> Change-Id: I72221b97d7e0bf5c93e20bbda4473ca67bfcdeb4
> Signed-off-by: Damjan Marion damar...@cisco.com
>
> You can also try copying src/plugins/dpdk/buffer.c from stable/2001 branch to 
> stable/1908
>
> Thanks,
> Nitin
>
> From: vpp-dev@lists.fd.io  On Behalf Of Damjan Marion 
> via Lists.Fd.Io
> Sent: Monday, February 17, 2020 1:52 PM
> To: chetan bhasin 
> Cc: vpp-dev@lists.fd.io
> Subject: [EXT] Re: [vpp-dev] Regarding buffers-per-numa parameter
>
> External Email
>
> On 17 Feb 2020, at 07:37, chetan bhasin  wrote:
>
> Bottom line is stable/vpp 19.08 does not work with a higher number of buffers but
> stable/vpp2001 does. Could you please advise which area we can look at, as it
> would be difficult for us to move to vpp2001 at this time.
>
> I really have no idea what caused this problem to disappear.
> You may try to use “git bisect” to find out which commit fixed it….
>
> — 
> Damjan
> 
> 
>
> On Mon, Feb 17, 2020 at 11:01 AM chetan bhasin via Lists.Fd.Io 
>  wrote:
> Thanks Damjan for the reply!
>
> Following are my observations on Intel X710/XL710 pci-
> 1) I took latest code base from stable/vpp19.08  : Seeing error as " 
> ethernet-input l3 mac mismatch"
> With Buffers 537600
> vpp# show buffers
> Pool Name       Index NUMA  Size  Data Size  Total   Avail   Cached  Used
> default-numa-0  0     0     2496  2048       537600  510464  1319    25817
> default-numa-1  1     1     2496  2048       537600  528896  390     8314
>
> vpp# show hardware-interfaces
>   NameIdx   Link  Hardware
> BondEthernet0  3 up   BondEthernet0
>   Link speed: unknown
>   Ethernet address 3c:fd:fe:b5:5e:40
> FortyGigabitEthernet12/0/0 1 up   FortyGigabitEthernet12/0/0
>   Link speed: 40 Gbps
>   Ethernet address 3c:fd:fe:b5:5e:40
>   Intel X710/XL710 Family
> carrier up full duplex mtu 9206
> flags: admin-up pmd rx-ip4-cksum
> rx: queues 16 (max 320), desc 1024 (min 64 max 4096 align 32)
> tx: queues 16 (max 320), desc 4096 (min 64 max 4096 align 32)
> pci: device 8086:1583 subsystem 8086:0001 address :12:00.00 numa 0
> max rx packet len: 9728
> promiscuous: unicast off all-multicast on
> vlan offload: strip off filter off qinq off
> rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum qinq-strip
>outer-ipv4-cksum vlan-filter vlan-extend jumbo-frame
>scatter keep-crc
> rx offload active: ipv4-cksum
> tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum sctp-cksum
>tcp-tso outer-ipv4-cksum qinq-insert vxlan-tnl-tso
>gre-tnl-tso ipip-tnl-tso geneve-tnl-tso multi-segs
>mbuf-fast-free
> tx offload active: none
> rss avail: ipv4-frag ipv4-tcp ipv4-udp ipv4-sctp ipv4-other 
> ipv6-frag
>ipv6-tcp ipv6-udp ipv6-sctp ipv6-other l2-payload
> rss active:ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv6-frag 
> ipv6-tcp
>ipv6-udp ipv6-other
> tx burst function: i40e_xmit_pkts_vec_avx2
> rx burst function: i40e_recv_pkts_vec_avx2
> tx errors 17
> rx frames ok4585
> rx bytes ok   391078
> extended stats:
>   rx good packets   4585
>   rx good bytes   391078
>   tx errors   17
>   rx multicast packets  4345
>   rx broadcast packets   243
>   rx unknown protocol packets   4588
>   rx size 65 to 127 packets 4529
>   rx size 128 to 255 packets  32
>   rx size 256 to 511 packets  26
>   rx size 1024 to 1522 packets 1
>   tx size 65 to 127 packets   33
> FortyGigabitEthernet12/0/1 2 up   FortyGigabitEthernet12/0/1
>   Link speed: 40 Gbps
>   Ethernet address 3c:fd:fe:b5:5e:40
>   Intel X710/XL710 Family
> carrier up full 

Re: [EXT] Re: [vpp-dev] Regarding buffers-per-numa parameter

2020-02-17 Thread Nitin Saxena
Hi Prashant/Chetan,

I would try following change first to solve the problem in 1908

commit b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897b
Author: Damjan Marion 
Date:   Tue Mar 12 18:14:15 2019 +0100

vlib: don't use vector for keeping buffer indices in

Type: refactor

Change-Id: I72221b97d7e0bf5c93e20bbda4473ca67bfcdeb4
Signed-off-by: Damjan Marion damar...@cisco.com

You can also try copying src/plugins/dpdk/buffer.c from stable/2001 branch to 
stable/1908
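
As a sketch, either of these on a stable/1908 checkout would pull that change in (resolving any conflicts by hand):

  git checkout stable/1908
  git cherry-pick b6e8b1a7c8bf9f9fbd05cdc3c90111d9e7a6897b
  # or, as suggested above, take the whole file from the newer branch:
  git checkout origin/stable/2001 -- src/plugins/dpdk/buffer.c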

Thanks,
Nitin

From: vpp-dev@lists.fd.io  On Behalf Of Damjan Marion via 
Lists.Fd.Io
Sent: Monday, February 17, 2020 1:52 PM
To: chetan bhasin 
Cc: vpp-dev@lists.fd.io
Subject: [EXT] Re: [vpp-dev] Regarding buffers-per-numa parameter

External Email


On 17 Feb 2020, at 07:37, chetan bhasin <chetan.bhasin...@gmail.com> wrote:

Bottom line is stable/vpp 19.08 does not work with a higher number of buffers but
stable/vpp2001 does. Could you please advise which area we can look at, as it
would be difficult for us to move to vpp2001 at this time.

I really have no idea what caused this problem to disappear.
You may try to use “git bisect” to find out which commit fixed it….

—
Damjan



On Mon, Feb 17, 2020 at 11:01 AM chetan bhasin via Lists.Fd.Io <gmail@lists.fd.io> wrote:
Thanks Damjan for the reply!

Following are my observations on Intel X710/XL710 pci-
1) I took latest code base from stable/vpp19.08  : Seeing error as " 
ethernet-input l3 mac mismatch"
With Buffers 537600
vpp# show buffers
Pool Name       Index NUMA  Size  Data Size  Total   Avail   Cached  Used
default-numa-0  0     0     2496  2048       537600  510464  1319    25817
default-numa-1  1     1     2496  2048       537600  528896  390     8314
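
For reference, a total of 537600 buffers per pool would normally come from a startup.conf stanza along these lines (a sketch, not the exact config used here):

  buffers {
    buffers-per-numa 537600
  }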

vpp# show hardware-interfaces
  NameIdx   Link  Hardware
BondEthernet0  3 up   BondEthernet0
  Link speed: unknown
  Ethernet address 3c:fd:fe:b5:5e:40
FortyGigabitEthernet12/0/0 1 up   FortyGigabitEthernet12/0/0
  Link speed: 40 Gbps
  Ethernet address 3c:fd:fe:b5:5e:40
  Intel X710/XL710 Family
carrier up full duplex mtu 9206
flags: admin-up pmd rx-ip4-cksum
rx: queues 16 (max 320), desc 1024 (min 64 max 4096 align 32)
tx: queues 16 (max 320), desc 4096 (min 64 max 4096 align 32)
pci: device 8086:1583 subsystem 8086:0001 address :12:00.00 numa 0
max rx packet len: 9728
promiscuous: unicast off all-multicast on
vlan offload: strip off filter off qinq off
rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum qinq-strip
   outer-ipv4-cksum vlan-filter vlan-extend jumbo-frame
   scatter keep-crc
rx offload active: ipv4-cksum
tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum sctp-cksum
   tcp-tso outer-ipv4-cksum qinq-insert vxlan-tnl-tso
   gre-tnl-tso ipip-tnl-tso geneve-tnl-tso multi-segs
   mbuf-fast-free
tx offload active: none
rss avail: ipv4-frag ipv4-tcp ipv4-udp ipv4-sctp ipv4-other 
ipv6-frag
   ipv6-tcp ipv6-udp ipv6-sctp ipv6-other l2-payload
rss active:ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv6-frag ipv6-tcp
   ipv6-udp ipv6-other
tx burst function: i40e_xmit_pkts_vec_avx2
rx burst function: i40e_recv_pkts_vec_avx2
tx errors 17
rx frames ok4585
rx bytes ok   391078
extended stats:
  rx good packets   4585
  rx good bytes   391078
  tx errors   17
  rx multicast packets  4345
  rx broadcast packets   243
  rx unknown protocol packets   4588
  rx size 65 to 127 packets 4529
  rx size 128 to 255 packets  32
  rx size 256 to 511 packets  26
  rx size 1024 to 1522 packets 1
  tx size 65 to 127 packets   33
FortyGigabitEthernet12/0/1 2 up   FortyGigabitEthernet12/0/1
  Link speed: 40 Gbps
  Ethernet address 3c:fd:fe:b5:5e:40
  Intel X710/XL710 Family
carrier up full duplex mtu 9206
flags: admin-up pmd rx-ip4-cksum
rx: queues 16 (max 320), desc 1024 (min 64 max 4096 

Re: [vpp-dev] Regarding buffers-per-numa parameter

2020-02-17 Thread Damjan Marion via Lists.Fd.Io

> On 17 Feb 2020, at 07:37, chetan bhasin  wrote:
> 
> Bottom line is stable/vpp 19.08 does not work with a higher number of buffers but
> stable/vpp2001 does. Could you please advise which area we can look at, as it
> would be difficult for us to move to vpp2001 at this time.

I really have no idea what caused this problem to disappear.
You may try to use “git bisect” to find out which commit fixed it….
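
One way to run that bisect, given that stable/1908 misbehaves and stable/2001 works (branch names as used earlier in this thread; each step means rebuilding VPP and re-running the traffic test, and bisecting across release branches may also visit unrelated backport commits):

  git bisect start --term-old=broken --term-new=fixed
  git bisect broken origin/stable/1908
  git bisect fixed origin/stable/2001
  # build, run traffic, then mark the checked-out commit:
  git bisect fixed      # or: git bisect broken
  git bisect reset      # when done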

— 
Damjan

> 
> On Mon, Feb 17, 2020 at 11:01 AM chetan bhasin via Lists.Fd.Io wrote:
> Thanks Damjan for the reply!
> 
> Following are my observations on Intel X710/XL710 pci-
> 1) I took latest code base from stable/vpp19.08  : Seeing error as " 
> ethernet-input l3 mac mismatch"
> With Buffers 537600
> vpp# show buffers
> Pool Name       Index NUMA  Size  Data Size  Total   Avail   Cached  Used
> default-numa-0  0     0     2496  2048       537600  510464  1319    25817
> default-numa-1  1     1     2496  2048       537600  528896  390     8314
> 
> vpp# show hardware-interfaces
>   NameIdx   Link  Hardware
> BondEthernet0  3 up   BondEthernet0
>   Link speed: unknown
>   Ethernet address 3c:fd:fe:b5:5e:40
> FortyGigabitEthernet12/0/0 1 up   FortyGigabitEthernet12/0/0
>   Link speed: 40 Gbps
>   Ethernet address 3c:fd:fe:b5:5e:40
>   Intel X710/XL710 Family
> carrier up full duplex mtu 9206
> flags: admin-up pmd rx-ip4-cksum
> rx: queues 16 (max 320), desc 1024 (min 64 max 4096 align 32)
> tx: queues 16 (max 320), desc 4096 (min 64 max 4096 align 32)
> pci: device 8086:1583 subsystem 8086:0001 address :12:00.00 numa 0
> max rx packet len: 9728
> promiscuous: unicast off all-multicast on
> vlan offload: strip off filter off qinq off
> rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum qinq-strip
>outer-ipv4-cksum vlan-filter vlan-extend jumbo-frame
>scatter keep-crc
> rx offload active: ipv4-cksum
> tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum sctp-cksum
>tcp-tso outer-ipv4-cksum qinq-insert vxlan-tnl-tso
>gre-tnl-tso ipip-tnl-tso geneve-tnl-tso multi-segs
>mbuf-fast-free
> tx offload active: none
> rss avail: ipv4-frag ipv4-tcp ipv4-udp ipv4-sctp ipv4-other 
> ipv6-frag
>ipv6-tcp ipv6-udp ipv6-sctp ipv6-other l2-payload
> rss active:ipv4-frag ipv4-tcp ipv4-udp ipv4-other ipv6-frag 
> ipv6-tcp
>ipv6-udp ipv6-other
> tx burst function: i40e_xmit_pkts_vec_avx2
> rx burst function: i40e_recv_pkts_vec_avx2
> tx errors 17
> rx frames ok4585
> rx bytes ok   391078
> extended stats:
>   rx good packets   4585
>   rx good bytes   391078
>   tx errors   17
>   rx multicast packets  4345
>   rx broadcast packets   243
>   rx unknown protocol packets   4588
>   rx size 65 to 127 packets 4529
>   rx size 128 to 255 packets  32
>   rx size 256 to 511 packets  26
>   rx size 1024 to 1522 packets 1
>   tx size 65 to 127 packets   33
> FortyGigabitEthernet12/0/1 2 up   FortyGigabitEthernet12/0/1
>   Link speed: 40 Gbps
>   Ethernet address 3c:fd:fe:b5:5e:40
>   Intel X710/XL710 Family
> carrier up full duplex mtu 9206
> flags: admin-up pmd rx-ip4-cksum
> rx: queues 16 (max 320), desc 1024 (min 64 max 4096 align 32)
> tx: queues 16 (max 320), desc 4096 (min 64 max 4096 align 32)
> pci: device 8086:1583 subsystem 8086: address :12:00.01 numa 0
> max rx packet len: 9728
> promiscuous: unicast off all-multicast on
> vlan offload: strip off filter off qinq off
> rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum qinq-strip
>outer-ipv4-cksum vlan-filter vlan-extend jumbo-frame
>scatter keep-crc
> rx offload active: ipv4-cksum
> tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum sctp-cksum
>tcp-tso outer-ipv4-cksum qinq-insert vxlan-tnl-tso
>gre-tnl-tso ipip-tnl-tso geneve-tnl-tso multi-segs
>mbuf-fast-free
> tx offload active: none
> rss avail: ipv4-frag ipv4-tcp ipv4-udp