Re: [vpp-dev] VPP - ixia tests failing

2019-03-01 Thread carlito nueno
In the above iperf3 test I only have three machines (macbook, windows, and
vpp), and all three are connected via Ethernet.

macbook <--> vpp <--> windows

MTU on all interfaces is the default 1500

"show nat virtual reassembly" shows:
NAT IPv4 virtual fragmentation reassembly is ENABLED
 max-reassemblies 1024
 max-fragments 5
 timeout 2sec
 reassemblies:
NAT IPv6 virtual fragmentation reassembly is ENABLED
 max-reassemblies 1024
 max-fragments 5
 timeout 2sec
 reassemblies:
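
The "Maximum reassemblies exceeded" drop suggests the UDP test is producing
fragmented datagrams faster than the defaults above allow. Two things I will
try next (a sketch only; the exact option names need to be verified against
the "nat virtual-reassembly" CLI help, and the iperf3 flags are the standard
-u/-b/-l):

  # raise the NAT virtual reassembly limits (assumed CLI syntax)
  vppctl nat virtual-reassembly ip4 max-reassemblies 4096 timeout 5

  # or avoid fragmentation entirely by keeping UDP datagrams under the
  # 1500-byte MTU on the retest
  iperf3 -c 10.155.3.21 -u -b 100M -l 1200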

Thanks
On Fri, Mar 1, 2019 at 1:03 AM Benoit Ganne (bganne) 
wrote:

> Again, VPP tells you the error: on packet 20 (1st packet on
> GigabitEthernet5/0/0), you get a fragmented IPv4 packet and NAT reassembly
> drops the fragment. Check the status of the reassembly with "show nat
> virtual-reassembly" and update your conf accordingly with "nat
> virtual-reassembly".
> That said, you should not get fragmented packets in the 1st place in a
> correctly configured network. Check the MTU of all your interfaces
> (including clients, AP etc.).
>
> Best
> Ben
>
> > -Original Message-
> > From: Carlito Nueno 
> > Sent: Friday, 1 March 2019 00:09
> > To: Carlito Nueno 
> > Cc: Benoit Ganne (bganne) ; vpp-dev@lists.fd.io
> > Subject: Re: [vpp-dev] VPP - ixia tests failing
> >
> > Ethernet hardware:
> > Ethernet controller: Intel Corporation I211 Gigabit Network Connection
> > (rev 03)
> >
> > I ran a few tests using iperf3, between:
> > macbook pro: 10.155.3.21 <--> connected to vpp port 10.155.3.1
> > Windows: 10.155.6.111 <--> connected to vpp port 10.155.6.1
> >
> > Ping works from macbook to windows and vice versa.
> >
> > iperf3 TCP works: iperf3 -s -B
> > macbook (10.155.3.21) as server <--> windows (10.155.6.111) as client and
> > vice versa
> >
> > iperf3 tcp trace macbook server:
> > https://gist.github.com/ironpillow/3540616a5b32638e895023e3b3e13be8
> > iperf3 tcp trace windows server:
> > https://gist.github.com/ironpillow/2b9a421d4e6fbb2a4751727b34f8f5c8
> >
> > iperf3 UDP ONLY works
> > windows (10.155.6.111) as server <--> macbook (10.155.3.21) as client.
> > iperf3 udp trace windows server:
> > https://gist.github.com/ironpillow/baecf1391864fba4e79a24670116db60
> >
> > iperf3 UDP does NOT work
> > macbook (10.155.3.21) as server <--> windows (10.155.6.111) as client.
> >
> > 01:17:50:122097: error-drop
> >   nat44-in2out-reass: Maximum reassemblies exceeded
> >
> > iperf3 udp trace macbook server:
> > https://gist.github.com/ironpillow/ae93db2224de2730ce0115d8df22c9d1
> >
> > Thanks.
> >
> > On Thu, Feb 28, 2019 at 10:22 AM carlito nueno via Lists.Fd.Io
> >  wrote:
> > >
> > > Hi Benoit,
> > >
> > > I had a similar issue without the AP. I connected:
> > > client (ixia) --> GigabitEthernet4/0/0.3 --> vpp -->
> > > GigabitEthernet5/0/0 (ixia)
> > >
> > > Same problem. Ixia on GigabitEthernet5/0/0 was not receiving packets.
> > > But traffic the other way was working fine.
> > >
> > > Thanks
> > >
> > > On Thu, Feb 28, 2019 at 12:49 AM Benoit Ganne (bganne)
> >  wrote:
> > > >
> > > > Hi Carlito,
> > > >
> > > > Something looks fishy in the 1st trace (the failing one): dpdk-input
> > > > advertises a 60B packet length (which should not happen; this is too
> > > > small for Ethernet anyway), and you can see ip4-input reporting that
> > > > the advertised packet length in the IP header is 768B, plus an
> > > > incorrect checksum.
> > > > Finally, error-drop gracefully tells you why it decided to drop it:
> > > > ip4-input: ip4 length > l2 length. And it is probably right.
> > > > I would first check the packets you receive from the AP, as they seem
> > > > to be truncated. That could be an AP issue or (more probably) a dpdk
> > > > driver issue.
> > > >
> > > > Best
> > > > Ben
> > > >
> > > > > -Original Message-
> > > > > From: vpp-dev@lists.fd.io  On Behalf Of
> > > > > carlito nueno
> > > > > Sent: Thursday, 28 February 2019 03:44
> > > > > To: vpp-dev@lists.fd.io
> > > > > Subject: [vpp-dev] VPP - ixia tests failing
> > > > >
> > > > > Hi all,
> > > > >
> > > > > I got a chance to get my hands on an ixia testing box.
> > > > > Unfortunately I was not able to test because upstream (from
> > > > > ethernet to client) was not working:
> > > > >
> > > > > Not working (ixia on ethernet is not receiving packets):
> > > > > client (ixia) --> WiFi AP --> GigabitEthernet4/0/0.3 --> vpp -->
> > > > > GigabitEthernet5/0/0 (ixia)
> > > > >
> > > > > The other way is working (ixia client is receiving packets):
> > > > > (ixia) GigabitEthernet5/0/0 --> vpp --> GigabitEthernet4/0/0.3 -->
> > > > > wifi AP --> client (ixia)
> > > > >
> > > > > Both TCP and UDP tests failed. Packets are being dropped by VPP
> > > > > (error-drop, null-node: blackholed packets).
> > > > >
> > > > > running: vpp v18.10-rc0~229-g869031c5
> > > > >
> > > > > ixia mac addresses:
> > > > > client: 00:21:dd:xx:xx:xx
> > > > > server: 00:11:dd:xx:xx:xx
> > > > >
> > > > > wifi access point mac address:
> > > > > AP: a4:c5:ef:xx:xx:xx
> > > > >
> > > > > I don't have ACLs setup.
> > > > 

Re: [vpp-dev] Can I increase vlib_buffer->opaque[10]?

2019-03-01 Thread Dave Barach via Lists.Fd.Io
Please put your data into the opaque2 [union].

Thanks... Dave

P.S. Any patch which resizes the primary buffer opaque will be rejected.
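
For anyone searching the archive later, here is a minimal sketch of what "put
your data into opaque2" can look like from a plugin. The struct and names
below (my_l2_meta_t, my_buffer2, my_mark_buffer) are invented for
illustration; the size check uses STATIC_ASSERT from vppinfra (a plain
_Static_assert works as well):

  #include <vlib/vlib.h>

  /* Hypothetical per-packet L2 metadata, kept in the opaque2 area on the
     second cacheline. */
  typedef struct
  {
    u32 my_l2_flags;          /* example custom field */
    u16 my_vlan_depth;        /* example custom field */
  } my_l2_meta_t;

  /* Refuse to build if the custom struct ever outgrows opaque2. */
  STATIC_ASSERT (sizeof (my_l2_meta_t) <=
                 sizeof (((vlib_buffer_t *) 0)->opaque2),
                 "my_l2_meta_t must fit in vlib_buffer_t->opaque2");

  /* Accessor in the spirit of the vnet_buffer()/vnet_buffer2() macros. */
  #define my_buffer2(b) ((my_l2_meta_t *) (b)->opaque2)

  static inline void
  my_mark_buffer (vlib_buffer_t * b)
  {
    my_buffer2 (b)->my_l2_flags = 0;
    my_buffer2 (b)->my_vlan_depth = 1;
  }

This keeps the first-cacheline opaque[10] untouched, which is exactly the
constraint the P.S. above is about.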

From: vpp-dev@lists.fd.io  On Behalf Of Gudimetla, Leela 
Sankar
Sent: Thursday, February 28, 2019 6:20 PM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] Can I increase vlib_buffer->opaque[10]?

Hello,

I need to add some custom fields to the existing vlib_buffer metadata space,
i.e. opaque[10]. The new field is related to L2, so it logically makes sense
to add it to the l2 structure in vnet_buffer_opaque_t.

But it does not fit into the current real estate, i.e. opaque[10], so I have
increased it to opaque[12] in vlib_buffer_t. There is no compilation failure,
and the functionality also looks fine so far.

Is it valid to increase opaque[10] in vlib_buffer?
Is there any restriction based on the cacheline marker
'CLIB_CACHE_LINE_ALIGN_MARK (cacheline1);'?

Thanks,
Leela sankar


Re: [csit-dev] [vpp-dev] Heads up: API cleanup

2019-03-01 Thread Maciek Konstantynowicz (mkonstan) via Lists.Fd.Io
// re-adding vpp-dev, Neale, Ole

Vratko, thanks for spotting!
I spoke with Neale; he confirmed the route add/del API changes should be
transparent to VAT, per his updates in [1].
If you spotted any VatExecutor breakage, it would need to be fixed before [2]
is merged.
And as CSIT has embarked on the VAT-to-PAPI migration, it makes sense to use
the new PAPI syntax for all the PAPI L1 keywords listed in [3].
Neale has kindly agreed to give a heads-up here before merging [2], most
likely next week.
Cheers,
-Maciek

[1] https://gerrit.fd.io/r/#/c/12296/29/src/vat/api_format.c
[2] https://gerrit.fd.io/r/#/c/12296/
[3] https://git.fd.io/csit/tree/resources/libraries/python

On 28 Feb 2019, at 12:19, Vratko Polak -X (vrpolak - PANTHEON TECHNOLOGIES at
Cisco) via Lists.Fd.Io <vrpolak=cisco@lists.fd.io> wrote:

I see this having a big impact on our tests
(breaking VatExecutor, needing to update L1 keywords with PapiExecutor).
The change is big; look at the .api files.
For example, I see ip_add_del_route
is being renamed to ip_route_add_del.

I believe we should start preparing in advance.

Vratko.

From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Neale Ranns via
Lists.Fd.Io
Sent: Thursday, 2019-February-28 09:52
To: Ole Troan <otr...@employees.org>; vpp-dev <vpp-dev@lists.fd.io>
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Heads up: API cleanup


Hi,

In the spirit of this work I’d like to propose a change to the route add/del
APIs to make use of the fib_path structure. The fib_path structure, which
describes how to deal with the matched packets, is consistent across each of
the route types: IP, MPLS, BIER and ABF.
By using a fib_path we can also pass more than one path for each route update,
which is how most IP unicast protocols would normally operate. It is notably
faster to add a route once with multiple paths than to add the same route
multiple times with one path each.

  https://gerrit.fd.io/r/#/c/12296/
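
For those not following the change, the shape of the proposal is roughly as
sketched below. The field names and layout are simplified for illustration
only; the authoritative definitions are in the gerrit change above:

  /* simplified sketch only - see the change for the real types */
  typedef fib_path {
    u32 sw_if_index;          /* tx interface, where applicable */
    u8 weight;                /* relative weight among the paths */
    u8 preference;
    vl_api_address_t nh;      /* next hop, heavily simplified here */
  };

  define ip_route_add_del {
    u32 client_index;
    u32 context;
    bool is_add;
    bool is_multipath;
    u32 table_id;
    vl_api_prefix_t prefix;
    u8 n_paths;
    vl_api_fib_path_t paths[n_paths];
  };

In other words, a single ip_route_add_del carries the prefix once plus an
array of paths, instead of one message per path.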

regards,
neale


From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> on behalf of Ole Troan
<otr...@employees.org>
Date: Monday, 25 February 2019 at 10:02
To: vpp-dev <vpp-dev@lists.fd.io>
Subject: [vpp-dev] Heads up: API cleanup

Apologies in advance. This is going to be painful for everyone using the API. 
In the neverending saga of cleaning up the APIs and making types more explicit, 
here are the changes I’d like to get in for 19.04:

u32 sw_if_index -> vl_api_interface_index_t

u8 ip4_address[4] -> vl_api_ip4_address_t

u8 ip6_address[16] -> vl_api_ip6_address_t

u8 is_ip6
u8 address[16] -> vl_api_address_t

u8 prefix_len
u8 prefix[4/16] -> vl_api_ip4_prefix_t / vl_api_ip6_prefix_t / vl_api_prefix_t

u8 is_ -> bool

u8 name[64] -> string name

u8 mac_address[6] -> vl_api_mac_address_t

u8 data[0]
u32 length -> u8 data[length]

The explicit types allow for much better type checking on the client side, as
well as automatic mapping into the respective native types there, e.g.
vl_api_address_t mapping into a Python IPAddress object.
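
As a concrete illustration (the message name and fields below are invented for
the example; only the types come from the list above), an old-style definition
and its cleaned-up equivalent would look roughly like this:

  /* before (hypothetical message) */
  define example_set_prefix {
    u32 client_index;
    u32 context;
    u32 sw_if_index;
    u8 is_ip6;
    u8 address[16];
    u8 prefix_len;
    u8 is_add;
    u8 tag[64];
  };

  /* after, with explicit types */
  define example_set_prefix {
    u32 client_index;
    u32 context;
    vl_api_interface_index_t sw_if_index;
    vl_api_prefix_t prefix;   /* replaces is_ip6 + address[16] + prefix_len */
    bool is_add;
    string tag;
  };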

Best regards,
Ole


Re: [vpp-dev] VPP - ixia tests failing

2019-03-01 Thread Benoit Ganne (bganne) via Lists.Fd.Io
Again, VPP tells you the error: on packet 20 (1st packet on 
GigabitEthernet5/0/0), you get a fragmented IPv4 packet and NAT reassembly 
drops the fragment. Check the status of the reassembly with "show nat 
virtual-reassembly" and update your conf accordingly with "nat 
virtual-reassembly".
That said, you should not get fragmented packets in the 1st place in a 
correctly configured network. Check the MTU of all your interfaces (including 
clients, AP etc.).

Best
Ben

> -Original Message-
> From: Carlito Nueno 
> Sent: Friday, 1 March 2019 00:09
> To: Carlito Nueno 
> Cc: Benoit Ganne (bganne) ; vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] VPP - ixia tests failing
> 
> Ethernet hardware:
> Ethernet controller: Intel Corporation I211 Gigabit Network Connection
> (rev 03)
> 
> I ran a few tests using iperf3, between:
> macbook pro: 10.155.3.21 <--> connected to vpp port 10.155.3.1
> Windows: 10.155.6.111 <--> connected to vpp port 10.155.6.1
> 
> Ping works from macbook to windows and vice versa.
> 
> iperf3 TCP works: iperf3 -s -B
> macbook (10.155.3.21) as server <--> windows (10.155.6.111) as client and
> vice versa
> 
> iperf3 tcp trace macbook server:
> https://gist.github.com/ironpillow/3540616a5b32638e895023e3b3e13be8
> iperf3 tcp trace windows server:
> https://gist.github.com/ironpillow/2b9a421d4e6fbb2a4751727b34f8f5c8
> 
> iperf3 UDP ONLY works
> windows (10.155.6.111) as server <--> macbook (10.155.3.21) as client.
> iperf3 udp trace windows server:
> https://gist.github.com/ironpillow/baecf1391864fba4e79a24670116db60
> 
> iperf3 UDP does NOT work
> macbook (10.155.3.21) as server <--> windows (10.155.6.111) as client.
> 
> 01:17:50:122097: error-drop
>   nat44-in2out-reass: Maximum reassemblies exceeded
> 
> iperf3 udp trace macbook server:
> https://gist.github.com/ironpillow/ae93db2224de2730ce0115d8df22c9d1
> 
> Thanks.
> 
> On Thu, Feb 28, 2019 at 10:22 AM carlito nueno via Lists.Fd.Io
>  wrote:
> >
> > Hi Benoit,
> >
> > I had a similar issue without the AP. I connected:
> > client (ixia) --> GigabitEthernet4/0/0.3 --> vpp -->
> > GigabitEthernet5/0/0 (ixia)
> >
> > Same problem. Ixia on GigabitEthernet5/0/0 was not receiving packets.
> > But traffic the other way was working fine.
> >
> > Thanks
> >
> > On Thu, Feb 28, 2019 at 12:49 AM Benoit Ganne (bganne)
>  wrote:
> > >
> > > Hi Carlito,
> > >
> > > Something looks fishy in the 1st trace (the failing one): dpdk-input
> > > advertises a 60B packet length (which should not happen; this is too
> > > small for Ethernet anyway), and you can see ip4-input reporting that
> > > the advertised packet length in the IP header is 768B, plus an
> > > incorrect checksum.
> > > Finally, error-drop gracefully tells you why it decided to drop it:
> > > ip4-input: ip4 length > l2 length. And it is probably right.
> > > I would first check the packets you receive from the AP, as they seem
> > > to be truncated. That could be an AP issue or (more probably) a dpdk
> > > driver issue.
> > >
> > > Best
> > > Ben
> > >
> > > > -Original Message-
> > > > From: vpp-dev@lists.fd.io  On Behalf Of
> > > > carlito nueno
> > > > Sent: Thursday, 28 February 2019 03:44
> > > > To: vpp-dev@lists.fd.io
> > > > Subject: [vpp-dev] VPP - ixia tests failing
> > > >
> > > > Hi all,
> > > >
> > > > I got a chance to get my hands on an ixia testing box.
> > > > Unfortunately I was not able to test because upstream (from
> > > > ethernet to client) was not working:
> > > >
> > > > Not working (ixia on ethernet is not receiving packets):
> > > > client (ixia) --> WiFi AP --> GigabitEthernet4/0/0.3 --> vpp -->
> > > > GigabitEthernet5/0/0 (ixia)
> > > >
> > > > The other way is working (ixia client is receiving packets):
> > > > (ixia) GigabitEthernet5/0/0 --> vpp --> GigabitEthernet4/0/0.3 -->
> > > > wifi AP --> client (ixia)
> > > >
> > > > Both TCP and UDP tests failed. Packets are being dropped by VPP
> > > > (error-drop, null-node: blackholed packets).
> > > >
> > > > running: vpp v18.10-rc0~229-g869031c5
> > > >
> > > > ixia mac addresses:
> > > > client: 00:21:dd:xx:xx:xx
> > > > server: 00:11:dd:xx:xx:xx
> > > >
> > > > wifi access point mac address:
> > > > AP: a4:c5:ef:xx:xx:xx
> > > >
> > > > I don't have ACLs setup.
> > > >
> > > > Here is my vpp.conf and packet capture:
> > > > https://gist.github.com/ironpillow/9b1c5dd0905135ff09eba6067db179ae
> > > >
> > > > Any advice?
> > > >
> > > > Thanks