Re: [vpp-dev] dpdk tx drops

2017-08-04 Thread Dave Barach (dbarach)
I don’t think you’ll have a very hard time moving to 17.07, but your mileage 
may vary.  [“Famous last words”]

It would pay to look at differences in the sample plugins, to work out what’s 
changed. We’ve changed the tree layout a good deal, and we’ve changed the way 
that plugins are built.

Thanks… Dave

From: SAKTHIVEL ANAND S [mailto:anand.s...@gmail.com]
Sent: Friday, August 4, 2017 10:27 AM
To: Dave Barach (dbarach) 
Cc: vpp-dev 
Subject: Re: [vpp-dev] dpdk tx drops

Sure, Dave. I will move to a later version of VPP, either 17.04 or 17.07.
Will my plugins written for 16.06 work directly on these newer versions?
By the way, I tried increasing mbufs from 16384 (the default) to 32768, and it
does not seem to help.
Thanks..   Sakthivel S

On Fri, Aug 4, 2017 at 6:51 PM, Dave Barach (dbarach) wrote:
Does this problem occur w/ vpp 17.07?

The software you mention is a full year old, and is so different from 
master/latest that anything is possible. As of this writing, the community has 
not announced an LTS plan. As the vpp project tech lead, I would be shocked if 
the community decided to support 16.06 as an LTS release. It has had minimal 
ongoing maintenance for a year.

You might try increasing the number of mbufs configured in 
/etc/vpp/startup.conf, but I’m guessing that will have no effect.
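For reference, a minimal sketch of the relevant stanza in /etc/vpp/startup.conf
(the 32768 value is illustrative, and the exact knob name may differ between
releases):

  dpdk {
    # num-mbufs sizes the DPDK buffer pool; 16384 is the default
    # mentioned in this thread, 32768 is an illustrative bump
    num-mbufs 32768
  }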

“show hardware”, “show error”, and “show run” stats would help understand 
what’s going on.
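From the host shell, those stats can be collected with the vppctl wrapper
(assuming a standard install):

  vppctl show hardware
  vppctl show error
  vppctl show run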

Make sure that you haven’t oversubscribed the physical server. Otherwise, ESXi 
will deschedule vpp and cause massive traffic loss.

Thanks… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of SAKTHIVEL ANAND S
Sent: Friday, August 4, 2017 8:59 AM
To: vpp-dev
Subject: [vpp-dev] dpdk tx drops


Hi
I am using VPP 16.06 on an Ubuntu 16.04.2 VM under VMware ESXi 5.5 for layer 4/5
packet forwarding.
I use Intel 10G NICs in PCIe passthrough configuration at both ingress and
egress, and only one core for DPDK.
From another machine I push traffic at a rate of ~1 Gbps.

I see the error "dpdk tx drops" on the machine running VPP packet forwarding, on
the tx interface. The rx interface is fine.
Can someone offer some clues about this error and how to avoid it?

NIC type: Intel X710 10GbE

Thanks in advance.
--
Thanks
Sakthivel S OM



--
Thanks
Sakthivel S OM
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] query on L2 ACL for VLANs

2017-08-04 Thread John Lo (loj)
Hi Balaji,

I think the problem is that you did not configure an IP address on the
sub-interface; thus, IP4 forwarding is not enabled. You can check the state of
the various forwarding features on an interface or sub-interface with the command:
  show int feat TenGigabitEthernet1/0/0.100

If an interface does not have an IP4 address configured, you will see the
ip4-unicast feature arc listed as ip4-drop:
  ip4-unicast:
    ip4-drop
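A hedged sketch of the fix: give the sub-interface an address in its own subnet
(172.27.30.5/24 here is purely illustrative) and re-check the feature arc:

  set int ip address TenGigabitEthernet1/0/0.100 172.27.30.5/24
  show int feat TenGigabitEthernet1/0/0.100

After this, the ip4-unicast arc should no longer list ip4-drop.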

Regards,
John

From: Balaji Kn [mailto:balaji.s...@gmail.com]
Sent: Friday, August 04, 2017 7:28 AM
To: John Lo (loj) 
Cc: vpp-dev@lists.fd.io; l.s.abhil...@gmail.com
Subject: Re: [vpp-dev] query on L2 ACL for VLANs

Hi John,

Thanks for the quick response.
I tried, as you suggested, associating the input ACL with the IP-forwarding path
for tagged packets. Ingress packets are not hitting the ACL node and are dropped.
However, ACLs matching on src/dst IP, MAC address, or UDP port numbers work fine.

These are the configuration steps I followed.

set int ip address TenGigabitEthernet1/0/0 172.27.28.5/24
set interface state TenGigabitEthernet1/0/0 up
set int ip address TenGigabitEthernet1/0/1 172.27.29.5/24
set interface state TenGigabitEthernet1/0/1 up
create sub-interfaces TenGigabitEthernet1/0/0 100
set interface state TenGigabitEthernet1/0/0.100 up

ACL configuration
classify table mask l2 tag1
classify session acl-hit-next deny opaque-index 0 table-index 0 match l2 tag1 100
set int input acl intfc TenGigabitEthernet1/0/0.100 ip4-table 0

Trace captured on VPP
00:16:11:820587: dpdk-input
  TenGigabitEthernet1/0/0 rx queue 0
  buffer 0x4d40: current data 0, length 124, free-list 0, clone-count 0, totlen-nifb 0, trace 0x0
  PKT MBUF: port 0, nb_segs 1, pkt_len 124
    buf_len 2176, data_len 124, ol_flags 0x180, data_off 128, phys_addr 0x6de35040
packet_type 0x291
Packet Offload Flags
  PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid
  PKT_RX_L4_CKSUM_GOOD (0x0100) L4 cksum of RX pkt. is valid
Packet Types
  RTE_PTYPE_L2_ETHER (0x0001) Ethernet packet
      RTE_PTYPE_L3_IPV4_EXT_UNKNOWN (0x0090) IPv4 packet with or without extension headers
  RTE_PTYPE_L4_UDP (0x0200) UDP packet
  IP4: 00:10:94:00:00:01 -> 24:6e:96:32:7f:98 802.1q vlan 100
  UDP: 172.27.28.6 -> 172.27.29.6
    tos 0x00, ttl 255, length 106, checksum 0x2a38
    fragment id 0x0008
  UDP: 1024 -> 1024
    length 86, checksum 0x
00:16:11:820596: ethernet-input
  IP4: 00:10:94:00:00:01 -> 24:6e:96:32:7f:98 802.1q vlan 100
00:16:11:820616: ip4-input
  UDP: 172.27.28.6 -> 172.27.29.6
    tos 0x00, ttl 255, length 106, checksum 0x2a38
    fragment id 0x0008
  UDP: 1024 -> 1024
    length 86, checksum 0x
00:16:11:820624: ip4-drop
  UDP: 172.27.28.6 -> 172.27.29.6
    tos 0x00, ttl 255, length 106, checksum 0x2a38
    fragment id 0x0008
  UDP: 1024 -> 1024
    length 86, checksum 0x
00:16:11:820627: error-drop
  ip4-input: ip4 adjacency drop

I looked at the VPP code: the packet is dropped while looking up the interface's
feature arc (i.e., while checking which features are enabled on the interface). I
had assumed that associating the sub-interface with the ACL would enable the
feature.

Let me know if I missed anything.

Regards,
Balaji

On Wed, Aug 2, 2017 at 9:26 PM, John Lo (loj) wrote:
Hi Balaji,

In order to make an input ACL work on the IPv4 forwarding path, you need to set
it as an ip4-table on the interface or sub-interface. In your case, since the
packets carry VLAN tags, it needs to be set on the sub-interface:
set int input acl intfc TenGigabitEthernet1/0/0.100 ip4-table 0

The names in the CLI [ip4-table|ip6-table|l2-table] indicate the forwarding path
to which the ACL is applied, not which packet header the ACL matches. The match
itself is specified by the table/session used in the ACL.
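Combining this with the sub-interface IP address point from the later reply, a
sketch of a complete working sequence for this use case (the sub-interface
address is illustrative; the rest is taken from the thread):

  create sub-interfaces TenGigabitEthernet1/0/0 100
  set interface state TenGigabitEthernet1/0/0.100 up
  set int ip address TenGigabitEthernet1/0/0.100 172.27.30.5/24
  classify table mask l2 tag1
  classify session acl-hit-next permit opaque-index 0 table-index 0 match l2 tag1 100
  set int input acl intfc TenGigabitEthernet1/0/0.100 ip4-table 0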

Regards,
John

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of Balaji Kn
Sent: Wednesday, August 02, 2017 9:41 AM
To: vpp-dev@lists.fd.io
Cc: l.s.abhil...@gmail.com
Subject: [vpp-dev] query on L2 ACL for VLANs

Hello,

I am using VPP 17.07 release code (tag v17.07).

DBGvpp# show int address
TenGigabitEthernet1/0/0 (up):
  172.27.28.5/24
TenGigabitEthernet1/0/1 (up):
  172.27.29.5/24

My use case is to allow packets based on VLANs. I added an ACL rule in the
classify table as below.

classify table mask l2 tag1
classify session acl-hit-next permit opaque-index 0 table-index 0 match l2 tag1 100
set int input acl intfc TenGigabitEthernet1/0/0 l2-table 0

Tagged packets were dropped in the ethernet-input node.

00:08:39:270674: dpdk-input
  TenGigabitEthernet1/0/0 rx queue 0
  buffer 0x4d67: current data 0, length 124, free-list 0, clone-count 0, totlen-nifb 0, trace 0x1
  PKT MBUF: port 0, nb_segs 1, pkt_len 124
buf_len 2176, 

Re: [vpp-dev] Million prefixes - FIB test

2017-08-04 Thread Neale Ranns (nranns)
Hi Vitaly,

Can you explain to me what you mean by ‘stops responding’? If you can execute 
‘sh fib mem’ that implies it’s still responsive?

Are you sure you are adding 1 million unique addresses at step 2?
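One way to rule that out is to derive each /32 from a counter; a sketch (one
vppctl exec per route is slow, but every prefix is unique by construction):

  #!/bin/bash
  # Derive each /32 from a counter so all 1,000,000 prefixes are unique
  # (covers 10.0.0.0 .. 10.15.66.63).
  for ((i = 0; i < 1000000; i++)); do
    o2=$(( (i >> 16) & 255 ))
    o3=$(( (i >> 8) & 255 ))
    o4=$(( i & 255 ))
    vppctl ip route add 10.$o2.$o3.$o4/32 via 1.1.1.1
  done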

Regards,
neale

From:  on behalf of Vitaly I 
Date: Friday, 4 August 2017 at 14:56
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] Million prefixes - FIB test

Hi,
VPP 17.04. I am trying to add 1 million prefixes.
1) up interface
vpp# set int state GigabitEthernet82/0/3 up
vpp# set int l3 GigabitEthernet82/0/3
vpp# set int ip addr GigabitEthernet82/0/3 1.1.1.10/24
vpp# set ip arp GigabitEthernet82/0/3 1.1.1.1 11:22:33:44:55:66 static
2) and repeat the vppctl command with something like
vppctl ip route add 10.x.x.x/32 via 1.1.1.1
3) check fib
vpp# show fib mem

My tests always end at ~162k prefixes and vpp stops responding.
FIB memory
 Name                    Size  in-use / allocated  totals
 Entry                     72  161883 / 161883     11655576/11655576
 Entry Source              32  161884 / 161884     5180288/5180288
 Entry Path-Extensions     56       0 / 0          0/0
 multicast-Entry          192       6 / 6          1152/1152
 Path-list                 40      15 / 16         600/640
 uRPF-list                 16      11 / 11         176/176
 Path                      80      15 / 16         1200/1280
 Node-list elements        20  161889 / 161889     3237780/3237780
 Node-list heads            8      18 / 18         144/144

What do I need to do to increase the number of FIB entries?

Thanks.
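One knob worth checking (a hedged suggestion, not confirmed in this thread): FIB
structures are allocated from VPP's main heap, whose size is set by the top-level
heapsize parameter in /etc/vpp/startup.conf. A sketch, with 2G as an illustrative
value:

  heapsize 2G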
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] Million prefixes - FIB test

2017-08-04 Thread Vitaly I
Hi,

VPP 17.04. I am trying to add 1 million prefixes.
1) up interface
vpp# set int state GigabitEthernet82/0/3 up
vpp# set int l3 GigabitEthernet82/0/3
vpp# set int ip addr GigabitEthernet82/0/3 1.1.1.10/24
vpp# set ip arp GigabitEthernet82/0/3 1.1.1.1 11:22:33:44:55:66 static
2) and repeat the vppctl command with something like
vppctl ip route add 10.x.x.x/32 via 1.1.1.1
3) check fib
vpp# show fib mem

My tests always end at ~162k prefixes and vpp stops responding.
FIB memory
 Name                    Size  in-use / allocated  totals
 Entry                     72  161883 / 161883     11655576/11655576
 Entry Source              32  161884 / 161884     5180288/5180288
 Entry Path-Extensions     56       0 / 0          0/0
 multicast-Entry          192       6 / 6          1152/1152
 Path-list                 40      15 / 16         600/640
 uRPF-list                 16      11 / 11         176/176
 Path                      80      15 / 16         1200/1280
 Node-list elements        20  161889 / 161889     3237780/3237780
 Node-list heads            8      18 / 18         144/144

What do I need to do to increase the number of FIB entries?

Thanks.
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] dpdk tx drops

2017-08-04 Thread Dave Barach (dbarach)
Does this problem occur w/ vpp 17.07?

The software you mention is a full year old, and is so different from 
master/latest that anything is possible. As of this writing, the community has 
not announced an LTS plan. As the vpp project tech lead, I would be shocked if 
the community decided to support 16.06 as an LTS release. It has had minimal 
ongoing maintenance for a year.

You might try increasing the number of mbufs configured in 
/etc/vpp/startup.conf, but I’m guessing that will have no effect.

“show hardware”, “show error”, and “show run” stats would help understand 
what’s going on.

Make sure that you haven’t oversubscribed the physical server. Otherwise, ESXi 
will deschedule vpp and cause massive traffic loss.

Thanks… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of SAKTHIVEL ANAND S
Sent: Friday, August 4, 2017 8:59 AM
To: vpp-dev 
Subject: [vpp-dev] dpdk tx drops


Hi
I am using VPP 16.06 on an Ubuntu 16.04.2 VM under VMware ESXi 5.5 for layer 4/5
packet forwarding.
I use Intel 10G NICs in PCIe passthrough configuration at both ingress and
egress, and only one core for DPDK.
From another machine I push traffic at a rate of ~1 Gbps.

I see the error "dpdk tx drops" on the machine running VPP packet forwarding, on
the tx interface. The rx interface is fine.
Can someone offer some clues about this error and how to avoid it?

NIC type: Intel X710 10GbE

Thanks in advance.
--
Thanks
Sakthivel S OM
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] dpdk tx drops

2017-08-04 Thread SAKTHIVEL ANAND S
Adding the VPP show error dump:

root@ubuntu:~# vppctl sh err
   Count      Node                         Reason
       1      ip4-arp                      ARP requests sent
     392      ip4-icmp-input               echo replies sent
      38      ethernet-input               unknown ethernet type
      13      arp-input                    ARP replies sent
       1      arp-input                    ARP replies received
  104065      TenGigabitEthernet3/0/1-tx   Tx packet drops (dpdk tx failure)
root@ubuntu:~#
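For context: this counter increments when the DPDK tx ring will not accept all
of the packets offered to it. One hedged knob to try, assuming this release
supports per-device descriptor tuning in the dpdk stanza of
/etc/vpp/startup.conf (the stanza and the 1024 value are illustrative, not
verified for 16.06):

  dpdk {
    dev default {
      num-tx-desc 1024
    }
  }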

-Sakthivel S

On Fri, Aug 4, 2017 at 6:28 PM, SAKTHIVEL ANAND S wrote:

>
> Hi
>
> I am using VPP 16.06 on an Ubuntu 16.04.2 VM under VMware ESXi 5.5 for layer
> 4/5 packet forwarding.
> I use Intel 10G NICs in PCIe passthrough configuration at both ingress and
> egress, and only one core for DPDK.
> From another machine I push traffic at a rate of ~1 Gbps.
>
> I see the error "dpdk tx drops" on the machine running VPP packet forwarding,
> on the tx interface. The rx interface is fine.
>
> Can someone offer some clues about this error and how to avoid it?
>
> NIC type: Intel X710 10GbE
>
> Thanks in advance.
> --
> Thanks
> Sakthivel S OM
>
>


-- 
Thanks
Sakthivel S OM
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev