Re: [vpp-dev] vppcom_session_connect blocking or non blocking

2019-09-04 Thread Florin Coras
Hi Max, 

Here’s the patch that allows non-blocking connects [1]. 

Florin

[1] https://gerrit.fd.io/r/c/vpp/+/21610 
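
For anyone landing here later: below is a minimal sketch of the intended usage pattern, assuming the vppcom calls and constants from src/vcl/vppcom.h around this release (vppcom_session_create with its is_nonblocking flag, vppcom_session_connect, vppcom_epoll_wait). The peer address, port and app name are placeholders, and the exact return code of a non-blocking connect depends on the patch above.

/* Sketch only: non-blocking connect via VCL, with the result collected
 * through the VCL epoll facility. Names follow vcl/vppcom.h; adjust to
 * the merged patch as needed. */
#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>
#include <sys/epoll.h>
#include <vcl/vppcom.h>

int main (void)
{
  char app_name[] = "nb-connect-demo";      /* placeholder app name */
  struct in_addr dst;
  vppcom_endpt_t ep = { 0 };
  struct epoll_event ev = { 0 }, events[4];
  int sh, eh, rv, n;

  if (vppcom_app_create (app_name) != VPPCOM_OK)
    return 1;

  /* Second argument asks VCL for a non-blocking session. */
  sh = vppcom_session_create (VPPCOM_PROTO_TCP, 1 /* is_nonblocking */);
  if (sh < 0)
    return 1;

  inet_pton (AF_INET, "10.0.0.1", &dst);    /* placeholder peer */
  ep.is_ip4 = 1;
  ep.ip = (uint8_t *) &dst;
  ep.port = htons (80);                     /* network byte order */

  /* With a non-blocking session this should return immediately; a
   * negative value may simply mean the connect is still in progress. */
  rv = vppcom_session_connect (sh, &ep);
  if (rv < 0)
    fprintf (stderr, "connect returned %d, waiting for completion\n", rv);

  eh = vppcom_epoll_create ();
  ev.events = EPOLLOUT;                     /* writable => connect done */
  ev.data.u32 = (uint32_t) sh;
  vppcom_epoll_ctl (eh, EPOLL_CTL_ADD, sh, &ev);

  n = vppcom_epoll_wait (eh, events, 4, 5.0 /* seconds */);
  if (n > 0 && (events[0].events & EPOLLOUT))
    printf ("connect completed on session %u\n", events[0].data.u32);

  vppcom_session_close (sh);
  vppcom_app_destroy ();
  return 0;
}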

> On Aug 15, 2019, at 7:41 AM, Florin Coras via Lists.Fd.Io 
>  wrote:
> 
> Hi Max,
> 
> Not at this time. It should be possible with a few changes for nonblocking 
> sessions. I’ll add it to my list, in case nobody else beats me to it. 
> 
> Florin
> 
>> On Aug 15, 2019, at 2:47 AM, Max A. via Lists.Fd.Io 
>>  wrote:
>> 
>> Hello,
>> 
>> Can the vppcom_session_connect() function run in non-blocking mode? I see that 
>> there is a wait for the connection result in the 
>> vppcom_wait_for_session_state_change function. Is it possible to get the 
>> result of the connection using vppcom_epoll_wait?
>> 
>> Thanks.



[vpp-dev] Published: FD.io CSIT-1908 Release Report

2019-09-04 Thread Maciek Konstantynowicz (mkonstan) via Lists.Fd.Io
Hi All,

FD.io CSIT-1908 report has been published on FD.io docs site:

html: https://docs.fd.io/csit/rls1908/report/
pdf: https://docs.fd.io/csit/rls1908/report/_static/archive/csit_rls1908.pdf

Many thanks to All in CSIT, VPP and wider FD.io community who
contributed and worked hard to make CSIT-1908 happen!

Below are two summaries:
- CSIT-1908 Release Summary, a high-level summary.
- Points of Note in CSIT-1908 Report, with specific links to report.

We welcome all comments, best by email to csit-...@lists.fd.io.

Cheers,
-Maciek


CSIT-1908 Release Summary

1. CSIT-1908 Report

- html link: https://docs.fd.io/csit/rls1908/report/
- pdf link: 
https://docs.fd.io/csit/rls1908/report/_static/archive/csit_rls1908.pdf

2. New Tests

  - VM/VNF service chains with DPDK Testpmd and VPP L2/IPv4 workloads
and external VXLAN encapsulation.
  - IPsec with new VPP native cipher algorithms, baseline and large
scale (up to 60k tunnels).
  - VM/VNF service chains with VPP IPsec workloads, baseline and
horizontal scaling (experimental, in-testing).
  - GBP (Group Based Policy) with external dot1q encapsulation.
  - Extended test coverage with VPP native AVF driver: IPv4 scale tests.
  - Refreshed VPP TCP/HTTP tests.
  - A number of VPP functional device tests running in a container-based
environment.
  - Good VPP PAPI (Python API) test coverage, PAPI used for all VPP
tests.

3. Benchmarking

  - Added new processor micro-architectures: ARM/AArch64 (TaiShan) and
Atom (Denverton).

- New testbeds onboarded into FD.io CSIT CI/CD functional and
  performance test pipelines.
- Daily trending with throughput changes monitoring, analytics and
  anomaly auto-detection.
- Release reports with benchmarking data including throughput,
  latency, test repeatability.

  - Updated CSIT benchmarking report specification

- Consistent selection of tests across all testbeds and processor
  microarchitectures present in FD.io labs (Xeon, Atom, ARM) for
  iterative benchmarking tests to verify results repeatability. Data
  presented in graphs conveying NDR (non-drop rate, zero packet
  loss) and PDR (partial drop rate) throughput statistics.
  Multi-core speedup and latency are also presented.
- Consistent comparison of NDR and PDR throughput results across the
  releases.
- Updated graph naming and test grouping to improve browsability and
  access to test data.

  - Increased test coverage in 2-node testbed environment (2n-skx).

  - Updated soak testing methodology and new results, aligned with
latest IETF draft specification draft-vpolak-bmwg-plrsearch-02.

4. Infrastructure

- API
  - PAPI (Python API) used for all VPP tests, migrated away from VAT
(VPP API Test).
  - VPP API change detection and gating in VPP and CSIT CI/CD.

- Test Environments
  - VPP functional device tests: migrated away from VIRL (VM based) to
container environment (with Nomad).
  - Added new physical testbeds: ARM/AArch64 (TaiShan) and Atom
(Denverton).

- CSIT Framework
  - Configuration keyword alignment across 2-node and 3-node testbeds to
ease test portability across environments.

- Installer
  - Updated bare-metal CSIT performance testbed installer (ansible).


Points of Note in CSIT-1908 Report

Indexed specific links are listed at the bottom.

1. VPP release notes
   a. Changes in CSIT-1908: [1]
   b. Known issues: [2]

2. VPP performance - 64B/IMIX throughput graphs (selected NIC models):
   a. Graphs explained: [3]
   b. L2 Ethernet Switching:[4]
   c. IPv4 Routing: [5]
   d. IPv6 Routing: [6]
   e. SRv6 Routing: [7]
   f. IPv4 Tunnels: [8]
   g. KVM VMs vhost-user:   [9]
   h. LXC/DRC Container Memif: [10]
   i. IPsec IPv4 Routing:  [11]
   j. Virtual Topology System: [12]

3. VPP performance - multi-core and latency graphs:
   a. Speedup Multi-Core:  [13]
   b. Latency: [14]

4. VPP system performance - NFV service density and TCP/IP:
   a. VNF (VM) Service Chains:  [15]
   b. CNF (Container) Service Chains:   [16]
   c. CNF (Container) Service Pipelines:[17]
   d. HTTP and TCP/IP:  [18]

5. VPP performance comparisons
   a. VPP-19.08 vs. VPP-19.04:  [19]

6. VPP performance test details - all NICs:
   a. Detailed results 64B IMIX 1518B 9kB:  [20]
   b. Configuration:[21]

The DPDK Testpmd and L3fwd performance sections follow a similar structure.

7. DPDK applications:
  a. Release notes:   [22]
  b. DPDK performance - 64B throughput graphs:[23]
  c. DPDK performance - 

Re: [vpp-dev] VPP API sometimes hangs on ip_route_details response #vppcapi #vpp_stability #vpp #binapi

2019-09-04 Thread Vratko Polak -X (vrpolak - PANTHEON TECHNOLOGIES at Cisco) via Lists.Fd.Io
Looks like VPP-1753 to me.


I do not have much to add; heisenbugs are hard to figure out.


Vratko.



From: vpp-dev@lists.fd.io  on behalf of 
sylvain.cadil...@jaguar-network.com 
Sent: Wednesday, September 4, 2019 01:36
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] VPP API sometimes hangs on ip_route_details response 
#vppcapi #vpp_stability #vpp #binapi


Hi vpp-dev,

I'm running a C program which adds/removes interfaces, IPs, routes, etc. to VPP 
via the C API, along with a Python test suite which checks the results of the 
first program multiple times by querying VPP using the Python bindings.

I'm seeing a fairly reproducible issue (when repeating the test suite a number 
of times), where the VPP API thread remains blocked, always after the 
clib_mem_free of an ip_route_details reply message (the response to Python).

Traces:

  *   This is VPP 19.08 with slight plugin changes; I can provide the packages 
if needed.
  *   The backtrace is here: 
https://gist.github.com/SCadilhac/bb9ca9600d757b13726e05ae34923c1a#file-backtrace
  *   The startup config file is here: 
https://gist.github.com/SCadilhac/bb9ca9600d757b13726e05ae34923c1a#file-startup-conf
  *   I don't know how to extract the API dump, as the CLI gets unresponsive, 
and the process /tmp/api_post_mortem file remains unwritten.
  *   The gzipped core dump is here: 
http://www.netfishers.onl/downloads/core.24542.gz

Any help is welcome :-). Is there anything I'm obviously doing wrong?
I'm happy to open a Jira issue if this can help.

Thanks,

Sylvain


[vpp-dev] ACL based security group of VPP

2019-09-04 Thread cipher.chen2012
Hi vpp-dev,

I'm testing the security group functions on VPP 19.08 and have a few questions. 
I have two VMs: A (172.16.0.1/24, using vxlan_tunnel10 / bridge 10) and 
B (172.16.1.1/24, using vxlan_tunnel11 / bridge 11). Each network's 
gateway is X.254, configured on the VPP bridges (10 and 11). A and B are 
currently reachable from each other. 

I tried to configure an ACL as follows:

vat# acl_dump 4
vl_api_acl_details_t_handler:223: acl_index: 4, count: 3
   tag {}
   ipv4 action 0 src 0.0.0.0/0 dst 0.0.0.0/0 proto 1 sport 0-65535 dport 
0-65535 tcpflags 0 mask 0,
   ipv4 action 1 src 0.0.0.0/0 dst 0.0.0.0/0 proto 6 sport 0-65535 dport 
0-65535 tcpflags 0 mask 0
vat#

vat# acl_interface_set_acl_list sw_if_index 1 input 4

This ACL is expected to 
1. deny ICMP packets, and
2. allow TCP packets
but after applying it to interface vxlan_tunnel10, ALL packets are denied.

After a lot of research, I found 
https://lists.fd.io/g/vpp-dev/topic/10642768#8144 
which explains that VPP has an implicit 
"deny all" at the end, so I added one more rule, "permit all":

vat# acl_dump 4
vl_api_acl_details_t_handler:223: acl_index: 4, count: 3
   tag {}
   ipv4 action 0 src 0.0.0.0/0 dst 0.0.0.0/0 proto 1 sport 0-65535 dport 
0-65535 tcpflags 0 mask 0,
   ipv4 action 1 src 0.0.0.0/0 dst 0.0.0.0/0 proto 6 sport 0-65535 dport 
0-65535 tcpflags 0 mask 0,
   ipv4 action 1 src 0.0.0.0/0 dst 0.0.0.0/0 proto 0 sport 0-65535 dport 
0-65535 tcpflags 0 mask 0
vat#

And now ALL packets are permitted.

So these are my questions:

Q1: Why are TCP packets still dropped even with an explicit "allow ipv4 tcp 
packets" rule?
Q2: Why are all packets permitted after appending the "permit all" rule?
Q3: Since the VPP docs are rather sparse, is there any official documentation that 
covers this "implicit deny all" and other such behavior (so few unofficial docs can 
be found)?

Thanks.

Cipher Chen.


[vpp-dev] VPP getting crashed with IPv6 traffic

2019-09-04 Thread via Lists.Fd.Io
Hi,

We are using VPP version 18.07 and running IPv6 traffic.
The following crash dump was observed during one of the runs:

0x7f75522001d7 in raise () from /usr/lib64/libc.so.6
#1  0x7f75522018c8 in abort () from /usr/lib64/libc.so.6
#2  0x55be72b431ae in os_exit (code=code@entry=1) at 
/home/utkarsh_extra/190821_clone/mn-vpp/fastpath/build-vpp-mav/vpp/build-data/../src/vpp/vnet/main.c:355
#3  0x7f75543f85c0 in unix_signal_handler (signum=, 
si=, uc=) at 
/home/utkarsh_extra/190821_clone/mn-vpp/fastpath/build-vpp-mav/vpp/build-data/../src/vlib/unix/main.c:157
#4  
#5  fib_entry_get_flags_i (fib_entry=0x7fb6f05d0cd8) at 
/home/utkarsh_extra/190821_clone/mn-vpp/fastpath/build-vpp-mav/vpp/build-data/../src/vnet/fib/fib_entry_src.c:1821
#6  0x7f7553f5c615 in fib_entry_get_flags (fib_entry_index=) 
at 
/home/utkarsh_extra/190821_clone/mn-vpp/fastpath/build-vpp-mav/vpp/build-data/../src/vnet/fib/fib_entry.c:278
#7  0x7f7553c4c3af in ip_is_local (fib_index=, 
ip46_address=ip46_address@entry=0x7f5401d169a6, is_ip4=is_ip4@entry=0 '\000')
at 
/home/utkarsh_extra/190821_clone/mn-vpp/fastpath/build-vpp-mav/vpp/build-data/../src/vnet/ip/ip.c:75
#8  0x7f6d4dadfb0f in ipv6_is_local_packet (ip6_addr=0x7f5401d169a6, 
fib_index=) at 
/home/utkarsh_extra/190821_clone/mn-vpp/fastpath/build-vpp-mav/vpp/build-data/../src/plugins/nfware/node.c:84
#9  nfware_node_inside6_fn (vm=0x7f6eef372ac0, node=0x7f6ef0a47140, 
frame=0x7f6ef0731a40, nat_type=2 '\002') at 
/home/utkarsh_extra/190821_clone/mn-vpp/fastpath/build-vpp-mav/vpp/build-data/../src/plugins/nfware/node.c:432
#10 0x7f75543be904 in dispatch_node (last_time_stamp=3648203020924, 
frame=0x7f6ef0731a40, dispatch_state=VLIB_NODE_STATE_POLLING, 
type=VLIB_NODE_TYPE_INTERNAL, node=0x7f6ef0a47140, vm=0x7f6eef372ac0)
at 
/home/utkarsh_extra/190821_clone/mn-vpp/fastpath/build-vpp-mav/vpp/build-data/../src/vlib/main.c:988

Just wanted to check whether there is a known issue in this part of the code and 
whether any fixes have been made in this regard.

Regards
Inder



Re: [vpp-dev] Intel XXV710 SR-IOV packet loss

2019-09-04 Thread Damjan Marion via Lists.Fd.Io

You will need to ask the Intel folks, but generally it makes sense that if the 
NIC needs to parse the VLAN tag and distribute packets to different queues, 
performance will go down.


> On 4 Sep 2019, at 14:47, Miroslav Kováč  wrote:
> 
> Isn't SR-IOV supposed to be as fast as the physical function? And besides, why 
> would we receive a different number of processed packets with 7 VFs, and drop 
> by 16 million packets when using 8 VFs? And the same result goes for 9 or 
> 10 VFs as with 8 VFs.
> From: Damjan Marion via Lists.Fd.Io
> Sent: Wednesday, 4 September 2019 12:46:55
> To: Miroslav Kováč
> Cc: vpp-dev@lists.fd.io 
> Subject: Re: [vpp-dev] Intel XXV710 SR-IOV packet loss
>  
> 
> Isn't that just a hardware limit of the card?
> 
> 
>> On 4 Sep 2019, at 12:45, Miroslav Kováč > > wrote:
>> 
>> Yes, we have tried that as well; with AVF we received similar results.
>> From: Damjan Marion <dmar...@me.com>
>> Sent: Wednesday, 4 September 2019 12:44:33
>> To: Miroslav Kováč
>> Cc: vpp-dev@lists.fd.io 
>> Subject: Re: [vpp-dev] Intel XXV710 SR-IOV packet loss
>>  
>> 
>> Have you tried to use native AVF driver instead?
>> 
>> 
>>> On 4 Sep 2019, at 12:42, Miroslav Kováč >> > wrote:
>>> 
>>> Hello,
>>> 
>>> We are trying a setup with intel 25 GB card XXV710 and sr-iov. We need 
>>> sr-iov to sort packets based on vlan in between the VFs. We are using trex 
>>> on one machine to generate packets and multiple VPPs (each in docker 
>>> container, using one VF) on another one. Trex machine contains the exact 
>>> same hardware. 
>>> 
>>> Each VF contains one vlan with spoof checking off and trust on and specific 
>>> MAC address. For example ->
>>> 
>>> vf 0 MAC ba:dc:0f:fe:ed:00, vlan 1537, spoof checking off, link-state auto, 
>>> trust on
>>> 
>>> 
>>> We are generating packets with VF destination MACs with the corresponding 
>>> VLAN. When sending packets to 3 VFs trex shows 35 million tx-packets and 
>>> Dpdk stats on the trex machine show that 35 million were in fact sent out:
>>> 
>>> # DPDK Statistics port0 #
>>> {
>>> "tx_good_bytes": 2142835740,
>>> "tx_good_packets": 35713929,
>>> "tx_size_64_packets": 35713929,
>>> "tx_unicast_packets": 35713929
>>> }
>>> 
>>> rate= '96%'; pktSize=   64; frameLoss%=51.31%; bytesReceived/s=
>>> 1112966528.00; totalReceived=   17390102; totalSent=   35713929; frameLoss= 
>>>   18323827; bytesReceived=1112966528; targetDuration=1.0
>>> 
>>> 
>>> However VPP shows only 33 million rx-packets:
>>> 
>>> VirtualFunctionEthernet17/a/0 2  up  9000/0/0/0 
>>> rx packets   5718196
>>> rx bytes   343091760
>>> rx-miss  5572089 
>>> 
>>> VirtualFunctionEthernet17/a/1 2  up  9000/0/0/0 
>>> rx packets   5831396
>>> rx bytes   349883760
>>> rx-miss  5459089
>>> 
>>> VirtualFunctionEthernet17/a/2 2  up  9000/0/0/0 
>>> rx packets   5840512
>>> rx bytes   350430720
>>> rx-miss  5449466
>>> 
>>> Sum of rx packets and rx-miss is 33,870,748. About 2 million is missing.
>>> 
>>> 
>>> Even when I check VFs stats I see only 33 million to come (out of which 9.9 
>>> million are rx-missed):
>>> 
>>> root@protonet:/home/protonet# for f in $(ls 
>>> /sys/class/net/enp23s0f1/device/sriov/*/stats/rx_packets); do echo "$f: 
>>> $(cat $f)"; done | grep -v ' 0$'
>>> 
>>> /sys/class/net/enp23s0f1/device/sriov/0/stats/rx_packets: 11290290
>>> /sys/class/net/enp23s0f1/device/sriov/1/stats/rx_packets: 11290485
>>> /sys/class/net/enp23s0f1/device/sriov/2/stats/rx_packets: 11289978
>>> 
>>> 
>>> When increasing the number of VFs the number of rx-packets on VPP is 
>>> actually decreasing. Up to 6 or 7 VFs I still receive somewhere around 
>>> 28-33 million packets, but when I use 8 VFs all the sudden it drops to 16 
>>> million packets (no rx-miss any more). The same goes with trunk mode:
>>> 
>>> VirtualFunctionEthernet17/a/0 2  up  9000/0/0/0 
>>> rx packets   1959110
>>> rx bytes   117546600
>>> 
>>> VirtualFunctionEthernet17/a/1 2  up  9000/0/0/0 
>>> rx packets   1959181
>>> rx bytes   117550860
>>> 
>>> VirtualFunctionEthernet17/a/2 2  up  9000/0/0/0 
>>> rx packets   1956242
>>> rx bytes   117374520
>>> .
>>> .
>>> .
>>> Approximately the same amount of packets for each VPP instance which is 2 
>>> million packets * 8 = 16 million packets out of 35 million sent. Almost 20 
>>> million are gone
>>> 
>>> 
>>> We are using vfio-pci driver.
>>> 
>>> The strange thing is that when I use only PF, no sr-iov VFs are on and I 
>>> try the same vpp setup I can see all 35 million packets 

Re: [vpp-dev] Intel XXV710 SR-IOV packet loss

2019-09-04 Thread Miroslav Kováč
Isn't SR-IOV supposed to be as fast as the physical function? And besides, why would 
we receive a different number of processed packets with 7 VFs, and drop by 16 
million packets when using 8 VFs? And the same result goes for 9 or 10 VFs 
as with 8 VFs.


From: Damjan Marion via Lists.Fd.Io
Sent: Wednesday, 4 September 2019 12:46:55
To: Miroslav Kováč
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Intel XXV710 SR-IOV packet loss


Isn't that just a hardware limit of the card?


On 4 Sep 2019, at 12:45, Miroslav Kováč <miroslav.ko...@pantheon.tech> wrote:

Yes, we have tried that as well; with AVF we received similar results.

From: Damjan Marion <dmar...@me.com>
Sent: Wednesday, 4 September 2019 12:44:33
To: Miroslav Kováč
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Intel XXV710 SR-IOV packet loss


Have you tried to use native AVF driver instead?


On 4 Sep 2019, at 12:42, Miroslav Kováč <miroslav.ko...@pantheon.tech> wrote:

Hello,

We are trying a setup with intel 25 GB card XXV710 and sr-iov. We need sr-iov 
to sort packets based on vlan in between the VFs. We are using trex on one 
machine to generate packets and multiple VPPs (each in docker container, using 
one VF) on another one. Trex machine contains the exact same hardware.

Each VF contains one vlan with spoof checking off and trust on and specific MAC 
address. For example ->

vf 0 MAC ba:dc:0f:fe:ed:00, vlan 1537, spoof checking off, link-state auto, 
trust on


We are generating packets with VF destination MACs with the corresponding VLAN. 
When sending packets to 3 VFs trex shows 35 million tx-packets and Dpdk stats 
on the trex machine show that 35 million were in fact sent out:


# DPDK Statistics port0 #
{
"tx_good_bytes": 2142835740,
"tx_good_packets": 35713929,
"tx_size_64_packets": 35713929,
"tx_unicast_packets": 35713929
}


rate= '96%'; pktSize=   64; frameLoss%=51.31%; bytesReceived/s=
1112966528.00; totalReceived=   17390102; totalSent=   35713929; frameLoss=   
18323827; bytesReceived=1112966528; targetDuration=1.0


However VPP shows only 33 million rx-packets:

VirtualFunctionEthernet17/a/0 2  up  9000/0/0/0
rx packets   5718196
rx bytes   343091760
rx-miss  5572089

VirtualFunctionEthernet17/a/1 2  up  9000/0/0/0
rx packets   5831396
rx bytes   349883760
rx-miss  5459089

VirtualFunctionEthernet17/a/2 2  up  9000/0/0/0
rx packets   5840512
rx bytes   350430720
rx-miss  5449466

Sum of rx packets and rx-miss is 33,870,748. About 2 million is missing.


Even when I check VFs stats I see only 33 million to come (out of which 9.9 
million are rx-missed):


root@protonet:/home/protonet# for f in $(ls 
/sys/class/net/enp23s0f1/device/sriov/*/stats/rx_packets); do echo "$f: $(cat 
$f)"; done | grep -v ' 0$'

/sys/class/net/enp23s0f1/device/sriov/0/stats/rx_packets: 11290290
/sys/class/net/enp23s0f1/device/sriov/1/stats/rx_packets: 11290485
/sys/class/net/enp23s0f1/device/sriov/2/stats/rx_packets: 11289978


When increasing the number of VFs the number of rx-packets on VPP is actually 
decreasing. Up to 6 or 7 VFs I still receive somewhere around 28-33 million 
packets, but when I use 8 VFs all the sudden it drops to 16 million packets (no 
rx-miss any more). The same goes with trunk mode:


VirtualFunctionEthernet17/a/0 2  up  9000/0/0/0
rx packets   1959110
rx bytes   117546600


VirtualFunctionEthernet17/a/1 2  up  9000/0/0/0
rx packets   1959181
rx bytes   117550860

VirtualFunctionEthernet17/a/2 2  up  9000/0/0/0
rx packets   1956242
rx bytes   117374520
.
.
.
Approximately the same amount of packets for each VPP instance which is 2 
million packets * 8 = 16 million packets out of 35 million sent. Almost 20 
million are gone


We are using vfio-pci driver.

The strange thing is that when I use only PF, no sr-iov VFs are on and I try 
the same vpp setup I can see all 35 million packets to come across.

We have also tested this with X710 10GB intel card and we have received similar 
results.

Regards,
Miroslav Kovac

Re: [vpp-dev] ACL not working #vpp

2019-09-04 Thread Andrew Yourtchenko
Hi Cipher,

Reply below inline 

> On 4 Sep 2019, at 12:36, Cipher Chen  wrote:
> 
> Thanks Andrew, I've successfully done acl_plugin test.
> 
> BTW, just reply here for latecomers, do "V=2 EXTENDED_TESTS=1 
> TEST=acl_plugin* make test" to do more test and print verbosely.

Yeah, the connection tracking test takes time and needs some more love to be 
generally usable, so it's in the extended tests.

> 
> Since I'm testing stateful ACL by watching behavior of 
> test_acl_plugin_conns.py, along with explaination from  Statefull ACL ,
> 
> this test case was below, to test client 172.16.0.1 (call it A here) 
> accessing client 172.16.1.1 (call it B here):
> 
> set acl-plugin session timeout udp idle 200
> 
> set acl-plugin session timeout tcp idle 10
> 
> set acl-plugin session timeout tcp transient 1
> 
>  
> 
> acl_add_replace ipv4 permit+reflect src 172.16.0.1/32 dst 172.16.1.1/32 proto 
> 6 dport 80, ipv4 deny any # index 2
> 
> acl_add_replace ipv4 deny any # index 0
> 
>  
> 
> acl_interface_set_acl_list vxlan_tunnel10 input 2 output 0
> 
I assume this is the interface of the “side” to which 172.16.0.1 is 
connected?

> acl_interface_set_acl_list vxlan_tunnel11 input
> 
You don’t need this, in principle. It should just clear all ACLs from the 
interface - but if there were none, no need to clear.

> 
> The case behave like these:
> #1: A ping B, unreachable
> #2: A access B tcp port 22, unreachable
> #3: A access B tcp port 80, reachable


> 
> Q1: #1/#2 works well, but why #3 still work even when A has finished existing 
> connection and established a new tcp dport 80 to B, the connection still can 
> be established. Is this a bug or feature of 'permit+reflect'?

This is how you configured it. You specify that any connection to port 80 has 
to be permitted and will create connection entries on that interface that 
are checked before the ACL. Again, the packet tracer will help to see what is 
going on.

> Q2: How does ACL define 'stateful ACL' or 'connection', since new established 
> connection won't be treated as related connection in Netfilter?

https://docs.fd.io/vpp/19.08/acl_multicore.html

I need to update it to reflect some of the latest fixes, but it should help in 
understanding the general logic.

> Q3: What's 'transient'?

The above doc talks about that :)

—a



Re: [vpp-dev] vpp api's change

2019-09-04 Thread Ole Troan
> Vpp have changed it’s api  string type many times, Could we keep it stable?
> In the interface.api.h.vapi.h, the name_filter, interface_name, admin_up_down

Yes, we have. My fault. Apologies.

Let's hope this incarnation will stick:
https://gerrit.fd.io/r/c/vpp/+/21492

There is still some work underway between 19.08 and 20.01 on more explicit 
types in APIs.
e.g.:
u8 name[64] -> string name[64];

u32 sw_if_index -> vl_api_interface_index_t sw_if_index;

u8 is_ip6;
u8 address[16] -> vl_api_address_t address;

u8 mac_address[6] -> vl_api_mac_address_t mac_address;

These changes allow auto-generated code to produce the API tracing 
correctly.
They allow the language bindings to distinguish between binary and text 
strings.
They allow auto-generated CLI to use the interface name instead of sw_if_index, 
and so on.

I haven't found a better way than for us to collectively endure some pain while 
this process is ongoing...

Best regards,
Ole


[vpp-dev] vpp api's change

2019-09-04 Thread Wang, Drenfong
Hi vpp-dev,
VPP has changed its API string type many times. Could we keep it stable?
In interface.api.h.vapi.h: the name_filter, interface_name, admin_up_down fields.



Re: [vpp-dev] Intel XXV710 SR-IOV packet loss

2019-09-04 Thread Damjan Marion via Lists.Fd.Io

Isn't that just a hardware limit of the card?


> On 4 Sep 2019, at 12:45, Miroslav Kováč  wrote:
> 
> Yes, we have tried that as well; with AVF we received similar results.
> From: Damjan Marion <dmar...@me.com>
> Sent: Wednesday, 4 September 2019 12:44:33
> To: Miroslav Kováč
> Cc: vpp-dev@lists.fd.io 
> Subject: Re: [vpp-dev] Intel XXV710 SR-IOV packet loss
>  
> 
> Have you tried to use native AVF driver instead?
> 
> 
>> On 4 Sep 2019, at 12:42, Miroslav Kováč > > wrote:
>> 
>> Hello,
>> 
>> We are trying a setup with intel 25 GB card XXV710 and sr-iov. We need 
>> sr-iov to sort packets based on vlan in between the VFs. We are using trex 
>> on one machine to generate packets and multiple VPPs (each in docker 
>> container, using one VF) on another one. Trex machine contains the exact 
>> same hardware. 
>> 
>> Each VF contains one vlan with spoof checking off and trust on and specific 
>> MAC address. For example ->
>> 
>> vf 0 MAC ba:dc:0f:fe:ed:00, vlan 1537, spoof checking off, link-state auto, 
>> trust on
>> 
>> 
>> We are generating packets with VF destination MACs with the corresponding 
>> VLAN. When sending packets to 3 VFs trex shows 35 million tx-packets and 
>> Dpdk stats on the trex machine show that 35 million were in fact sent out:
>> 
>> # DPDK Statistics port0 #
>> {
>> "tx_good_bytes": 2142835740,
>> "tx_good_packets": 35713929,
>> "tx_size_64_packets": 35713929,
>> "tx_unicast_packets": 35713929
>> }
>> 
>> rate= '96%'; pktSize=   64; frameLoss%=51.31%; bytesReceived/s=
>> 1112966528.00; totalReceived=   17390102; totalSent=   35713929; frameLoss=  
>>  18323827; bytesReceived=1112966528; targetDuration=1.0
>> 
>> 
>> However VPP shows only 33 million rx-packets:
>> 
>> VirtualFunctionEthernet17/a/0 2  up  9000/0/0/0 
>> rx packets   5718196
>> rx bytes   343091760
>> rx-miss  5572089 
>> 
>> VirtualFunctionEthernet17/a/1 2  up  9000/0/0/0 
>> rx packets   5831396
>> rx bytes   349883760
>> rx-miss  5459089
>> 
>> VirtualFunctionEthernet17/a/2 2  up  9000/0/0/0 
>> rx packets   5840512
>> rx bytes   350430720
>> rx-miss  5449466
>> 
>> Sum of rx packets and rx-miss is 33,870,748. About 2 million is missing.
>> 
>> 
>> Even when I check VFs stats I see only 33 million to come (out of which 9.9 
>> million are rx-missed):
>> 
>> root@protonet:/home/protonet# for f in $(ls 
>> /sys/class/net/enp23s0f1/device/sriov/*/stats/rx_packets); do echo "$f: 
>> $(cat $f)"; done | grep -v ' 0$'
>> 
>> /sys/class/net/enp23s0f1/device/sriov/0/stats/rx_packets: 11290290
>> /sys/class/net/enp23s0f1/device/sriov/1/stats/rx_packets: 11290485
>> /sys/class/net/enp23s0f1/device/sriov/2/stats/rx_packets: 11289978
>> 
>> 
>> When increasing the number of VFs the number of rx-packets on VPP is 
>> actually decreasing. Up to 6 or 7 VFs I still receive somewhere around 28-33 
>> million packets, but when I use 8 VFs all the sudden it drops to 16 million 
>> packets (no rx-miss any more). The same goes with trunk mode:
>> 
>> VirtualFunctionEthernet17/a/0 2  up  9000/0/0/0 
>> rx packets   1959110
>> rx bytes   117546600
>> 
>> VirtualFunctionEthernet17/a/1 2  up  9000/0/0/0 
>> rx packets   1959181
>> rx bytes   117550860
>> 
>> VirtualFunctionEthernet17/a/2 2  up  9000/0/0/0 
>> rx packets   1956242
>> rx bytes   117374520
>> .
>> .
>> .
>> Approximately the same amount of packets for each VPP instance which is 2 
>> million packets * 8 = 16 million packets out of 35 million sent. Almost 20 
>> million are gone
>> 
>> 
>> We are using vfio-pci driver.
>> 
>> The strange thing is that when I use only PF, no sr-iov VFs are on and I try 
>> the same vpp setup I can see all 35 million packets to come across. 
>> 
>> We have also tested this with X710 10GB intel card and we have received 
>> similar results.
>> 
>> Regards,
>> Miroslav Kovac

Re: [vpp-dev] Intel XXV710 SR-IOV packet loss

2019-09-04 Thread Miroslav Kováč
Yes, we have tried that as well; with AVF we received similar results.


From: Damjan Marion
Sent: Wednesday, 4 September 2019 12:44:33
To: Miroslav Kováč
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Intel XXV710 SR-IOV packet loss


Have you tried to use native AVF driver instead?


On 4 Sep 2019, at 12:42, Miroslav Kováč <miroslav.ko...@pantheon.tech> wrote:

Hello,

We are trying a setup with intel 25 GB card XXV710 and sr-iov. We need sr-iov 
to sort packets based on vlan in between the VFs. We are using trex on one 
machine to generate packets and multiple VPPs (each in docker container, using 
one VF) on another one. Trex machine contains the exact same hardware.

Each VF contains one vlan with spoof checking off and trust on and specific MAC 
address. For example ->

vf 0 MAC ba:dc:0f:fe:ed:00, vlan 1537, spoof checking off, link-state auto, 
trust on


We are generating packets with VF destination MACs with the corresponding VLAN. 
When sending packets to 3 VFs trex shows 35 million tx-packets and Dpdk stats 
on the trex machine show that 35 million were in fact sent out:


# DPDK Statistics port0 #
{
"tx_good_bytes": 2142835740,
"tx_good_packets": 35713929,
"tx_size_64_packets": 35713929,
"tx_unicast_packets": 35713929
}


rate= '96%'; pktSize=   64; frameLoss%=51.31%; bytesReceived/s=
1112966528.00; totalReceived=   17390102; totalSent=   35713929; frameLoss=   
18323827; bytesReceived=1112966528; targetDuration=1.0


However VPP shows only 33 million rx-packets:

VirtualFunctionEthernet17/a/0 2  up  9000/0/0/0
rx packets   5718196
rx bytes   343091760
rx-miss  5572089

VirtualFunctionEthernet17/a/1 2  up  9000/0/0/0
rx packets   5831396
rx bytes   349883760
rx-miss  5459089

VirtualFunctionEthernet17/a/2 2  up  9000/0/0/0
rx packets   5840512
rx bytes   350430720
rx-miss  5449466

Sum of rx packets and rx-miss is 33,870,748. About 2 million is missing.


Even when I check VFs stats I see only 33 million to come (out of which 9.9 
million are rx-missed):


root@protonet:/home/protonet# for f in $(ls 
/sys/class/net/enp23s0f1/device/sriov/*/stats/rx_packets); do echo "$f: $(cat 
$f)"; done | grep -v ' 0$'

/sys/class/net/enp23s0f1/device/sriov/0/stats/rx_packets: 11290290
/sys/class/net/enp23s0f1/device/sriov/1/stats/rx_packets: 11290485
/sys/class/net/enp23s0f1/device/sriov/2/stats/rx_packets: 11289978


When increasing the number of VFs the number of rx-packets on VPP is actually 
decreasing. Up to 6 or 7 VFs I still receive somewhere around 28-33 million 
packets, but when I use 8 VFs all the sudden it drops to 16 million packets (no 
rx-miss any more). The same goes with trunk mode:


VirtualFunctionEthernet17/a/0 2  up  9000/0/0/0
rx packets   1959110
rx bytes   117546600


VirtualFunctionEthernet17/a/1 2  up  9000/0/0/0
rx packets   1959181
rx bytes   117550860

VirtualFunctionEthernet17/a/2 2  up  9000/0/0/0
rx packets   1956242
rx bytes   117374520
.
.
.
Approximately the same amount of packets for each VPP instance which is 2 
million packets * 8 = 16 million packets out of 35 million sent. Almost 20 
million are gone


We are using vfio-pci driver.

The strange thing is that when I use only PF, no sr-iov VFs are on and I try 
the same vpp setup I can see all 35 million packets to come across.

We have also tested this with X710 10GB intel card and we have received similar 
results.

Regards,
Miroslav Kovac


Re: [vpp-dev] Intel XXV710 SR-IOV packet loss

2019-09-04 Thread Damjan Marion via Lists.Fd.Io

Have you tried to use native AVF driver instead?


> On 4 Sep 2019, at 12:42, Miroslav Kováč  wrote:
> 
> Hello,
> 
> We are trying a setup with intel 25 GB card XXV710 and sr-iov. We need sr-iov 
> to sort packets based on vlan in between the VFs. We are using trex on one 
> machine to generate packets and multiple VPPs (each in docker container, 
> using one VF) on another one. Trex machine contains the exact same hardware. 
> 
> Each VF contains one vlan with spoof checking off and trust on and specific 
> MAC address. For example ->
> 
> vf 0 MAC ba:dc:0f:fe:ed:00, vlan 1537, spoof checking off, link-state auto, 
> trust on
> 
> 
> We are generating packets with VF destination MACs with the corresponding 
> VLAN. When sending packets to 3 VFs trex shows 35 million tx-packets and Dpdk 
> stats on the trex machine show that 35 million were in fact sent out:
> 
> # DPDK Statistics port0 #
> {
> "tx_good_bytes": 2142835740,
> "tx_good_packets": 35713929,
> "tx_size_64_packets": 35713929,
> "tx_unicast_packets": 35713929
> }
> 
> rate= '96%'; pktSize=   64; frameLoss%=51.31%; bytesReceived/s=
> 1112966528.00; totalReceived=   17390102; totalSent=   35713929; frameLoss=   
> 18323827; bytesReceived=1112966528; targetDuration=1.0
> 
> 
> However VPP shows only 33 million rx-packets:
> 
> VirtualFunctionEthernet17/a/0 2  up  9000/0/0/0 
> rx packets   5718196
> rx bytes   343091760
> rx-miss  5572089 
> 
> VirtualFunctionEthernet17/a/1 2  up  9000/0/0/0 
> rx packets   5831396
> rx bytes   349883760
> rx-miss  5459089
> 
> VirtualFunctionEthernet17/a/2 2  up  9000/0/0/0 
> rx packets   5840512
> rx bytes   350430720
> rx-miss  5449466
> 
> Sum of rx packets and rx-miss is 33,870,748. About 2 million is missing.
> 
> 
> Even when I check VFs stats I see only 33 million to come (out of which 9.9 
> million are rx-missed):
> 
> root@protonet:/home/protonet# for f in $(ls 
> /sys/class/net/enp23s0f1/device/sriov/*/stats/rx_packets); do echo "$f: $(cat 
> $f)"; done | grep -v ' 0$'
> 
> /sys/class/net/enp23s0f1/device/sriov/0/stats/rx_packets: 11290290
> /sys/class/net/enp23s0f1/device/sriov/1/stats/rx_packets: 11290485
> /sys/class/net/enp23s0f1/device/sriov/2/stats/rx_packets: 11289978
> 
> 
> When increasing the number of VFs the number of rx-packets on VPP is actually 
> decreasing. Up to 6 or 7 VFs I still receive somewhere around 28-33 million 
> packets, but when I use 8 VFs all the sudden it drops to 16 million packets 
> (no rx-miss any more). The same goes with trunk mode:
> 
> VirtualFunctionEthernet17/a/0 2  up  9000/0/0/0 
> rx packets   1959110
> rx bytes   117546600
> 
> VirtualFunctionEthernet17/a/1 2  up  9000/0/0/0 
> rx packets   1959181
> rx bytes   117550860
> 
> VirtualFunctionEthernet17/a/2 2  up  9000/0/0/0 
> rx packets   1956242
> rx bytes   117374520
> .
> .
> .
> Approximately the same amount of packets for each VPP instance which is 2 
> million packets * 8 = 16 million packets out of 35 million sent. Almost 20 
> million are gone
> 
> 
> We are using vfio-pci driver.
> 
> The strange thing is that when I use only PF, no sr-iov VFs are on and I try 
> the same vpp setup I can see all 35 million packets to come across. 
> 
> We have also tested this with X710 10GB intel card and we have received 
> similar results.
> 
> Regards,
> Miroslav Kovac


[vpp-dev] Fw: Intel XXV710 SR-IOV packet loss

2019-09-04 Thread Miroslav Kováč
Hello,


We are trying a setup with an Intel 25 GbE XXV710 card and SR-IOV. We need SR-IOV 
to sort packets based on VLAN between the VFs. We are using TRex on one 
machine to generate packets and multiple VPPs (each in a docker container, using 
one VF) on another one. The TRex machine contains the exact same hardware.


Each VF has one VLAN, with spoof checking off, trust on, and a specific MAC 
address. For example:


vf 0 MAC ba:dc:0f:fe:ed:00, vlan 1537, spoof checking off, link-state auto, 
trust on



We are generating packets with the VF destination MACs and the corresponding VLAN. 
When sending packets to 3 VFs, TRex shows 35 million tx-packets, and the DPDK stats 
on the TRex machine show that 35 million were in fact sent out:


# DPDK Statistics port0 #
{
"tx_good_bytes": 2142835740,
"tx_good_packets": 35713929,
"tx_size_64_packets": 35713929,
"tx_unicast_packets": 35713929
}


rate= '96%'; pktSize=   64; frameLoss%=51.31%; bytesReceived/s=
1112966528.00; totalReceived=   17390102; totalSent=   35713929; frameLoss=   
18323827; bytesReceived=1112966528; targetDuration=1.0


However VPP shows only 33 million rx-packets:

VirtualFunctionEthernet17/a/0 2  up  9000/0/0/0
rx packets   5718196
rx bytes   343091760
rx-miss  5572089

VirtualFunctionEthernet17/a/1 2  up  9000/0/0/0
rx packets   5831396
rx bytes   349883760
rx-miss  5459089

VirtualFunctionEthernet17/a/2 2  up  9000/0/0/0
rx packets   5840512
rx bytes   350430720
rx-miss  5449466

Sum of rx packets and rx-miss is 33,870,748. About 2 million is missing.



Even when I check the VF stats I see only 33 million arriving (out of which 9.9 
million are rx-missed):


root@protonet:/home/protonet# for f in $(ls 
/sys/class/net/enp23s0f1/device/sriov/*/stats/rx_packets); do echo "$f: $(cat 
$f)"; done | grep -v ' 0$'

/sys/class/net/enp23s0f1/device/sriov/0/stats/rx_packets: 11290290
/sys/class/net/enp23s0f1/device/sriov/1/stats/rx_packets: 11290485
/sys/class/net/enp23s0f1/device/sriov/2/stats/rx_packets: 11289978



When increasing the number of VFs, the number of rx-packets in VPP actually 
decreases. Up to 6 or 7 VFs I still receive somewhere around 28-33 million 
packets, but when I use 8 VFs it all of a sudden drops to 16 million packets (no 
rx-miss any more). The same goes for trunk mode:


VirtualFunctionEthernet17/a/0 2  up  9000/0/0/0
rx packets   1959110
rx bytes   117546600


VirtualFunctionEthernet17/a/1 2  up  9000/0/0/0
rx packets   1959181
rx bytes   117550860

VirtualFunctionEthernet17/a/2 2  up  9000/0/0/0
rx packets   1956242
rx bytes   117374520
.
.
.
Approximately the same number of packets for each VPP instance, which is 2 
million packets * 8 = 16 million packets out of the 35 million sent. Almost 20 
million are gone.



We are using vfio-pci driver.


The strange thing is that when I use only the PF, with no SR-IOV VFs enabled, and I 
try the same VPP setup, I can see all 35 million packets come across.


We have also tested this with an X710 10 GbE Intel card and received similar 
results.


Regards,

Miroslav Kovac


Re: [vpp-dev] ACL not working #vpp

2019-09-04 Thread Cipher Chen
Thanks Andrew, I've successfully run the acl_plugin tests.

BTW, just replying here for latecomers: run "V=2 EXTENDED_TESTS=1 TEST=acl_plugin* 
make test" to run more tests and print verbose output.

Since I'm testing stateful ACLs by watching the behavior of 
test_acl_plugin_conns.py, along with the explanation from Stateful ACL ( 
https://lists.fd.io/g/vpp-dev/topic/10641774#4928 ),

the test case is as below, testing client 172.16.0.1 (call it A here) accessing 
client 172.16.1.1 (call it B here):

set acl-plugin session timeout udp idle 200

set acl-plugin session timeout tcp idle 10

set acl-plugin session timeout tcp transient 1

acl_add_replace ipv4 permit+reflect src 172.16.0.1/32 dst 172.16.1.1/32 proto 6 
dport 80, ipv4 deny any # index 2

acl_add_replace ipv4 deny any # index 0

acl_interface_set_acl_list vxlan_tunnel10 input 2 output 0

acl_interface_set_acl_list vxlan_tunnel11 input

The case behaves like this:
#1: A ping B, unreachable
#2: A access B tcp port 22, unreachable
#3: A access B tcp port 80, reachable

Q1: #1/#2 work well, but why does #3 still work: even after A has finished the existing 
connection and establishes a new TCP connection to dport 80 on B, the connection can still 
be established. Is this a bug or a feature of 'permit+reflect'?
Q2: How does the ACL plugin define a 'stateful ACL' or a 'connection', since a newly 
established connection would not be treated as a related connection in Netfilter?
Q3: What is 'transient'?