Re: [vpp-dev] Trying to build vpp on amd64 platform via qemu static library qemu-aarch64-static in Docker container

2018-04-20 Thread Brian Brooks
Can you copy/paste the compiler or linker error?

From: vpp-dev@lists.fd.io  On Behalf Of Stanislav Chlebec
Sent: Friday, April 20, 2018 6:05 AM
To: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Trying to build vpp on amd64 platform via qemu static 
library qemu-aarch64-static in Docker container

Hello Brian
I have not yet tried to build it in the arm64 VM.
What I found out further is that the build fails in the file src/vppinfra/time.h
...

#elif defined (__aarch64__)
always_inline u64
clib_cpu_time_now (void)
{
  u64 tsc;

  /* Works on Cavium ThunderX. Other platforms: YMMV */
  asm volatile ("mrs %0, cntvct_el0":"=r" (tsc));

  return tsc;
}
...
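
For reference, the register read can be exercised in isolation; below is a
minimal standalone sketch (plain C, not VPP code; whether this read is the
actual source of the failure is an assumption to be tested). Cross-compile it
and run it under qemu-aarch64-static:

/* cntvct.c - check whether reading cntvct_el0 works under the emulator.
 * Build and run, e.g.:
 *   aarch64-linux-gnu-gcc -static -o cntvct cntvct.c
 *   qemu-aarch64-static ./cntvct
 */
#include <stdio.h>
#include <stdint.h>

int main (void)
{
  uint64_t tsc;

  /* Same instruction as clib_cpu_time_now () in src/vppinfra/time.h */
  asm volatile ("mrs %0, cntvct_el0" : "=r" (tsc));
  printf ("cntvct_el0 = %llu\n", (unsigned long long) tsc);
  return 0;
}

If this prints a counter value, the time.h read itself is fine under the
emulator and the Illegal instruction comes from somewhere else.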

Stanislav


From: Brian Brooks [mailto:brian.bro...@arm.com]
Sent: Monday, April 16, 2018 7:51 PM
To: Stanislav Chlebec; vpp-dev@lists.fd.io
Subject: RE: Trying to build vpp on amd64 platform via qemu static library 
qemu-aarch64-static in Docker container

Hi Stanislav,

Does the build work if you git clone and make build-release inside the arm64 VM 
(no docker)?
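
(For reference, that is roughly the following sequence, using the same
repository as in the Dockerfile step quoted below; install-dep and
build-release are the standard VPP make targets:

git clone https://github.com/vpp-dev/vpp.git
cd vpp
make install-dep
make build-release
)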

Brian

From: vpp-dev@lists.fd.io On Behalf Of Stanislav Chlebec
Sent: Monday, April 16, 2018 2:27 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] Trying to build vpp on amd64 platform via qemu static 
library qemu-aarch64-static in Docker container

Hello all
I am trying to prepare an arm64 docker image (based on arm64v8/ubuntu:latest) in
which vpp will be compiled and installed.
I do it on an amd64 platform, using the static qemu binary qemu-aarch64-static
to emulate arm64 instructions.

(
I found how to do it here:
https://blog.hypriot.com/post/setup-simple-ci-pipeline-for-arm-images/
http://www.hotblackrobotics.com/en/blog/2018/01/22/docker-images-arm/
)
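
For reference, the technique from those posts reduces to roughly this sketch
(package names and paths are the usual Debian/Ubuntu ones and are assumptions
here; adjust for your host):

# On the amd64 host: install the user-mode emulator and binfmt support
sudo apt-get install -y qemu-user-static binfmt-support
# Copy the static emulator into the Docker build context
cp /usr/bin/qemu-aarch64-static .

# In the Dockerfile, ship the emulator inside the arm64 image:
#   FROM arm64v8/ubuntu:latest
#   COPY qemu-aarch64-static /usr/bin/

With that in place, every arm64 binary run during the build is executed through
qemu-aarch64-static on the amd64 host.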

Everything goes more or less well, but it fails at this Dockerfile step:
{
RUN /bin/bash -c "\
git clone https://github.com/vpp-dev/vpp.git \
&& cd vpp \
&& git checkout ${VPP_COMMIT} \
&& UNATTENDED=y make vpp_configure_args_vpp='--disable-japi --disable-vom' 
install-dep bootstrap dpdk-install-dev build build-release;"
}

It seems that the build of vpp completes without errors, but the next process
(installing of dpdk?) ends with error 21326 Illegal instruction:
{

Build complete [arm64-armv8a-linuxapp-gcc]
== Installing /opt/vpp-agent/dev/vpp/dpdk/deb/debian/tmp/usr/


==
  version vpp 18.01
  prefix  
/opt/vpp-agent/dev/vpp/build-root/install-vpp_debug-native/vpp
  libdir  
/opt/vpp-agent/dev/vpp/build-root/install-vpp_debug-native/vpp/lib64
  includedir  ${prefix}/include
  CFLAGS   -g -O0 -DCLIB_DEBUG -DFORTIFY_SOURCE=2 
-fstack-protector-all -fPIC -Werror
  CPPFLAGS  
-I/opt/vpp-agent/dev/vpp/build-root/install-vpp_debug-native/dpdk/include/dpdk 
-I/usr/include/dpdk
  LDFLAGS  -g -O0 -DCLIB_DEBUG -DFORTIFY_SOURCE=2 
-fstack-protector-all -fPIC -Werror   
-L/opt/vpp-agent/dev/vpp/build-root/install-vpp_debug-native/dpdk/lib 
-Wl,-rpath 
-Wl,/opt/vpp-agent/dev/vpp/build-root/install-vpp_debug-native/dpdk/lib

with:
  libssl  yes
.
==
 Building vpp in 
/opt/vpp-agent/dev/vpp/build-root/build-vpp_debug-native/vpp 
make[2]: Entering directory 
'/opt/vpp-agent/dev/vpp/build-root/build-vpp_debug-native/vpp'
  YACC tools/vppapigen/gram.c
  CC   vppinfra/socket.lo
  CC   vppinfra/timer.lo
  CC   vppinfra/unix-formats.lo
  CC   vppinfra/unix-misc.lo
  VERSION  vpp/app/version.h (18.01-rc0~374-g2d36ed2)
  CC   tools/vppapigen/lex.o
  CC   tools/vppapigen/gram.o
  CC   tools/vppapigen/node.o
  CC   vppinfra/asm_x86.lo
 CC   vppinfra/backtrace.lo
  CC   vppinfra/cpu.lo
  CC   vppinfra/elf.lo
  CC   vppinfra/elog.lo
  CC   vppinfra/error.lo
  CC   vppinfra/fifo.lo
  CC   vppinfra/fheap.lo
  CC   vppinfra/format.lo
  CC   vppinfra/pool.lo
  CC   vppinfra/graph.lo
  CC   vppinfra/hash.lo
  CC   vppinfra/heap.lo
  CPPASvppinfra/longjmp.lo
  CC   vppinfra/macros.lo
  CC   vppinfra/mhash.lo
  CC   vppinfra/mheap.lo
  CC   vppinfra/md5.lo
  CC   vppinfra/mem_mheap.lo
  CC   vppinfra/ptclosure.lo
  CC   vppinfra/random.lo
  CC   vppinfra/random_buffer.lo
  CC   vppinfra/random_isaac.lo
  CC   vppinfra/serialize.lo
  CC   vppinfra/slist.lo
  CC   vppinfra/std-formats.lo
  CC   vppinfra/string.lo
  CC   vppinfra/time.lo
  CC   vppinfra/timing_wheel.lo
  CC   vppinfra/tw_timer_2t_1w_2048sl.lo
  CC   vppinfra/tw_timer_16t_2w_512sl.lo
  CC   

Re: [vpp-dev] VLAN to VLAN

2018-04-20 Thread carlito nueno
Hi Andrew,

VPP version: vpp v17.10-release

Packet trace:
- vpp# trace add dpdk-input 100
- started ping from 192.168.3.16 to 192.168.2.181
- vpp# show trace

GigabitEthernet0/14/0:idx 1
tap-0   :idx 9

GigabitEthernet0/14/0.2:idx 11
tap-1  :idx 12

GigabitEthernet0/14/0.3:idx 14
tap-2  :idx 15

Packet 3

18:47:54:765589: dpdk-input
  GigabitEthernet0/14/0 rx queue 0
  buffer 0x1ac8e: current data 0, length 60, free-list 0, clone-count
0, totlen-nifb 0, trace 0x2
  PKT MBUF: port 0, nb_segs 1, pkt_len 60
buf_len 2176, data_len 60, ol_flags 0x180, data_off 128, phys_addr
0x6b1b23c0
packet_type 0x0
Packet Offload Flags
  PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid
  PKT_RX_L4_CKSUM_GOOD (0x0100) L4 cksum of RX pkt. is valid
  0x0026: 40:a5:ef:89:fc:a0 -> 01:80:c2:00:00:00 802.1q vlan 2
18:47:54:765593: ethernet-input
  0x0026: 40:a5:ef:89:fc:a0 -> 01:80:c2:00:00:00 802.1q vlan 2
18:47:54:765597: l2-input
  l2-input: sw_if_index 11 dst 01:80:c2:00:00:00 src 40:a5:ef:89:fc:a0
18:47:54:765598: l2-input-classify
  l2-classify: sw_if_index 11, table -1, offset 0, next 12
18:47:54:765600: l2-input-vtr
  l2-input-vtr: sw_if_index 11 dst 01:80:c2:00:00:00 src
40:a5:ef:89:fc:a0 data 00 26 42 42 03 00 00 00 00 00 7f ff
18:47:54:765601: l2-learn
  l2-learn: sw_if_index 11 dst 01:80:c2:00:00:00 src 40:a5:ef:89:fc:a0
bd_index 2
18:47:54:765602: l2-flood
  l2-flood: sw_if_index 11 dst 01:80:c2:00:00:00 src 40:a5:ef:89:fc:a0
bd_index 2
18:47:54:765604: l2-output
  l2-output: sw_if_index 12 dst 01:80:c2:00:00:00 src
40:a5:ef:89:fc:a0 data 00 26 42 42 03 00 00 00 00 00 7f ff
18:47:54:765605: tap-1-output
  tap-1
  0x0026: 40:a5:ef:89:fc:a0 -> 01:80:c2:00:00:00
18:47:54:765620: l2-flood
  l2-flood: sw_if_index 11 dst 42:42:03:00:00:00 src 00:00:7f:ff:40:a5
bd_index 2
18:47:54:765622: error-drop
  l2-flood: BVI packet with unhandled ethertype

Packet 5

18:47:55:725667: dpdk-input
  GigabitEthernet0/14/0 rx queue 0
  buffer 0x3c987: current data 0, length 60, free-list 0, clone-count
0, totlen-nifb 0, trace 0x4
  PKT MBUF: port 0, nb_segs 1, pkt_len 60
buf_len 2176, data_len 60, ol_flags 0x180, data_off 128, phys_addr
0x6ba26200
packet_type 0x0
Packet Offload Flags
  PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid
  PKT_RX_L4_CKSUM_GOOD (0x0100) L4 cksum of RX pkt. is valid
  0x0026: 40:a5:ef:89:fc:a0 -> 01:80:c2:00:00:00 802.1q vlan 3
18:47:55:725672: ethernet-input
  0x0026: 40:a5:ef:89:fc:a0 -> 01:80:c2:00:00:00 802.1q vlan 3
18:47:55:725676: l2-input
  l2-input: sw_if_index 14 dst 01:80:c2:00:00:00 src 40:a5:ef:89:fc:a0
18:47:55:725677: l2-input-classify
  l2-classify: sw_if_index 14, table -1, offset 0, next 12
18:47:55:725678: l2-input-vtr
  l2-input-vtr: sw_if_index 14 dst 01:80:c2:00:00:00 src
40:a5:ef:89:fc:a0 data 00 26 42 42 03 00 00 00 00 00 7f ff
18:47:55:725678: l2-learn
  l2-learn: sw_if_index 14 dst 01:80:c2:00:00:00 src 40:a5:ef:89:fc:a0
bd_index 3
18:47:55:725679: l2-flood
  l2-flood: sw_if_index 14 dst 01:80:c2:00:00:00 src 40:a5:ef:89:fc:a0
bd_index 3
18:47:55:725680: l2-output
  l2-output: sw_if_index 15 dst 01:80:c2:00:00:00 src
40:a5:ef:89:fc:a0 data 00 26 42 42 03 00 00 00 00 00 7f ff
18:47:55:725681: tap-2-output
  tap-2
  0x0026: 40:a5:ef:89:fc:a0 -> 01:80:c2:00:00:00
18:47:55:725696: l2-flood
  l2-flood: sw_if_index 14 dst 42:42:03:00:00:00 src 00:00:7f:ff:aa:a9
bd_index 3
18:47:55:725697: error-drop
  l2-flood: BVI packet with unhandled ethertype


Packet 8

18:47:56:729547: dpdk-input
  GigabitEthernet0/14/0 rx queue 0
  buffer 0x2b6e: current data 0, length 330, free-list 0, clone-count
0, totlen-nifb 0, trace 0x7
  PKT MBUF: port 0, nb_segs 1, pkt_len 330
buf_len 2176, data_len 330, ol_flags 0x180, data_off 128,
phys_addr 0x6abadbc0
packet_type 0x211
Packet Offload Flags
  PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid
  PKT_RX_L4_CKSUM_GOOD (0x0100) L4 cksum of RX pkt. is valid
Packet Types
  RTE_PTYPE_L2_ETHER (0x0001) Ethernet packet
  RTE_PTYPE_L3_IPV4 (0x0010) IPv4 packet without extension headers
  RTE_PTYPE_L4_UDP (0x0200) UDP packet
  IP4: 74:da:38:0d:43:59 -> ff:ff:ff:ff:ff:ff 802.1q vlan 3
  UDP: 192.168.3.16 -> 192.168.3.255
tos 0x00, ttl 64, length 312, checksum 0xa64a
fragment id 0x4b0b
  UDP: 17500 -> 17500
length 292, checksum 0x5510
18:47:56:729550: ethernet-input
  IP4: 74:da:38:0d:43:59 -> ff:ff:ff:ff:ff:ff 802.1q vlan 3
18:47:56:729553: l2-input
  l2-input: sw_if_index 14 dst ff:ff:ff:ff:ff:ff src 74:da:38:0d:43:59
18:47:56:729554: l2-input-classify
  l2-classify: sw_if_index 14, table -1, offset 0, next 12
18:47:56:729555: l2-input-vtr
  l2-input-vtr: sw_if_index 14 dst ff:ff:ff:ff:ff:ff src
74:da:38:0d:43:59 data 08 00 45 00 01 38 4b 0b 00 00 40 11
18:47:56:729555: l2-learn
  l2-learn: sw_if_index 14 dst ff:ff:ff:ff:ff:ff 

Re: [vpp-dev] Trying to build vpp on amd64 platform via qemu static library qemu-aarch64-static in Docker container

2018-04-20 Thread Stanislav Chlebec
Hello Brian
I have not yet tried to build it in the arm64 VM.
What I found out further is that the build fails in the file src/vppinfra/time.h
...

#elif defined (__aarch64__)
always_inline u64
clib_cpu_time_now (void)
{
  u64 tsc;

  /* Works on Cavium ThunderX. Other platforms: YMMV */
  asm volatile ("mrs %0, cntvct_el0":"=r" (tsc));

  return tsc;
}
...

Stanislav


From: Brian Brooks [mailto:brian.bro...@arm.com]
Sent: Monday, April 16, 2018 7:51 PM
To: Stanislav Chlebec ; vpp-dev@lists.fd.io
Subject: RE: Trying to build vpp on amd64 platform via qemu static library 
qemu-aarch64-static in Docker container

Hi Stanislav,

Does the build work if you git clone and make build-release inside the arm64 VM 
(no docker)?

Brian

From: vpp-dev@lists.fd.io On Behalf Of Stanislav Chlebec
Sent: Monday, April 16, 2018 2:27 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] Trying to build vpp on amd64 platform via qemu static 
library qemu-aarch64-static in Docker container

Hello all
I am trying to prepare an arm64 docker image (based on arm64v8/ubuntu:latest) in
which vpp will be compiled and installed.
I do it on an amd64 platform, using the static qemu binary qemu-aarch64-static
to emulate arm64 instructions.

(
I found how to do it here:
https://blog.hypriot.com/post/setup-simple-ci-pipeline-for-arm-images/
http://www.hotblackrobotics.com/en/blog/2018/01/22/docker-images-arm/
)

Everything goes more or less well, but it fails at this Dockerfile step:
{
RUN /bin/bash -c "\
git clone https://github.com/vpp-dev/vpp.git \
&& cd vpp \
&& git checkout ${VPP_COMMIT} \
&& UNATTENDED=y make vpp_configure_args_vpp='--disable-japi --disable-vom' 
install-dep bootstrap dpdk-install-dev build build-release;"
}

It seems that the build of vpp completes without errors, but the next process
(installing of dpdk?) ends with error 21326 Illegal instruction:
{

Build complete [arm64-armv8a-linuxapp-gcc]
== Installing /opt/vpp-agent/dev/vpp/dpdk/deb/debian/tmp/usr/


==
  version vpp 18.01
  prefix  
/opt/vpp-agent/dev/vpp/build-root/install-vpp_debug-native/vpp
  libdir  
/opt/vpp-agent/dev/vpp/build-root/install-vpp_debug-native/vpp/lib64
  includedir  ${prefix}/include
  CFLAGS   -g -O0 -DCLIB_DEBUG -DFORTIFY_SOURCE=2 
-fstack-protector-all -fPIC -Werror
  CPPFLAGS  
-I/opt/vpp-agent/dev/vpp/build-root/install-vpp_debug-native/dpdk/include/dpdk 
-I/usr/include/dpdk
  LDFLAGS  -g -O0 -DCLIB_DEBUG -DFORTIFY_SOURCE=2 
-fstack-protector-all -fPIC -Werror   
-L/opt/vpp-agent/dev/vpp/build-root/install-vpp_debug-native/dpdk/lib 
-Wl,-rpath 
-Wl,/opt/vpp-agent/dev/vpp/build-root/install-vpp_debug-native/dpdk/lib

with:
  libssl  yes
.
==
 Building vpp in 
/opt/vpp-agent/dev/vpp/build-root/build-vpp_debug-native/vpp 
make[2]: Entering directory 
'/opt/vpp-agent/dev/vpp/build-root/build-vpp_debug-native/vpp'
  YACC tools/vppapigen/gram.c
  CC   vppinfra/socket.lo
  CC   vppinfra/timer.lo
  CC   vppinfra/unix-formats.lo
  CC   vppinfra/unix-misc.lo
  VERSION  vpp/app/version.h (18.01-rc0~374-g2d36ed2)
  CC   tools/vppapigen/lex.o
  CC   tools/vppapigen/gram.o
  CC   tools/vppapigen/node.o
  CC   vppinfra/asm_x86.lo
 CC   vppinfra/backtrace.lo
  CC   vppinfra/cpu.lo
  CC   vppinfra/elf.lo
  CC   vppinfra/elog.lo
  CC   vppinfra/error.lo
  CC   vppinfra/fifo.lo
  CC   vppinfra/fheap.lo
  CC   vppinfra/format.lo
  CC   vppinfra/pool.lo
  CC   vppinfra/graph.lo
  CC   vppinfra/hash.lo
  CC   vppinfra/heap.lo
  CPPASvppinfra/longjmp.lo
  CC   vppinfra/macros.lo
  CC   vppinfra/mhash.lo
  CC   vppinfra/mheap.lo
  CC   vppinfra/md5.lo
  CC   vppinfra/mem_mheap.lo
  CC   vppinfra/ptclosure.lo
  CC   vppinfra/random.lo
  CC   vppinfra/random_buffer.lo
  CC   vppinfra/random_isaac.lo
  CC   vppinfra/serialize.lo
  CC   vppinfra/slist.lo
  CC   vppinfra/std-formats.lo
  CC   vppinfra/string.lo
  CC   vppinfra/time.lo
  CC   vppinfra/timing_wheel.lo
  CC   vppinfra/tw_timer_2t_1w_2048sl.lo
  CC   vppinfra/tw_timer_16t_2w_512sl.lo
  CC   vppinfra/tw_timer_16t_1w_2048sl.lo
  CC   vppinfra/tw_timer_4t_3w_256sl.lo
  CC   vppinfra/tw_timer_1t_3w_1024sl_ov.lo
  CC   vppinfra/unformat.lo
  CC   vppinfra/vec.lo
  CC   vppinfra/vector.lo
  CC   vppinfra/zvec.lo
  CC   vppinfra/elf_clib.lo
  CC   vppinfra/linux/mem.lo
  CC   vppinfra/linux/sysfs.lo
  CCLD libvppinfra.la
  CCLD vppapigen
  APIGEN 

Re: [csit-dev] [vpp-dev] ARM vpp-dpdk-dkms nexus artifacts - CSIT

2018-04-20 Thread Peter Mikus
Please see my comments in gerrit. As well as below.

Peter Mikus
Engineer – Software
Cisco Systems Limited

From: Ed Kern (ejk)
Sent: Thursday, April 19, 2018 6:58 PM
To: Peter Mikus -X (pmikus - PANTHEON TECHNOLOGIES at Cisco) 
Cc: Vratko Polak -X (vrpolak - PANTHEON TECHNOLOGIES at Cisco) 
; Marek Gradzki -X (mgradzki - PANTHEON TECHNOLOGIES at 
Cisco) ; Maciek Konstantynowicz (mkonstan) 
; Vanessa Valderrama ; 
csit-...@lists.fd.io; vpp-dev ; hc2...@lists.fd.io; 
honeycomb-...@lists.fd.io
Subject: Re: [csit-dev] [vpp-dev] ARM vpp-dpdk-dkms nexus artifacts - CSIT




On Apr 19, 2018, at 6:59 AM, Peter Mikus -X (pmikus - PANTHEON TECHNOLOGIES at 
Cisco) wrote:

Hello Ed,

I rewrote the script to use apt-get/yum instead, but I have one concern about
whether this is in compliance with your POC system. Can you please check?

Don't see anything here that gives me pause about converting to container
builds..

Many other questions about this gerrit.. most I will include on the gerrit, but…

[pm] replied in gerrit, mainly done

Without Vratko working on the repo manifests, those will still be incorrect, is
that right?

[pm] yes

This seems to lock to master branch only… you don't want/need the ability to test
release branches with whatever job runs this script?

[pm] No, in CSIT we have a special file with branches and the code does
recognize the different ones

Not using ‘standard’ REPO_NAME also locks you into xenial, which means this will
be one more thing we need to revisit when we do the bionic swing

[pm] done? -> gerrit

Until I'm told otherwise opensuse is a ‘full citizen’, so please take that into
account (in this case it's easy since it's the same as centos,
with the exception of the ARTIFACTS list).

[pm] Agree, done


Ed


I tested locally and it looks like it is ok. But it needs a proper review.

[1] https://gerrit.fd.io/r/#/c/11928/

Thank you.

Peter Mikus
Engineer – Software
Cisco Systems Limited

From: Ed Kern (ejk)
Sent: Wednesday, April 18, 2018 5:10 PM
To: Peter Mikus -X (pmikus - PANTHEON TECHNOLOGIES at Cisco)
Cc: Vratko Polak -X (vrpolak - PANTHEON TECHNOLOGIES at Cisco); Marek Gradzki -X
(mgradzki - PANTHEON TECHNOLOGIES at Cisco); Maciek Konstantynowicz (mkonstan);
Vanessa Valderrama; csit-...@lists.fd.io; vpp-dev; hc2...@lists.fd.io;
honeycomb-...@lists.fd.io
Subject: Re: [csit-dev] [vpp-dev] ARM vpp-dpdk-dkms nexus artifacts - CSIT

Don't get me wrong.. I'm behind Vratko's thinking about doing the restructure…
just didn't want to rush that in (unless that fix is simpler than it appears),
but wanted to get you working again right away.

There is no point in just deleting the arm packages since they would just get 
repopulated quickly

although I am still thinking about other options to explore..

Ed



On Apr 18, 2018, at 9:05 AM, Peter Mikus -X (pmikus - PANTHEON TECHNOLOGIES at 
Cisco) wrote:

Thank you for the inputs. I agree that we can put in a temporary workaround for
now. Unless someone beats me to it, I will do it tomorrow.
I think that a long-term solution is more than welcome, looking at this not only
through the optics of CSIT but of anyone who will look at Nexus and wonder why
RELEASE is arm64 only.

Any views on who is maintaining the Nexus storage from a configuration point of view?

Peter Mikus
Engineer – Software
Cisco Systems Limited

From: Ed Kern (ejk)
Sent: Wednesday, April 18, 2018 5:00 PM
To: Vratko Polak -X (vrpolak - PANTHEON TECHNOLOGIES at Cisco); Peter Mikus -X
(pmikus - PANTHEON TECHNOLOGIES at Cisco)
Cc: Marek Gradzki -X (mgradzki - PANTHEON TECHNOLOGIES at Cisco); Maciek
Konstantynowicz (mkonstan); Vanessa Valderrama; csit-...@lists.fd.io; vpp-dev;
hc2...@lists.fd.io; honeycomb-...@lists.fd.io
Subject: Re: [csit-dev] [vpp-dev] ARM vpp-dpdk-dkms nexus artifacts - CSIT

Vratko: responding to the thread but NOT to your email.. I'm going to assume
you're correct that it is abusing the version field
and that Nexus could/should be doing something different.. and what you're
saying about version timing 
Re: [csit-dev] [vpp-dev] ARM vpp-dpdk-dkms nexus artifacts - CSIT

2018-04-20 Thread Vratko Polak -X (vrpolak - PANTHEON TECHNOLOGIES at Cisco)
> vratko working on the repo manifests

I have created a possible fix [2],
but I am only 80% confident it will work as intended
(without breaking something else).

I do not know how to test it before merge.
Any reviewers?

Vratko.

[2] https://gerrit.fd.io/r/11958

From: vpp-dev@lists.fd.io  On Behalf Of Tina Tsou
Sent: Thursday, 2018-April-19 19:16
To: Ed Kern (ejk) 
Cc: Peter Mikus -X (pmikus - PANTHEON TECHNOLOGIES at Cisco) 
; Vratko Polak -X (vrpolak - PANTHEON TECHNOLOGIES at Cisco) 
; Marek Gradzki -X (mgradzki - PANTHEON TECHNOLOGIES at 
Cisco) ; Maciek Konstantynowicz (mkonstan) 
; Vanessa Valderrama ; 
csit-...@lists.fd.io; vpp-dev ; hc2...@lists.fd.io; 
honeycomb-...@lists.fd.io
Subject: Re: [csit-dev] [vpp-dev] ARM vpp-dpdk-dkms nexus artifacts - CSIT

Dear Ed,

What do we need to do to pass CI?
make build
make test?

Also including
make verify?

Thank you,
Tina

On Apr 19, 2018, at 9:58 AM, Ed Kern wrote:



On Apr 19, 2018, at 6:59 AM, Peter Mikus -X (pmikus - PANTHEON TECHNOLOGIES at 
Cisco) wrote:

Hello Ed,

I rewrote the script to use apt-get/yum instead, but I have one concern about
whether this is in compliance with your POC system. Can you please check?

Don't see anything here that gives me pause about converting to container
builds..

Many other questions about this gerrit.. most I will include on the gerrit, but…

Without Vratko working on the repo manifests, those will still be incorrect, is
that right?

This seems to lock to master branch only… you don't want/need the ability to test
release branches with whatever job runs this script?

Not using ‘standard’ REPO_NAME also locks you into xenial, which means this will
be one more thing we need to revisit when we do the bionic swing

Until I'm told otherwise opensuse is a ‘full citizen’, so please take that into
account (in this case it's easy since it's the same as centos,
with the exception of the ARTIFACTS list).


Ed


I tested locally and it looks like it is ok. But it needs a proper review.

[1] https://gerrit.fd.io/r/#/c/11928/

Thank you.

Peter Mikus
Engineer – Software
Cisco Systems Limited

From: Ed Kern (ejk)
Sent: Wednesday, April 18, 2018 5:10 PM
To: Peter Mikus -X (pmikus - PANTHEON TECHNOLOGIES at Cisco)
Cc: Vratko Polak -X (vrpolak - PANTHEON TECHNOLOGIES at Cisco); Marek Gradzki -X
(mgradzki - PANTHEON TECHNOLOGIES at Cisco); Maciek Konstantynowicz (mkonstan);
Vanessa Valderrama; csit-...@lists.fd.io; vpp-dev; hc2...@lists.fd.io;
honeycomb-...@lists.fd.io
Subject: Re: [csit-dev] [vpp-dev] ARM vpp-dpdk-dkms nexus artifacts - CSIT

Don't get me wrong.. I'm behind Vratko's thinking about doing the restructure…
just didn't want to rush that in (unless that fix is simpler than it appears),
but wanted to get you working again right away.

There is no point in just deleting the arm packages since they would just get 
repopulated quickly

although I am still thinking about other options to explore..

Ed



On Apr 18, 2018, at 9:05 AM, Peter Mikus -X (pmikus - PANTHEON TECHNOLOGIES at 
Cisco) wrote:

Thank you for the inputs. I agree that we can put in a temporary workaround for
now. Unless someone beats me to it, I will do it tomorrow.
I think that a long-term solution is more than welcome, looking at this not only
through the optics of CSIT but of anyone who will look at Nexus and wonder why
RELEASE is arm64 only.

Any views on who is maintaining the Nexus storage from a configuration point of view?

Peter Mikus
Engineer – Software
Cisco Systems Limited

From: Ed Kern (ejk)
Sent: Wednesday, April 18, 2018 5:00 PM
To: Vratko Polak -X (vrpolak - PANTHEON TECHNOLOGIES at Cisco); Peter Mikus -X
(pmikus - PANTHEON TECHNOLOGIES at Cisco)
Cc: Marek Gradzki -X (mgradzki - PANTHEON TECHNOLOGIES at Cisco); Maciek
Konstantynowicz (mkonstan); Vanessa Valderrama; csit-...@lists.fd.io; vpp-dev;
hc2...@lists.fd.io; honeycomb-...@lists.fd.io
Subject: Re: [csit-dev] [vpp-dev] ARM vpp-dpdk-dkms nexus 

Re: [vpp-dev] mheap performance issue and fixup

2018-04-20 Thread Damjan Marion
Thanks,

I added -2 before it is discussed. Dave is back from vacation next week and he
is most familiar with that code...

--
Damjan

On 20 Apr 2018, at 11:29, Kingwel Xie wrote:

Hi,

Finally I managed to create 3 patches to include all modifications to mheap. 
Please check below for details. I’ll do some other patches later…

https://gerrit.fd.io/r/11950
https://gerrit.fd.io/r/11952
https://gerrit.fd.io/r/11957

Hi Xue, you need at least the first one for your test.

Regards,
Kingwel

From: Kingwel Xie
Sent: Thursday, April 19, 2018 9:20 AM
To: Damjan Marion
Cc: vpp-dev@lists.fd.io
Subject: RE: [vpp-dev] mheap performance issue and fixup

Hi Damjan,

We will do it asap. Actually we are quite new to VPP and don't even know yet how
to make bug reports and code contributions.

Regards,
Kingwel

From: vpp-dev@lists.fd.io 
[mailto:vpp-dev@lists.fd.io] On Behalf Of Damjan Marion
Sent: Wednesday, April 18, 2018 11:30 PM
To: Kingwel Xie
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] mheap performance issue and fixup

Dear Kingwel,

Thank you for your email. It will be really appreciated if you can submit your
changes to gerrit, preferably each point in a separate patch.
That will be the best place to discuss those changes...

Thanks in Advance,

--
Damjan

On 16 Apr 2018, at 10:13, Kingwel Xie wrote:

Hi all,

We recently worked on GTPU tunnels and our target is to create 2M tunnels. It is
not as easy as it looks, and it took us quite some time to figure out. The
biggest problem we found is in mheap, which as you know is the low-level memory
management function of VPP. We believe it makes sense to share what we found and
what we’ve done to improve the performance of mheap.

First of all, mheap is fast. It has a well-designed small object cache and
multi-level free lists to speed up get/put. However, as discussed on this
mailing list before, it has a performance issue when dealing with
align/align_offset allocation. We traced the problem to a pointer ‘rewrite’ in
gtp_tunnel_t. This rewrite is a vector required to be aligned to a 64B cache
line, therefore with a 4-byte align offset. We realized that the free list must
be very long, meaning many mheap_elts, but unfortunately without an element
that fits all 3 prerequisites: size, align, and align offset. In that case each
allocation has to traverse all elements till it reaches the end of the list. As
a result, you might observe each allocation is greater than 10 clocks/call with
‘show memory verbose’. That indicates the allocation takes too long, while it
should be 200~300 clocks/call in general. Also you should have noticed
‘per-attempt’ is quite high, even more than 100.

The fix is straightforward: as discussed in this mailing list before, allocate
‘rewrite’ from a pool instead of from mheap. Frankly speaking, that looks like a
workaround rather than a real fix, so we spent some time fixing the problem
thoroughly. The idea is to add a few more bytes to the originally required block
size so that mheap will always look in a bigger free list, where a suitable
block can most likely be located easily. Well, now the problem becomes: how big
is this extra size? It should be at least align+align_offset, which is not hard
to understand. But after careful analysis we think it is better as below, see
the code:

mheap.c:545
  word modifier = (align > MHEAP_USER_DATA_WORD_BYTES
                   ? align + align_offset + sizeof (mheap_elt_t) : 0);
  bin = user_data_size_to_bin_index (n_user_bytes + modifier);
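
As a worked illustration (the constants below are stand-ins chosen for the
example, not the actual vppinfra values): a 64-byte request with align=64 and
align_offset=4 is searched in the free-list bin sized for the larger total, so
the first element found can absorb the alignment padding.

/* Hypothetical sketch of the bin-selection change; MHEAP_ELT_BYTES is a
 * stand-in for sizeof (mheap_elt_t). */
#include <stdio.h>

#define MHEAP_USER_DATA_WORD_BYTES 8	/* assumed word size */
#define MHEAP_ELT_BYTES 8		/* assumed element header size */

int main (void)
{
  unsigned n_user_bytes = 64, align = 64, align_offset = 4;
  unsigned modifier = align > MHEAP_USER_DATA_WORD_BYTES
    ? align + align_offset + MHEAP_ELT_BYTES : 0;

  /* prints: free-list search sized for 140 bytes instead of 64 */
  printf ("free-list search sized for %u bytes instead of %u\n",
          n_user_bytes + modifier, n_user_bytes);
  return 0;
}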

The reason for the extra sizeof(mheap_elt_t) is to avoid lo_free_size being too
small to hold a complete free element. You will understand it if you really know
how mheap_get_search_free_bin works; I am not going to go through the details
here. In short, every lookup in the free list will locate a suitable element; in
other words, the hit rate of the free list will be almost 100%, and
‘per-attempt’ will always be around 1. The test results look very promising,
please see below after adding 2M gtpu tunnels and 2M routing entries:

Thread 0 vpp_main
13689507 objects, 3048367k of 3505932k used, 243663k free, 243656k reclaimed, 
106951k overhead, 4194300k capacity
  alloc. from small object cache: 47325868 hits 65271210 attempts (72.51%) 
replacements 8266122
  alloc. from free-list: 21879233 attempts, 21877898 hits (99.99%), 21882794 
considered (per-attempt 1.00)
  alloc. low splits: 13355414, high splits: 512984, combined: 281968
  alloc. from vector-expand: 81907
  allocs: 69285673 276.00 clocks/call
  frees: 55596166 173.09 clocks/call
Free list:
bin 3:
20(82220170 48)
total 1
bin 273:

Re: [vpp-dev] mheap performance issue and fixup

2018-04-20 Thread xyxue
Hi Kingwel,

Thank you very much for your help. 

Thanks,
Xyxue


 
From: Kingwel Xie
Date: 2018-04-20 17:29
To: Damjan Marion; Neale Ranns (nranns); 薛欣颖
CC: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] mheap performance issue and fixup
Hi,
 
Finally I managed to create 3 patches to include all modifications to mheap. 
Please check below for details. I’ll do some other patches later…
 
https://gerrit.fd.io/r/11950
https://gerrit.fd.io/r/11952
https://gerrit.fd.io/r/11957
 
Hi Xue, you need at least the first one for your test.
 
Regards,
Kingwel
 
From: Kingwel Xie 
Sent: Thursday, April 19, 2018 9:20 AM
To: Damjan Marion 
Cc: vpp-dev@lists.fd.io
Subject: RE: [vpp-dev] mheap performance issue and fixup
 
Hi Damjan,
 
We will do it asap. Actually we are quite new to VPP and don't even know yet how
to make bug reports and code contributions.
 
Regards,
Kingwel
 
From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of Damjan 
Marion
Sent: Wednesday, April 18, 2018 11:30 PM
To: Kingwel Xie 
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] mheap performance issue and fixup
 
Dear Kingwel, 
 
Thank you for your email. It will be really appreciated if you can submit your
changes to gerrit, preferably each point in a separate patch.
That will be the best place to discuss those changes...
 
Thanks in Advance,
 
-- 
Damjan
 
On 16 Apr 2018, at 10:13, Kingwel Xie  wrote:
 
Hi all,
 
We recently worked on GTPU tunnels and our target is to create 2M tunnels. It is
not as easy as it looks, and it took us quite some time to figure out. The
biggest problem we found is in mheap, which as you know is the low-level memory
management function of VPP. We believe it makes sense to share what we found and
what we’ve done to improve the performance of mheap.
 
First of all, mheap is fast. It has a well-designed small object cache and
multi-level free lists to speed up get/put. However, as discussed on this
mailing list before, it has a performance issue when dealing with
align/align_offset allocation. We traced the problem to a pointer ‘rewrite’ in
gtp_tunnel_t. This rewrite is a vector required to be aligned to a 64B cache
line, therefore with a 4-byte align offset. We realized that the free list must
be very long, meaning many mheap_elts, but unfortunately without an element
that fits all 3 prerequisites: size, align, and align offset. In that case each
allocation has to traverse all elements till it reaches the end of the list. As
a result, you might observe each allocation is greater than 10 clocks/call with
‘show memory verbose’. That indicates the allocation takes too long, while it
should be 200~300 clocks/call in general. Also you should have noticed
‘per-attempt’ is quite high, even more than 100.
 
The fix is straightforward: as discussed in this mailing list before, allocate
‘rewrite’ from a pool instead of from mheap. Frankly speaking, that looks like a
workaround rather than a real fix, so we spent some time fixing the problem
thoroughly. The idea is to add a few more bytes to the originally required block
size so that mheap will always look in a bigger free list, where a suitable
block can most likely be located easily. Well, now the problem becomes: how big
is this extra size? It should be at least align+align_offset, which is not hard
to understand. But after careful analysis we think it is better as below, see
the code:
 
mheap.c:545
  word modifier = (align > MHEAP_USER_DATA_WORD_BYTES
                   ? align + align_offset + sizeof (mheap_elt_t) : 0);
  bin = user_data_size_to_bin_index (n_user_bytes + modifier);
 
The reason for the extra sizeof(mheap_elt_t) is to avoid lo_free_size being too
small to hold a complete free element. You will understand it if you really know
how mheap_get_search_free_bin works; I am not going to go through the details
here. In short, every lookup in the free list will locate a suitable element; in
other words, the hit rate of the free list will be almost 100%, and
‘per-attempt’ will always be around 1. The test results look very promising,
please see below after adding 2M gtpu tunnels and 2M routing entries:
 
Thread 0 vpp_main
13689507 objects, 3048367k of 3505932k used, 243663k free, 243656k reclaimed, 
106951k overhead, 4194300k capacity
  alloc. from small object cache: 47325868 hits 65271210 attempts (72.51%) 
replacements 8266122
  alloc. from free-list: 21879233 attempts, 21877898 hits (99.99%), 21882794 
considered (per-attempt 1.00)
  alloc. low splits: 13355414, high splits: 512984, combined: 281968
  alloc. from vector-expand: 81907
  allocs: 69285673 276.00 clocks/call
  frees: 55596166 173.09 clocks/call
Free list:
bin 3:
20(82220170 48)
total 1
bin 273:
28340k(80569efc 60)
total 1
bin 276:
215323k(8c88df6c 44)
total 1
Total count in free bin: 3
 
You can see, as pointed out before, the hit rate is very high, 

[vpp-dev] question about set ip arp

2018-04-20 Thread xyxue

Hi guys,

I'm testing 'set ip arp'. When I don't configure the param 'no-fib-entry', the
configuration of 100k entries costs 19+ mins. When I configure the param
'no-fib-entry', the time is 9 s.
Can I use 'set ip arp ... no-fib-entry' plus 'ip route add' to achieve the same
goal as 'set ip arp' without 'no-fib-entry'?
The most time-consuming part is 'clib_bihash_foreach_key_value_pair_24_8'. The
stack info is shown below:
#0 clib_bihash_foreach_key_value_pair_24_8 (h=0x7fffb5d4c840, 
callback=0x7719c98d , arg=0x7fffb5d33dc0) 
at /home/vpp/build-data/../src/vppinfra/bihash_template.c:589 
#1 0x7719cafd in adj_nbr_walk_nh4 (sw_if_index=1, addr=0x7fffb5d4c0f8, 
cb=0x76cacb17 , ctx=0x7fffb5d4c0f4) 
at /home/vpp/build-data/../src/vnet/adj/adj_nbr.c:642 
#2 0x76cacd64 in arp_update_adjacency (vnm=0x7763a540 , 
sw_if_index=1, ai=1) at /home/vpp/build-data/../src/vnet/ethernet/arp.c:466 
#3 0x76cbb6fe in ethernet_update_adjacency (vnm=0x7763a540 
, sw_if_index=1, ai=1) at 
/home/vpp/build-data/../src/vnet/ethernet/interface.c:208 
#4 0x771aca55 in vnet_update_adjacency_for_sw_interface 
(vnm=0x7763a540 , sw_if_index=1, ai=1) 
at /home/vpp/build-data/../src/vnet/adj/rewrite.c:225 
#5 0x7719c201 in adj_nbr_add_or_lock (nh_proto=FIB_PROTOCOL_IP4, 
link_type=VNET_LINK_IP4, nh_addr=0x7fffb5d47ab0, sw_if_index=1) 
at /home/vpp/build-data/../src/vnet/adj/adj_nbr.c:246 
#6 0x7718eb6a in fib_path_attached_next_hop_get_adj 
(path=0x7fffb5d47a88, link=VNET_LINK_IP4) at 
/home/vpp/build-data/../src/vnet/fib/fib_path.c:664 
#7 0x7718ebc8 in fib_path_attached_next_hop_set (path=0x7fffb5d47a88) 
at /home/vpp/build-data/../src/vnet/fib/fib_path.c:678 
#8 0x77191077 in fib_path_resolve (path_index=14) at 
/home/vpp/build-data/../src/vnet/fib/fib_path.c:1862 
#9 0x7718adb4 in fib_path_list_resolve (path_list=0x7fffb5ade9a4) at 
/home/vpp/build-data/../src/vnet/fib/fib_path_list.c:567 
#10 0x7718b27d in fib_path_list_create (flags=FIB_PATH_LIST_FLAG_NONE, 
rpaths=0x7fffb5d4c56c) at 
/home/vpp/build-data/../src/vnet/fib/fib_path_list.c:734 
#11 0x77185732 in fib_entry_src_adj_path_swap (src=0x7fffb5c3aa94, 
entry=0x7fffb5d3ad2c, pl_flags=FIB_PATH_LIST_FLAG_NONE, paths=0x7fffb5d4c56c) 
at /home/vpp/build-data/../src/vnet/fib/fib_entry_src_adj.c:110 
#12 0x77181ed7 in fib_entry_src_action_path_swap 
(fib_entry=0x7fffb5d3ad2c, source=FIB_SOURCE_ADJ, 
flags=FIB_ENTRY_FLAG_ATTACHED, rpaths=0x7fffb5d4c56c) 
at /home/vpp/build-data/../src/vnet/fib/fib_entry_src.c:1191 
#13 0x7717d63c in fib_entry_create (fib_index=0, prefix=0x7fffb5d34400, 
source=FIB_SOURCE_ADJ, flags=FIB_ENTRY_FLAG_ATTACHED, paths=0x7fffb5d4c56c) 
at /home/vpp/build-data/../src/vnet/fib/fib_entry.c:828 
#14 0x7716dcca in fib_table_entry_path_add2 (fib_index=0, 
prefix=0x7fffb5d34400, source=FIB_SOURCE_ADJ, flags=FIB_ENTRY_FLAG_ATTACHED, 
rpath=0x7fffb5d4c56c) 
at /home/vpp/build-data/../src/vnet/fib/fib_table.c:597 
#15 0x7716dba9 in fib_table_entry_path_add (fib_index=0, 
prefix=0x7fffb5d34400, source=FIB_SOURCE_ADJ, flags=FIB_ENTRY_FLAG_ATTACHED, 
next_hop_proto=DPO_PROTO_IP4, 
next_hop=0x7fffb5d34404, next_hop_sw_if_index=1, next_hop_fib_index=4294967295, 
next_hop_weight=1, next_hop_labels=0x0, path_flags=FIB_ROUTE_PATH_FLAG_NONE) 
at /home/vpp/build-data/../src/vnet/fib/fib_table.c:569 
#16 0x76cacef5 in arp_adj_fib_add (e=0x7fffb5d4c0f4, fib_index=0) at 
/home/vpp/build-data/../src/vnet/ethernet/arp.c:550 
#17 0x76cad644 in vnet_arp_set_ip4_over_ethernet_internal 
(vnm=0x7763a540 , args=0x7fffb5d34700) 
at /home/vpp/build-data/../src/vnet/ethernet/arp.c:618 
#18 0x76cb2f1a in set_ip4_over_ethernet_rpc_callback (a=0x7fffb5d34700) 
at /home/vpp/build-data/../src/vnet/ethernet/arp.c:1989 
#19 0x779442c9 in vl_api_rpc_call_main_thread_inline (fp=0x76cb2e09 
, data=0x7fffb5d34700 "\001", 
data_length=28, 
force_rpc=0 '\000') at 
/home/vpp/build-data/../src/vlibmemory/memory_vlib.c:2061 
#20 0x7794441c in vl_api_rpc_call_main_thread (fp=0x76cb2e09 
, data=0x7fffb5d34700 "\001", 
data_length=28) 
at /home/vpp/build-data/../src/vlibmemory/memory_vlib.c:2107 
#21 0x76cb35c7 in vnet_arp_set_ip4_over_ethernet (vnm=0x7763a540 
, sw_if_index=1, a_arg=0x7fffb5d34800, is_static=0, 
is_no_fib_entry=0) 
at /home/vpp/build-data/../src/vnet/ethernet/arp.c:2074 
#22 0x76cb4015 in ip_arp_add_del_command_fn (vm=0x77923420 
, is_del=0, input=0x7fffb5d34ec0, cmd=0x7fffb5c78864) 
at /home/vpp/build-data/../src/vnet/ethernet/arp.c:2233 
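
For context on the numbers: the trace shows each 'set ip arp' without
'no-fib-entry' reaching adj_nbr_walk_nh4, which iterates the whole adjacency
bihash via clib_bihash_foreach_key_value_pair_24_8. If every new entry triggers
such a full walk, insert i visits all i existing pairs, so total work grows
quadratically. A back-of-the-envelope sketch (a model of the walk count, not
VPP code):

/* Total key/value visits if insert i walks all i existing entries:
 * 0 + 1 + ... + (N-1) = N*(N-1)/2. */
#include <stdio.h>

int main (void)
{
  unsigned long long n = 100000ULL;
  printf ("%llu inserts => ~%llu key/value visits\n", n, n * (n - 1) / 2);
  return 0;
}

That is on the order of 5e9 visits for 100k entries, which would be consistent
with the gap between 9 s and 19+ minutes.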

Thanks,
Xyxue






Re: [vpp-dev] mheap performance issue and fixup

2018-04-20 Thread Kingwel Xie
Hi,

Finally I managed to create 3 patches to include all modifications to mheap. 
Please check below for details. I’ll do some other patches later…

https://gerrit.fd.io/r/11950
https://gerrit.fd.io/r/11952
https://gerrit.fd.io/r/11957

Hi Xue, you need at least the first one for your test.

Regards,
Kingwel

From: Kingwel Xie
Sent: Thursday, April 19, 2018 9:20 AM
To: Damjan Marion 
Cc: vpp-dev@lists.fd.io
Subject: RE: [vpp-dev] mheap performance issue and fixup

Hi Damjan,

We will do it asap. Actually we are quite new to VPP and don't even know yet how
to make bug reports and code contributions.

Regards,
Kingwel

From: vpp-dev@lists.fd.io 
[mailto:vpp-dev@lists.fd.io] On Behalf Of Damjan Marion
Sent: Wednesday, April 18, 2018 11:30 PM
To: Kingwel Xie
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] mheap performance issue and fixup

Dear Kingwel,

Thank you for your email. It will be really appreciated if you can submit your
changes to gerrit, preferably each point in a separate patch.
That will be the best place to discuss those changes...

Thanks in Advance,

--
Damjan

On 16 Apr 2018, at 10:13, Kingwel Xie wrote:

Hi all,

We recently worked on GTPU tunnels and our target is to create 2M tunnels. It is
not as easy as it looks, and it took us quite some time to figure out. The
biggest problem we found is in mheap, which as you know is the low-level memory
management function of VPP. We believe it makes sense to share what we found and
what we’ve done to improve the performance of mheap.

First of all, mheap is fast. It has a well-designed small object cache and
multi-level free lists to speed up get/put. However, as discussed on this
mailing list before, it has a performance issue when dealing with
align/align_offset allocation. We traced the problem to a pointer ‘rewrite’ in
gtp_tunnel_t. This rewrite is a vector required to be aligned to a 64B cache
line, therefore with a 4-byte align offset. We realized that the free list must
be very long, meaning many mheap_elts, but unfortunately without an element
that fits all 3 prerequisites: size, align, and align offset. In that case each
allocation has to traverse all elements till it reaches the end of the list. As
a result, you might observe each allocation is greater than 10 clocks/call with
‘show memory verbose’. That indicates the allocation takes too long, while it
should be 200~300 clocks/call in general. Also you should have noticed
‘per-attempt’ is quite high, even more than 100.

The fix is straightforward: as discussed in this mailing list before, allocate
‘rewrite’ from a pool instead of from mheap. Frankly speaking, that looks like a
workaround rather than a real fix, so we spent some time fixing the problem
thoroughly. The idea is to add a few more bytes to the originally required block
size so that mheap will always look in a bigger free list, where a suitable
block can most likely be located easily. Well, now the problem becomes: how big
is this extra size? It should be at least align+align_offset, which is not hard
to understand. But after careful analysis we think it is better as below, see
the code:

mheap.c:545
  word modifier = (align > MHEAP_USER_DATA_WORD_BYTES
                   ? align + align_offset + sizeof (mheap_elt_t) : 0);
  bin = user_data_size_to_bin_index (n_user_bytes + modifier);

The reason for the extra sizeof(mheap_elt_t) is to avoid lo_free_size being too
small to hold a complete free element. You will understand it if you really know
how mheap_get_search_free_bin works; I am not going to go through the details
here. In short, every lookup in the free list will locate a suitable element; in
other words, the hit rate of the free list will be almost 100%, and
‘per-attempt’ will always be around 1. The test results look very promising,
please see below after adding 2M gtpu tunnels and 2M routing entries:

Thread 0 vpp_main
13689507 objects, 3048367k of 3505932k used, 243663k free, 243656k reclaimed, 
106951k overhead, 4194300k capacity
  alloc. from small object cache: 47325868 hits 65271210 attempts (72.51%) 
replacements 8266122
  alloc. from free-list: 21879233 attempts, 21877898 hits (99.99%), 21882794 
considered (per-attempt 1.00)
  alloc. low splits: 13355414, high splits: 512984, combined: 281968
  alloc. from vector-expand: 81907
  allocs: 69285673 276.00 clocks/call
  frees: 55596166 173.09 clocks/call
Free list:
bin 3:
20(82220170 48)
total 1
bin 273:
28340k(80569efc 60)
total 1
bin 276:
215323k(8c88df6c 44)
total 1
Total count in free bin: 3

You can see, as pointed out before, the hit rate is very high, >99.9%, and
per-attempt is ~1. Furthermore, the total number of elements in the free list is
only 3.

Apart from what we discussed above, we also made some other 

Re: [vpp-dev] VLAN to VLAN

2018-04-20 Thread Andrew Yourtchenko
Hi Carlito,

What does the packet trace (as per 
https://wiki.fd.io/view/VPP/How_To_Use_The_Packet_Generator_and_Packet_Tracer) 
look like and which version of VPP are you running ?

--a

> On 20 Apr 2018, at 05:00, Carlito Nueno  wrote:
> 
> Thanks John.
> 
> Routing between VLANs is working. But I can't get the ACLs quite
> right. I am trying to block all communication between device A
> (192.168.3.16) on VLAN 3 and device B (192.168.2.181) on VLAN 2.
> 
> vat# acl_add_replace ipv4 deny src 192.168.3.16/32 dst 192.168.2.181/32
> vat# acl_dump
> vl_api_acl_details_t_handler:194: acl_index: 1, count: 1
>   tag {}
>   ipv4 action 0 src 192.168.3.16/32 dst 192.168.2.181/32 proto 0
> sport 0-65535 dport 0-65535 tcpflags 0 mask 0
> 
> # VLAN on subinterface GigabitEthernet0/14/0.2
> vat# acl_interface_set_acl_list sw_if_index 11 input 1 output 1
> 
> # VLAN on subinterface GigabitEthernet0/14/0.3
> vat# acl_interface_set_acl_list sw_if_index 14 input 1 output 1
> 
> vat# acl_interface_list_dump
> vl_api_acl_interface_list_details_t_handler:153: sw_if_index: 11,
> count: 2, n_input: 1
>   input 1
>  output 1
> vl_api_acl_interface_list_details_t_handler:153: sw_if_index: 14,
> count: 2, n_input: 1
>   input 1
>  output 1
> 
> I am still able to ping from 192.168.3.16 to 192.168.2.181 after the above
> commands.
> 
> Thanks
> 
>> On Thu, Apr 19, 2018 at 3:55 PM, John Lo (loj)  wrote:
>> One more comment - unless there are more VLAN 1 and VLAN 2 sub-interfaces 
>> you need to put into BDs 1 and 2, then you may just configure IP addresses 
>> on the sub-interfaces to route directly, as suggested by Andrew. It would be 
>> a lot more efficient than going through two BDs and route via BVIs.  -John
>> 
>> -Original Message-
>> From: vpp-dev@lists.fd.io  On Behalf Of John Lo (loj)
>> Sent: Thursday, April 19, 2018 4:48 PM
>> To: carlito nueno ; Andrew Yourtchenko 
>> 
>> Cc: vpp-dev@lists.fd.io
>> Subject: Re: [vpp-dev] VLAN to VLAN
>> 
>> The config looks correct and should work, assuming the following:
>> 1. The devices connected to GigabitEthernet0/14/0.2 have IP addresses in the 
>> 192.168.2.1/24 subnet with default gateway set to that of the BVI IP address 
>> of 192.168.2.1.
>> 2. The devices connected to GigabitEthernet0/14/0.3 have IP addresses in the 
>> 192.168.3.1/24 subnet with default gateway set to that of the BVI IP address 
>> of 192.168.3.1.
>> 
>> One improvement is to put the BVI interfaces into their own VRF by setting 
>> loop0 and loop1 into a specific ip table to not use the global routing 
>> table.  For example, set the following before assigning IP address to loop0 
>> and loop1:
>>   set int ip table loop0 4
>>   set int ip table loop1 4
>> This will make the routing between BD-VLANs 2 and 3 private and more secure.
>> 
>> Regards,
>> John
>> 
>> -Original Message-
>> From: vpp-dev@lists.fd.io  On Behalf Of carlito nueno
>> Sent: Thursday, April 19, 2018 4:15 PM
>> To: Andrew Yourtchenko 
>> Cc: vpp-dev@lists.fd.io
>> Subject: Re: [vpp-dev] VLAN to VLAN
>> 
>> My current VLAN config:
>> 
>> loopback create
>> set int l2 bridge loop1 2 bvi
>> set int ip address loop1 192.168.2.1/24
>> set int state loop1 up
>> 
>> create sub GigabitEthernet0/14/0 2
>> set int l2 bridge GigabitEthernet0/14/0.2 2
>> set int l2 tag-rewrite GigabitEthernet0/14/0.2 pop 1
>> set int state GigabitEthernet0/14/0.2 up
>> 
>> 
>> loopback create
>> set int l2 bridge loop2 3 bvi
>> set int ip address loop2 192.168.3.1/24
>> set int state loop2 up
>> 
>> create sub GigabitEthernet0/14/0 3
>> set int l2 bridge GigabitEthernet0/14/0.3 3
>> set int l2 tag-rewrite GigabitEthernet0/14/0.3 pop 1
>> set int state GigabitEthernet0/14/0.3 up
>> 
>> 
>> So this should route traffic between VLAN 2 and VLAN 3, correct?
>> 
>> Thanks
>> 
>>> On Thu, Apr 19, 2018 at 12:52 PM, Andrew Yourtchenko  
>>> wrote:
>>> 
>>> hi Carlito,
>>> 
>>> you can configure subinterfaces with tags and assign the IP addresses
>>> so that VPP does the routing, and then either use vnet ACLs or the acl
>>> plugin to restrict the traffic.
>>> 
>>> —a
>>> 
>>> On 19 Apr 2018, at 21:07, Dave Barach  wrote:
>>> 
>>> Begin forwarded message:
>>> 
>>> From: Carlito Nueno 
>>> Date: April 19, 2018 at 9:03:51 AM HST
>>> To: dbar...@cisco.com
>>> Subject: VLAN to VLAN
>>> 
>>> Hi Dave,
>>> 
>>> How can I enable VLAN to VLAN communication? I want to have devices on
>>> one VLAN talk to devices on another VLAN, if possible constrain the
>>> devices by MAC or IP address.
>>> 
>>> For example, only device with MAC (aa:aa:bb:80:90) or IP address
>>> (192.168.2.20) on VLAN 100 can talk to devices on VLAN 200
>>> (192.168.3.0/24).
>>> 
>>> Thanks