[vpp-dev] Question about VPP support for ARM 64

2017-08-18 Thread George Zhao
We encountered the following issues while trying to build VPP on ARM64. It seems
that right now only ARM32 is supported in the code. I have listed the steps we
tried below and hope the VPP folks can help us work around this issue.

Steps:
1. install Ubuntu 16.04 on OD1K
$>> uname -a
Linux OD1K 4.4.0-92-generic #115-Ubuntu SMP Thu Aug 10 09:10:33 UTC 2017 
aarch64 aarch64 aarch64 GNU/Linux

2. git clone VPP 17.04 and build VPP
## Error:
make[2]: Entering directory '/home/huawei/GIT/vpp.1704/dpdk'
cat: '/sys/bus/pci/devices/:00:01.0/uevent': No such file or directory

** Workaround: bypass the ThunderX detection in the Makefile by commenting out the conditional and falling through to a plain else:
##############
# Cavium ThunderX
##############
# else ifneq (,$(findstring thunder,$(shell cat /sys/bus/pci/devices/:00:01.0/uevent | grep cavium)))
else
export CROSS=""
DPDK_TARGET   ?= arm64-thunderx-linuxapp-$(DPDK_CC)
DPDK_MACHINE  ?= thunderx
DPDK_TUNE     ?= generic

3. Then ran make build, which failed with the following:
/home/huawei/GIT/vpp.1704/build-data/../src/plugins/dpdk/device/node.c:276:9: 
error: `u8x32' undeclared (first use in this function)
   *(u8x32 *) (((u8 *) d0) + i * 32) =

** Checked vppinfra/vppinfra/vector.h and found no u8x32 definition under "aarch64":
#if defined (__aarch64__) || defined (__arm__)
typedef unsigned int u32x4 _vector_size (16);
typedef u8 u8x16 _vector_size (16);
typedef u16 u16x8 _vector_size (16);
typedef u32 u32x4 _vector_size (16);
typedef u64 u64x2 _vector_size (16);
#endif
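
For reference, u8x32 is a 32-byte (256-bit) vector type, which vector.h only defines for targets with 256-bit SIMD support (x86 AVX). As a sketch of what a generic fallback could look like, using the same _vector_size macro the header already uses (my assumption, not an official fix; GCC/clang can synthesize 256-bit operations from narrower ones):

/* Hypothetical fallback for targets without native 256-bit SIMD.
 * Assumes _vector_size (n) expands to
 * __attribute__ ((vector_size (n))), as elsewhere in vector.h. */
typedef u8 u8x32 _vector_size (32);
typedef u16 u16x16 _vector_size (32);
typedef u32 u32x8 _vector_size (32);
typedef u64 u64x4 _vector_size (32);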

4. According to https://wiki.fd.io/view/VPP/Alternative_builds, VPP seems to support arm32 only:
export PLATFORM=arm32


* Questions:
Did I miss some steps, or should I include another header file that defines u8x32?


Thanks,
George


[vpp-dev] build question

2017-08-18 Thread Charles Eckel (eckelcu)
Hi folks,

I am trying to build VPP for the first time. I am using the instructions at 
https://wiki.fd.io/view/VPP/Pulling,_Building,_Running,_Hacking_and_Pushing_VPP_Code

I am running on a Mac, using the default setup of VirtualBox and Ubuntu.
Things go seemingly well until, during ‘vagrant up’, it tries to build dpdk:

==> default: Command list built, Time taken: 0m0.034s
==> default:  /bin/mkdir -p '/vpp/build-root/tools/bin'
==> default:   /bin/sh ./libtool --quiet  --mode=install /usr/bin/install -c 
elftool vppapigen '/vpp/build-root/tools/bin'
==> default: make[5]: Leaving directory 
'/vpp/build-root/build-tool-native/tools'
==> default: make[4]: Leaving directory 
'/vpp/build-root/build-tool-native/tools'
==> default: make[3]: Leaving directory 
'/vpp/build-root/build-tool-native/tools'
==> default: make[2]: Leaving directory 
'/vpp/build-root/build-tool-native/tools'
==> default: make[1]: Leaving directory '/vpp/build-root'
==> default: make[1]: Entering directory '/vpp/build-root'
==> default: make[2]: Entering directory '/vpp/build-root'
==> default:  Arch for platform 'vpp' is native 
==> default:  Finding source for dpdk 
==> default:  Makefile fragment found in /vpp/build-data/packages/dpdk.mk 

==> default:  Source found in /vpp/dpdk 
==> default:  Arch for platform 'vpp' is native 
==> default:  Finding source for vpp 
==> default:  Makefile fragment found in /vpp/build-data/packages/vpp.mk 

==> default:  Source found in /vpp/src 
==> default:  Configuring dpdk in /vpp/build-root/build-vpp-native/dpdk 
==> default:  Building dpdk in /vpp/build-root/build-vpp-native/dpdk 
==> default: make[3]: Entering directory '/vpp/dpdk'
==> default: ==
==> default: Building DPDK from source. Consider installing development
==> default: package by invoking 'make dpdk-install-dev' from the
==> default: top level directory
==> default: ==
==> default: make config
==> default: make[4]: Entering directory '/vpp/dpdk'
==> default: make[4]: warning: jobserver unavailable: using -j1.  Add '+' to 
parent make rule.
==> default:   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
==> default:                                  Dload  Upload   Total   Spent    Left  Speed
==> default: 100 9681k  100 9681k    0     0  3162k      0  0:00:03  0:00:03 --:--:-- 3162k
[curl progress meter trimmed: the 9681k archive downloaded in about 3 seconds]
==> default:   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
==> default:                                  Dload  Upload   Total   Spent    Left  Speed
[curl progress meter trimmed: the transfer sat at 0 bytes received for over five minutes, then finished with only 2267 bytes]
100  2267  100  2267    0     0      7      0  0:05:23  0:05:10  0:00:13   503
==> default: --- extracting dpdk-17.05.tar.xz ---
==> default: --- extracting v0.45.tar.gz ---
==> default: 
==> default: gzip: stdin: not in gzip format
==> default: tar: 
==> default: Child returned status 1
==> default: tar: 
==> default: Error is not recoverable: exiting now
==> default: Makefile:203: recipe for target 
'/vpp/build-root/build-vpp-native/dpdk/.extract.ok' failed
==> default: make[4]: *** [/vpp/build-root/build-vpp-native/dpdk/.extract.ok] 
Error 2
==> default: make[4]: Leaving directory '/vpp/dpdk'
==> default: Makefile:372: recipe for target 'ebuild-build' failed
==> default: make[3]: *** [ebuild-build] Error 2
==> default: make[3]: Leaving directory '/vpp/dpdk'
==> default: Makefile:697: recipe for target 'dpdk-build' failed
==> default: make[2]: *** [dpdk-build] Error 2
==> default: 
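
The second curl transfer above stalled for over five minutes and then delivered only 2267 bytes, which looks more like an HTML error page saved as the archive than the real tarball; that would explain tar's "not in gzip format" failure. A quick sanity check is to look for the gzip magic bytes (0x1f 0x8b) at the start of the downloaded file, e.g. with a small helper like this sketch (not part of the build system):

/* Report whether a file starts with the gzip magic bytes 0x1f 0x8b.
 * A truncated download or a saved HTML error page will fail this. */
#include <stdio.h>

int
main (int argc, char **argv)
{
  unsigned char magic[2];
  FILE *f;
  if (argc < 2 || !(f = fopen (argv[1], "rb")))
    return fprintf (stderr, "usage: %s <file.gz>\n", argv[0]), 1;
  if (fread (magic, 1, 2, f) == 2 && magic[0] == 0x1f && magic[1] == 0x8b)
    printf ("%s: looks like gzip\n", argv[1]);
  else
    printf ("%s: NOT a gzip file\n", argv[1]);
  fclose (f);
  return 0;
}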

[vpp-dev] 17.07.1 patches

2017-08-18 Thread Dave Wallace

Resend including vpp-dev@lists.fd.io...


-------- Forwarded Message --------
Subject:17.07.1 patches
Date:   Fri, 18 Aug 2017 15:24:27 -0400
From:   Dave Wallace 
To: Neale Ranns (nranns) 



Neale,

There are currently many patches that have been submitted to stable/17.07:

   https://gerrit.fd.io/r/#/q/status:open+project:vpp+branch:stable/1707

I've been working on getting all of them verified, but wanted to know if 
you want to merge these yourself or if you have agreed that these should 
be merged and want me (and/or other committers) to merge them.


Thanks,
-daw-

[vpp-dev] heads-up: failures while running tests against vpp with multiple workers

2017-08-18 Thread Klement Sekera -X (ksekera - PANTHEON TECHNOLOGIES at Cisco)
Hi all,

TLDR: when running tests vs vpp with multiple workers, roughly 25% of
tests fail or crash vpp. It looks like buffer management is still not
completely thread safe.

I've pushed a work-in-progress make test modification which runs the tests
against both single-thread and multiple-worker vpp. There are quite a few
failures and/or coredumps when running against multiple-worker vpp.

These test cases are failing at this time:

ACLPluginConnTestCase
BFD4TestCase
BFDFIBTestCase
TestDHCP
Datapath
DisableFP
DisableIPFIX
Flowprobe
ReenableFP
ReenableIPFIX
TestGRE
TestIPv4FibCrud
TestIp4VrfMultiInst
TestIP6VrfMultiInst
TestL2fib
TestL2bdArpTerm
TestL2bdMultiInst
TestLB
TestNAT64
TestSNAT
TestSpan
TestVxlanGpe

It seems that there are still some thread-safety issues in the buffer
management, based on this TestSpan crash:

#2  0x00406d1e in os_exit (code=code@entry=1) at /home/ksekera/vpp/build-data/../src/vpp/vnet/main.c:287
#3  0x7f139af0c2fa in unix_signal_handler (signum=<optimized out>, si=<optimized out>, uc=<optimized out>)
    at /home/ksekera/vpp/build-data/../src/vlib/unix/main.c:118
#4  <signal handler called>
#5  mheap_put (v=0x7f1356bdf000, uoffset=18446744073709549696) at /home/ksekera/vpp/build-data/../src/vppinfra/mheap.c:797
#6  0x7f139aeb6574 in vlib_buffer_add_to_free_list (do_init=1 '\001', buffer_index=<optimized out>, f=0x7f1359d0f780,
    vm=0x7f139b1252e0 <vlib_global_main>) at /home/ksekera/vpp/build-data/../src/vlib/buffer_funcs.h:861
#7  vlib_buffer_free_inline (follow_buffer_next=1, n_buffers=256, buffers=<optimized out>, vm=0x7f139b1252e0 <vlib_global_main>)
    at /home/ksekera/vpp/build-data/../src/vlib/buffer.c:705
#8  vlib_buffer_free_internal (vm=0x7f139b1252e0 <vlib_global_main>, buffers=0x7f135b504110, n_buffers=<optimized out>)
    at /home/ksekera/vpp/build-data/../src/vlib/buffer.c:730
#9  0x7f139aaba427 in vlib_buffer_free (n_buffers=256, buffers=<optimized out>, vm=0x7f139b1252e0 <vlib_global_main>)
    at /home/ksekera/vpp/build-data/../src/vlib/buffer_funcs.h:327
#10 pg_output (vm=0x7f139b1252e0 <vlib_global_main>, node=<optimized out>, frame=<optimized out>)
    at /home/ksekera/vpp/build-data/../src/vnet/pg/output.c:83

(gdb)
#5  mheap_put (v=0x7f1356bdf000, uoffset=18446744073709549696) at 
/home/ksekera/vpp/build-data/../src/vppinfra/mheap.c:797
797   if (e->n_user_data != n->prev_n_user_data)
(gdb) p *n
Cannot access memory at address 0x7f1556bde87c
(gdb)
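
For what it's worth, the uoffset handed to mheap_put above (18446744073709549696) is -1920 when read as a signed 64-bit value, which smells like a corrupted buffer index rather than a bad heap pointer. That is the kind of damage two workers can do by pushing onto a shared free list without synchronization. A minimal sketch of that pattern and one locked variant (my own illustration, not VPP's actual buffer code):

/* Sketch of an unsynchronized free-list push racing between two
 * workers. Illustrative only, not VPP's actual buffer code. */
#include <pthread.h>
#include <stdint.h>

typedef struct
{
  uint32_t *indices;		/* backing array of buffer indices */
  uint32_t n_indices;		/* current length */
  pthread_spinlock_t lock;
} free_list_t;

/* Unsafe: two threads can read the same n_indices, write the same
 * slot, and later read back a lost or garbage buffer index. */
void
push_unsafe (free_list_t * f, uint32_t bi)
{
  f->indices[f->n_indices++] = bi;
}

/* One safe variant: serialize the push with a spinlock. */
void
push_locked (free_list_t * f, uint32_t bi)
{
  pthread_spin_lock (&f->lock);
  f->indices[f->n_indices++] = bi;
  pthread_spin_unlock (&f->lock);
}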

here is the patch set if you want to try it out...

https://gerrit.fd.io/r/#/c/8090

It's still a bit clunky, as the testing is done in two phases: first the
full suite is run against single-thread vpp, then against multiple-worker vpp.
It's not straightforward to interleave the two in one run (so that instead of
running A, B, C vs. single and then A, B, C vs. multi, we would run A (single),
A (multi), B (single), B (multi), C (single), C (multi)), which is why it's
implemented this way for now.

If you want to skip the single-thread tests to speed up your own
testing, run it like this:

env VPP_TEST_SKIP_SINGLE_THREAD=y make test

Currently, the number of worker threads is set to the core count minus two,
capped at 8. A higher number causes the ACL plugin to freak out (memory
allocation failure), after which VPP refuses to start, ruining the day for
everybody.
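
For clarity, the policy above reduces to this (my own reconstruction of the formula, not the actual make-test code):

/* Worker-count policy as stated: core count minus two, capped
 * at 8 so the ACL plugin's memory allocation keeps working. */
static int
n_workers_for (int n_cores)
{
  int n = n_cores - 2;
  if (n > 8)
    n = 8;
  if (n < 0)
    n = 0;			/* assumption: fall back to single-thread */
  return n;
}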

Regards,
Klement


Re: [vpp-dev] Set up MPLS via jvpp

2017-08-18 Thread Neale Ranns (nranns)

Hi Marek,

I don’t see anything wrong with the construction of the request.

Can you please show me the HC logs of the message sent and also
  sh ip fib index 2 10.10.2.3/32 detail

This API is used a lot in the unit tests, so I have some confidence that it
works under normal circumstances.

regards,
neale

From: "Marek Gradzki -X (mgradzki - PANTHEON TECHNOLOGIES at Cisco)" 

Date: Friday, 18 August 2017 at 06:00
To: Andrej Mak , "vpp-dev@lists.fd.io" 

Cc: "Neale Ranns (nranns)" 
Subject: RE: Set up MPLS via jvpp

Hi,

So I was wrong; I hadn't noticed that next_hop_n_out_labels is u8…
Neale: any idea why the labels are not added?

Regards,
Marek

From: Marek Gradzki -X (mgradzki - PANTHEON TECHNOLOGIES at Cisco)
Sent: 17 sierpnia 2017 12:17
To: 'Andrej Mak' ; vpp-dev@lists.fd.io
Cc: Neale Ranns (nranns) 
Subject: RE: Set up MPLS via jvpp

Hi,

I think it is a bug in the C handler of the ip_add_del_route message: the
byte order for next_hop_n_out_labels is not flipped.
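
For context, multi-byte fields in VPP API messages arrive in network byte order and handlers normally convert them with the clib_net_to_host_* helpers from vppinfra/byte_order.h; a single-byte field such as next_hop_n_out_labels needs no swap at all, which is what the later reply (quoted above) points out. An illustrative sketch of the usual pattern, not the actual ip_add_del_route handler (the message struct here is a simplified stand-in):

/* Sketch of endian handling in a VPP-style API handler.
 * clib_net_to_host_u32 is vppinfra's ntohl equivalent; the
 * macro below is an assumption for this self-contained sketch. */
#include <stdint.h>
#include <arpa/inet.h>

typedef uint8_t u8;
typedef uint32_t u32;
#define clib_net_to_host_u32(x) ntohl (x)

typedef struct
{
  u32 table_id;			/* multi-byte: needs swapping */
  u8 next_hop_n_out_labels;	/* one byte: no swap needed   */
  u32 next_hop_out_label_stack[255];
} msg_t;

static void
handle (const msg_t * mp, u32 * out_labels)
{
  u32 table_id = clib_net_to_host_u32 (mp->table_id);
  (void) table_id;
  for (int i = 0; i < mp->next_hop_n_out_labels; i++)
    out_labels[i] = clib_net_to_host_u32 (mp->next_hop_out_label_stack[i]);
}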

Please check if this fixes the issue:
https://gerrit.fd.io/r/#/c/8080/


Regards,
Marek


From: vpp-dev-boun...@lists.fd.io 
[mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of Andrej Mak
Sent: 17 sierpnia 2017 11:03
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] Set up MPLS via jvpp

Hi all,

I would like to set up MPLS via the Java API, but I have some problems with it.
I want to make Java calls equivalent to this:

ip route add 10.10.2.3/32 table 1 via 10.10.1.2 host-veth out-label 1003

which creates this entry in show ip fib index 1:

10.10.2.3/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:27 buckets:1 uRPF:19 to:[0:0]]
[0] [@10]: mpls-label:[1]:[1003:255:0:eos]
[@1]: arp-mpls: via 10.10.1.2 host-veth11

so I create a DTO whose fields match the parameters of the CLI command:

IpAddDelRoute addRoute = new IpAddDelRoute();
addRoute.isAdd = 1;
addRoute.tableId = 2;
final Ipv4Prefix prefix = new Ipv4Prefix("10.10.2.3/32");
addRoute.dstAddress = Ipv4Translator.INSTANCE.ipv4AddressPrefixToArray(prefix);
addRoute.dstAddressLength = Ipv4Translator.INSTANCE.extractPrefix(prefix);
addRoute.nextHopAddress = Ipv4Translator.INSTANCE.ipv4AddressNoZoneToArray("10.10.1.2");
addRoute.nextHopSwIfIndex = 1;
int[] labels = new int[1];
labels[0] = 1003;
addRoute.nextHopNOutLabels = (byte) labels.length;
addRoute.nextHopOutLabelStack = labels;
api.ipAddDelRoute(addRoute).toCompletableFuture().get();

but show ip fib index 2 shows a different result:
10.10.2.3/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:44 buckets:1 uRPF:44 to:[0:0]]
[0] [@3]: arp-ipv4: via 10.10.1.2 host-veth11

Is it a bug, or am I doing something wrong?

Another question I’d like to ask: is it possible to create an MPLS
local-label via jvpp? I couldn’t find a local-label API in the mpls.api file.

Thanks
Andrej

Andrej Mak
Software Developer

PANTHEON technologies s.r.o.
Janka Kráľa 9, 974 01 Banská Bystrica
Slovakia
Tel / +421 220 665 111

MAIL / andrej@pantheon.tech
WEB / https://pantheon.tech

