Re: [vpp-dev] 2MB vs 1GB hugepages on ARM ThunderX

2018-12-12 Thread Gorka Garcia
I am not sure on the reason for this, but it is documented here:

https://github.com/contiv/vpp/blob/master/docs/arm64/MANUAL_INSTALL_CAVIUM.md

“To mention the most important thing from DPDK setup instructions you need to 
setup 1GB hugepages. The allocation of hugepages should be done at boot time or 
as soon as possible after system boot to prevent memory from being fragmented 
in physical memory. Add parameters hugepagesz=1GB hugepages=16 
default_hugepagesz=1GB to the file /etc/default/grub”
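The boot-time reservation described above can be sketched as follows (a minimal illustration, assuming a GRUB2-based distro; the variable name and exact workflow may differ on your system):

```shell
# /etc/default/grub: reserve 16 x 1GB hugepages at boot
GRUB_CMDLINE_LINUX_DEFAULT="hugepagesz=1GB hugepages=16 default_hugepagesz=1GB"
# then regenerate the grub config and reboot:
#   sudo update-grub && sudo reboot
# verify after reboot:
#   grep -i '^Huge' /proc/meminfo
```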

Gorka

From: vpp-dev@lists.fd.io  On Behalf Of Juraj Linkeš
Sent: Wednesday, December 12, 2018 9:07 AM
To: dmar...@me.com; gorka.gar...@cavium.com; Nitin Saxena 

Cc: vpp-dev@lists.fd.io; Sirshak Das 
Subject: [EXT] Re: [vpp-dev] 2MB vs 1GB hugepages on ARM ThunderX

External Email


Thanks Damjan.

Nitin, Gorka, do you have any input on this?

Juraj

From: Damjan Marion via Lists.Fd.Io [mailto:dmarion=me@lists.fd.io]
Sent: Tuesday, December 11, 2018 5:21 PM
To: Juraj Linkeš <juraj.lin...@pantheon.tech>
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] 2MB vs 1GB hugepages on ARM ThunderX

Dear Juraj,

I don't think anybody here has enough experience with ThunderX to help you.
The fact that other NICs work OK indicates that this particular driver 
requires something special.
What that is, you will probably need to ask the Cavium/Marvell guys...

--
Damjan

On 11 Dec 2018, at 07:56, Juraj Linkeš <juraj.lin...@pantheon.tech> wrote:

Hi folks,

I've run into an issue with hugepages on a Cavium ThunderX SoC. I was trying to 
bind a physical interface to VPP. When using 1GB hugepages the interface seems 
to work fine (at least I saw the interface in VPP and I was able to configure 
it and ping over it), but when using 2MB hugepages the interface appeared in an 
error state. The output from show hardware told me this:
VirtualFunctionEthernet1/0/1   1down  VirtualFunctionEthernet1/0/1
  Ethernet address 40:8d:5c:e7:b1:12
  Cavium ThunderX
carrier down
flags: pmd pmd-init-fail maybe-multiseg
rx: queues 1 (max 96), desc 1024 (min 0 max 65535 align 1)
tx: queues 1 (max 96), desc 1024 (min 0 max 65535 align 1)
pci: device 177d:a034 subsystem 177d:a134 address 0002:01:00.01 numa 0
module: unknown
max rx packet len: 9204
promiscuous: unicast off all-multicast off
vlan offload: strip off filter off qinq off
rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum jumbo-frame
   crc-strip scatter
rx offload active: jumbo-frame crc-strip scatter
tx offload avail:  ipv4-cksum udp-cksum tcp-cksum outer-ipv4-cksum
tx offload active:
rss avail: ipv4 ipv4-tcp ipv4-udp ipv6 ipv6-tcp ipv6-udp port
   vxlan geneve nvgre
rss active:ipv4 ipv4-tcp ipv4-udp ipv6 ipv6-tcp ipv6-udp
tx burst function: (nil)
rx burst function: (nil)
  Errors:
rte_eth_rx_queue_setup[port:0, errno:-22]: Unknown error -22

I dug around a bit and this seems to be what -22 means:

#define EINVAL  22  /* Invalid argument */
-EINVAL: The size of network buffers which can be allocated from the memory 
pool does not fit the various buffer sizes allowed by the device controller.

Is this something you've seen before? Is this a bug? Do I need to do something 
extra if I want to use 2MB hugepages?
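For reference, the two page sizes are handled differently: 1GB pages generally must be reserved at boot, while the 2MB pool can be inspected and grown at runtime. A read-only sketch, assuming standard Linux procfs/sysfs paths:

```shell
# Inspect the current hugepage pools (read-only)
grep -i '^Huge' /proc/meminfo
ls /sys/kernel/mm/hugepages/
# 2MB pages can be added at runtime, e.g. (as root):
#   echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
```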

Thanks,
Juraj
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#11564): https://lists.fd.io/g/vpp-dev/message/11564
Mute This Topic: https://lists.fd.io/mt/28720621/675642
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  
[dmar...@me.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] Build failing on AArch64

2018-11-26 Thread Gorka Garcia
Hi Sirshak,

Seems OK for me with master right now.

Gorka

-Original Message-
From: vpp-dev@lists.fd.io  On Behalf Of Sirshak Das
Sent: Monday, November 26, 2018 6:49 AM
To: vpp-dev@lists.fd.io; Honnappa Nagarahalli ; 
Juraj Linkeš ; Lijian Zhang (Arm Technology China) 

Subject: [vpp-dev] Build failing on AArch64

External Email

Hi all,

I am currently facing these build failures in master on AArch64.

[38/1160] Building C object vat/CMakeFiles/vpp_api_test.dir/types.c.o
FAILED: vat/CMakeFiles/vpp_api_test.dir/types.c.o
ccache /usr/lib/ccache/cc -DHAVE_MEMFD_CREATE -Dvpp_api_test_EXPORTS 
-I/home/sirdas/code/commita/vpp/src -I. -Iinclude -march=armv8-a+crc -g -O2 
-DFORTIFY_SOURCE=2 -fstack-protector -fPIC -Werror   
-Wno-address-of-packed-member -pthread -MD -MT 
vat/CMakeFiles/vpp_api_test.dir/types.c.o -MF 
vat/CMakeFiles/vpp_api_test.dir/types.c.o.d -o 
vat/CMakeFiles/vpp_api_test.dir/types.c.o   -c 
/home/sirdas/code/commita/vpp/src/vat/types.c
In file included from 
/home/sirdas/code/commita/vpp/src/vpp/api/vpe_all_api_h.h:25,
 from /home/sirdas/code/commita/vpp/src/vpp/api/types.h:20,
 from /home/sirdas/code/commita/vpp/src/vat/types.c:19:
/home/sirdas/code/commita/vpp/src/vnet/vnet_all_api_h.h:33:10: fatal error: 
vnet/devices/af_packet/af_packet.api.h: No such file or directory
 #include <vnet/devices/af_packet/af_packet.api.h>
          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
[85/1160] Building C object vnet/CMakeFiles/vnet_cortexa72.dir/ethernet/node.c.o
ninja: build stopped: subcommand failed.
Makefile:691: recipe for target 'vpp-build' failed
make[1]: *** [vpp-build] Error 1
make[1]: Leaving directory '/home/sirdas/code/commita/vpp/build-root'
Makefile:366: recipe for target 'build-release' failed
make: *** [build-release] Error 2

[114/1310] Building C object vat/CMakeFiles/vpp_api_test.dir/types.c.o
FAILED: vat/CMakeFiles/vpp_api_test.dir/types.c.o
ccache /usr/lib/ccache/cc -DHAVE_MEMFD_CREATE -Dvpp_api_test_EXPORTS 
-I/home/sirdas/code/commitb/vpp/src -I. -Iinclude -march=armv8-a+crc -g -O2 
-DFORTIFY_SOURCE=2 -fstack-protector -fPIC -Werror   
-Wno-address-of-packed-member -pthread -MD -MT 
vat/CMakeFiles/vpp_api_test.dir/types.c.o -MF 
vat/CMakeFiles/vpp_api_test.dir/types.c.o.d -o 
vat/CMakeFiles/vpp_api_test.dir/types.c.o   -c 
/home/sirdas/code/commitb/vpp/src/vat/types.c
In file included from 
/home/sirdas/code/commitb/vpp/src/vpp/api/vpe_all_api_h.h:25,
 from /home/sirdas/code/commitb/vpp/src/vpp/api/types.h:20,
 from /home/sirdas/code/commitb/vpp/src/vat/types.c:19:
/home/sirdas/code/commitb/vpp/src/vnet/vnet_all_api_h.h:32:10: fatal error: 
vnet/bonding/bond.api.h: No such file or directory
 #include <vnet/bonding/bond.api.h>
          ^~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
[161/1310] Building C object 
vnet/CMakeFiles/vnet_thunderx2t99.dir/ethernet/node.c.o
ninja: build stopped: subcommand failed.
Makefile:691: recipe for target 'vpp-build' failed
make[1]: *** [vpp-build] Error 1
make[1]: Leaving directory '/home/sirdas/code/commitb/vpp/build-root'
Makefile:366: recipe for target 'build-release' failed
make: *** [build-release] Error 2


It's all in one way or another related to *.api files and generated header files.

I am not able to isolate a particular commit that caused this.

Does anybody know, off the top of their head, if anything changed?
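When the generated *.api.h headers are stale or missing, a clean rebuild usually regenerates them; a sketch using targets from VPP's top-level Makefile (assumes you are in the VPP source tree):

```shell
# Wipe the release build tree and rebuild from scratch,
# forcing vppapigen to regenerate the *.api.h headers
make wipe-release
make build-release
```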

Thank you
Sirshak Das
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#11405): https://lists.fd.io/g/vpp-dev/message/11405
Mute This Topic: https://lists.fd.io/mt/28318534/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-