[vpp-dev] Maintainer router plugin

2018-04-10 Thread Jan Hugo Prins | BetterBe
Hello,

Is someone actively maintaining the router plugin?

Jan Hugo
-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.


[vpp-dev] test-checkstyle breakage

2018-04-10 Thread Ed Kern

Looks like they rev'd pycodestyle from 2.3.1 to 2.4.0, and with that upgrade the
wheels have come off of:

make test-checkstyle

A total hack to lock the version back to 2.3.1 is up for review here:

https://gerrit.fd.io/r/11667


in case you/someone want that until 2.4.0 can be beaten into shape.
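
In the meantime, pinning the package by hand locally should have the same
effect (a sketch, assuming a pip-managed environment; the patch itself may pin
it elsewhere):

  # keep pycodestyle at the last known-good version
  pip install --user 'pycodestyle==2.3.1'
  pycodestyle --version   # should report 2.3.1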

Ed



Re: [vpp-dev] VPP crash bug in CGNAT module

2018-04-10 Thread Hamid via Lists.Fd.Io
Thanks, Matus, for the clarification. We have fixed our configs accordingly, but
since it was a bug we encountered, I posted it to the list.

A similar crash also occurs sometimes when using the 'nat44 deterministic add in
<addr>/<plen> out <addr>/<plen>' command. For cases in which the inside prefix
length is set low (e.g. /16) while the outside prefix is longer (e.g. /28), a
similar crash occurs without warning:

vpp# nat44 deterministic add in 10.10.0.0/16 out 192.168.10.64/28
root@xflow:~#




Re: [vpp-dev] VPP crash bug in CGNAT module

2018-04-10 Thread Matus Fabian -X (matfabia - PANTHEON TECHNOLOGIES@Cisco)
Hi,

When the NAT plugin is running in deterministic mode, you should use only the CLI
commands listed at https://wiki.fd.io/view/VPP/NAT#CLI_2 (for 1801, only "show
nat44" works, instead of the "show nat44 deterministic ..." commands).
You should not use "nat44 add interface address" or "nat44 add address".
Currently there is no check for the NAT plugin mode in the CLI or API, so the
wrong commands may cause a crash.
I will fix this to prevent use of the wrong configuration.
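
For reference, a minimal deterministic-mode setup along the lines of the wiki
page would look roughly like this (interface names and prefixes are
illustrative):

  # startup.conf: put the NAT plugin into deterministic mode
  nat { deterministic }

  # CLI: use only the deterministic commands
  vpp# set interface nat44 in loop1 out loop0
  vpp# nat44 deterministic add in 10.10.0.0/24 out 192.168.10.64/28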

Matus




[vpp-dev] VPP crash bug in CGNAT module

2018-04-10 Thread Hamid via Lists.Fd.Io
Hi,

I am using a stable/1801 source build and have encountered a bug in the CGNAT
plugin. When running deterministic CGN via the nat { deterministic } option in
startup.conf, applying normal nat44 rules makes the interfaces stop working as
expected, and initiating a ping makes VPP crash and exit, wiping all previously
applied CLI configuration.

Here is a sample setup (loop0 and loop1 have been configured):

vpp# nat44 add interface address loop0
vpp# set interface nat44 in loop1 out loop0
vpp# nat44 add address 192.168.10.20 - 192.168.10.30

Now, when the ping command is run (the IP address belongs to a tap interface
initialized in VPP), VPP crashes and all CLI configuration is reset:
vpp# ping 192.168.100.2
root@xflow:~#

When the 'nat { deterministic }' statement is removed from startup.conf, the
issue is resolved and the setup behaves as intended.


Re: [vpp-dev] VPP with DPDK drops packet, but it sends ok

2018-04-10 Thread Moon-Sang Lee
On Tue, Apr 10, 2018 at 8:24 PM, Marco Varlese  wrote:

> On Tue, 2018-04-10 at 19:33 +0900, Moon-Sang Lee wrote:
>
>
> Thanks for your interest, Marco.
>
> I followed the Intel guideline, "As an SR-IOV VF network adapter using a
> KVM virtual network pool of adapters", from
> https://software.intel.com/en-us/articles/configure-sr-iov-network-virtual-functions-in-linux-kvm
>
> In summary, I modprobe ixgbe on the host side and create one VF per PF.
> When I start the VM using virsh, libvirt binds the VF to vfio-pci on the host side.
> After the VM finishes booting, I log in to the VM and bind the VF to igb_uio
> using the dpdk-dev command.
> (i.e. only igb_uio works; other drivers like uio_pci_generic and
> vfio-pci fail to bind the VF on the VM side.)
>
> Yes, that's expected.
> If you want to use vfio-pci in the VM you'll need to enable the "no-iommu":
> # echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
>

Well, I cannot find such a file in either the host or the VM.
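
As far as I can tell, that parameter file only appears once the vfio module is
loaded on a kernel built with CONFIG_VFIO_NOIOMMU; a quick check, as a sketch:

  # load vfio with the unsafe no-IOMMU mode enabled
  # (assumes the kernel was built with CONFIG_VFIO_NOIOMMU=y)
  modprobe vfio enable_unsafe_noiommu_mode=1
  # the file should now exist; "Y" means the mode is active
  cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode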


>
>
> I don't edit startup.conf in the VM; I just bind the VF to the DPDK-compatible
> driver, igb_uio, inside the VM.
> With the above configuration, I can bind the VF to the guest kernel driver,
> ixgbevf, and also to the DPDK PMD, igb_uio.
> As a result, I can run VPP without DPDK using ixgbevf, and also DPDK
> applications using igb_uio.
> (i.e. I successfully run l2fwd/l3fwd sample applications inside the VM, so I
> guess the VF binding has no problem.)
>
> However, I cannot run VPP with DPDK, and I suspect hugepages are related to
> this problem, as shown in my VPP log.
>
> So, what does the command "cat /proc/meminfo | grep HugePages_" show?
>


Well, 'cat /proc/meminfo | grep HugePages' looks fine, even though I cannot find
any rtemap files in either /dev/hugepages or /run/vpp/hugepages. When I run a
DPDK application, I do see rtemap files in /dev/hugepages.

HugePages_Total:    1024
HugePages_Free:      972
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

FYI, the EAL init line from the VPP log:

Apr 10 05:12:35 ubuntu1604 /usr/bin/vpp[1720]: dpdk_config:1271: EAL init
args: -c 1 -n 4 --huge-dir /run/vpp/hugepages --file-prefix vpp -b
0000:00:05.0 -b 0000:00:06.0 -b 0000:00:07.0 --master-lcore 0 --socket-mem 64
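
The checks I'm using against that log, for anyone comparing notes (paths are
the defaults shown above and may differ per setup):

  # is a hugetlbfs mount present where EAL is pointed (--huge-dir)?
  mount | grep huge
  # are enough pages free at init time?
  grep HugePages_ /proc/meminfo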





Re: [vpp-dev] VPP with DPDK drops packet, but it sends ok

2018-04-10 Thread Marco Varlese
On Tue, 2018-04-10 at 19:33 +0900, Moon-Sang Lee wrote:
> Thanks for your interest, Marco.
> I followed the Intel guideline, "As an SR-IOV VF network adapter using a KVM
> virtual network pool of adapters", from
> https://software.intel.com/en-us/articles/configure-sr-iov-network-virtual-functions-in-linux-kvm
> 
> In summary, I modprobe ixgbe on the host side and create one VF per PF.
> When I start the VM using virsh, libvirt binds the VF to vfio-pci on the host side.
> After the VM finishes booting, I log in to the VM and bind the VF to igb_uio
> using the dpdk-dev command.
> (i.e. only igb_uio works; other drivers like uio_pci_generic and vfio-pci
> fail to bind the VF on the VM side.)
Yes, that's expected. If you want to use vfio-pci in the VM you'll need to
enable the "no-iommu":
# echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
> I don't edit startup.conf in the VM; I just bind the VF to the DPDK-compatible
> driver, igb_uio, inside the VM.
> With the above configuration, I can bind the VF to the guest kernel driver,
> ixgbevf, and also to the DPDK PMD, igb_uio.
> As a result, I can run VPP without DPDK using ixgbevf, and also DPDK
> applications using igb_uio.
> (i.e. I successfully run l2fwd/l3fwd sample applications inside the VM, so I
> guess the VF binding has no problem.)
> 
> However, I cannot run VPP with DPDK, and I suspect hugepages are related to
> this problem, as shown in my VPP log.
So, what does the command "cat /proc/meminfo | grep HugePages_" show?

Re: [vpp-dev] VPP with DPDK drops packet, but it sends ok

2018-04-10 Thread Moon-Sang Lee
Thanks for your interest, Marco.

I followed the Intel guideline, "As an SR-IOV VF network adapter using a KVM
virtual network pool of adapters", from
https://software.intel.com/en-us/articles/configure-sr-iov-network-virtual-functions-in-linux-kvm

In summary, I modprobe ixgbe on the host side and create one VF per PF.
When I start the VM using virsh, libvirt binds the VF to vfio-pci on the host side.
After the VM finishes booting, I log in to the VM and bind the VF to igb_uio
using the dpdk-dev command.
(i.e. only igb_uio works; other drivers like uio_pci_generic and
vfio-pci fail to bind the VF on the VM side.)

I don't edit startup.conf in the VM; I just bind the VF to the DPDK-compatible
driver, igb_uio, inside the VM.
With the above configuration, I can bind the VF to the guest kernel driver,
ixgbevf, and also to the DPDK PMD, igb_uio.
As a result, I can run VPP without DPDK using ixgbevf, and also DPDK
applications using igb_uio.
(i.e. I successfully run l2fwd/l3fwd sample applications inside the VM, so I
guess the VF binding has no problem.)

However, I cannot run VPP with DPDK, and I suspect hugepages are related to
this problem, as shown in my VPP log.
Every packet sent from the VM traverses the VF to the opposite side, a
pktgen server, and pktgen replies to those packets, but the VM does not
receive those replies.
(i.e. I ping the pktgen port from the VM, where the host server port is
directly linked to the pktgen server port.)

Here is my vpp-config script.
I test ping after running this script.

#!/bin/sh
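# Enable tap-inject (from the VPPSB router plugin), bring the VPP
# interfaces up, and mirror the same addresses onto the kernel-side
# tap interfaces (vpp0..vpp2) so the kernel stack can use them.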

vppctl enable tap-inject
vppctl create loopback interface
vppctl set interface state loop0 up

vppctl show interface
vppctl show hardware
vppctl set interface state VirtualFunctionEthernet0/6/0 up
vppctl set interface state VirtualFunctionEthernet0/7/0 up
vppctl set interface ip address loop0 2.2.2.2/32
vppctl set interface ip address VirtualFunctionEthernet0/6/0 192.168.0.1/24
vppctl set interface ip address VirtualFunctionEthernet0/7/0 192.168.1.1/24
vppctl show interface address
vppctl show tap-inject

ip addr add 2.2.2.2/32 dev vpp0
ip addr add 192.168.0.1/24 dev vpp1
ip addr add 192.168.1.1/24 dev vpp2
ip link set dev vpp0 up
ip link set dev vpp1 up
ip link set dev vpp2 up



Re: [vpp-dev] 18.04 RC2 this Wednesday!

2018-04-10 Thread Jan Hugo Prins | BetterBe
The following warnings scroll by a lot during the build (from the bundled nasm,
by the look of the paths):

    default: gcc -std=gnu99 -c -g -O3 -fwrapv -U__STRICT_ANSI__
-fno-common -Werror=attributes -W -Wall -pedantic -Wno-long-long
-Werror=implicit -Werror=missing-braces -Werror=return-type
-Werror=trigraphs -Werror=pointer-arith -Werror=missing-prototypes
-Werror=missing-declarations -Werror=comment -Werror=vla -DHAVE_CONFIG_H
-I. -I. -I./include -I./include -I./x86 -I./x86 -I./asm -I./asm
-I./disasm -I./disasm -I./output -I./output -o rdoff/rdlib.o rdoff/rdlib.c
    default: In file included from rdoff/rdfutils.h:44:0,
    default:  from rdoff/rdoff.c:50:
    default: ./include/nasmlib.h:113:11: warning: ISO C99 does not
support '_Noreturn' [-Wpedantic]
    default:  no_return nasm_assert_failed(const char *, int, const char *);
    default:    ^
    default: In file included from rdoff/rdfutils.h:45:0,
    default:  from rdoff/rdoff.c:50:
    default: ./include/error.h:47:29: warning: ISO C99 does not support
'_Noreturn' [-Wpedantic]
    default:  no_return printf_func(2, 3) nasm_fatal(int flags, const
char *fmt, ...);
    default:  ^
    default: ./include/error.h:48:29: warning: ISO C99 does not support
'_Noreturn' [-Wpedantic]
    default:  no_return printf_func(2, 3) nasm_panic(int flags, const
char *fmt, ...);
    default:  ^
    default: ./include/error.h:49:11: warning: ISO C99 does not support
'_Noreturn' [-Wpedantic]
    default:  no_return nasm_panic_from_macro(const char *file, int line);
    default:    ^
    default: In file included from rdoff/rdfutils.h:44:0,
    default:  from rdoff/rdfload.h:15,
    default:  from rdoff/rdfload.c:51:
    default: ./include/nasmlib.h:113:11: warning: ISO C99 does not
support '_Noreturn' [-Wpedantic]
    default:  no_return nasm_assert_failed(const char *, int, const char *);
    default:    ^
    default: In file included from rdoff/rdfutils.h:45:0,
    default:  from rdoff/rdfload.h:15,
    default:  from rdoff/rdfload.c:51:
    default: ./include/error.h:47:29: warning: ISO C99 does not support
'_Noreturn' [-Wpedantic]
    default:  no_return printf_func(2, 3) nasm_fatal(int flags, const
char *fmt, ...);
    default:  ^
    default: ./include/error.h:48:29: warning: ISO C99 does not support
'_Noreturn' [-Wpedantic]
    default:  no_return printf_func(2, 3) nasm_panic(int flags, const
char *fmt, ...);
    default:  ^
    default: ./include/error.h:49:11: warning: ISO C99 does not support
'_Noreturn' [-Wpedantic]
    default:  no_return nasm_panic_from_macro(const char *file, int line);
    default:    ^


And the following:


    default: make[4]: Entering directory
`/vpp/build-root/rpmbuild/vpp-18.07/build-root/build-vpp-native/vpp'
    default:   APIGEN   vlibmemory/memclnt.api.h
    default:   JSON API vlibmemory/memclnt.api.json
    default:
WARNING:vppapigen:/vpp/build-root/rpmbuild/vpp-18.07/build-data/../src/vlibmemory/memclnt.api:0:1:
Old Style VLA: u8 data[0];
    default:
WARNING:vppapigen:/vpp/build-root/rpmbuild/vpp-18.07/build-data/../src/vlibmemory/memclnt.api:0:1:
Old Style VLA: u8 data[0];


No idea whether any of this is critical, or whether reports have been filed
for it in the past.
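
For what it's worth, the VLA warning looks like vppapigen asking for the newer
declaration style, where a variable-length array references an explicit length
field; roughly this (a sketch of the style only, not the actual memclnt.api
change):

    define example_msg {
      u32 n_bytes;
      u8 data[n_bytes];
    };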

Jan Hugo



On 04/09/2018 08:43 PM, Chris Luke wrote:
>
> All,
>
>  
>
> Gentle reminder that 18.04 RC2 will be posted on Wednesday.
>
>  
>
> Note: After Wednesday's RC2 Milestone, only critical bug fixes will be
> merged into branch stable/1804.  Please review open anomalies for
> candidates to be fixed this week. Also, please remember to open a Jira
> ticket for all patches submitted to stable branches.
>
>  
>
> Cheers,
>
> Chris.
>
> 

-- 
Kind regards

Jan Hugo Prins
/DevOps Engineer/




Re: [vpp-dev] VPP with DPDK drops packet, but it sends ok

2018-04-10 Thread Marco Varlese
On Mon, 2018-04-09 at 22:53 +0900, Moon-Sang Lee wrote:
> I've configured a VM with KVM, and the VM is intended to run VPP with DPDK.
> In particular, the VM is connected to one of the VFs (i.e. SR-IOV).
> I can run DPDK sample applications, including l2fwd and l3fwd, in the VM,
> therefore I guess the VM is successfully connected to the outside world
> (pktgen server) via the VFs.
> 
> However, I cannot receive a packet when I run VPP/DPDK.
> I can see TX packets from the VM on the opposite side, the pktgen server,
> but the VM does not receive any reply from the pktgen server, which reports
> the RX/TX packet counts.
> (i.e. arping/ping from the VM arrives at pktgen, but the reply from pktgen
> is not received in the VM.)
> I found some strange log messages while launching vpp, as shown below.
> 
> I'd appreciate any comments.
> Thanks in advance...
> 
> - Host NIC: Intel 82599 10G NIC (i.e. VF binding with vfio-pci)
> - VM: 1 socket, 4 vCPUs
> - VPP: 18.04
> - DPDK binding: igb_uio
It isn't clear to me who manages the PF in the host, and how you created the
VFs (kernel module or via DPDK binding)?
Second, what do you mean by DPDK binding in the last line above? Is that what
you have configured in startup.conf in the VM for VPP to use?
If so, what is the difference between VF binding and DPDK binding in your short
setup summary above? I'm confused by reading vfio-pci in one place and then
igb_uio later on.
Can you provide us with the startup.conf you have in the VM?
Finally, if you are interested in using vfio-pci then you'll need to have
no-IOMMU enabled, otherwise you can't use VFIO in the VM... probably, the
easiest would be to use igb_uio everywhere...
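
For the igb_uio-everywhere route, the binding step would be roughly the
following (the PCI address is illustrative; dpdk-devbind.py ships with DPDK):

  # inside the VM: bind the VF to igb_uio
  modprobe uio
  modprobe igb_uio              # assumes the module was built for this kernel
  dpdk-devbind.py --bind=igb_uio 0000:00:06.0
  dpdk-devbind.py --status      # the VF should now list under igb_uio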
> root@xenial-vpp-frr:~# vpp -c /etc/vpp/startup.conf
> vlib_plugin_early_init:359: plugin path /usr/lib/vpp_plugins
> load_one_plugin:187: Loaded plugin: acl_plugin.so (Access Control Lists)
> load_one_plugin:187: Loaded plugin: avf_plugin.so (Intel Adaptive Virtual
> Function (AVF) Device Plugin)
> load_one_plugin:189: Loaded plugin: cdp_plugin.so
> load_one_plugin:187: Loaded plugin: dpdk_plugin.so (Data Plane Development Kit
> (DPDK))
> load_one_plugin:187: Loaded plugin: flowprobe_plugin.so (Flow per Packet)
> load_one_plugin:187: Loaded plugin: gbp_plugin.so (Group Based Policy)
> load_one_plugin:187: Loaded plugin: gtpu_plugin.so (GTPv1-U)
> load_one_plugin:187: Loaded plugin: igmp_plugin.so (IGMP messaging)
> load_one_plugin:187: Loaded plugin: ila_plugin.so (Identifier-locator
> addressing for IPv6)
> load_one_plugin:187: Loaded plugin: ioam_plugin.so (Inbound OAM)
> load_one_plugin:117: Plugin disabled (default): ixge_plugin.so
> load_one_plugin:187: Loaded plugin: kubeproxy_plugin.so (kube-proxy data
> plane)
> load_one_plugin:187: Loaded plugin: l2e_plugin.so (L2 Emulation)
> load_one_plugin:187: Loaded plugin: lacp_plugin.so (Link Aggregation Control
> Protocol)
> load_one_plugin:187: Loaded plugin: lb_plugin.so (Load Balancer)
> load_one_plugin:187: Loaded plugin: memif_plugin.so (Packet Memory Interface
> (experimetal))
> load_one_plugin:187: Loaded plugin: nat_plugin.so (Network Address
> Translation)
> load_one_plugin:187: Loaded plugin: pppoe_plugin.so (PPPoE)
> load_one_plugin:187: Loaded plugin: router.so (router)
> load_one_plugin:187: Loaded plugin: srv6ad_plugin.so (Dynamic SRv6 proxy)
> load_one_plugin:187: Loaded plugin: srv6am_plugin.so (Masquerading SRv6 proxy)
> load_one_plugin:187: Loaded plugin: srv6as_plugin.so (Static SRv6 proxy)
> load_one_plugin:187: Loaded plugin: stn_plugin.so (VPP Steals the NIC for
> Container integration)
> load_one_plugin:187: Loaded plugin: tlsmbedtls_plugin.so (mbedtls based TLS
> Engine)
> load_one_plugin:187: Loaded plugin: tlsopenssl_plugin.so (openssl based TLS
> Engine)
> load_one_plugin:67: Loaded plugin:
> /usr/lib/vpp_api_test_plugins/flowprobe_test_plugin.so
> load_one_plugin:67: Loaded plugin:
> /usr/lib/vpp_api_test_plugins/dpdk_test_plugin.so
> load_one_plugin:67: Loaded plugin:
> /usr/lib/vpp_api_test_plugins/ioam_export_test_plugin.so
> load_one_plugin:67: Loaded plugin:
> /usr/lib/vpp_api_test_plugins/cdp_test_plugin.so
> load_one_plugin:67: Loaded plugin:
> /usr/lib/vpp_api_test_plugins/kubeproxy_test_plugin.so
> load_one_plugin:67: Loaded plugin:
> /usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plugin.so
> load_one_plugin:67: Loaded plugin:
> /usr/lib/vpp_api_test_plugins/stn_test_plugin.so
> load_one_plugin:67: Loaded plugin:
> /usr/lib/vpp_api_test_plugins/acl_test_plugin.so
> load_one_plugin:67: Loaded plugin:
> /usr/lib/vpp_api_test_plugins/lb_test_plugin.so
> load_one_plugin:67: Loaded plugin:
> /usr/lib/vpp_api_test_plugins/lacp_test_plugin.so
> load_one_plugin:67: Loaded plugin:
> /usr/lib/vpp_api_test_plugins/ioam_pot_test_plugin.so
> load_one_plugin:67: Loaded plugin:
> /usr/lib/vpp_api_test_plugins/pppoe_test_plugin.so
> load_one_plugin:67: Loaded plugin:
> /usr/lib/vpp_api_test_plugins/udp_ping_test_plugin.so
> load_one_plugin:67: Loaded plugin:
> 

Re: [vpp-dev] 18.04 RC2 this Wednesday!

2018-04-10 Thread Jan Hugo Prins | BetterBe
I understand completely.
Is the maintainer of these plugins here on the mailing list?

Jan Hugo

On 04/09/2018 11:21 PM, Chris Luke wrote:
>
> Unfortunately it’s too late for such large features to be merged with
> VPP’s stable branch, which is now focused on bug fixes.
>
>  
>
> Additionally, it’s up to the maintainers of those plugins to propose
> merging their work with the VPP project.
>
>  
>
> Cheers,
>
> Chris.
>
>  
>
>  
>
>  
>
> *From:*vpp-dev@lists.fd.io  *On Behalf Of *Jan
> Hugo Prins | BetterBe
> *Sent:* Monday, April 9, 2018 17:15
> *To:* vpp-dev@lists.fd.io
> *Subject:* Re: [vpp-dev] 18.04 RC2 this Wednesday!
>
>  
>
> Hello,
>
> Would it be possible to get the router and netlink plugins, which are
> currently in the VPPSB project, merged into 18.04?
> I would like to use them to build a set of routers, and having
> them in the stable branch would mean a much smaller chance of some patch
> breaking this functionality.
>
> Cheers,
> Jan Hugo Prins
>

-- 
Kind regards

Jan Hugo Prins
/DevOps Engineer/
