Re: [ovs-dev] [PATCH 1/4] netdev-dpdk: fix mempool management with vhu client.

2017-09-28 Thread Fischetti, Antonio
Thanks Darrell, replies inline.

-Antonio

> -----Original Message-----
> From: Darrell Ball [mailto:db...@vmware.com]
> Sent: Tuesday, September 26, 2017 9:04 PM
> To: Fischetti, Antonio <antonio.fische...@intel.com>; d...@openvswitch.org
> Subject: Re: [ovs-dev] [PATCH 1/4] netdev-dpdk: fix mempool management with vhu client.
> 
> 
> 
> On 9/26/17, 12:58 PM, "Darrell Ball" <db...@vmware.com> wrote:
> 
> 
> 
> On 9/26/17, 8:04 AM, "ovs-dev-boun...@openvswitch.org on behalf of
> antonio.fische...@intel.com" <ovs-dev-boun...@openvswitch.org on behalf of
> antonio.fische...@intel.com> wrote:
> 
> In a PVP test where vhostuser ports are configured as
> clients, OvS crashes when QEMU is launched.
> This patch avoids the repeated calls to netdev_change_seq_changed
> after the requested mempool is already acquired.
> 
> CC: Ciara Loftus <ciara.lof...@intel.com>
> CC: Kevin Traynor <ktray...@redhat.com>
> CC: Aaron Conole <acon...@redhat.com>
> Fixes: d555d9bded5f ("netdev-dpdk: Create separate memory pool for each port.")
> Signed-off-by: Antonio Fischetti <antonio.fische...@intel.com>
> ---
> To replicate the bug scenario:
> 
> [Darrell] Just curious, is this reproducible with the steps below? What about
> using libvirt?
> Do we have the stack trace?

[Antonio]
I didn't try with libvirt. I also saw that it doesn't crash when the
vhostuser ports are set with type=dpdkvhostuser.

Below is the stack trace of a crash, i.e. when setting type=dpdkvhostuserclient:

Thread 26 "pmd124" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7f73ca7fc700 (LWP 20176)]
0x00410fa9 in stack_dequeue ()
(gdb) bt
#0  0x00410fa9 in stack_dequeue ()
#1  0x005cdc17 in rte_mempool_ops_dequeue_bulk (mp=0x7f72fb83c940, obj_table=0x7f73ca7fb258, n=1) at /home/user/dpdk/x86_64-native-linuxapp-gcc/include/rte_mempool.h:474
#2  0x005cdff9 in __mempool_generic_get (cache=0x0, n=1, obj_table=0x7f73ca7fb258, mp=0x7f72fb83c940) at /home/user/dpdk/x86_64-native-linuxapp-gcc/include/rte_mempool.h:1218
#3  rte_mempool_generic_get (flags=0, cache=0x0, n=1, obj_table=0x7f73ca7fb258, mp=0x7f72fb83c940) at /home/user/dpdk/x86_64-native-linuxapp-gcc/include/rte_mempool.h:1256
#4  rte_mempool_get_bulk (n=1, obj_table=0x7f73ca7fb258, mp=0x7f72fb83c940) at /home/user/dpdk/x86_64-native-linuxapp-gcc/include/rte_mempool.h:1289
#5  rte_mempool_get (obj_p=0x7f73ca7fb258, mp=0x7f72fb83c940) at /home/user/dpdk/x86_64-native-linuxapp-gcc/include/rte_mempool.h:1315
#6  rte_mbuf_raw_alloc (mp=0x7f72fb83c940) at /home/user/dpdk/x86_64-native-linuxapp-gcc/include/rte_mbuf.h:822
#7  0x005ce14b in rte_pktmbuf_alloc (mp=0x7f72fb83c940) at /home/user/dpdk/x86_64-native-linuxapp-gcc/include/rte_mbuf.h:1122
#8  0x005d283a in rte_vhost_dequeue_burst (vid=0, queue_id=1, mbuf_pool=0x7f72fb83c940, pkts=0x7f73ca7fb830, count=1) at /home/user/dpdk/lib/librte_vhost/virtio_net.c:1116
#9  0x007b4025 in netdev_dpdk_vhost_rxq_recv (rxq=0x7f72ffdab080, batch=0x7f73ca7fb820) at lib/netdev-dpdk.c:1650
#10 0x0070d331 in netdev_rxq_recv ()
#11 0x006ea8ce in dp_netdev_process_rxq_port ()
#12 0x006eaba0 in pmd_thread_main ()
#13 0x0075cb34 in ovsthread_wrapper ()
#14 0x7f742a0e65ca in start_thread () from /lib64/libpthread.so.0
#15 0x7f742990f0cd in clone () from /lib64/libc.so.6
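The crash happens while a PMD thread is allocating mbufs from the vhost port's mempool. The hunk in dpdk_vhost_reconfigure_helper() in the patch below boils down to guarding the mempool reconfiguration; a condensed plain-C sketch of that guard, using only the field and function names visible in the diff (it is not a copy of the OVS source and only compiles inside netdev-dpdk.c), would be:

    /* Condensed sketch of the guarded path in dpdk_vhost_reconfigure_helper():
     * only reconfigure the mempool, and only signal a netdev change, when the
     * requested NUMA socket or MTU actually differs from the current one. */
    static int
    vhost_mempool_reconfigure_sketch(struct netdev_dpdk *dev)
    {
        if (dev->requested_socket_id != dev->socket_id
            || dev->requested_mtu != dev->mtu) {
            int err = netdev_dpdk_mempool_configure(dev);

            if (err) {
                return err;
            }
            netdev_change_seq_changed(&dev->up);
        }
        return 0;
    }

This keeps the original error path while skipping the redundant change notification once the requested mempool has already been acquired, which is exactly what the commit message describes.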

> 
> 
> 
>  PVP test setup
>  --
> CLIENT_SOCK_DIR=/tmp
> SOCK0=dpdkvhostuser0
> SOCK1=dpdkvhostuser1
> 
> 1 PMD
> Add 2 dpdk ports, n_rxq=1
> Add 2 vhu ports both of type dpdkvhostuserclient and specify vhost-server-path
>  ovs-vsctl set Interface dpdkvhostuser0 options:vhost-server-path="$CLIENT_SOCK_DIR/$SOCK0"
>  ovs-vsctl set Interface dpdkvhostuser1 options:vhost-server-path="$CLIENT_SOCK_DIR/$SOCK1"
> 
> Set port-based rules: dpdk0 <--> vhu0 and dpdk1 <--> vhu1
>  add-flow br0 in_port=1,action=output:3
>  add-flow br0 in_port=3,action=output:1
>  add-flow br0 in_port=4,action=output:2
>  add-flow br0 in_port=2,action=output:4
> 
>  Launch QEMU
>  ---
> As OvS vhu ports are acting as clients, we must specify 'server' in the next command.
> VM_IMAGE=
> 
>  sudo -E taskset 0x3F00 $QEMU_DIR/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm1 -cpu host -enable-kvm -m 4096M -object memory-backend-file,id=mem,size=4096M,mem-path=/de

Re: [ovs-dev] [PATCH 1/4] netdev-dpdk: fix mempool management with vhu client.

2017-09-26 Thread Darrell Ball


On 9/26/17, 12:58 PM, "Darrell Ball"  wrote:



On 9/26/17, 8:04 AM, "ovs-dev-boun...@openvswitch.org on behalf of 
antonio.fische...@intel.com"  wrote:

In a PVP test where vhostuser ports are configured as
clients, OvS crashes when QEMU is launched.
This patch avoids the repeated calls to netdev_change_seq_changed
after the requested mempool is already acquired.

CC: Ciara Loftus 
CC: Kevin Traynor 
CC: Aaron Conole 
Fixes: d555d9bded5f ("netdev-dpdk: Create separate memory pool for each 
port.")
Signed-off-by: Antonio Fischetti 
---
To replicate the bug scenario:

[Darrell] Just curious, is this reproducible with the steps below? What about
using libvirt?
Do we have the stack trace?
 


 PVP test setup
 --
CLIENT_SOCK_DIR=/tmp
SOCK0=dpdkvhostuser0
SOCK1=dpdkvhostuser1

1 PMD
Add 2 dpdk ports, n_rxq=1
Add 2 vhu ports both of type dpdkvhostuserclient and specify vhost-server-path
 ovs-vsctl set Interface dpdkvhostuser0 options:vhost-server-path="$CLIENT_SOCK_DIR/$SOCK0"
 ovs-vsctl set Interface dpdkvhostuser1 options:vhost-server-path="$CLIENT_SOCK_DIR/$SOCK1"

Set port-based rules: dpdk0 <--> vhu0 and dpdk1 <--> vhu1
 add-flow br0 in_port=1,action=output:3
 add-flow br0 in_port=3,action=output:1
 add-flow br0 in_port=4,action=output:2
 add-flow br0 in_port=2,action=output:4

 Launch QEMU
 ---
As OvS vhu ports are acting as clients, we must specify 'server' in the 
next command.
VM_IMAGE=

 sudo -E taskset 0x3F00 $QEMU_DIR/x86_64-softmmu/qemu-system-x86_64 
-name us-vhost-vm1 -cpu host -enable-kvm -m 4096M -object 
memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,share=on -numa 
node,memdev=mem -mem-prealloc -smp 4 -drive file=$VM_IMAGE -chardev 
socket,id=char0,path=$CLIENT_SOCK_DIR/$SOCK0,server -netdev 
type=vhost-user,id=mynet1,chardev=char0,vhostforce -device 
virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1,mrg_rxbuf=off -chardev 
socket,id=char1,path=$CLIENT_SOCK_DIR/$SOCK1,server -netdev 
type=vhost-user,id=mynet2,chardev=char1,vhostforce -device 
virtio-net-pci,mac=00:00:00:00:00:02,netdev=mynet2,mrg_rxbuf=off --nographic

 Expected behavior
 -
With this fix OvS shouldn't crash.
---
 lib/netdev-dpdk.c | 16 ++--
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
index 648d719..2f5ec71 100644
--- a/lib/netdev-dpdk.c
+++ b/lib/netdev-dpdk.c
@@ -549,7 +549,7 @@ dpdk_mp_create(struct netdev_dpdk *dev, int mtu)
   dmp->socket_id);
 if (dmp->mp) {
 VLOG_DBG("Allocated \"%s\" mempool with %u mbufs", mp_name,
- dmp->mp_size);
+dmp->mp_size);


[Darrell] Is this needed?



 } else if (rte_errno == EEXIST) {
 /* A mempool with the same name already exists.  We just
  * retrieve its pointer to be returned to the caller. */
@@ -571,6 +571,7 @@ dpdk_mp_create(struct netdev_dpdk *dev, int mtu)
  * initializes some OVS specific fields of dp_packet.
  */
 rte_mempool_obj_iter(dmp->mp, ovs_rte_pktmbuf_init, NULL);
+
 return dmp;
 }
 } while (!mp_exists &&
@@ -3247,11 +3248,14 @@ dpdk_vhost_reconfigure_helper(struct netdev_dpdk *dev)
 
     netdev_dpdk_remap_txqs(dev);
 
-    err = netdev_dpdk_mempool_configure(dev);
-    if (err) {
-        return err;
-    } else {
-        netdev_change_seq_changed(&dev->up);
+    if (dev->requested_socket_id != dev->socket_id
+        || dev->requested_mtu != dev->mtu) {


[Darrell] Can this check be moved into common code in
netdev_dpdk_mempool_configure(), using a special return value
that lets the callers skip netdev_change_seq_changed(&dev->up)
when the call would be redundant?


+        err = netdev_dpdk_mempool_configure(dev);
+        if (err) {
+            return err;
+        } else {
+            netdev_change_seq_changed(&dev->up);
+        }
     }
 
 if (netdev_dpdk_get_vid(dev) >= 0) {
-- 
2.4.11
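A minimal sketch of the refactoring suggested in the [Darrell] comment above, assuming a hypothetical convention where netdev_dpdk_mempool_configure() reports "nothing to do" with a special return value (the _sketch names and the use of EEXIST are purely illustrative, not the actual OVS API):

    #include <errno.h>

    /* Hypothetical: the common code itself detects that nothing relevant
     * changed and tells the caller with a special return value. */
    static int
    netdev_dpdk_mempool_configure_sketch(struct netdev_dpdk *dev)
    {
        if (dev->requested_socket_id == dev->socket_id
            && dev->requested_mtu == dev->mtu) {
            return EEXIST;          /* Current mempool is already suitable. */
        }
        /* ... allocate or look up the requested mempool as before ... */
        return 0;
    }

    /* Callers such as dpdk_vhost_reconfigure_helper() then need no duplicated
     * socket/MTU check and only notify when a real change happened. */
    static int
    caller_sketch(struct netdev_dpdk *dev)
    {
        int err = netdev_dpdk_mempool_configure_sketch(dev);

        if (err && err != EEXIST) {
            return err;
        } else if (!err) {
            netdev_change_seq_changed(&dev->up);
        }
        return 0;
    }

With this shape, every caller of the mempool configuration shares one "did anything change?" check instead of each reconfigure path repeating it.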

 

Re: [ovs-dev] [PATCH 1/4] netdev-dpdk: fix mempool management with vhu client.

2017-09-26 Thread Darrell Ball


On 9/26/17, 8:04 AM, "ovs-dev-boun...@openvswitch.org on behalf of 
antonio.fische...@intel.com"  wrote:

In a PVP test where vhostuser ports are configured as
clients, OvS crashes when QEMU is launched.
This patch avoids the repeated calls to netdev_change_seq_changed
after the requested mempool is already acquired.

CC: Ciara Loftus 
CC: Kevin Traynor 
CC: Aaron Conole 
Fixes: d555d9bded5f ("netdev-dpdk: Create separate memory pool for each 
port.")
Signed-off-by: Antonio Fischetti 
---
To replicate the bug scenario:

[Darrell] Just curious, is this reproducible with the steps below? What about
using libvirt?
Do we have the stack trace?
 


 PVP test setup
 --
CLIENT_SOCK_DIR=/tmp
SOCK0=dpdkvhostuser0
SOCK1=dpdkvhostuser1

1 PMD
Add 2 dpdk ports, n_rxq=1
Add 2 vhu ports both of type dpdkvhostuserclient and specify vhost-server-path
 ovs-vsctl set Interface dpdkvhostuser0 options:vhost-server-path="$CLIENT_SOCK_DIR/$SOCK0"
 ovs-vsctl set Interface dpdkvhostuser1 options:vhost-server-path="$CLIENT_SOCK_DIR/$SOCK1"

Set port-based rules: dpdk0 <--> vhu0 and dpdk1 <--> vhu1
 add-flow br0 in_port=1,action=output:3
 add-flow br0 in_port=3,action=output:1
 add-flow br0 in_port=4,action=output:2
 add-flow br0 in_port=2,action=output:4

 Launch QEMU
 ---
As OvS vhu ports are acting as clients, we must specify 'server' in the 
next command.
VM_IMAGE=

 sudo -E taskset 0x3F00 $QEMU_DIR/x86_64-softmmu/qemu-system-x86_64 -name 
us-vhost-vm1 -cpu host -enable-kvm -m 4096M -object 
memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,share=on -numa 
node,memdev=mem -mem-prealloc -smp 4 -drive file=$VM_IMAGE -chardev 
socket,id=char0,path=$CLIENT_SOCK_DIR/$SOCK0,server -netdev 
type=vhost-user,id=mynet1,chardev=char0,vhostforce -device 
virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1,mrg_rxbuf=off -chardev 
socket,id=char1,path=$CLIENT_SOCK_DIR/$SOCK1,server -netdev 
type=vhost-user,id=mynet2,chardev=char1,vhostforce -device 
virtio-net-pci,mac=00:00:00:00:00:02,netdev=mynet2,mrg_rxbuf=off --nographic

 Expected behavior
 -
With this fix OvS shouldn't crash.
---
 lib/netdev-dpdk.c | 16 ++--
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
index 648d719..2f5ec71 100644
--- a/lib/netdev-dpdk.c
+++ b/lib/netdev-dpdk.c
@@ -549,7 +549,7 @@ dpdk_mp_create(struct netdev_dpdk *dev, int mtu)
   dmp->socket_id);
 if (dmp->mp) {
 VLOG_DBG("Allocated \"%s\" mempool with %u mbufs", mp_name,
- dmp->mp_size);
+dmp->mp_size);
 } else if (rte_errno == EEXIST) {
 /* A mempool with the same name already exists.  We just
  * retrieve its pointer to be returned to the caller. */
@@ -571,6 +571,7 @@ dpdk_mp_create(struct netdev_dpdk *dev, int mtu)
  * initializes some OVS specific fields of dp_packet.
  */
 rte_mempool_obj_iter(dmp->mp, ovs_rte_pktmbuf_init, NULL);
+
 return dmp;
 }
 } while (!mp_exists &&
@@ -3247,11 +3248,14 @@ dpdk_vhost_reconfigure_helper(struct netdev_dpdk *dev)
 
     netdev_dpdk_remap_txqs(dev);
 
-    err = netdev_dpdk_mempool_configure(dev);
-    if (err) {
-        return err;
-    } else {
-        netdev_change_seq_changed(&dev->up);
+    if (dev->requested_socket_id != dev->socket_id
+        || dev->requested_mtu != dev->mtu) {


[Darrell] Can this check be moved into common code in
netdev_dpdk_mempool_configure(), using a special return value
that lets the callers skip netdev_change_seq_changed(&dev->up)
when the call would be redundant?


+        err = netdev_dpdk_mempool_configure(dev);
+        if (err) {
+            return err;
+        } else {
+            netdev_change_seq_changed(&dev->up);
+        }
     }
 
 if (netdev_dpdk_get_vid(dev) >= 0) {
-- 
2.4.11
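For context on the EEXIST branch in the dpdk_mp_create() hunk above ("A mempool with the same name already exists.  We just retrieve its pointer to be returned to the caller."), this is the usual DPDK create-or-lookup pattern. A generic, self-contained sketch of that pattern (not the exact OVS code; the mbuf count, cache size and buffer size below are placeholders) looks like:

    #include <errno.h>
    #include <rte_errno.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* Create a named mbuf mempool, or reuse an existing one with the same name:
     * rte_pktmbuf_pool_create() fails with rte_errno == EEXIST when the name is
     * already taken, in which case rte_mempool_lookup() returns the live pool. */
    static struct rte_mempool *
    get_or_create_mp(const char *mp_name, unsigned int n_mbufs, int socket_id)
    {
        struct rte_mempool *mp;

        mp = rte_pktmbuf_pool_create(mp_name, n_mbufs,
                                     0 /* no per-lcore cache, for simplicity */,
                                     0 /* priv_size */,
                                     RTE_MBUF_DEFAULT_BUF_SIZE, socket_id);
        if (!mp && rte_errno == EEXIST) {
            /* A mempool with the same name already exists: reuse it. */
            mp = rte_mempool_lookup(mp_name);
        }
        return mp;
    }

Because mempool names in netdev-dpdk encode the attributes that matter (per the "separate memory pool for each port" commit referenced in the Fixes tag), hitting the EEXIST case means an equivalent pool is already in place and can simply be reused.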



___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev