Hi Mauricio:
I changed my QEMU version from 2.2.1 to 2.5.0 and now the VMs can
communicate with each other. But a VM cannot ping a PC on the outside
network, even though they are in the same subnet.
The PC (192.168.0.103/24) pings the VM (192.168.0.90); the VM host's DPDK NIC and
the PC are attached to the same L2 switch, with the DPDK NIC connected to switch
port Eth1/0/8.
Investigation:
I bound the VM's IP and MAC in the PC's ARP table, and bound the VM's MAC to
port Eth1/0/8. While the PC pings the VM, I can see the output packet counter
of Eth1/0/8 increasing steadily, which means the ping requests are being sent
to the DPDK NIC. But the VM never receives the packets.
My switch has no VLANs configured; the default VLAN is 1. But when I ping from
vm1 to vm2, the MAC table in OVS shows the following:
 port  VLAN  MAC                Age
    4     0  00:00:00:00:02:12    1
    3     0  00:00:00:00:00:04    1
What could be the reason that the VM cannot communicate with the PC? Thank you.
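One possibility, assuming the only OpenFlow rules on the bridge are still the two
in_port=3/in_port=4 flows shown further down this thread: the ping from the PC
arrives on a dpdk port, matches neither flow, and may be dropped on the table
miss. A minimal sketch of a check plus a MAC-learning fallback:

ovs-ofctl dump-flows ovsbr0                          # confirm no flow covers the dpdk ports
ovs-ofctl add-flow ovsbr0 priority=0,actions=NORMAL  # low-priority learning-switch fallback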
From: lifuqiong [mailto:[email protected]]
Sent: April 15, 2016 9:22
To: 'Mauricio Vásquez'
Cc: '[email protected]'
Subject: Re: [ovs-dev] ovs + dpdk vhost-user match flows but cannot execute actions
Hello Mauricio Vasquez:
It works. Thank you very much.
From: Mauricio Vásquez [mailto:[email protected]]
Sent: April 14, 2016 14:55
To: lifuqiong
Cc: [email protected]
Subject: Re: [ovs-dev] ovs + dpdk vhost-user match flows but cannot execute actions
Hello lifuqiong,
I faced the same problem a few days ago
(http://openvswitch.org/pipermail/dev/2016-March/068282.html); the bug is
already fixed.
Where are you downloading OVS from? It appears that the bug is still present
in the version at http://openvswitch.org/releases/openvswitch-2.5.0.tar.gz,
so please download OVS from git and switch to branch-2.5.
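A minimal way to do that, e.g. from the GitHub mirror (assuming you build from a
fresh clone):

git clone https://github.com/openvswitch/ovs.git
cd ovs
git checkout branch-2.5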
Mauricio Vasquez,
On Thu, Apr 14, 2016 at 4:28 AM, lifuqiong <[email protected]> wrote:
I want to test a DPDK vhost-user port on OVS, following
https://software.intel.com/en-us/blogs/2015/06/09/building-vhost-user-for-ovs-today-using-dpdk-200.
I created the OVS+DPDK environment following INSTALL.DPDK.md, created 2 VMs, and
tried to ping one from the other, but it shows "Destination Host Unreachable".
dump-flows shows that packets matched the flow, but they cannot be output to
port 4. Why? I cannot find any useful error or warning in ovs-vswitchd.log.
While pinging from vm1 to vm2, the interface statistics on vm1 show that eth1's
RX packet count stays at zero while its TX packet count keeps increasing.
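The same drop should be visible from the host side; a minimal check, assuming
the bridge name from the setup below:

ovs-ofctl dump-ports ovsbr0   # per-port rx/tx and drop counters; growing tx drops on port 4 would point at the vhost-user ring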
1. Versions:
OVS: 2.5.0
DPDK: 2.2.0
QEMU: 2.2.1
2. ovs-ofctl dump-flows ovsbr0
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=836.946s, table=0, n_packets=628, n_bytes=26376, idle_age=0, in_port=3 actions=output:4
 cookie=0x0, duration=831.458s, table=0, n_packets=36, n_bytes=1512, idle_age=770, in_port=4 actions=output:3
3. root@host152:/usr/local/var/run/openvswitch# ovs-vsctl show
03ae6f7d-3b71-45e3-beb0-09fa11292eaa
    Bridge "ovsbr0"
        Port "vhost-user-1"
            Interface "vhost-user-1"
                type: dpdkvhostuser
        Port "ovsbr0"
            Interface "ovsbr0"
                type: internal
        Port "dpdk1"
            Interface "dpdk1"
                type: dpdk
        Port "vhost-user-0"
            Interface "vhost-user-0"
                type: dpdkvhostuser
        Port "dpdk0"
            Interface "dpdk0"
                type: dpdk
4. Start VM info:
qemu-system-x86_64 -m 1024 -smp 2 -hda /root/vm11.qcow2 -boot c -enable-kvm \
    -vnc 0.0.0.0:1 \
    -chardev socket,id=char1,path=/usr/local/var/run/openvswitch/vhost-user-0 \
    -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \
    -device virtio-net-pci,mac=00:00:00:00:01:12,netdev=mynet1 \
    -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on \
    -numa node,memdev=mem -mem-prealloc -d exec
qemu-system-x86_64: -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce: chardev "char1" went up
5. My build commands are as follows:
#!/bin/bash
################ config and compile dpdk ################
# cd dpdk
# make config T=x86_64-native-linuxapp-gcc
# make install T=x86_64-native-linuxapp-gcc
########################################################
################ config and compile ovs #################
# cd ovs
# ./boot.sh
# ./configure --localstatedir=/var --with-dpdk=/root/workplane/dpdk/x86_64-native-linuxapp-gcc
# make
# make install
########################################################
################ config and compile qemu ################
# cd qemu
# ./configure
# make
# make install
########################################################
## set hugepage number, use boot cmdline or procfs
echo 8 > /proc/sys/vm/nr_hugepages
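## note (assumption): the hugetlbfs mount below requests 1G pages, and 1G
## hugepages usually must be reserved at boot (the CPU must support pdpe1gb)
## rather than via procfs, e.g. on the kernel command line:
##   default_hugepagesz=1G hugepagesz=1G hugepages=8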
## insert the kernel modules
DPDK_DIR=/root/workplane/dpdk                    # path taken from the ./configure line above
DPDK_BUILD=$DPDK_DIR/x86_64-native-linuxapp-gcc
modprobe uio
insmod $DPDK_BUILD/kmod/igb_uio.ko
insmod $DPDK_BUILD/kmod/rte_kni.ko
insmod $DPDK_DIR/lib/librte_vhost/eventfd_link/eventfd_link.ko
# unbind the NICs from the kernel driver and bind them to igb_uio
$DPDK_DIR/tools/dpdk_nic_bind.py --bind=igb_uio 01:00.0
$DPDK_DIR/tools/dpdk_nic_bind.py --bind=igb_uio 01:00.1
# mount hugetlbfs
mkdir -p /dev/hugepages
mount -t hugetlbfs -o pagesize=1G none /dev/hugepages
# first time only:
#ovsdb-tool create /usr/local/etc/openvswitch/conf.db /usr/local/share/openvswitch/vswitch.ovsschema
DB_SOCK=/usr/local/var/run/openvswitch/db.sock   # socket also used by ovs-vswitchd below
ovsdb-server --remote=punix:$DB_SOCK \
    --remote=db:Open_vSwitch,Open_vSwitch,manager_options \
    --pidfile --detach --log-file
# first time only:
#ovs-vsctl --no-wait init
ovs-vswitchd --dpdk -c 0x77 -n 2 --socket-mem 2048,0 -- unix:$DB_SOCK \
    --pidfile --detach --log-file
##############################################################################
## Add bridge
/usr/local/bin/ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
## Add dpdk port
/usr/local/bin/ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 type=dpdk
/usr/local/bin/ovs-vsctl add-port ovsbr0 dpdk1 -- set Interface dpdk1 type=dpdk
# /usr/local/bin/ovs-vsctl add-port ovsbr0 dpdk2 -- set Interface dpdk2 type=dpdk
# /usr/local/bin/ovs-vsctl add-port ovsbr0 dpdk3 -- set Interface dpdk3 type=dpdk
## Add vhost-user port
/usr/local/bin/ovs-vsctl add-port ovsbr0 vhost-user-0 -- set Interface vhost-user-0 type=dpdkvhostuser
/usr/local/bin/ovs-vsctl add-port ovsbr0 vhost-user-1 -- set Interface vhost-user-1 type=dpdkvhostuser
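For reference, the two flows shown in the dump-flows output above would come
from something like the following (a sketch; the actual OpenFlow port numbers
can be confirmed with ovs-ofctl show ovsbr0):

## wire the two vhost-user ports together (ports 3 and 4 in the dump above)
ovs-ofctl add-flow ovsbr0 in_port=3,actions=output:4
ovs-ofctl add-flow ovsbr0 in_port=4,actions=output:3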
_______________________________________________
dev mailing list
[email protected]
http://openvswitch.org/mailman/listinfo/dev