> Yes, I would like to use IVSHMEM to communicate between the host and a
> guest VM. In the guest VM, a DPDK application is running [with 2MB
> hugepages], which processes the packets.
>
> I'm using QEMU emulator version 2.2.1. Do I need to apply any patch to
> qemu [it is mentioned in INSTALL.DPDK.md] to use this feature?

The guide is correct - you need to use the patched version of QEMU v1.6.2.

> If the answer to the above is yes: in the below link, the openvswitch-related
> tarballs were updated two years back. Which one should I pick?
> https://01.org/packet-processing/downloads

The latest one (R1.1) should do. There should be a qemu directory inside this
tarball. Compile this, and use it when running the IVSHM test with dpdkr
ports.
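For reference, on the OVS side a dpdkr ring port is added much like the other
DPDK port types; the bridge and port names below are only examples:

  ovs-vsctl add-port br0 dpdkr0 -- set Interface dpdkr0 type=dpdkr

The DPDK application in the guest then attaches to the ring that backs the
port - INSTALL.DPDK.md walks through the full procedure.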
> Thanks & Regards,
> Varun
>
> -----Original Message-----
> From: Loftus, Ciara [mailto:[email protected]]
> Sent: Thursday, January 28, 2016 10:29 PM
> To: Rapelly, Varun <[email protected]>
> Cc: [email protected]
> Subject: RE: OVS-DPDK vhostuser guest ping issue
>
> > Hi Loftus,
> >
> > It was another configuration issue. After resolving that, it worked fine.
> >
> > Could you please let me know how to launch ovs-vswitchd using DPDK
> > IVSHMEM?
> >
> > I'm using the below command to launch ovs-vswitchd with DPDK IVSHMEM:
> >
> > ovs-vswitchd --dpdk -c 0xf -n 4 --proc-type=primary \
> >   --huge-dir /dev/hugepages --socket-mem 1024,0 -- unix:$DB_SOCK \
> >   -vconsole:emer -vsyslog:err -vfile:info --mlockall --no-chdir \
> >   --log-file=/usr/local/var/log/openvswitch/ovs-vswitchd.log \
> >   --pidfile --detach
> >
> > and without IVSHMEM:
> >
> > ovs-vswitchd --dpdk -c 0x1 -n 4 --socket-mem 1024,0 -- unix:$DB_SOCK \
> >   -vconsole:emer -vsyslog:err -vfile:info --mlockall --no-chdir \
> >   --log-file=/usr/local/var/log/openvswitch/ovs-vswitchd.log \
> >   --pidfile --detach
> >
> > When I launched with the above args [with IVSHMEM], I got the following
> > messages in the ovs-vswitchd log:
> >
> > EAL: VFIO modules not all loaded, skip VFIO support...
> > EAL: Searching for IVSHMEM devices...
> > EAL: No IVSHMEM configuration found!
> > EAL: Setting up memory...
> > EAL: Ask a virtual area of 0x400000000 bytes
>
> This log is normal.
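One thing worth double-checking at this point: the IVSHMEM memory model needs
DPDK (and OVS built against it) compiled for the ivshmem target rather than
the default one. Assuming a standard DPDK 2.x tree (the $DPDK_DIR path is a
placeholder), that build step looks something like:

  cd $DPDK_DIR                                # placeholder: your DPDK tree
  make install T=x86_64-ivshmem-linuxapp-gcc  # ivshmem target (DPDK 2.x era)

OVS's ./configure would then point at x86_64-ivshmem-linuxapp-gcc/ instead of
x86_64-native-linuxapp-gcc/. See INSTALL.DPDK.md for the exact steps for your
versions.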
> > I followed the remaining steps as per INSTALL.DPDK.md, but I am a little
> > confused by the IVSHMEM configuration.
>
> What is your intention? Do you wish to use IVSHM to communicate with a
> VM? If so, I recommend you refer to the steps in INSTALL.DPDK explaining
> 'DPDK Rings' / 'dpdkr' ports.
>
> Ciara
>
> > Thanks in advance.
> >
> > -----Original Message-----
> > From: Loftus, Ciara [mailto:[email protected]]
> > Sent: Tuesday, January 19, 2016 2:51 PM
> > To: Rapelly, Varun <[email protected]>; [email protected]
> > Subject: RE: OVS-DPDK vhostuser guest ping issue
> >
> > > Hi All,
> > > I'm facing the following issue with an OVS-DPDK dpdkvhostuser port. I
> > > was able to create an OVS bridge with dpdkvhostuser ports and to ping
> > > the gateway from the OVS bridge on the host. But when I create a guest
> > > VM using the dpdkvhostuser port, I am not able to ping the gateway
> > > from the guest VM. I don't see any error logs in the vswitchd log.
> > >
> > > Following is the list of commands that I used:
> > >
> > > [root@kujo ~]# ovs-vsctl --no-wait add-br pkt1 -- set Bridge pkt1 datapath_type=netdev
> > > [root@kujo ~]# ovs-vsctl add-port pkt1 dpdk0 -- set Interface dpdk0 type=dpdk
> > > [root@kujo ~]# ovs-vsctl add-port pkt1 vhostuser0 -- set Interface vhostuser0 type=dpdkvhostuser
> > > [root@kujo ~]# qemu-system-x86_64 -smp 4 -boot d -cdrom TinyCore-current.iso \
> > >     -m 512 TinyCore-current.iso -boot d -name varun \
> > >     -object memory-backend-file,id=mem,size=8192M,mem-path=/dev/hugepages,share=on \
> > >     -mem-prealloc \
> > >     -chardev socket,id=char1,path=/usr/local/var/run/openvswitch/vhostuser0 \
> > >     -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \
> > >     -device virtio-net-pci,mac=52:55:00:00:20:11,netdev=mynet1 -daemonize
> > > qemu-system-x86_64: -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce:
> > >     chardev "char1" went up
> > > WARNING: Image format was not specified for 'TinyCore-current.iso' and probing guessed raw.
> > >          Automatically detecting the format is dangerous for raw images, write
> > >          operations on block 0 will be restricted.
> > >          Specify the 'raw' format explicitly to remove the restrictions.
> > > VNC server running on `::1:5900'
> > >
> > > [root@kujo ~]# ifconfig pkt1 10.54.218.88 netmask 255.255.255.0 up
> > > [root@kujo ~]# ping -I pkt1 10.54.218.1   [on host]
> > > PING 10.54.218.1 (10.54.218.1) from 10.54.218.88 pkt1: 56(84) bytes of data.
> > > 64 bytes from 10.54.218.1: icmp_seq=1 ttl=255 time=11.7 ms
> > > 64 bytes from 10.54.218.1: icmp_seq=2 ttl=255 time=0.930 ms
> > > ^C
> > > --- 10.54.218.1 ping statistics ---
> > > 2 packets transmitted, 2 received, 0% packet loss, time 1001ms
> > > rtt min/avg/max/mdev = 0.930/6.351/11.773/5.422 ms
> > >
> > > [root@kujo ~]# ovs-vsctl show
> > > 2a8a86f3-b813-43a4-826e-dd778aafbcec
> > >     Bridge "pkt1"
> > >         Port "pkt1"
> > >             Interface "pkt1"
> > >                 type: internal
> > >         Port "dpdk0"
> > >             Interface "dpdk0"
> > >                 type: dpdk
> > >         Port "vhostuser0"
> > >             Interface "vhostuser0"
> > >                 type: dpdkvhostuser
> > >
> > > [root@kujo ~]# ovs-ofctl dump-flows pkt1
> > > NXST_FLOW reply (xid=0x4):
> > >  cookie=0x0, duration=86.627s, table=0, n_packets=89, n_bytes=8446,
> > >  idle_age=0, priority=0 actions=NORMAL
> > >
> > > [root@kujo ~]# ovs-ofctl dump-ports pkt1
> > > OFPST_PORT reply (xid=0x2): 3 ports
> > >   port LOCAL: rx pkts=11, bytes=830, drop=0, errs=0, frame=0, over=0, crc=0
> > >               tx pkts=5, bytes=434, drop=0, errs=0, coll=0
> > >   port  1: rx pkts=86, bytes=8638, drop=0, errs=0, frame=0, over=0, crc=0
> > >            tx pkts=11, bytes=928, drop=0, errs=0, coll=0
> > >   port  2: rx pkts=0, bytes=?, drop=?, errs=?, frame=?, over=?, crc=?
> > >            tx pkts=0, bytes=?, drop=11, errs=?, coll=?
> > >
> > > When I ping the guest VM IP [10.54.218.244], I can see the packets
> > > arriving on the OVS bridge interface.
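As an aside on the stats quoted above: port 2 (the vhostuser port) shows
tx drop=11, i.e. OVS itself was dropping the packets it tried to send towards
the VM. That pattern typically means the virtio device inside the guest was
not up at the time. A quick guest-side sanity check would be something like
the following - the interface name and address are only examples for this
setup:

  ip link set eth0 up                      # example interface name
  ip addr add 10.54.218.244/24 dev eth0    # the guest IP pinged above

On a minimal image such as TinyCore, the virtio_net module may also need to
be loaded first.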
> > > [root@kujo ~]# ifconfig pkt1
> > > pkt1: flags=323<UP,BROADCAST,RUNNING,PROMISC>  mtu 1500
> > >         inet 10.54.218.88  netmask 255.255.255.0  broadcast 10.54.218.255
> > >         inet6 fe80::e611:5bff:fe98:962  prefixlen 64  scopeid 0x20<link>
> > >         ether e4:11:5b:98:09:62  txqueuelen 500  (Ethernet)
> > >         RX packets 10  bytes 850 (850.0 B)
> > >         RX errors 0  dropped 0  overruns 0  frame 0
> > >         TX packets 11  bytes 830 (830.0 B)
> > >         TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
> > >
> > > [root@kujo ~]# tcpdump -i pkt1
> > > tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
> > > listening on pkt1, link-type EN10MB (Ethernet), capture size 65535 bytes
> > > 11:01:36.852756 ARP, Request who-has 10.54.218.244 tell 10.54.218.1, length 46
> > > 11:01:41.581666 ARP, Request who-has 10.54.218.244 tell 10.54.218.1, length 46
> > > ^C
> > > 2 packets captured
> > > 2 packets received by filter
> > > 0 packets dropped by kernel
> > >
> > > [root@kujo ~]# route -n
> > > Kernel IP routing table
> > > Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
> > > 0.0.0.0         10.54.28.1      0.0.0.0         UG    100    0        0 eno1
> > > 10.54.28.0      0.0.0.0         255.255.254.0   U     100    0        0 eno1
> > > 10.54.218.0     0.0.0.0         255.255.255.0   U     0      0        0 pkt1
> > > 192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0
> > >
> > > Please give me some pointers for debugging this issue. A few months
> > > back I used the same steps on the same host and it worked fine; I
> > > didn't face this kind of issue.
> >
> > What has changed in your setup since then? OVS/DPDK commit ID etc.?
> >
> > Thanks,
> > Ciara
> >
> > > Thanks in advance.
> > >
> > > Regards,
> > > Varun
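For future reference, the quickest way to pin down "what changed" is to record
the exact versions in use. Assuming both OVS and DPDK were built from git
checkouts (the paths below are only placeholders), something like:

  ovs-vswitchd --version
  git -C /path/to/ovs rev-parse HEAD    # placeholder path to the OVS tree
  git -C /path/to/dpdk rev-parse HEAD   # placeholder path to the DPDK tree

gives output that is easy to compare against the last known-good setup.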
_______________________________________________
discuss mailing list
[email protected]
http://openvswitch.org/mailman/listinfo/discuss