Hi Ciara,

Thank you for your reply.

I am assuming we don't need to configure any flows if both ports are on the
OVS bridge (each connected to a guest); please let me know if I am wrong.
However, I did try configuring the flows as you suggested, but I am still
unable to see any packets on the host for that bridge.
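For reference, this is roughly what I tried (the OpenFlow port numbers below
are placeholders; I looked up the real ones first):

ovs-ofctl show temp0    # lists each port together with its OpenFlow port number
ovs-ofctl add-flow temp0 in_port=1,actions=output:3    # dpdk1 -> dpdk2 (example numbers)
ovs-ofctl add-flow temp0 in_port=3,actions=output:1    # dpdk2 -> dpdk1 (example numbers)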
I am using QEMU 2.2.0:
qemu-system-x86_64 --version
QEMU emulator version 2.2.0, Copyright (c) 2003-2008 Fabrice Bellard
My QEMU command-line options:

VM1:
/usr/bin/qemu-system-x86_64 -name Vhost1 -S \
  -machine pc-i440fx-2.2,accel=kvm,usb=off \
  -cpu SandyBridge,+invpcid,+erms,+bmi2,+smep,+avx2,+bmi1,+fsgsbase,+abm,+pdpe1gb,+rdrand,+f16c,+osxsave,+movbe,+dca,+pcid,+pdcm,+xtpr,+fma,+tm2,+est,+smx,+vmx,+ds_cpl,+monitor,+dtes64,+pbe,+tm,+ht,+ss,+acpi,+ds,+vme \
  -m 15024 -realtime mlock=off \
  -smp 16,sockets=16,cores=1,threads=1 \
  -uuid fed77f13-ba10-57e4-7dd8-7629e6181657 \
  -no-user-config -nodefaults \
  -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/Vhost1.monitor,server,nowait \
  -mon chardev=charmonitor,id=monitor,mode=control \
  -rtc base=utc -no-shutdown -boot strict=on \
  -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
  -drive file=/test.img,if=none,id=drive-virtio-disk0,format=raw \
  -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 \
  -netdev tap,fd=24,id=hostnet0,vhost=on,vhostfd=25 \
  -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:ca:d5:80,bus=pci.0,addr=0x3 \
  -chardev pty,id=charserial0 \
  -device isa-serial,chardev=charserial0,id=serial0 \
  -vnc 127.0.0.1:0 \
  -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 \
  -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 \
  -chardev socket,id=char1,path=/var/run/openvswitch/dpdk0 \
  -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \
  -device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1 \
  -chardev socket,id=char2,path=/var/run/openvswitch/dpdk1 \
  -netdev type=vhost-user,id=mynet2,chardev=char2,vhostforce \
  -device virtio-net-pci,mac=00:00:00:00:00:02,netdev=mynet2 \
  -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge/,share=on
VM2:
/usr/bin/qemu-system-x86_64 -name Vhost2 -S \
  -machine pc-i440fx-2.2,accel=kvm,usb=off \
  -cpu SandyBridge,+invpcid,+erms,+bmi2,+smep,+avx2,+bmi1,+fsgsbase,+abm,+pdpe1gb,+rdrand,+f16c,+osxsave,+movbe,+dca,+pcid,+pdcm,+xtpr,+fma,+tm2,+est,+smx,+vmx,+ds_cpl,+monitor,+dtes64,+pbe,+tm,+ht,+ss,+acpi,+ds,+vme \
  -m 15024 -realtime mlock=off \
  -smp 8,sockets=8,cores=1,threads=1 \
  -uuid 30bc0154-7057-a7d6-12e1-7a2d8a178d47 \
  -no-user-config -nodefaults \
  -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/Vhost2.monitor,server,nowait \
  -mon chardev=charmonitor,id=monitor,mode=control \
  -rtc base=utc -no-shutdown -boot strict=on \
  -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
  -drive file=/test2.img,if=none,id=drive-virtio-disk0,format=raw \
  -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 \
  -netdev tap,fd=24,id=hostnet0,vhost=on,vhostfd=26 \
  -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:4d:91:f5,bus=pci.0,addr=0x3 \
  -chardev pty,id=charserial0 \
  -device isa-serial,chardev=charserial0,id=serial0 \
  -vnc 127.0.0.1:1 \
  -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 \
  -device intel-hda,id=sound0,bus=pci.0,addr=0x4 \
  -device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 \
  -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 \
  -chardev socket,id=char1,path=/var/run/openvswitch/dpdk1 \
  -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \
  -device virtio-net-pci,mac=00:00:00:00:00:03,netdev=mynet1 \
  -chardev socket,id=char2,path=/var/run/openvswitch/dpdk3 \
  -netdev type=vhost-user,id=mynet2,chardev=char2,vhostforce \
  -device virtio-net-pci,mac=00:00:00:00:00:04,netdev=mynet2 \
  -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge/,share=on
ovs-vsctl output:

ovs-vsctl show
3c25dda6-46c4-454c-8bdf-3832636b1f71
    Bridge "temp0"
        Port "dpdk1"
            Interface "dpdk1"
                type: dpdkvhostuser
        Port "temp0"
            Interface "temp0"
                type: internal
        Port "dpdk2"
            Interface "dpdk2"
                type: dpdkvhostuser
        Port "dpdk0"
            Interface "dpdk0"
                type: dpdkvhostuser
        Port "dpdk3"
            Interface "dpdk3"
                type: dpdkvhostuser
    ovs_version: "2.4.90"
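For completeness, I created the bridge and the vhost-user ports along these
lines (a sketch from memory; as far as I know the bridge must use the netdev
datapath type for DPDK):

ovs-vsctl add-br temp0 -- set bridge temp0 datapath_type=netdev
ovs-vsctl add-port temp0 dpdk0 -- set Interface dpdk0 type=dpdkvhostuser
ovs-vsctl add-port temp0 dpdk1 -- set Interface dpdk1 type=dpdkvhostuser
ovs-vsctl add-port temp0 dpdk2 -- set Interface dpdk2 type=dpdkvhostuser
ovs-vsctl add-port temp0 dpdk3 -- set Interface dpdk3 type=dpdkvhostuser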
My vswitchd options:

ovs-vswitchd --dpdk -c 0x0FF8 -n 4 --socket-mem 1024,0 -- \
  unix:/var/run/openvswitch/db.sock -vconsole:emer -vsyslog:err -vfile:info \
  --mlockall --no-chdir --log-file=/var/log/openvswitch/ovs-vswitchd.log \
  --detach --monitor
ovs-ofctl dump-flows temp0
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=871.033s, table=0, n_packets=0, n_bytes=0, idle_age=871, in_port=ANY actions=output:3
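As a sanity check I also plan to try making the bridge behave like a plain
learning switch; my understanding is that the NORMAL action should be enough
for that:

ovs-ofctl del-flows temp0
ovs-ofctl add-flow temp0 actions=NORMAL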
I am trying to set it up the following way:

[vm1] <dpdk1----------dpdk2> [vm2]

The IP addresses on the two VMs are in the same subnet (2.2.2.x/24).
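Inside the guests the addresses were assigned roughly like this (eth1 is just
an example name; it depends on how the guest enumerates the virtio NICs):

# in VM1
ip addr add 2.2.2.1/24 dev eth1
ip link set eth1 up
# in VM2
ip addr add 2.2.2.2/24 dev eth1
ip link set eth1 up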
Please let me know if you see any issues with this configuration.
-Srikanth
On Wed, Jul 22, 2015 at 2:39 AM, Loftus, Ciara <[email protected]>
wrote:
> >
> > Hello,
> >
> > I am trying to use vhost-user for sending traffic between VMs. I have
> > configured two "dpdkvhostuser" interfaces, each VM using one of them.
> >
> > vswitchd is running with DPDK.
> > QEMU is running with the vhost interfaces.
> >
> > The guest OS can see the interfaces - verified with the static MACs I
> > assigned to the vhost interfaces.
> >
> > But I am not able to ping between these two VMs. Could somebody tell me
> > how to debug this further?
>
> Hi,
>
> To ping between the VMs, first assign appropriate IP addresses, then
> configure the following flows:
> in_port=<vhostvm1>,actions=output:<vhostvm2>
> in_port=<vhostvm2>,actions=output:<vhostvm1>
>
> These flows allow the request/response packets to take the necessary path
> for a successful ping & you should see the stats incrementing with
> ovs-ofctl dump-flows.
>
> If you've already done this and it's still not working, please ensure your
> QEMU version is v2.2.0 or greater.
>
> Thanks,
> Ciara
>
> >
> > On the host I can see the ovs-netdev and the OVS bridge I created.
> >
> > Regards,
> > Srikanth
>
_______________________________________________
dev mailing list
[email protected]
http://openvswitch.org/mailman/listinfo/dev