You are perfectly right, Ilya! That fixed it. 

Thank you very much.

Regards,
Sundar

> -----Original Message-----
> From: Ilya Maximets [mailto:[email protected]]
> Sent: Tuesday, April 4, 2017 2:45 AM
> To: Nadathur, Sundar <[email protected]>; ovs-
> [email protected]
> Subject: Re: [ovs-dev] Traffic fails in vhost user port
> 
> On 04.04.2017 12:26, Nadathur, Sundar wrote:
> > Thanks, Ilya.
> >
> > # ovs-vsctl list Interface vi1
> > _uuid               : 30d1600a-ff7d-4bf5-9fdb-b0767af3611c
> > admin_state         : up
> > bfd                 : {}
> > bfd_status          : {}
> > cfm_fault           : []
> > cfm_fault_status    : []
> > cfm_flap_count      : []
> > cfm_health          : []
> > cfm_mpid            : []
> > cfm_remote_mpids    : []
> > cfm_remote_opstate  : []
> > duplex              : []
> > error               : []
> > external_ids        : {}
> > ifindex             : 0
> > ingress_policing_burst: 0
> > ingress_policing_rate: 0
> > lacp_current        : []
> > link_resets         : 0
> > link_speed          : []
> > link_state          : up
> > lldp                : {}
> > mac                 : []
> > mac_in_use          : "00:00:00:00:00:00"
> > mtu                 : 1500
> > mtu_request         : []
> > name                : "vi1"
> > ofport              : 5
> > ofport_request      : []
> > options             : {}
> > other_config        : {}
> > statistics          : {"rx_1024_to_1518_packets"=0, "rx_128_to_255_packets"=0,
> >     "rx_1523_to_max_packets"=0, "rx_1_to_64_packets"=0, "rx_256_to_511_packets"=0,
> >     "rx_512_to_1023_packets"=0, "rx_65_to_127_packets"=0, rx_bytes=0,
> >     rx_dropped=0, rx_errors=0, tx_bytes=0, tx_dropped=11}
> > status              : {}
> > type                : dpdkvhostuser
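> >
> > (A quick way to watch that tx_dropped counter between ping attempts,
> > assuming the same interface name 'vi1', is simply to poll it:
> >     # ovs-vsctl get Interface vi1 statistics:tx_dropped
> > I don't know yet whether those drops are the whole story, but so far it is
> > the only counter that changes while the ping fails.)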
> >
> > Here is the qemu command line split for readability:
> > /usr/libexec/qemu-kvm -name guest=vhu-vm1,debug-threads=on -S
> >    -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-3-vhu-vm1/master-key.aes
> >    -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off
> >    -m 2048 -mem-prealloc -mem-path /dev/hugepages/libvirt/qemu -realtime mlock=off
> >    -smp 2,sockets=2,cores=1,threads=1
> >    -uuid f5b8c05b-9c7a-3211-49b9-2bd635f7e2aa -no-user-config -nodefaults
> >    -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-3-vhu-vm1/monitor.sock,server,nowait
> >    -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on
> >    -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
> >    -drive file=/home/nfv/Images/vm1.qcow2,format=qcow2,if=none,id=drive-virtio-disk0
> >    -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
> >    -chardev socket,id=charnet0,path=/usr/local/var/run/openvswitch/vi1
> >    -netdev vhost-user,chardev=charnet0,id=hostnet0
> >    -device virtio-net-pci,netdev=hostnet0,id=net0,mac=3a:19:09:52:14:50,bus=pci.0,addr=0x3
> >    -vnc 0.0.0.0:1
> >    -device cirrus-vga,id=video0,bus=pci.0,addr=0x2
> >    -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -msg timestamp=on
> >
> 
> OK, I got it. Memory is not shared between OVS and the VM.
> To make vhost-user work you must use the 'share' option for the QEMU
> memory backing.
> 
> Please refer to Documentation/topics/dpdk/vhost-user.rst for a libvirt XML
> example.  "memAccess='shared'" is what you need.
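>
> As a rough sketch (only the shape matters; the page size, memory size and
> nodeset below are placeholders that I guessed from your 2 GB / 2 vCPU
> guest), the relevant part of the domain XML looks like:
>
>   <memoryBacking>
>     <hugepages>
>       <page size='2' unit='M' nodeset='0'/>
>     </hugepages>
>   </memoryBacking>
>   <cpu>
>     <numa>
>       <cell id='0' cpus='0-1' memory='2097152' unit='KiB' memAccess='shared'/>
>     </numa>
>   </cpu>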
> 
> The QEMU cmdline should then contain something like this:
>
> -object memory-backend-file,id=ram-node0,prealloc=yes,mem-path=/dev/hugepages/libvirt/qemu,share=yes,size=10737418240,host-nodes=0,policy=bind
>
> Maybe you can avoid using hugepages, but 'share=yes' is required for
> vhost-user to work.
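>
> (One aside: if you ever build such a cmdline by hand instead of through
> libvirt, the backend object also has to be attached to a guest NUMA node,
> e.g. with something like "-numa node,memdev=ram-node0".  libvirt emits
> that part for you when memAccess='shared' is configured.)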
> 
> Best regards, Ilya Maximets.
> 
> 
> 
> > Re. the ifconfig output from the VM, I have difficulty getting it right
> > now over the VPN, but I will get it by tomorrow morning. The interface
> > state shown by 'ifconfig' is UP in the VM, and the IP address
> > 200.1.1.2/24 is configured on the virtio-net interface. Within the VM,
> > the local address 200.1.1.2 can be pinged.
> >
> > Is there any good way to monitor packets flowing over a vhost-user
> > interface, similar to wireshark for Ethernet interfaces?
> >
> >
> > Regards,
> > Sundar
> >
> >> -----Original Message-----
> >> From: Ilya Maximets [mailto:[email protected]]
> >> Sent: Tuesday, April 4, 2017 2:13 AM
> >> To: Nadathur, Sundar <[email protected]>; ovs-
> >> [email protected]
> >> Subject: Re: [ovs-dev] Traffic fails in vhost user port
> >>
> >> On 04.04.2017 11:29, Nadathur, Sundar wrote:
> >>>> -----Original Message-----
> >>>> From: Ilya Maximets [mailto:[email protected]]
> >>>> Sent: Tuesday, April 4, 2017 12:07 AM
> >>>> To: [email protected]; Nadathur, Sundar
> >>>> <[email protected]>
> >>>> Subject: [ovs-dev] Traffic fails in vhost user port
> >>>>
> >>>> Hi Sundar.
> >>>
> >>>>> The flows are configured as below:
> >>>>> # ovs-ofctl dump-flows br0
> >>>>> NXST_FLOW reply (xid=0x4):
> >>>>> cookie=0x0, duration=2833.612s, table=0, n_packets=0, n_bytes=0, idle_age=2833, in_port=1 actions=output:5
> >>>>> cookie=0x2, duration=2819.820s, table=0, n_packets=0, n_bytes=0, idle_age=2819, in_port=5 actions=output:1
> >>>>
> >>>> I guess your flow table is configured in the wrong way.
> >>>> The OpenFlow port of br0 is LOCAL, not 1.
> >>>> Try this:
> >>>>
> >>>> # ovs-ofctl del-flows br0
> >>>>
> >>>> # ovs-ofctl add-flow br0 in_port=5,actions=output:LOCAL
> >>>> # ovs-ofctl add-flow br0 in_port=LOCAL,actions=output:5
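> >>>>
> >>>> (If in doubt about the numbering, "ovs-ofctl show br0" lists every
> >>>> port of the bridge together with its OpenFlow port number, and the
> >>>> bridge-internal port shows up there as LOCAL.)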
> >>>
> >>> Thank you, Ilya. I did as you suggested, but the ping traffic from
> >>> br0 (LOCAL) is dropped by the output port 5:
> >>> # ovs-ofctl dump-flows br0
> >>> NXST_FLOW reply (xid=0x4):
> >>>  cookie=0x0, duration=1922.876s, table=0, n_packets=0, n_bytes=0, idle_age=1922, in_port=5 actions=LOCAL
> >>>  cookie=0x0, duration=1915.458s, table=0, n_packets=6, n_bytes=252, idle_age=116, in_port=LOCAL actions=output:5
> >>>
> >>> # ovs-ofctl dump-ports br0    # <-- Drops in port 5
> >>> OFPST_PORT reply (xid=0x2): 2 ports
> >>>   port  5: rx pkts=?, bytes=0, drop=0, errs=0, frame=?, over=?, crc=?
> >>>            tx pkts=?, bytes=0, drop=5, errs=?, coll=?
> >>>   port LOCAL: rx pkts=43, bytes=2118, drop=0, errs=0, frame=0, over=0, crc=0
> >>>            tx pkts=0, bytes=0, drop=0, errs=0, coll=0
> >>>
> >>> Wireshark shows that br0 sends out 3 ARP requests but there is no
> >>> response.
> >>>
> >>>> or
> >>>>
> >>>> # ovs-ofctl add-flow br0 actions=NORMAL
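> >>>>
> >>>> (With actions=NORMAL the bridge does plain MAC learning, so
> >>>> "ovs-appctl fdb/show br0" should show which MAC was learned on
> >>>> which port, if that helps while debugging.)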
> >>> I tried this too after doing del-flows. The LOCAL port's MAC is
> >>> learnt, but wireshark still shows br0 sending out ARP requests with
> >>> no response.
> >>>
> >>> BTW, 'ovs-vsctl list Interface' shows the vi1 (VM port, #5) is up
> >>> (most fields are blank):
> >>> _uuid               : 30d1600a-ff7d-4bf5-9fdb-b0767af3611c
> >>> admin_state         : up
> >>> . . .
> >>> link_speed          : []
> >>> link_state          : up
> >>> . . .
> >>> mac_in_use          : "00:00:00:00:00:00"
> >>> mtu                 : 1500
> >>> mtu_request         : []
> >>> name                : "vi1"
> >>> . . .
> >>> statistics          : {"rx_1024_to_1518_packets"=0, "rx_128_to_255_packets"=0,
> >>>     "rx_1523_to_max_packets"=0, "rx_1_to_64_packets"=0, "rx_256_to_511_packets"=0,
> >>>     "rx_512_to_1023_packets"=0, "rx_65_to_127_packets"=0, rx_bytes=0,
> >>>     rx_dropped=0, rx_errors=0, tx_bytes=0, tx_dropped=8}
> >>> status              : {}
> >>> type                : dpdkvhostuser
> >>>
> >>> Is there any way to do the equivalent of a tcpdump or wireshark on a
> >>> vhost-user port?
> >>>
> >>> Thanks,
> >>> Sundar
> >>>
> >> Blank fields in 'list interface' are normal for vhostuser.
> >>
> >> Looks like something is wrong with the VM.
> >> Please provide the output of 'ip a' or 'ifconfig -a' from the VM and
> >> the full output of 'ovs-vsctl list Interface vi1'. Also, the qemu
> >> cmdline or libvirt xml can be helpful.
> >>
> >>
> >> Best regards, Ilya Maximets.