Thanks Victor. After setting the MQ feature bit in the GET_FEATURES
response, the guest VM can see the MQ feature bit enabled, and all the
queues are initialized in the VM.

Jana

On 3 December 2015 at 20:07, Victor Kaplansky <vict...@redhat.com> wrote:

> On Thu, Dec 03, 2015 at 03:11:57PM +0530, Naredula Janardhana Reddy wrote:
> > Hi,
> >   I am using the latest qemu-2.5.0-rc2 with vhost-user (multi-queue,
> > 3 queues) to test the multi-queue feature. In the guest VM, the
> > multi-queue feature flag is not getting enabled.
> >
> > On the backend (a user-space switch), the SET_VRING_ADDR message is
> > received only for the first queue, but the SET_VRING_ENABLE message is
> > received for all 6 rings (3 queues).
> >
> > On the guest VM, the multi-queue feature (bit 22) is not set in the
> > host features (0x409f0024), so the guest does not enable multi-queue.
> >
> >
> > Is vhost-user multi-queue fully implemented in qemu-2.5.0-rc2?
> >
> > Thanks
> > Jana
> > -------------------------------------------------------------------------
> > Please find the QEMU command line and logs:
> >
> > Guest VM command line:
> >  ../qemu-system-x86_64 -enable-kvm -gdb tcp::1336,server,nowait -m 256M
> > -monitor tcp::52001,server,nowait,nodelay -object
> > memory-backend-file,size=256M,id=ram0,prealloc=yes,mem-path=/mnt/hugetlbfs,share=on
> > -numa node,memdev=ram0 -mem-prealloc -smp 6 -chardev
> > socket,id=char1,path=./p1 -netdev
> > vhost-user,id=guest0,chardev=char1,queues=3 -device
> > virtio-net-pci,mq=on,vectors=8,mac=00:30:48:DB:5E:01,netdev=guest0 -vnc :8
> > -serial telnet::50001,server,nowait -serial telnet::50011,server,nowait
> > -daemonize -append ipaddr=192.168.122.3 gw=192.168.122.1 hugepages=1
> > hw_clock=0 -kernel ./test_image -drive
> > if=virtio,id=hdr0,file=./test_disk,aio=native
> >
> > Log on the guest VM:
> > : Matches inside the NETPROBE....
> >    2:    VirtioNet: Initializing VIRTIO PCI NET status :1 : pcioaddr:c000
> >    2:    VirtioNet:  HOSTfeatures :409f0024:  capabilitie:40
> > guestfeatures:100024 mask_features:7000ff
> >    2:[5] MacAddress,   2:[16] Status,   2:[17] ControlVq,   2:[18]
> > RxMode,   2:[19] VLanFilter,   2:[20] RxModeExtra,   2:
> >    2: msi vector start :101 num:8
> >    2:        create Kernel vmap: msix :ffffffffd0501000-ffffffffd0502000 size:0M
> >    2: msix table :ffffffffd0501000  bar addr:febd1000  baroffset:1
> >    2:        Kernel Adding to LEAF: private page paddr: febd1004 vaddr:
> > ffffffffd0501004
> >    2:        addr:ffffffffd0501004 ->  Lindex ( 1ff : 1ff : 82 :101 )
> >    2:        3: addr:ffffffffd0501004 ->  Lindexloc ( ff8 : ff8 : 410 :808 )
> >    2: 0: MSIX  data :165 address:fee00008
> >    2: 1: MSIX  data :166 address:fee00008
> >    2: 2: MSIX  data :167 address:fee00008
> >    2: 3: MSIX  data :168 address:fee00008
> >    2: 4: MSIX  data :169 address:fee00008
> >    2: 5: MSIX  data :16a address:fee00008
> >    2: 6: MSIX  data :16b address:fee00008
> >    2: 7: MSIX  data :16c address:fee00008
> >    2:MSIX... Configured ISR vector:101  numvector:8 ctrl:8007
> >    2:    VIRTIONET:  pioaddr:c018 MAC address : 0 :30 :48 :db :1820000005e :ffffffff00000001 mis_vector:ffffffff00000065   : max_vqs:1
> >    2:    VIRTIONET: initializing MAX VQ's:1
> >
> >
> > Log of the user-space switch:
> > ./vhost ./p1 ./p2 0
> >  <port1-file>: ./p1 <port2-file>: ./p2
> >
> >
> > ................................................................................
> > Cmd: VHOST_USER_GET_FEATURES (0x1)
> > Flags: 0x1
> > u64: 0x500000000
> > Processing message: VHOST_USER_GET_FEATURES
> > _get_features
> >  New3333 MQ feature as enabled: SIZE: 8  value:40000000
>
> Is 0x40000000 what the back-end returns for the GET_FEATURES request?
> If so, it has bit 22 cleared, which would explain why the MQ feature
> is not negotiated.
>
> -- Victor
>
>
