Re: [Qemu-devel] [PATCH v6 1/2] vhost-user: add multi queue support

2015-09-07 Thread Michael S. Tsirkin
On Mon, Sep 07, 2015 at 02:07:38PM +0300, Marcel Apfelbaum wrote:
> On 08/13/2015 12:18 PM, Michael S. Tsirkin wrote:
> >On Wed, Aug 12, 2015 at 02:25:41PM +0800, Ouyang Changchun wrote:
> >>Based on patch by Nikolay Nikolaev:
> >>Vhost-user will implement the multi queue support in a similar way
> >>to what vhost already has - a separate thread for each queue.
> >>To enable the multi queue functionality - a new command line parameter
> >>"queues" is introduced for the vhost-user netdev.
> >>
> >>The RESET_OWNER change is based on commit:
> >>294ce717e0f212ed0763307f3eab72b4a1bdf4d0
> >>If it is reverted, this patch needs to be updated accordingly.
> >>
> >>Signed-off-by: Nikolay Nikolaev 
> >>Signed-off-by: Changchun Ouyang 
> >>---
> >>Changes since v5:
> >>  - fix the message description for VHOST_RESET_OWNER in vhost-user.txt
> >>
> >>Changes since v4:
> >>  - remove the unnecessary trailing '\n'
> >>
> >>Changes since v3:
> >>  - fix one typo and wrap one long line
> >>
> >>Changes since v2:
> >>  - fix vq index issue for set_vring_call
> >>In the VHOST_SET_VRING_CALL case, vq_index was not initialized before
> >>it was used, so it could hold a random value. That value was passed
> >>down to vhost, which uses it as an array index, leading to a crash.
> >>  - fix the typo in the doc and description
> >>  - address vq index for reset_owner
> >>
> >>Changes since v1:
> >>  - use s->nc.info_str when bringing up/down the backend
> >>
> >>  docs/specs/vhost-user.txt |  7 ++-
> >>  hw/net/vhost_net.c|  3 ++-
> >>  hw/virtio/vhost-user.c| 11 ++-
> >>  net/vhost-user.c  | 37 -
> >>  qapi-schema.json  |  6 +-
> >>  qemu-options.hx   |  5 +++--
> >>  6 files changed, 50 insertions(+), 19 deletions(-)
> >>
> 
> [...]
> 
> >
> >
> >There are two problems here:
> >
> >1. we don't really know that the backend
> >is able to support the requested number of queues.
> >If not, everything will fail silently.
> >A new message to query the # of queues could help, though
> >I'm not sure what can be done on failure. Fail connection?
> >
> >2. each message (e.g. set memory table) is sent multiple times,
> >on the same socket.
> 
> Hi,
> 
> Actually each queue has its own vhost_dev device, which in turn has its own
> memory mappings. Because of this, VHOST_SET_MEM_TABLE should be sent for
> each queue.
> 
> Should we change it to VHOST_SET_VRING_MEM_TABLE? Or maybe I got this wrong...
> 
> Thanks,
> Marcel
> 

You got it wrong, the table is the same for all rings.

> [...]



Re: [Qemu-devel] [PATCH v6 1/2] vhost-user: add multi queue support

2015-09-07 Thread Marcel Apfelbaum

On 09/07/2015 03:26 PM, Michael S. Tsirkin wrote:

On Mon, Sep 07, 2015 at 02:07:38PM +0300, Marcel Apfelbaum wrote:

On 08/13/2015 12:18 PM, Michael S. Tsirkin wrote:

[...]




There are two problems here:

1. we don't really know that the backend
is able to support the requested number of queues.
If not, everything will fail silently.
A new message to query the # of queues could help, though
I'm not sure what can be done on failure. Fail connection?

2. each message (e.g. set memory table) is sent multiple times,
on the same socket.


Hi,

Actually each queue has its own vhost_dev device, which in turn has its own
memory mappings. Because of this, VHOST_SET_MEM_TABLE should be sent for
each queue.

Should we change it to VHOST_SET_VRING_MEM_TABLE? Or maybe I got this wrong...

Thanks,
Marcel



You got it wrong, the table is the same for all rings.


OK, thanks. So the backend, in this case DPDK, maps it differently for each queue.
Here is an example for 2 queues:

 VHOST_CONFIG: read message VHOST_USER_SET_MEM_TABLE
 VHOST_CONFIG: mapped region 0 fd:165 to 0x2aac4000 sz:0xa off:0x0
 VHOST_CONFIG: mapped region 1 fd:166 to 0x2aac8000 sz:0x8000 off:0xc
 VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
 VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
 VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
 VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
 VHOST_CONFIG: vring kick idx:0 file:167
 VHOST_CONFIG: virtio isn't ready for processing.
 VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
 VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
 VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
 VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
 VHOST_CONFIG: vring kick idx:1 file:168
 VHOST_CONFIG: virtio isn't ready for processing.
 VHOST_CONFIG: read message VHOST_USER_SET_FEATURES
 VHOST_CONFIG: read message VHOST_USER_SET_MEM_TABLE
 VHOST_CONFIG: mapped region 0 fd:169 to 0x2aad sz:0xa off:0x0
 VHOST_CONFIG: mapped region 1 fd:170 to 0x2aad4000 sz:0x8000 off:0xc
 VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
 VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
 VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
 VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
 VHOST_CONFIG: vring kick idx:2 file:171
 VHOST_CONFIG: virtio isn't ready for processing.
 VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
 VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
 VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
 VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
 VHOST_CONFIG: vring kick idx:3 file:172
 VHOST_CONFIG: virtio is now ready for processing.

DPDK expects this message once per queue and maps it at a different address
each time. Sending it only once would leave the second queue without a mapping.

I'll try to fix this on dpdk side.

Thanks,
Marcel

[...]

Re: [Qemu-devel] [PATCH v6 1/2] vhost-user: add multi queue support

2015-09-07 Thread Marcel Apfelbaum

On 08/13/2015 12:18 PM, Michael S. Tsirkin wrote:

[...]



Re: [Qemu-devel] [PATCH v6 1/2] vhost-user: add multi queue support

2015-09-02 Thread Michael S. Tsirkin
On Wed, Sep 02, 2015 at 05:45:18AM +, Ouyang, Changchun wrote:
> 
> 
> > -Original Message-
> > From: Michael S. Tsirkin [mailto:m...@redhat.com]
> > Sent: Tuesday, September 1, 2015 10:10 PM
> > To: Yuanhan Liu
> > Cc: snabb-de...@googlegroups.com; thibaut.col...@6wind.com; qemu-
> > de...@nongnu.org; n.nikol...@virtualopensystems.com; l...@snabb.co;
> > Long, Thomas; Ouyang, Changchun
> > Subject: Re: [Qemu-devel] [PATCH v6 1/2] vhost-user: add multi queue
> > support
> > 
> > On Tue, Sep 01, 2015 at 08:15:15PM +0800, Yuanhan Liu wrote:
> > > On Tue, Sep 01, 2015 at 01:07:11PM +0300, Michael S. Tsirkin wrote:
> > > > On Tue, Sep 01, 2015 at 05:13:50PM +0800, Yuanhan Liu wrote:
> > > > > On Thu, Aug 13, 2015 at 12:18:38PM +0300, Michael S. Tsirkin wrote:
> > > > > > On Wed, Aug 12, 2015 at 02:25:41PM +0800, Ouyang Changchun wrote:
> > > > > > > Based on patch by Nikolay Nikolaev:
> > > > > > > Vhost-user will implement the multi queue support in a similar
> > > > > > > way to what vhost already has - a separate thread for each queue.
> > > > > > > To enable the multi queue functionality - a new command line
> > > > > > > parameter "queues" is introduced for the vhost-user netdev.
> > > > > > >
> > > > > > > The RESET_OWNER change is based on commit:
> > > > > > >294ce717e0f212ed0763307f3eab72b4a1bdf4d0
> > > > > > If it is reverted, this patch needs to be updated accordingly.
> > > > > > >
> > > > > > > Signed-off-by: Nikolay Nikolaev
> > > > > > > <n.nikol...@virtualopensystems.com>
> > > > > > > Signed-off-by: Changchun Ouyang <changchun.ouy...@intel.com>
> > > > > [snip...]
> > > > > > > @@ -198,7 +203,7 @@ Message types
> > > > > > >
> > > > > > >Id: 4
> > > > > > >Equivalent ioctl: VHOST_RESET_OWNER
> > > > > > > -  Master payload: N/A
> > > > > > > +  Master payload: vring state description
> > > > > > >
> > > > > > >Issued when a new connection is about to be closed. The 
> > > > > > > Master
> > will no
> > > > > > >longer own this connection (and will usually close it).
> > > > > >
> > > > > > This is an interface change, isn't it?
> > > > > > We can't make it unconditional; it needs to depend on a
> > > > > > protocol flag.
> > > > >
> > > > > Hi Michael,
> > > > >
> > > > > I'm wondering why we need a payload here, as we don't do that for
> > > > > VHOST_SET_OWNER. I mean, stopping one or a few queue pairs when a
> > > > > connection is about to be closed doesn't make sense to me. Instead,
> > > > > we should clean up all queue pairs when the VHOST_RESET_OWNER
> > > > > message is received, right?
> > > >
> > > > We really should rename VHOST_RESET_OWNER to
> > VHOST_RESET_DEVICE.
> > >
> > > Yeah, second that.
> > >
> > > BTW, can we simply do the name conversion, just changing
> > > VHOST_RESET_OWNER to VHOST_RESET_DEVICE (or VHOST_STOP_DEVICE)?
> > > I guess it's doable in theory as long as we don't change the number,
> > > but I somehow feel it's not good practice.
> > 
> > I think just renaming is fine, we are not changing the protocol at all.
> 
> Agreed; so far VHOST_RESET_OWNER is not used much, so there is no
> backward-compatibility issue. Renaming it is enough.
> 
> > 
> > > Maybe we could add it as a new vhost message and mark the old one as
> > > obsolete? That doesn't sound perfect either, as it reserves a number
> > > for a message we will not use any more.
> > >
> > > Also, we may rename VHOST_SET_OWNER to VHOST_INIT_DEVICE?
> > 
> > I think VHOST_SET_OWNER specifies who's the master?
> > 
> 
> I think VHOST_SET_OWNER is also a per-socket message, not a per-vring one.

Exactly. Documentation says it sets the sender as the master of the socket.

> > > > And I agree, I don't think it needs a payload.
> > >
> > > Good to know.
> > >

Re: [Qemu-devel] [PATCH v6 1/2] vhost-user: add multi queue support

2015-09-01 Thread Michael S. Tsirkin
On Tue, Sep 01, 2015 at 05:13:50PM +0800, Yuanhan Liu wrote:
> On Thu, Aug 13, 2015 at 12:18:38PM +0300, Michael S. Tsirkin wrote:
> > On Wed, Aug 12, 2015 at 02:25:41PM +0800, Ouyang Changchun wrote:
> > > Based on patch by Nikolay Nikolaev:
> > > Vhost-user will implement the multi queue support in a similar way
> > > to what vhost already has - a separate thread for each queue.
> > > To enable the multi queue functionality - a new command line parameter
> > > "queues" is introduced for the vhost-user netdev.
> > > 
> > > The RESET_OWNER change is based on commit:
> > >294ce717e0f212ed0763307f3eab72b4a1bdf4d0
> > > If it is reverted, this patch needs to be updated accordingly.
> > > 
> > > Signed-off-by: Nikolay Nikolaev 
> > > Signed-off-by: Changchun Ouyang 
> [snip...]
> > > @@ -198,7 +203,7 @@ Message types
> > >  
> > >Id: 4
> > >Equivalent ioctl: VHOST_RESET_OWNER
> > > -  Master payload: N/A
> > > +  Master payload: vring state description
> > >  
> > >Issued when a new connection is about to be closed. The Master 
> > > will no
> > >longer own this connection (and will usually close it).
> > 
> > This is an interface change, isn't it?
> > We can't make it unconditional; it needs to depend
> > on a protocol flag.
> 
> Hi Michael,
> 
> I'm wondering why we need a payload here, as we don't do that for
> VHOST_SET_OWNER. I mean, stopping one or a few queue pairs when a
> connection is about to be closed doesn't make sense to me. Instead,
> we should clean up all queue pairs when the VHOST_RESET_OWNER message
> is received, right?

We really should rename VHOST_RESET_OWNER to VHOST_RESET_DEVICE.

And I agree, I don't think it needs a payload.


> > 
> > 
> > > diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
> > > index 1f25cb3..9cd6c05 100644
> > > --- a/hw/net/vhost_net.c
> > > +++ b/hw/net/vhost_net.c
> [snip...]
> > >  static int net_vhost_user_init(NetClientState *peer, const char *device,
> > > -   const char *name, CharDriverState *chr)
> > > +   const char *name, CharDriverState *chr,
> > > +   uint32_t queues)
> > >  {
> > >  NetClientState *nc;
> > >  VhostUserState *s;
> > > +int i;
> > >  
> > > -nc = qemu_new_net_client(&net_vhost_user_info, peer, device, name);
> > > +for (i = 0; i < queues; i++) {
> > > +nc = qemu_new_net_client(&net_vhost_user_info, peer, device, name);
> > >  
> > > -snprintf(nc->info_str, sizeof(nc->info_str), "vhost-user to %s",
> > > - chr->label);
> > > +snprintf(nc->info_str, sizeof(nc->info_str), "vhost-user%d to %s",
> > > + i, chr->label);
> > >  
> > > -s = DO_UPCAST(VhostUserState, nc, nc);
> > > +s = DO_UPCAST(VhostUserState, nc, nc);
> > >  
> > > -/* We don't provide a receive callback */
> > > -s->nc.receive_disabled = 1;
> > > -s->chr = chr;
> > > -
> > > -qemu_chr_add_handlers(s->chr, NULL, NULL, net_vhost_user_event, s);
> > > +/* We don't provide a receive callback */
> > > +s->nc.receive_disabled = 1;
> > > +s->chr = chr;
> > > +s->nc.queue_index = i;
> > >  
> > > +qemu_chr_add_handlers(s->chr, NULL, NULL, net_vhost_user_event, s);
> > > +}
> > >  return 0;
> > >  }
> > >  
> > > @@ -225,6 +229,7 @@ static int net_vhost_check_net(QemuOpts *opts, void *opaque)
> > 
> > 
> > There are two problems here:
> > 
> > 1. we don't really know that the backend
> >is able to support the requested number of queues.
> >If not, everything will fail silently.
> >A new message to query the # of queues could help, though
> >I'm not sure what can be done on failure. Fail connection?
> 
> What I'm thinking is we may do:
> 
> - introduce a feature flag, indicating whether we support MQ or not.
> 
>   We query this flag only when the # of queues given is > 1, and exit
>   if it doesn't match.
> 
> - invoke vhost_dev init repeatedly for the # of queues given, unless
>   something goes wrong, which basically means the backend cannot
>   support that # of queues; we then quit.
> 
>   We could, as you suggested, add another message to query the max #
>   of queues the backend supports. However, given that we already have
>   to check the return value of setting up each queue pair, which gives
>   feedback when the backend cannot support the requested # of queues,
>   we could do without such a message, though it's easy to implement :)

Problem is, we only set up queues when the device is started, that is,
when the guest is running.

Doing this at connect time would mean we don't start a VM that we can't
then support.

> > 
> > 2. each message (e.g. set memory table) is sent multiple times,
> >on the same socket.
> 
> Yeah, since there is a single socket open there, it's not necessary
> to send messages like SET_MEM_TABLE 

Re: [Qemu-devel] [PATCH v6 1/2] vhost-user: add multi queue support

2015-09-01 Thread Yuanhan Liu
On Thu, Aug 13, 2015 at 12:18:38PM +0300, Michael S. Tsirkin wrote:
> On Wed, Aug 12, 2015 at 02:25:41PM +0800, Ouyang Changchun wrote:
> > Based on patch by Nikolay Nikolaev:
> > Vhost-user will implement the multi queue support in a similar way
> > to what vhost already has - a separate thread for each queue.
> > To enable the multi queue functionality - a new command line parameter
> > "queues" is introduced for the vhost-user netdev.
> > 
> > The RESET_OWNER change is based on commit:
> >294ce717e0f212ed0763307f3eab72b4a1bdf4d0
> > If it is reverted, this patch needs to be updated accordingly.
> > 
> > Signed-off-by: Nikolay Nikolaev 
> > Signed-off-by: Changchun Ouyang 
[snip...]
> > @@ -198,7 +203,7 @@ Message types
> >  
> >Id: 4
> >Equivalent ioctl: VHOST_RESET_OWNER
> > -  Master payload: N/A
> > +  Master payload: vring state description
> >  
> >Issued when a new connection is about to be closed. The Master will 
> > no
> >longer own this connection (and will usually close it).
> 
> This is an interface change, isn't it?
> We can't make it unconditional; it needs to depend
> on a protocol flag.

Hi Michael,

I'm wondering why we need a payload here, as we don't do that for
VHOST_SET_OWNER. I mean, stopping one or a few queue pairs when a
connection is about to be closed doesn't make sense to me. Instead,
we should clean up all queue pairs when the VHOST_RESET_OWNER message
is received, right?

> 
> 
> > diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
> > index 1f25cb3..9cd6c05 100644
> > --- a/hw/net/vhost_net.c
> > +++ b/hw/net/vhost_net.c
[snip...]
> >  static int net_vhost_user_init(NetClientState *peer, const char *device,
> > -   const char *name, CharDriverState *chr)
> > +   const char *name, CharDriverState *chr,
> > +   uint32_t queues)
> >  {
> >  NetClientState *nc;
> >  VhostUserState *s;
> > +int i;
> >  
> > -nc = qemu_new_net_client(&net_vhost_user_info, peer, device, name);
> > +for (i = 0; i < queues; i++) {
> > +nc = qemu_new_net_client(&net_vhost_user_info, peer, device, name);
> >  
> > -snprintf(nc->info_str, sizeof(nc->info_str), "vhost-user to %s",
> > - chr->label);
> > +snprintf(nc->info_str, sizeof(nc->info_str), "vhost-user%d to %s",
> > + i, chr->label);
> >  
> > -s = DO_UPCAST(VhostUserState, nc, nc);
> > +s = DO_UPCAST(VhostUserState, nc, nc);
> >  
> > -/* We don't provide a receive callback */
> > -s->nc.receive_disabled = 1;
> > -s->chr = chr;
> > -
> > -qemu_chr_add_handlers(s->chr, NULL, NULL, net_vhost_user_event, s);
> > +/* We don't provide a receive callback */
> > +s->nc.receive_disabled = 1;
> > +s->chr = chr;
> > +s->nc.queue_index = i;
> >  
> > +qemu_chr_add_handlers(s->chr, NULL, NULL, net_vhost_user_event, s);
> > +}
> >  return 0;
> >  }
> >  
> > @@ -225,6 +229,7 @@ static int net_vhost_check_net(QemuOpts *opts, void *opaque)
> 
> 
> There are two problems here:
> 
> 1. we don't really know that the backend
>is able to support the requested number of queues.
>If not, everything will fail silently.
>A new message to query the # of queues could help, though
>I'm not sure what can be done on failure. Fail connection?

What I'm thinking is we may do:

- introduce a feature flag, indicating whether we support MQ or not.

  We query this flag only when the # of queues given is > 1, and exit
  if it doesn't match.

- invoke vhost_dev init repeatedly for the # of queues given, unless
  something goes wrong, which basically means the backend cannot
  support that # of queues; we then quit.

  We could, as you suggested, add another message to query the max #
  of queues the backend supports. However, given that we already have
  to check the return value of setting up each queue pair, which gives
  feedback when the backend cannot support the requested # of queues,
  we could do without such a message, though it's easy to implement :)

> 
> 2. each message (e.g. set memory table) is sent multiple times,
>on the same socket.

Yeah, since there is a single socket open there, it's not necessary
to send messages like SET_MEM_TABLE multiple times. But messages that
relate to a specific vring have to be sent N times, don't they?

So I'm wondering whether we could categorize the messages into two
types: vring-specific and non-vring-specific. A vring-specific message
is sent N times, with vhost_dev->vq_index telling which queue pair it
targets.

A non-vring-specific message is sent just once, for the first queue
pair (vhost_dev->queue == 0), just like what we do for tap: we launch
the qemu-ifup/down script only for the first queue pair.

Comments? (And sorry if I made some silly comments, as I'm pretty
new to this 

Re: [Qemu-devel] [PATCH v6 1/2] vhost-user: add multi queue support

2015-09-01 Thread Yuanhan Liu
On Tue, Sep 01, 2015 at 01:07:11PM +0300, Michael S. Tsirkin wrote:
> On Tue, Sep 01, 2015 at 05:13:50PM +0800, Yuanhan Liu wrote:
> > On Thu, Aug 13, 2015 at 12:18:38PM +0300, Michael S. Tsirkin wrote:
> > > On Wed, Aug 12, 2015 at 02:25:41PM +0800, Ouyang Changchun wrote:
> > > > Based on patch by Nikolay Nikolaev:
> > > > Vhost-user will implement the multi queue support in a similar way
> > > > to what vhost already has - a separate thread for each queue.
> > > > To enable the multi queue functionality - a new command line parameter
> > > > "queues" is introduced for the vhost-user netdev.
> > > > 
> > > > The RESET_OWNER change is based on commit:
> > > >294ce717e0f212ed0763307f3eab72b4a1bdf4d0
> > > > If it is reverted, this patch needs to be updated accordingly.
> > > > 
> > > > Signed-off-by: Nikolay Nikolaev 
> > > > Signed-off-by: Changchun Ouyang 
> > [snip...]
> > > > @@ -198,7 +203,7 @@ Message types
> > > >  
> > > >Id: 4
> > > >Equivalent ioctl: VHOST_RESET_OWNER
> > > > -  Master payload: N/A
> > > > +  Master payload: vring state description
> > > >  
> > > >Issued when a new connection is about to be closed. The Master 
> > > > will no
> > > >longer own this connection (and will usually close it).
> > > 
> > > This is an interface change, isn't it?
> > > We can't make it unconditional; it needs to depend
> > > on a protocol flag.
> > 
> > Hi Michael,
> > 
> > I'm wondering why we need a payload here, as we don't do that for
> > VHOST_SET_OWNER. I mean, stopping one or a few queue pairs when a
> > connection is about to be closed doesn't make sense to me. Instead,
> > we should clean up all queue pairs when the VHOST_RESET_OWNER message
> > is received, right?
> 
> We really should rename VHOST_RESET_OWNER to VHOST_RESET_DEVICE.

Yeah, second that.

BTW, can we simply do the name conversion, just changing VHOST_RESET_OWNER
to VHOST_RESET_DEVICE (or VHOST_STOP_DEVICE)? I guess it's doable in
theory as long as we don't change the number, but I somehow feel it's
not good practice.

Maybe we could add it as a new vhost message and mark the old one
as obsolete? That doesn't sound perfect, either, as it reserves a number
for a message we will not use any more.

Also, we may rename VHOST_SET_OWNER to VHOST_INIT_DEVICE?

> And I agree, I don't think it needs a payload.

Good to know.

> 
> 
> > > 
> > > 
> > > > diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
> > > > index 1f25cb3..9cd6c05 100644
> > > > --- a/hw/net/vhost_net.c
> > > > +++ b/hw/net/vhost_net.c
> > [snip...]
> > > >  static int net_vhost_user_init(NetClientState *peer, const char 
> > > > *device,
> > > > -   const char *name, CharDriverState *chr)
> > > > +   const char *name, CharDriverState *chr,
> > > > +   uint32_t queues)
> > > >  {
> > > >  NetClientState *nc;
> > > >  VhostUserState *s;
> > > > +int i;
> > > >  
> > > > -nc = qemu_new_net_client(&net_vhost_user_info, peer, device, name);
> > > > +for (i = 0; i < queues; i++) {
> > > > +nc = qemu_new_net_client(&net_vhost_user_info, peer, device, name);
> > > >  
> > > > -snprintf(nc->info_str, sizeof(nc->info_str), "vhost-user to %s",
> > > > - chr->label);
> > > > +snprintf(nc->info_str, sizeof(nc->info_str), "vhost-user%d to %s",
> > > > + i, chr->label);
> > > >  
> > > > -s = DO_UPCAST(VhostUserState, nc, nc);
> > > > +s = DO_UPCAST(VhostUserState, nc, nc);
> > > >  
> > > > -/* We don't provide a receive callback */
> > > > -s->nc.receive_disabled = 1;
> > > > -s->chr = chr;
> > > > -
> > > > -qemu_chr_add_handlers(s->chr, NULL, NULL, net_vhost_user_event, s);
> > > > +/* We don't provide a receive callback */
> > > > +s->nc.receive_disabled = 1;
> > > > +s->chr = chr;
> > > > +s->nc.queue_index = i;
> > > >  
> > > > +qemu_chr_add_handlers(s->chr, NULL, NULL, net_vhost_user_event, s);
> > > > +}
> > > >  return 0;
> > > >  }
> > > >  
> > > > @@ -225,6 +229,7 @@ static int net_vhost_check_net(QemuOpts *opts, void *opaque)
> > > 
> > > 
> > > There are two problems here:
> > > 
> > > 1. we don't really know that the backend
> > >is able to support the requested number of queues.
> > >If not, everything will fail silently.
> > >A new message to query the # of queues could help, though
> > >I'm not sure what can be done on failure. Fail connection?
> > 
> > What I'm thinking is we may do:
> > 
> > - introduce a feature flag, for indicating we support MQ or not.
> > 
> >   We query this flag only when the # of queues given is > 1, and exit
> >   if it doesn't match.
> > 
> > - invoke vhost_dev init repeatedly for # of queues given, unless
> >   something wrong happened, which basically means the backend
> 

Re: [Qemu-devel] [PATCH v6 1/2] vhost-user: add multi queue support

2015-09-01 Thread Ouyang, Changchun


> -Original Message-
> From: Michael S. Tsirkin [mailto:m...@redhat.com]
> Sent: Tuesday, September 1, 2015 10:10 PM
> To: Yuanhan Liu
> Cc: snabb-de...@googlegroups.com; thibaut.col...@6wind.com; qemu-
> de...@nongnu.org; n.nikol...@virtualopensystems.com; l...@snabb.co;
> Long, Thomas; Ouyang, Changchun
> Subject: Re: [Qemu-devel] [PATCH v6 1/2] vhost-user: add multi queue
> support
> 
> On Tue, Sep 01, 2015 at 08:15:15PM +0800, Yuanhan Liu wrote:
> > On Tue, Sep 01, 2015 at 01:07:11PM +0300, Michael S. Tsirkin wrote:
> > > On Tue, Sep 01, 2015 at 05:13:50PM +0800, Yuanhan Liu wrote:
> > > > On Thu, Aug 13, 2015 at 12:18:38PM +0300, Michael S. Tsirkin wrote:
> > > > > On Wed, Aug 12, 2015 at 02:25:41PM +0800, Ouyang Changchun wrote:
> > > > > > Based on patch by Nikolay Nikolaev:
> > > > > > Vhost-user will implement the multi queue support in a similar
> > > > > > way to what vhost already has - a separate thread for each queue.
> > > > > > To enable the multi queue functionality - a new command line
> > > > > > parameter "queues" is introduced for the vhost-user netdev.
> > > > > >
> > > > > > The RESET_OWNER change is based on commit:
> > > > > >294ce717e0f212ed0763307f3eab72b4a1bdf4d0
> > > > > > If it is reverted, this patch needs to be updated accordingly.
> > > > > >
> > > > > > Signed-off-by: Nikolay Nikolaev
> > > > > > <n.nikol...@virtualopensystems.com>
> > > > > > Signed-off-by: Changchun Ouyang <changchun.ouy...@intel.com>
> > > > [snip...]
> > > > > > @@ -198,7 +203,7 @@ Message types
> > > > > >
> > > > > >Id: 4
> > > > > >Equivalent ioctl: VHOST_RESET_OWNER
> > > > > > -  Master payload: N/A
> > > > > > +  Master payload: vring state description
> > > > > >
> > > > > >Issued when a new connection is about to be closed. The 
> > > > > > Master
> will no
> > > > > >longer own this connection (and will usually close it).
> > > > >
> > > > > This is an interface change, isn't it?
> > > > > We can't make it unconditional; it needs to depend on a
> > > > > protocol flag.
> > > >
> > > > Hi Michael,
> > > >
> > > > I'm wondering why we need a payload here, as we don't do that for
> > > > VHOST_SET_OWNER. I mean, stopping one or a few queue pairs when a
> > > > connection is about to be closed doesn't make sense to me. Instead,
> > > > we should clean up all queue pairs when the VHOST_RESET_OWNER
> > > > message is received, right?
> > >
> > > We really should rename VHOST_RESET_OWNER to
> VHOST_RESET_DEVICE.
> >
> > Yeah, second that.
> >
> > BTW, can we simply do the name conversion, just changing
> > VHOST_RESET_OWNER to VHOST_RESET_DEVICE (or VHOST_STOP_DEVICE)?
> > I guess it's doable in theory as long as we don't change the number,
> > but I somehow feel it's not good practice.
> 
> I think just renaming is fine, we are not changing the protocol at all.

Agreed; so far VHOST_RESET_OWNER is not used much, so there is no
backward-compatibility issue. Renaming it is enough.

> 
> > Maybe we could add it as a new vhost message and mark the old one as
> > obsolete? That doesn't sound perfect, either, as it reserves a number
> > for a message we will not use any more.
> >
> > Also, we may rename VHOST_SET_OWNER to VHOST_INIT_DEVICE?
> 
> I think VHOST_SET_OWNER specifies who's the master?
> 

I think VHOST_SET_OWNER is also a per-socket message, not a per-vring one.

> > > And I agree, I don't think it needs a payload.
> >
> > Good to know.
> >
> > > > > > diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c index
> > > > > > 1f25cb3..9cd6c05 100644
> > > > > > --- a/hw/net/vhost_net.c
> > > > > > +++ b/hw/net/vhost_net.c
> > > > [snip...]
> > > > > >  static int net_vhost_user_init(NetClientState *peer, const char
> *device,
> > > > > > -   const char *name, CharDriverState 
> > > > > > *chr)
> > > > > > +

Re: [Qemu-devel] [PATCH v6 1/2] vhost-user: add multi queue support

2015-09-01 Thread Michael S. Tsirkin
On Tue, Sep 01, 2015 at 08:15:15PM +0800, Yuanhan Liu wrote:
> On Tue, Sep 01, 2015 at 01:07:11PM +0300, Michael S. Tsirkin wrote:
> > On Tue, Sep 01, 2015 at 05:13:50PM +0800, Yuanhan Liu wrote:
> > > On Thu, Aug 13, 2015 at 12:18:38PM +0300, Michael S. Tsirkin wrote:
> > > > On Wed, Aug 12, 2015 at 02:25:41PM +0800, Ouyang Changchun wrote:
> > > > > Based on patch by Nikolay Nikolaev:
> > > > > Vhost-user will implement the multi queue support in a similar way
> > > > > to what vhost already has - a separate thread for each queue.
> > > > > To enable the multi queue functionality - a new command line parameter
> > > > > "queues" is introduced for the vhost-user netdev.
> > > > > 
> > > > > The RESET_OWNER change is based on commit:
> > > > >294ce717e0f212ed0763307f3eab72b4a1bdf4d0
> > > > > If it is reverted, this patch needs updating accordingly.
> > > > > 
> > > > > Signed-off-by: Nikolay Nikolaev 
> > > > > Signed-off-by: Changchun Ouyang 
> > > [snip...]
> > > > > @@ -198,7 +203,7 @@ Message types
> > > > >  
> > > > >Id: 4
> > > > >Equivalent ioctl: VHOST_RESET_OWNER
> > > > > -  Master payload: N/A
> > > > > +  Master payload: vring state description
> > > > >  
> > > > >Issued when a new connection is about to be closed. The Master will no
> > > > >longer own this connection (and will usually close it).
> > > > 
> > > > This is an interface change, isn't it?
> > > > We can't make it unconditionally, need to make it dependent
> > > > on a protocol flag.
> > > 
> > > Hi Michael,
> > > 
> > > I'm wondering why we need a payload here, as we don't do that for
> > > VHOST_SET_OWNER. I mean, stopping one or a few queue pairs when a
> > > connection is about to be closed doesn't make sense to me. Instead,
> > > we should clean up all queue pairs when the VHOST_RESET_OWNER message
> > > is received, right?
> > 
> > We really should rename VHOST_RESET_OWNER to VHOST_RESET_DEVICE.
> 
> Yeah, second that.
> 
> BTW, can we simply do the name conversion, just changing VHOST_RESET_OWNER
> to VHOST_RESET_DEVICE (or VHOST_STOP_DEVICE)? I guess it's doable in
> theory as far as we don't change the number. I somehow feel it's not a
> good practice.

I think just renaming is fine, we are not changing the protocol
at all.

> Maybe we could make it as a new vhost message, and mark the old one
> as obsolete? That doesn't sound perfect, either, as it reserves a number
> for a message we will not use any more.
> 
> Also, we may rename VHOST_SET_OWNER to VHOST_INIT_DEVICE?

I think VHOST_SET_OWNER specified who's the master?

> > And I agree, I don't think it needs a payload.
> 
> Good to know.
> 
> > 
> > 
> > > > 
> > > > 
> > > > > diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
> > > > > index 1f25cb3..9cd6c05 100644
> > > > > --- a/hw/net/vhost_net.c
> > > > > +++ b/hw/net/vhost_net.c
> > > [snip...]
> > > > >  static int net_vhost_user_init(NetClientState *peer, const char 
> > > > > *device,
> > > > > -   const char *name, CharDriverState 
> > > > > *chr)
> > > > > +   const char *name, CharDriverState 
> > > > > *chr,
> > > > > +   uint32_t queues)
> > > > >  {
> > > > >  NetClientState *nc;
> > > > >  VhostUserState *s;
> > > > > +int i;
> > > > >  
> > > > > -nc = qemu_new_net_client(&net_vhost_user_info, peer, device, 
> > > > > name);
> > > > > +for (i = 0; i < queues; i++) {
> > > > > +nc = qemu_new_net_client(&net_vhost_user_info, peer, device, 
> > > > > name);
> > > > >  
> > > > > -snprintf(nc->info_str, sizeof(nc->info_str), "vhost-user to %s",
> > > > > - chr->label);
> > > > > +snprintf(nc->info_str, sizeof(nc->info_str), "vhost-user%d 
> > > > > to %s",
> > > > > + i, chr->label);
> > > > >  
> > > > > -s = DO_UPCAST(VhostUserState, nc, nc);
> > > > > +s = DO_UPCAST(VhostUserState, nc, nc);
> > > > >  
> > > > > -/* We don't provide a receive callback */
> > > > > -s->nc.receive_disabled = 1;
> > > > > -s->chr = chr;
> > > > > -
> > > > > -qemu_chr_add_handlers(s->chr, NULL, NULL, net_vhost_user_event, 
> > > > > s);
> > > > > +/* We don't provide a receive callback */
> > > > > +s->nc.receive_disabled = 1;
> > > > > +s->chr = chr;
> > > > > +s->nc.queue_index = i;
> > > > >  
> > > > > +qemu_chr_add_handlers(s->chr, NULL, NULL, 
> > > > > net_vhost_user_event, s);
> > > > > +}
> > > > >  return 0;
> > > > >  }
> > > > >  
> > > > > @@ -225,6 +229,7 @@ static int net_vhost_check_net(QemuOpts *opts, 
> > > > > void *opaque)
> > > > 
> > > > 
> > > > There are two problems here:
> > > > 
> > > > 1. we don't really know that the backend
> > > >is able to support the requested number of queues.
> > > >If not, everything will fail, silently.
> > > >A new message to 

Re: [Qemu-devel] [PATCH v6 1/2] vhost-user: add multi queue support

2015-08-13 Thread Michael S. Tsirkin
On Wed, Aug 12, 2015 at 02:25:41PM +0800, Ouyang Changchun wrote:
 Based on patch by Nikolay Nikolaev:
 Vhost-user will implement the multi queue support in a similar way
 to what vhost already has - a separate thread for each queue.
 To enable the multi queue functionality - a new command line parameter
 "queues" is introduced for the vhost-user netdev.
 
 The RESET_OWNER change is based on commit:
294ce717e0f212ed0763307f3eab72b4a1bdf4d0
 If it is reverted, this patch needs updating accordingly.
 
 Signed-off-by: Nikolay Nikolaev n.nikol...@virtualopensystems.com
 Signed-off-by: Changchun Ouyang changchun.ouy...@intel.com
 ---
 Changes since v5:
  - fix the message description for VHOST_RESET_OWNER in vhost-user txt
 
 Changes since v4:
  - remove the unnecessary trailing '\n'
 
 Changes since v3:
  - fix one typo and wrap one long line
 
 Changes since v2:
  - fix vq index issue for set_vring_call
When it is the case of VHOST_SET_VRING_CALL, The vq_index is not 
 initialized before it is used,
thus it could be a random value. The random value leads to crash in vhost 
 after passing down
to vhost, as vhost use this random value to index an array index.
  - fix the typo in the doc and description
  - address vq index for reset_owner
 
 Changes since v1:
  - use s->nc.info_str when bringing up/down the backend
 
  docs/specs/vhost-user.txt |  7 ++-
  hw/net/vhost_net.c|  3 ++-
  hw/virtio/vhost-user.c| 11 ++-
  net/vhost-user.c  | 37 -
  qapi-schema.json  |  6 +-
  qemu-options.hx   |  5 +++--
  6 files changed, 50 insertions(+), 19 deletions(-)
 
 diff --git a/docs/specs/vhost-user.txt b/docs/specs/vhost-user.txt
 index 70da3b1..9390f89 100644
 --- a/docs/specs/vhost-user.txt
 +++ b/docs/specs/vhost-user.txt
 @@ -135,6 +135,11 @@ As older slaves don't support negotiating protocol 
 features,
  a feature bit was dedicated for this purpose:
  #define VHOST_USER_F_PROTOCOL_FEATURES 30
  
 +Multi queue support
 +---
 +The protocol supports multiple queues by setting all index fields in the sent
 +messages to a properly calculated value.
 +
  Message types
  -
  
 @@ -198,7 +203,7 @@ Message types
  
Id: 4
Equivalent ioctl: VHOST_RESET_OWNER
 -  Master payload: N/A
 +  Master payload: vring state description
  
Issued when a new connection is about to be closed. The Master will no
longer own this connection (and will usually close it).

This is an interface change, isn't it?
We can't make it unconditionally, need to make it dependent
on a protocol flag.


 diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
 index 1f25cb3..9cd6c05 100644
 --- a/hw/net/vhost_net.c
 +++ b/hw/net/vhost_net.c
 @@ -159,6 +159,7 @@ struct vhost_net *vhost_net_init(VhostNetOptions *options)
  
  net->dev.nvqs = 2;
  net->dev.vqs = net->vqs;
 +net->dev.vq_index = net->nc->queue_index;
  
  r = vhost_dev_init(&net->dev, options->opaque,
 options->backend_type, options->force);
 @@ -269,7 +270,7 @@ static void vhost_net_stop_one(struct vhost_net *net,
  for (file.index = 0; file.index < net->dev.nvqs; ++file.index) {
  const VhostOps *vhost_ops = net->dev.vhost_ops;
  int r = vhost_ops->vhost_call(&net->dev, VHOST_RESET_OWNER,
 -  NULL);
 +  &file);
  assert(r >= 0);
  }
  }
 diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
 index 27ba035..fb11d4c 100644
 --- a/hw/virtio/vhost-user.c
 +++ b/hw/virtio/vhost-user.c
 @@ -219,7 +219,12 @@ static int vhost_user_call(struct vhost_dev *dev, 
 unsigned long int request,
  break;
  
  case VHOST_USER_SET_OWNER:
 +break;
 +
  case VHOST_USER_RESET_OWNER:
 +memcpy(&msg.state, arg, sizeof(struct vhost_vring_state));
 +msg.state.index += dev->vq_index;
 +msg.size = sizeof(m.state);
  break;
  
  case VHOST_USER_SET_MEM_TABLE:
 @@ -262,17 +267,20 @@ static int vhost_user_call(struct vhost_dev *dev, 
 unsigned long int request,
  case VHOST_USER_SET_VRING_NUM:
  case VHOST_USER_SET_VRING_BASE:
  memcpy(&msg.state, arg, sizeof(struct vhost_vring_state));
 +msg.state.index += dev->vq_index;
  msg.size = sizeof(m.state);
  break;
  
  case VHOST_USER_GET_VRING_BASE:
  memcpy(&msg.state, arg, sizeof(struct vhost_vring_state));
 +msg.state.index += dev->vq_index;
  msg.size = sizeof(m.state);
  need_reply = 1;
  break;
  
  case VHOST_USER_SET_VRING_ADDR:
  memcpy(&msg.addr, arg, sizeof(struct vhost_vring_addr));
 +msg.addr.index += dev->vq_index;
  msg.size = sizeof(m.addr);
  break;
  
 @@ -280,7 +288,7 @@ static int vhost_user_call(struct vhost_dev *dev, 
 unsigned long int request,
  case 

Re: [Qemu-devel] [PATCH v6 1/2] vhost-user: add multi queue support

2015-08-13 Thread Maxime Leroy
On Thu, Aug 13, 2015 at 11:18 AM, Michael S. Tsirkin m...@redhat.com wrote:
 On Wed, Aug 12, 2015 at 02:25:41PM +0800, Ouyang Changchun wrote:
 Based on patch by Nikolay Nikolaev:
 Vhost-user will implement the multi queue support in a similar way
 to what vhost already has - a separate thread for each queue.
 To enable the multi queue functionality - a new command line parameter
  "queues" is introduced for the vhost-user netdev.

 The RESET_OWNER change is based on commit:
294ce717e0f212ed0763307f3eab72b4a1bdf4d0
  If it is reverted, this patch needs updating accordingly.

 Signed-off-by: Nikolay Nikolaev n.nikol...@virtualopensystems.com
 Signed-off-by: Changchun Ouyang changchun.ouy...@intel.com
 ---
 Changes since v5:
   - fix the message description for VHOST_RESET_OWNER in vhost-user txt

 Changes since v4:
  - remove the unnecessary trailing '\n'

 Changes since v3:
  - fix one typo and wrap one long line

 Changes since v2:
  - fix vq index issue for set_vring_call
When it is the case of VHOST_SET_VRING_CALL, The vq_index is not 
 initialized before it is used,
thus it could be a random value. The random value leads to crash in vhost 
 after passing down
to vhost, as vhost use this random value to index an array index.
  - fix the typo in the doc and description
  - address vq index for reset_owner

 Changes since v1:
   - use s->nc.info_str when bringing up/down the backend

  docs/specs/vhost-user.txt |  7 ++-
  hw/net/vhost_net.c|  3 ++-
  hw/virtio/vhost-user.c| 11 ++-
  net/vhost-user.c  | 37 -
  qapi-schema.json  |  6 +-
  qemu-options.hx   |  5 +++--
  6 files changed, 50 insertions(+), 19 deletions(-)

 diff --git a/docs/specs/vhost-user.txt b/docs/specs/vhost-user.txt
 index 70da3b1..9390f89 100644
 --- a/docs/specs/vhost-user.txt
 +++ b/docs/specs/vhost-user.txt
 @@ -135,6 +135,11 @@ As older slaves don't support negotiating protocol 
 features,
  a feature bit was dedicated for this purpose:
  #define VHOST_USER_F_PROTOCOL_FEATURES 30

 +Multi queue support
 +---
 +The protocol supports multiple queues by setting all index fields in the 
 sent
 +messages to a properly calculated value.
 +
  Message types
  -

 @@ -198,7 +203,7 @@ Message types

Id: 4
Equivalent ioctl: VHOST_RESET_OWNER
 -  Master payload: N/A
 +  Master payload: vring state description

Issued when a new connection is about to be closed. The Master will no
longer own this connection (and will usually close it).

 This is an interface change, isn't it?
 We can't make it unconditionally, need to make it dependent
 on a protocol flag.

Agree. It can potentially break vhost-user driver implementations that
check the size of the message. We should not change the vhost-user
protocol without a new protocol flag.

I think the first issue here is that VHOST_RESET_OWNER should happen in
vhost_dev_cleanup and not in vhost_net_stop_one.

VHOST_RESET_OWNER should be the counterpart of VHOST_SET_OWNER, so it
doesn't need a payload, just like VHOST_SET_OWNER.

Thus I agree with this email
(http://lists.nongnu.org/archive/html/qemu-devel/2015-07/msg05971.html)

Maybe we should use another message to tell the backend that the
vring is no longer available in vhost_net_stop_one?

Maxime



Re: [Qemu-devel] [PATCH v6 1/2] vhost-user: add multi queue support

2015-08-13 Thread Michael S. Tsirkin
On Thu, Aug 13, 2015 at 12:24:16PM +0200, Maxime Leroy wrote:
 On Thu, Aug 13, 2015 at 11:18 AM, Michael S. Tsirkin m...@redhat.com wrote:
  On Wed, Aug 12, 2015 at 02:25:41PM +0800, Ouyang Changchun wrote:
  Based on patch by Nikolay Nikolaev:
  Vhost-user will implement the multi queue support in a similar way
  to what vhost already has - a separate thread for each queue.
  To enable the multi queue functionality - a new command line parameter
  queues is introduced for the vhost-user netdev.
 
  The RESET_OWNER change is based on commit:
 294ce717e0f212ed0763307f3eab72b4a1bdf4d0
   If it is reverted, this patch needs updating accordingly.
 
  Signed-off-by: Nikolay Nikolaev n.nikol...@virtualopensystems.com
  Signed-off-by: Changchun Ouyang changchun.ouy...@intel.com
  ---
  Changes since v5:
    - fix the message description for VHOST_RESET_OWNER in vhost-user txt
 
  Changes since v4:
   - remove the unnecessary trailing '\n'
 
  Changes since v3:
   - fix one typo and wrap one long line
 
  Changes since v2:
   - fix vq index issue for set_vring_call
 When it is the case of VHOST_SET_VRING_CALL, The vq_index is not 
  initialized before it is used,
 thus it could be a random value. The random value leads to crash in 
  vhost after passing down
 to vhost, as vhost use this random value to index an array index.
   - fix the typo in the doc and description
   - address vq index for reset_owner
 
  Changes since v1:
    - use s->nc.info_str when bringing up/down the backend
 
   docs/specs/vhost-user.txt |  7 ++-
   hw/net/vhost_net.c|  3 ++-
   hw/virtio/vhost-user.c| 11 ++-
   net/vhost-user.c  | 37 -
   qapi-schema.json  |  6 +-
   qemu-options.hx   |  5 +++--
   6 files changed, 50 insertions(+), 19 deletions(-)
 
  diff --git a/docs/specs/vhost-user.txt b/docs/specs/vhost-user.txt
  index 70da3b1..9390f89 100644
  --- a/docs/specs/vhost-user.txt
  +++ b/docs/specs/vhost-user.txt
  @@ -135,6 +135,11 @@ As older slaves don't support negotiating protocol 
  features,
   a feature bit was dedicated for this purpose:
   #define VHOST_USER_F_PROTOCOL_FEATURES 30
 
  +Multi queue support
  +---
  +The protocol supports multiple queues by setting all index fields in the 
  sent
  +messages to a properly calculated value.
  +
   Message types
   -
 
  @@ -198,7 +203,7 @@ Message types
 
 Id: 4
 Equivalent ioctl: VHOST_RESET_OWNER
  -  Master payload: N/A
  +  Master payload: vring state description
 
 Issued when a new connection is about to be closed. The Master will 
  no
 longer own this connection (and will usually close it).
 
  This is an interface change, isn't it?
  We can't make it unconditionally, need to make it dependent
  on a protocol flag.
 
 Agree. It can potentially break vhost-user driver implementations that
 check the size of the message. We should not change the vhost-user
 protocol without a new protocol flag.
 
 I think the first issue here is that VHOST_RESET_OWNER should happen in
 vhost_dev_cleanup and not in vhost_net_stop_one.
 
 VHOST_RESET_OWNER should be the counterpart of VHOST_SET_OWNER, so it
 doesn't need a payload, just like VHOST_SET_OWNER.
 
 Thus I agree with this email
 (http://lists.nongnu.org/archive/html/qemu-devel/2015-07/msg05971.html)
 
 Maybe we should use another message to tell the backend that the
 vring is no longer available in vhost_net_stop_one?
 
 Maxime

I think the cleanest fix is to rename this message to e.g.
VHOST_RESET_DEVICE. This way we won't break existing users.

-- 
MST



[Qemu-devel] [PATCH v6 1/2] vhost-user: add multi queue support

2015-08-12 Thread Ouyang Changchun
Based on patch by Nikolay Nikolaev:
Vhost-user will implement the multi queue support in a similar way
to what vhost already has - a separate thread for each queue.
To enable the multi queue functionality - a new command line parameter
"queues" is introduced for the vhost-user netdev.

The RESET_OWNER change is based on commit:
   294ce717e0f212ed0763307f3eab72b4a1bdf4d0
If it is reverted, this patch needs updating accordingly.

Signed-off-by: Nikolay Nikolaev n.nikol...@virtualopensystems.com
Signed-off-by: Changchun Ouyang changchun.ouy...@intel.com
---
Changes since v5:
 - fix the message description for VHOST_RESET_OWNER in vhost-user txt

Changes since v4:
 - remove the unnecessary trailing '\n'

Changes since v3:
 - fix one typo and wrap one long line

Changes since v2:
 - fix vq index issue for set_vring_call
   In the VHOST_SET_VRING_CALL case, vq_index was not initialized before it
   was used, so it could be a random value. That random value was passed down
   to vhost and used as an array index, leading to a crash in vhost.
 - fix the typo in the doc and description
 - address vq index for reset_owner

Changes since v1:
 - use s->nc.info_str when bringing up/down the backend

 docs/specs/vhost-user.txt |  7 ++-
 hw/net/vhost_net.c|  3 ++-
 hw/virtio/vhost-user.c| 11 ++-
 net/vhost-user.c  | 37 -
 qapi-schema.json  |  6 +-
 qemu-options.hx   |  5 +++--
 6 files changed, 50 insertions(+), 19 deletions(-)

diff --git a/docs/specs/vhost-user.txt b/docs/specs/vhost-user.txt
index 70da3b1..9390f89 100644
--- a/docs/specs/vhost-user.txt
+++ b/docs/specs/vhost-user.txt
@@ -135,6 +135,11 @@ As older slaves don't support negotiating protocol 
features,
 a feature bit was dedicated for this purpose:
 #define VHOST_USER_F_PROTOCOL_FEATURES 30
 
+Multi queue support
+---
+The protocol supports multiple queues by setting all index fields in the sent
+messages to a properly calculated value.
+
 Message types
 -
 
@@ -198,7 +203,7 @@ Message types
 
   Id: 4
   Equivalent ioctl: VHOST_RESET_OWNER
-  Master payload: N/A
+  Master payload: vring state description
 
   Issued when a new connection is about to be closed. The Master will no
   longer own this connection (and will usually close it).
diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index 1f25cb3..9cd6c05 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -159,6 +159,7 @@ struct vhost_net *vhost_net_init(VhostNetOptions *options)
 
 net->dev.nvqs = 2;
 net->dev.vqs = net->vqs;
+net->dev.vq_index = net->nc->queue_index;
 
 r = vhost_dev_init(&net->dev, options->opaque,
options->backend_type, options->force);
@@ -269,7 +270,7 @@ static void vhost_net_stop_one(struct vhost_net *net,
 for (file.index = 0; file.index < net->dev.nvqs; ++file.index) {
 const VhostOps *vhost_ops = net->dev.vhost_ops;
 int r = vhost_ops->vhost_call(&net->dev, VHOST_RESET_OWNER,
-  NULL);
+  &file);
 assert(r >= 0);
 }
 }
diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index 27ba035..fb11d4c 100644
--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@ -219,7 +219,12 @@ static int vhost_user_call(struct vhost_dev *dev, unsigned 
long int request,
 break;
 
 case VHOST_USER_SET_OWNER:
+break;
+
 case VHOST_USER_RESET_OWNER:
+memcpy(&msg.state, arg, sizeof(struct vhost_vring_state));
+msg.state.index += dev->vq_index;
+msg.size = sizeof(m.state);
 break;
 
 case VHOST_USER_SET_MEM_TABLE:
@@ -262,17 +267,20 @@ static int vhost_user_call(struct vhost_dev *dev, 
unsigned long int request,
 case VHOST_USER_SET_VRING_NUM:
 case VHOST_USER_SET_VRING_BASE:
 memcpy(&msg.state, arg, sizeof(struct vhost_vring_state));
+msg.state.index += dev->vq_index;
 msg.size = sizeof(m.state);
 break;
 
 case VHOST_USER_GET_VRING_BASE:
 memcpy(&msg.state, arg, sizeof(struct vhost_vring_state));
+msg.state.index += dev->vq_index;
 msg.size = sizeof(m.state);
 need_reply = 1;
 break;
 
 case VHOST_USER_SET_VRING_ADDR:
 memcpy(&msg.addr, arg, sizeof(struct vhost_vring_addr));
+msg.addr.index += dev->vq_index;
 msg.size = sizeof(m.addr);
 break;
 
@@ -280,7 +288,7 @@ static int vhost_user_call(struct vhost_dev *dev, unsigned 
long int request,
 case VHOST_USER_SET_VRING_CALL:
 case VHOST_USER_SET_VRING_ERR:
 file = arg;
-msg.u64 = file->index & VHOST_USER_VRING_IDX_MASK;
+msg.u64 = (file->index + dev->vq_index) & VHOST_USER_VRING_IDX_MASK;
 msg.size = sizeof(m.u64);
 if (ioeventfd_enabled() && file->fd > 0) {
 fds[fd_num++]