[dpdk-dev] [PATCH] doc: deprecate vhost-cuse

2016-07-21 Thread Loftus, Ciara
> Subject: [dpdk-dev] [PATCH] doc: deprecate vhost-cuse
> 
> Vhost-cuse was invented before vhost-user existed. Both do the same
> thing: a vhost-net implementation in user space. But they are not
> exactly the same.
> 
> Firstly, vhost-cuse is harder to use, and no one seems to care about it
> either. Furthermore, since v2.1 a large majority of development effort
> has gone to vhost-user. For example, we extended the vhost-user spec to
> add multiple queue support. We also added vhost-user live migration in
> v16.04 and, most recently, vhost-user reconnect, which allows a vhost
> app to restart without restarting the guest. Both are very important
> features for production usage, and neither of them works for
> vhost-cuse.
> 
> By now you can see that the difference between vhost-user and
> vhost-cuse is big (and will only grow as time moves on), that you
> should never use vhost-cuse, and that we should drop it completely.
> 
> The removal would also result in a much cleaner code base, making all
> kinds of extensions easier.
> 
> So, mark vhost-cuse as deprecated in this release; it will be removed
> in the next release (v16.11).
> 
> Signed-off-by: Yuanhan Liu 
> ---
>  doc/guides/rel_notes/deprecation.rst | 4 
>  1 file changed, 4 insertions(+)
> 
> diff --git a/doc/guides/rel_notes/deprecation.rst
> b/doc/guides/rel_notes/deprecation.rst
> index f502f86..ee99558 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -41,3 +41,7 @@ Deprecation Notices
>  * The mempool functions for single/multi producer/consumer are
> deprecated and
>will be removed in 16.11.
>It is replaced by rte_mempool_generic_get/put functions.
> +
> +* The vhost-cuse will be removed in 16.11. Since v2.1, a large majority of
> +  development effort has gone to vhost-user, such as multiple-queue, live
> +  migration, reconnect etc. Therefore, vhost-user should be used instead.
> --
> 1.9.0

Acked-by: Ciara Loftus 



[dpdk-dev] [PATCH] vhost: fix missing flag reset on stop

2016-06-29 Thread Loftus, Ciara
> 
> Commit 550c9d27d143 ("vhost: set/reset device flags internally") moves
> the VIRTIO_DEV_RUNNING set/reset to the vhost lib, but I missed one
> reset on stop; this patch fixes it.
> 
> Fixes: 550c9d27d143 ("vhost: set/reset device flags internally")
> 
> Reported-by: Loftus Ciara 
> Signed-off-by: Yuanhan Liu 
> ---
>  lib/librte_vhost/vhost_user/virtio-net-user.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/lib/librte_vhost/vhost_user/virtio-net-user.c
> b/lib/librte_vhost/vhost_user/virtio-net-user.c
> index a6a48dc..e7c4347 100644
> --- a/lib/librte_vhost/vhost_user/virtio-net-user.c
> +++ b/lib/librte_vhost/vhost_user/virtio-net-user.c
> @@ -317,8 +317,10 @@ user_get_vring_base(int vid, struct vhost_vring_state *state)
>   if (dev == NULL)
>   return -1;
>   /* We have to stop the queue (virtio) if it is running. */
> - if (dev->flags & VIRTIO_DEV_RUNNING)
> + if (dev->flags & VIRTIO_DEV_RUNNING) {
> + dev->flags &= ~VIRTIO_DEV_RUNNING;
>   notify_ops->destroy_device(vid);
> + }
> 
>   /* Here we are safe to get the last used index */
>   vhost_get_vring_base(vid, state->index, state);
> --
> 1.9.0

Thanks for the patch. I've tested it and it solves the issue I was seeing where 
destroy_device was being called too many times.

Tested-by: Ciara Loftus 




[dpdk-dev] [RFC] librte_vhost: Add unix domain socket fd registration

2016-06-24 Thread Loftus, Ciara
> 
> On Tue, Jun 21, 2016 at 09:15:03AM -0400, Aaron Conole wrote:
> > Yuanhan Liu  writes:
> >
> > > On Fri, Jun 17, 2016 at 11:32:36AM -0400, Aaron Conole wrote:
> > >> Prior to this commit, the only way to add a vhost-user socket to the
> > >> system is by relying on librte_vhost to open the unix domain socket and
> > >> add it to the unix socket list.  This is problematic for applications
> > >> which would like to set the permissions,
> > >
> > > So, you want to address the issue raised by following patch?
> > >
> > > http://dpdk.org/dev/patchwork/patch/1/
> >
> > That patch does try to address the issue, however - it has some
> > problems.  The biggest is a TOCTTOU issue when using chown.  The way to
> > solve that issue properly is different depending on which operating
> > system is being used (for instance, FreeBSD doesn't honor
> > fchown(),fchmod() on file descriptors).  My solution is basically to
> > punt that responsibility to the controlling application.
> >
> > > I would still like to stick to my proposal, that is, to introduce a
> > > new API to do the permission change at any time, if we end up
> > > wanting to introduce a new API.
> >
> > I've spent a lot of time looking at the TOCTTOU problem, and I think
> > that is a really hard problem to solve portably.  Might be good to just
> > start with the flexible mechanism here that lets the application
> > developer satisfy their own needs.
> >
> > >> or applications which are not
> > >> directly allowed to open sockets due to policy restrictions.
> > >
> > > Could you name a specific example?
> >
> > SELinux policy might require one application to open the socket, and
> > pass it back via a dbus mechanism.  I can't actually think of a concrete
> > implemented case, so it may not be valid.
> >
> > > BTW, JFYI, since 16.07 DPDK supports client mode: QEMU (acting
> > > as the server) will create the socket file. I guess that would diminish
> > > (or even avoid?) the permission pain that DPDK acting as server brings.
> > > I doubt the API to do the permission change is really needed then.
> >
> > I wouldn't say it 'solves' the issue so much as hopes no one uses server
> > mode in DPDK.  I agree, for OvS, it could.
> 
> Actually, I think I would (personally) suggest that people switch to
> DPDK vhost-user client mode, for two good reasons:
> 
> - it should solve the socket permission issue raised by you and Christian.
> 
> - it has had the "reconnect" feature since 16.07, which means the guest
>   network will still work after a DPDK vhost-user restart/crash. DPDK
>   vhost-user as server simply doesn't support that.
> 
> And FYI, Loftus is doing the DPDK-for-OVS integration. Not quite sure
> whether she made client mode the default mode, though.

Hi Yuanhan,

I intend to keep the DPDK server-mode as the default. My reasoning is that
not all users will have access to QEMU v2.7.0 initially. We will keep
operating as before but have an option to switch to DPDK client mode, and
then perhaps look at switching the default in a later release.

Thanks,
Ciara

> 
> > Thanks so much for your thoughts and review on this, Yuanhan Liu!
> 
> Thank you for proposing ideas to make DPDK better!
> 
>   --yliu
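
For reference, a minimal sketch of registering a vhost-user port in DPDK
client mode (assuming the 16.07 librte_vhost API, where
rte_vhost_driver_register() takes a flags argument; the socket path is
illustrative):

#include <rte_virtio_net.h>

/* Register the port in client mode: DPDK connects to a socket that QEMU
 * (acting as the server) creates, which sidesteps the socket-permission
 * problem and enables reconnect after a vhost restart. */
static int
register_client_mode(void)
{
	return rte_vhost_driver_register("/tmp/vhost0", RTE_VHOST_USER_CLIENT);
}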


[dpdk-dev] [PATCH 3/6] vhost: add reconnect ability

2016-05-10 Thread Loftus, Ciara
> On Tue, May 10, 2016 at 09:00:45AM +, Xie, Huawei wrote:
> > On 5/10/2016 4:42 PM, Michael S. Tsirkin wrote:
> > > On Tue, May 10, 2016 at 08:07:00AM +, Xie, Huawei wrote:
> > >> On 5/10/2016 3:56 PM, Michael S. Tsirkin wrote:
> > >>> On Tue, May 10, 2016 at 07:24:10AM +, Xie, Huawei wrote:
> >  On 5/10/2016 2:08 AM, Yuanhan Liu wrote:
> > > On Mon, May 09, 2016 at 04:47:02PM +, Xie, Huawei wrote:
> > >> On 5/7/2016 2:36 PM, Yuanhan Liu wrote:
> > >>> +static void *
> > >>> +vhost_user_client_reconnect(void *arg)
> > >>> +{
> > >>> +   struct reconnect_info *reconn = arg;
> > >>> +   int ret;
> > >>> +
> > >>> +   RTE_LOG(ERR, VHOST_CONFIG, "reconnecting...\n");
> > >>> +   while (1) {
> > >>> +   ret = connect(reconn->fd, (struct sockaddr *)&reconn->un,
> > >>> +   sizeof(reconn->un));
> > >>> +   if (ret == 0)
> > >>> +   break;
> > >>> +   sleep(1);
> > >>> +   }
> > >>> +
> > >>> +   vhost_user_add_connection(reconn->fd, reconn->vsocket);
> > >>> +   free(reconn);
> > >>> +
> > >>> +   return NULL;
> > >>> +}
> > >>> +
> > >> We could create hundreds of vhost-user ports in OVS. Without
> > >> connections with QEMU established, those ports are just inactive.
> > >> This works fine in server mode.
> > >> With client mode, do we need to create hundreds of vhost threads?
> > >> That would be too disruptive.
> > >> How about we create only one thread and do the reconnections for
> > >> all the unconnected sockets?
> > > Yes, good point and good suggestion. Will do it in v2.
> >  Hi Michael:
> >  This reminds me of another, unrelated issue.
> >  In OVS, we currently create a unix domain socket for each vhost
> >  port; the QEMU vhost proxy connects to this socket, and we use this
> >  to identify the connection. This works fine, but it is a workaround;
> >  otherwise we have no way to identify the connection.
> >  Do you think this is an issue?
> > >> Let us say vhost creates one unix domain socket, with path
> > >> "sockpath", and two virtio ports in two VMs both connect to the
> > >> same socket with the following command line:
> > >> -chardev socket,id=char0,path=sockpath
> > >> How could vhost identify the connection?
> > > getpeername(2)?
> >
> > getpeername() returns host/port? Then it isn't useful.
> 
> Maybe but I'm still in the dark. Useful for what?
> 
> > The typical scenario in my mind is:
> > We create an OVS port with the name "port1", and when we receive a
> > virtio connection with ID "port1", we attach that virtio interface to
> > the OVS port "port1".
> 
> If you are going to listen on a socket, you can create ports
> and attach socket fds to it dynamically as long as you get connections.
> What is wrong with that?

Hi Michael,

I haven't reviewed the patchset fully, but to hopefully provide more clarity
on how OVS uses vHost:

OVS with DPDK needs some way to distinguish vHost connections from one another 
so it can switch traffic to the correct port depending on how the switch is 
programmed.
At the moment this is achieved by:
1. User provides a unique port name e.g. 'vhost0' (this is normal behaviour
in OVS - checks are put in place to avoid overlapping port names)
2. DPDK vHost lib creates a socket called 'vhost0'
3. VM launched with the vhost0 socket // -chardev
socket,id=char0,path=/path/to/vhost0
4. OVS receives the 'new_device' vhost callback, checks the name of the
device (virtio_dev->ifname == vhost0?), and if the name matches the name
provided in step 1, OVS stores the virtio_net *dev pointer
5. OVS uses the *dev pointer to send and receive traffic to vhost0 via calls
to vhost library functions e.g. enqueue(*dev) / dequeue(*dev)
6. Repeat for multiple vhost devices

We need to make sure that there is still some way to distinguish devices from 
one another like in step 4. Let me know if you need any further clarification.
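
A minimal sketch of the check in step 4 (assuming the
virtio_net_device_ops callback API of this era; the port name and the
storage for the matched device are illustrative):

#include <string.h>
#include <rte_virtio_net.h>

static struct virtio_net *vhost0_dev; /* illustrative per-port slot */

/* Step 4: match the connecting device to the OVS port by socket name. */
static int
new_device(struct virtio_net *dev)
{
	if (strstr(dev->ifname, "vhost0") != NULL)
		vhost0_dev = dev; /* keep *dev for later enqueue/dequeue */
	return 0;
}

static struct virtio_net_device_ops const ops = {
	.new_device = new_device,
};

The ops struct would then be passed to rte_vhost_driver_callback_register().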

Thanks,
Ciara

> 
> 
> >
> > >
> > >
> > >> Workarounds:
> > >> vhost creates two unix domain sockets, with paths "sockpath1" and
> > >> "sockpath2" respectively, and the virtio ports in two VMs connect
> > >> to "sockpath1" and "sockpath2" respectively.
> > >>
> > >> If we had some name string from QEMU over vhost, as you mentioned,
> > >> we could create only one socket with path "sockpath".
> > >> Users would ensure that the names are unique, just as they do today
> > >> with multiple sockets.
> > >>
> > > Seems rather fragile.
> >
> > From the scenario above, it is enough. That is also how it works
> > today in the DPDK OVS implementation with multiple sockets.
> > Any other idea?
> >
> > >
> > >>> I'm sorry, I have trouble understanding what you wrote above.
> > >>> What is the issue you are trying to work around?
> > >>>
> >  Do we have plan to support identification in
> VHOST_USER_MESSAGE? With
> >  the 

[dpdk-dev] [PATCH] vhost: call rte_vhost_enable_guest_notification only on enabled queues

2016-04-07 Thread Loftus, Ciara
> On 4/7/2016 8:29 AM, Rich Lane wrote:
> > If the vhost PMD were configured with more queues than the guest, the
> old
> > code would segfault in rte_vhost_enable_guest_notification due to a NULL
> > virtqueue pointer.
> >
> > Fixes: ee584e9710b9 ("vhost: add driver on top of the library")
> > Signed-off-by: Rich Lane 
> > ---
> >   drivers/net/vhost/rte_eth_vhost.c | 5 +++--
> >   1 file changed, 3 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/net/vhost/rte_eth_vhost.c
> b/drivers/net/vhost/rte_eth_vhost.c
> > index b1eb082..310cbef 100644
> > --- a/drivers/net/vhost/rte_eth_vhost.c
> > +++ b/drivers/net/vhost/rte_eth_vhost.c
> > @@ -265,7 +265,6 @@ new_device(struct virtio_net *dev)
> > vq->device = dev;
> > vq->internal = internal;
> > vq->port = eth_dev->data->port_id;
> > -   rte_vhost_enable_guest_notification(dev, vq->virtqueue_id, 0);
> > }
> > for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
> > vq = eth_dev->data->tx_queues[i];
> > @@ -274,9 +273,11 @@ new_device(struct virtio_net *dev)
> > vq->device = dev;
> > vq->internal = internal;
> > vq->port = eth_dev->data->port_id;
> > -   rte_vhost_enable_guest_notification(dev, vq->virtqueue_id, 0);
> > }
> >
> > +   for (i = 0; i < dev->virt_qp_nb * VIRTIO_QNUM; i++)
> > +   rte_vhost_enable_guest_notification(dev, i, 0);
> > +
> > dev->flags |= VIRTIO_DEV_RUNNING;
> > dev->priv = eth_dev;
> > eth_dev->data->dev_link.link_status = ETH_LINK_UP;
> 
> Just one question: when qemu starts a VM, usually only one queue is
> enabled, so rte_vhost_enable_guest_notification is called for only 1 rx
> and 1 tx queue; but after the system is up, we use "ethtool -L eth0
> combined x" to enable multiqueue, and then there's no chance to call
> rte_vhost_enable_guest_notification for the other queues, right?

As far as I know, virt_qp_nb reports the number of queue pairs allocated,
regardless of their enabled/disabled state. So for example if we have 4
queues but only one enabled, virt_qp_nb will still be 4 and
rte_vhost_enable_guest_notification() will be called for all of these queues.
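
A sketch of the resulting loop, for clarity (virt_qp_nb counts queue
pairs, and each pair holds VIRTIO_QNUM virtqueues):

#include <stdint.h>
#include <rte_virtio_net.h>

/* Disable guest notifications on every virtqueue of every allocated
 * queue pair, whether enabled or not. */
static void
disable_all_notifications(struct virtio_net *dev)
{
	uint32_t i;

	for (i = 0; i < dev->virt_qp_nb * VIRTIO_QNUM; i++)
		rte_vhost_enable_guest_notification(dev, i, 0);
}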

Thanks,
Ciara

> 
> Thanks,
> Jianfeng


[dpdk-dev] [PATCH] vhost: call rte_vhost_enable_guest_notification only on enabled queues

2016-04-07 Thread Loftus, Ciara
> 
> If the vhost PMD were configured with more queues than the guest, the old
> code would segfault in rte_vhost_enable_guest_notification due to a NULL
> virtqueue pointer.
> 
> Fixes: ee584e9710b9 ("vhost: add driver on top of the library")
> Signed-off-by: Rich Lane 
> ---
>  drivers/net/vhost/rte_eth_vhost.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/net/vhost/rte_eth_vhost.c
> b/drivers/net/vhost/rte_eth_vhost.c
> index b1eb082..310cbef 100644
> --- a/drivers/net/vhost/rte_eth_vhost.c
> +++ b/drivers/net/vhost/rte_eth_vhost.c
> @@ -265,7 +265,6 @@ new_device(struct virtio_net *dev)
>   vq->device = dev;
>   vq->internal = internal;
>   vq->port = eth_dev->data->port_id;
> - rte_vhost_enable_guest_notification(dev, vq->virtqueue_id, 0);
>   }
>   for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
>   vq = eth_dev->data->tx_queues[i];
> @@ -274,9 +273,11 @@ new_device(struct virtio_net *dev)
>   vq->device = dev;
>   vq->internal = internal;
>   vq->port = eth_dev->data->port_id;
> - rte_vhost_enable_guest_notification(dev, vq->virtqueue_id, 0);
>   }
> 
> + for (i = 0; i < dev->virt_qp_nb * VIRTIO_QNUM; i++)
> + rte_vhost_enable_guest_notification(dev, i, 0);
> +
>   dev->flags |= VIRTIO_DEV_RUNNING;
>   dev->priv = eth_dev;
>   eth_dev->data->dev_link.link_status = ETH_LINK_UP;
> --
> 1.9.1

I see the same issue, and verified that this patch solves it. Thanks!

Tested-by: Ciara Loftus 

Thanks,
Ciara



[dpdk-dev] [PATCH] vhost: Fix retrieval of numa information in PMD

2016-04-06 Thread Loftus, Ciara
> 
> On Wed, Apr 06, 2016 at 03:49:25PM +0900, Tetsuya Mukawa wrote:
> > On 2016/04/06 1:09, Ciara Loftus wrote:
> > > After some testing, it was found that retrieving numa information
> > > about a vhost device via a call to get_mempolicy is more
> > > accurate when performed during the new_device callback versus
> > > the vring_state_changed callback, in particular upon initial boot
> > > of the VM.  Performing this check during new_device is also
> > > potentially more efficient as this callback is only triggered once
> > > during device initialisation, compared with vring_state_changed
> > > which may be called multiple times depending on the number of
> > > queues assigned to the device.
> > >
> > > Reorganise the code to perform this check and assign the correct
> > > socket_id to the device during the new_device callback.
> > >
> > > Signed-off-by: Ciara Loftus 
> > > ---
> > >  drivers/net/vhost/rte_eth_vhost.c | 28 ++--
> > >  1 file changed, 14 insertions(+), 14 deletions(-)
> > >
> > > diff --git a/drivers/net/vhost/rte_eth_vhost.c
> b/drivers/net/vhost/rte_eth_vhost.c
> > > index 4cc6bec..b1eb082 100644
> > > --- a/drivers/net/vhost/rte_eth_vhost.c
> > > +++ b/drivers/net/vhost/rte_eth_vhost.c
> > >
> >
> > Hi,
> >
> > I appreciate fixing it.
> > Just one worry is that the state-changed event may occur before the
> > new-device event.
> > Users should not call rte_eth_dev_socket_id() until the new-device
> > event comes, even if they catch queue state events.
> > Otherwise, they will get the wrong socket id when calling
> > rte_eth_rx/tx_queue_setup().
> 
> There is no way to guarantee that the socket id stuff will work
> perfectly in vhost, right? I mean, it's likely that the virtio device
> would allocate memory from 2 or more sockets.
> 
> So, it doesn't matter too much whether it's set perfectly right
> or not. Instead, we should assign it a saner value rather than an
> obviously wrong one when new_device() has not been invoked yet. So,
> I'd suggest making an assignment first based on the vhost_dev (or
> whatever) struct, and then making it "right" in the new_device()
> callback?

Thanks for the feedback.
At the moment, with this patch, numa_node is initially set to rte_socket_id()
during PMD init and then updated to the correct value during new_device.
Are you suggesting we set it again in between these two steps ("based on
vhost_dev")? If so, where do you think would be a good place?

Thanks,
Ciara

> 
> > So how about commenting it in 'rte_eth_vhost.h'?
> 
> That asks for a different usage than other PMDs, which I don't think
> is a good idea.
> 
>   --yliu


[dpdk-dev] [PATCH] vhost PMD: Fix wrong handling of maximum value of rx/tx queues

2016-03-22 Thread Loftus, Ciara
> 
> Currently, the maximum number of rx/tx queues is kept by EAL. But the
> value is used with two different meanings in the vhost PMD:
>  - the maximum number of currently enabled queues;
>  - the maximum number of currently supported queues.
> 
> This conflated double meaning causes an issue in the steps below.
> 
> * Invoke the application with the option below:
>   --vdev 'eth_vhost0,iface=,queues=4'
> * Configure queues like below:
>   rte_eth_dev_configure(portid, 2, 2, ...);
> * Configure queues again like below:
>   rte_eth_dev_configure(portid, 4, 4, ...);
> 
> The second rte_eth_dev_configure() will fail because both the maximum
> number of currently enabled queues and the maximum number of supported
> queues will be '2' after the first rte_eth_dev_configure() call.
> 
> To fix the issue, the patch adds one more variable to keep the maximum
> number of supported queues in the vhost PMD.
> 
> Signed-off-by: Tetsuya Mukawa 
> ---
>  drivers/net/vhost/rte_eth_vhost.c | 14 --
>  1 file changed, 12 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/net/vhost/rte_eth_vhost.c
> b/drivers/net/vhost/rte_eth_vhost.c
> index 6b9d287..5fd8c70 100644
> --- a/drivers/net/vhost/rte_eth_vhost.c
> +++ b/drivers/net/vhost/rte_eth_vhost.c
> @@ -88,6 +88,7 @@ struct vhost_queue {
>  struct pmd_internal {
>   char *dev_name;
>   char *iface_name;
> + uint16_t max_queues;
> 
>   volatile uint16_t once;
>  };
> @@ -555,11 +556,19 @@ static void
>  eth_dev_info(struct rte_eth_dev *dev,
>struct rte_eth_dev_info *dev_info)
>  {
> + struct pmd_internal *internal;
> +
> + internal = dev->data->dev_private;
> + if (internal == NULL) {
> + RTE_LOG(ERR, PMD, "Invalid device specified\n");
> + return;
> + }
> +
>   dev_info->driver_name = drivername;
>   dev_info->max_mac_addrs = 1;
>   dev_info->max_rx_pktlen = (uint32_t)-1;
> - dev_info->max_rx_queues = dev->data->nb_rx_queues;
> - dev_info->max_tx_queues = dev->data->nb_tx_queues;
> + dev_info->max_rx_queues = internal->max_queues;
> + dev_info->max_tx_queues = internal->max_queues;
>   dev_info->min_rx_bufsize = 0;
>  }
> 
> @@ -751,6 +760,7 @@ eth_dev_vhost_create(const char *name, char
> *iface_name, int16_t queues,
>   memmove(data->name, eth_dev->data->name, sizeof(data->name));
>   data->nb_rx_queues = queues;
>   data->nb_tx_queues = queues;
> + internal->max_queues = queues;
>   data->dev_link = pmd_link;
>   data->mac_addrs = eth_addr;
> 
> --
> 2.1.4

Hi Tetsuya,

Thanks again for the patch.

Acked-by: Ciara Loftus 

Thanks,
Ciara
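
For reference, the failing sequence from the commit message as code (port id
and configuration are illustrative; with this patch applied the second
reconfigure back to 4 queues succeeds):

#include <string.h>
#include <rte_ethdev.h>

static void
reconfigure(uint8_t portid)
{
	struct rte_eth_conf conf;

	memset(&conf, 0, sizeof(conf));
	/* Port was created with --vdev 'eth_vhost0,iface=...,queues=4'. */
	rte_eth_dev_configure(portid, 2, 2, &conf); /* also shrank the max */
	rte_eth_dev_configure(portid, 4, 4, &conf); /* failed before the fix */
}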



[dpdk-dev] [PATCH v13 2/2] vhost: Add VHOST PMD

2016-03-22 Thread Loftus, Ciara
> 
> On 2016/03/22 10:55, Tetsuya Mukawa wrote:
> > On 2016/03/22 0:40, Loftus, Ciara wrote:
> >>> +
> >>> +static void
> >>> +eth_dev_info(struct rte_eth_dev *dev,
> >>> +  struct rte_eth_dev_info *dev_info)
> >>> +{
> >>> + dev_info->driver_name = drivername;
> >>> + dev_info->max_mac_addrs = 1;
> >>> + dev_info->max_rx_pktlen = (uint32_t)-1;
> >>> + dev_info->max_rx_queues = dev->data->nb_rx_queues;
> >>> + dev_info->max_tx_queues = dev->data->nb_tx_queues;
> >> I'm not entirely familiar with eth driver code so please correct me
> >> if I am wrong.
> >>
> >> I'm wondering if assigning the max queue values from
> >> dev->data->nb_*x_queues is correct.
> >> A user could change the value of nb_*x_queues with a call to
> >> rte_eth_dev_configure(n_queues), which in turn calls
> >> rte_eth_dev_*x_queue_config(n_queues), which sets
> >> dev->data->nb_*x_queues to the value of n_queues, which can be
> >> arbitrary and decided by the user. If this is the case,
> >> dev->data->nb_*x_queues will no longer reflect the max, rather the
> >> value the user chose in the call to rte_eth_dev_configure. And the
> >> max could potentially change with multiple calls to configure. Is
> >> this intended behaviour?
> > Hi Ciara,
> >
> > Thanks for reviewing it. Here is a part of rte_eth_dev_configure().
> >
> > int
> > rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t
> nb_tx_q,
> >   const struct rte_eth_conf *dev_conf)
> > {
> > 
> > /*
> >  * Check that the numbers of RX and TX queues are not greater
> >  * than the maximum number of RX and TX queues supported by the
> >  * configured device.
> >  */
> > (*dev->dev_ops->dev_infos_get)(dev, &dev_info);
> >
> > if (nb_rx_q == 0 && nb_tx_q == 0) {
> >
> > return -EINVAL;
> > }
> >
> > if (nb_rx_q > dev_info.max_rx_queues) {
> >
> > return -EINVAL;
> > }
> >
> > if (nb_tx_q > dev_info.max_tx_queues) {
> >
> > return -EINVAL;
> > }
> >
> > 
> >
> > /*
> >  * Setup new number of RX/TX queues and reconfigure device.
> >  */
> > diag = rte_eth_dev_rx_queue_config(dev, nb_rx_q);
> > 
> > diag = rte_eth_dev_tx_queue_config(dev, nb_tx_q);
> > 
> > }
> >
> > Anyway, rte_eth_dev_tx/rx_queue_config() will be called only after
> > checking the current maximum number of queues.
> > So the user cannot set a number of queues greater than the current
> > maximum.
> >
> > Regards,
> > Tetsuya
> 
> Hi Ciara,
> 
> Now I understand what you mean.
> You are probably pointing out the case where the user specifies a value
> smaller than the current maximum.
> 
> For example, if we have 4 queues, the code below will fail at the last line.
> rte_eth_dev_configure(portid, 4, 4, ...);
> rte_eth_dev_configure(portid, 2, 2, ...);
> rte_eth_dev_configure(portid, 4, 4, ...);
> 
> I will submit a patch to fix it. Could you please review and ack it?

Hi Tetsuya,

Correct, sorry for the initial confusion, and thanks for the quick patch.
I've reviewed the code - looks good. I just want to run some tests and will
give my Ack later today, all going well.

Thanks,
Ciara

> 
> Regards,
> Tetsuya



[dpdk-dev] [PATCH v13 2/2] vhost: Add VHOST PMD

2016-03-21 Thread Loftus, Ciara
Hi Tetsuya,

Thanks for the patches. Just one query below re max queue numbers.

Thanks,
Ciara

> -Original Message-
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Tetsuya Mukawa
> Sent: Monday, March 21, 2016 5:45 AM
> To: dev at dpdk.org
> Cc: Richardson, Bruce ;
> ann.zhuangyanying at huawei.com; thomas.monjalon at 6wind.com; Tetsuya
> Mukawa 
> Subject: [dpdk-dev] [PATCH v13 2/2] vhost: Add VHOST PMD
> 
> The patch introduces a new PMD. This PMD is implemented as a thin
> wrapper of librte_vhost. It means librte_vhost is also needed to
> compile the PMD. The vhost messages will be handled only when a port is
> started, so start a port first, then invoke QEMU.
> 
> The PMD has 2 parameters.
>  - iface:  The parameter is used to specify a path to connect to a
>virtio-net device.
>  - queues: The parameter is used to specify the number of queues the
>    virtio-net device has. (Default: 1)
> 
> Here is an example.
> $ ./testpmd -c f -n 4 --vdev 'eth_vhost0,iface=/tmp/sock0,queues=1' -- -i
> 
> To connect to the above testpmd, here is a qemu command example.
> 
> $ qemu-system-x86_64 \
> 
> -chardev socket,id=chr0,path=/tmp/sock0 \
> -netdev vhost-user,id=net0,chardev=chr0,vhostforce,queues=1 \
> -device virtio-net-pci,netdev=net0,mq=on
> 
> Signed-off-by: Tetsuya Mukawa 
> Acked-by: Ferruh Yigit 
> Acked-by: Yuanhan Liu 
> Acked-by: Rich Lane 
> Tested-by: Rich Lane 
> ---
>  MAINTAINERS |   5 +
>  config/common_base  |   6 +
>  config/common_linuxapp  |   1 +
>  doc/guides/nics/index.rst   |   1 +
>  doc/guides/nics/overview.rst|  37 +-
>  doc/guides/nics/vhost.rst   | 110 
>  doc/guides/rel_notes/release_16_04.rst  |   4 +
>  drivers/net/Makefile|   4 +
>  drivers/net/vhost/Makefile  |  62 ++
>  drivers/net/vhost/rte_eth_vhost.c   | 917
> 
>  drivers/net/vhost/rte_eth_vhost.h   | 109 
>  drivers/net/vhost/rte_pmd_vhost_version.map |  10 +
>  mk/rte.app.mk   |   6 +
>  13 files changed, 1254 insertions(+), 18 deletions(-)
>  create mode 100644 doc/guides/nics/vhost.rst
>  create mode 100644 drivers/net/vhost/Makefile
>  create mode 100644 drivers/net/vhost/rte_eth_vhost.c
>  create mode 100644 drivers/net/vhost/rte_eth_vhost.h
>  create mode 100644 drivers/net/vhost/rte_pmd_vhost_version.map
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 8b21979..7a47fc0 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -352,6 +352,11 @@ Null PMD
>  M: Tetsuya Mukawa 
>  F: drivers/net/null/
> 
> +Vhost PMD
> +M: Tetsuya Mukawa 
> +M: Yuanhan Liu 
> +F: drivers/net/vhost/
> +
>  Intel AES-NI GCM PMD
>  M: Declan Doherty 
>  F: drivers/crypto/aesni_gcm/
> diff --git a/config/common_base b/config/common_base
> index dbd405b..5efee07 100644
> --- a/config/common_base
> +++ b/config/common_base
> @@ -514,6 +514,12 @@ CONFIG_RTE_LIBRTE_VHOST_NUMA=n
>  CONFIG_RTE_LIBRTE_VHOST_DEBUG=n
> 
>  #
> +# Compile vhost PMD
> +# To compile, CONFIG_RTE_LIBRTE_VHOST should be enabled.
> +#
> +CONFIG_RTE_LIBRTE_PMD_VHOST=n
> +
> +#
>  #Compile Xen domain0 support
>  #
>  CONFIG_RTE_LIBRTE_XEN_DOM0=n
> diff --git a/config/common_linuxapp b/config/common_linuxapp
> index ffbe260..7e698e2 100644
> --- a/config/common_linuxapp
> +++ b/config/common_linuxapp
> @@ -40,5 +40,6 @@ CONFIG_RTE_EAL_VFIO=y
>  CONFIG_RTE_KNI_KMOD=y
>  CONFIG_RTE_LIBRTE_KNI=y
>  CONFIG_RTE_LIBRTE_VHOST=y
> +CONFIG_RTE_LIBRTE_PMD_VHOST=y
>  CONFIG_RTE_LIBRTE_PMD_AF_PACKET=y
>  CONFIG_RTE_LIBRTE_POWER=y
> diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
> index 0b353a8..d53b0c7 100644
> --- a/doc/guides/nics/index.rst
> +++ b/doc/guides/nics/index.rst
> @@ -49,6 +49,7 @@ Network Interface Controller Drivers
>  nfp
>  szedata2
>  virtio
> +vhost
>  vmxnet3
>  pcap_ring
> 
> diff --git a/doc/guides/nics/overview.rst b/doc/guides/nics/overview.rst
> index 2d4f014..40ca5ec 100644
> --- a/doc/guides/nics/overview.rst
> +++ b/doc/guides/nics/overview.rst
> @@ -74,20 +74,21 @@ Most of these differences are summarized below.
> 
>  .. table:: Features availability in networking drivers
> 
> [wide feature-matrix diff garbled by the archive: the hunk updates the
> driver/feature grid in doc/guides/nics/overview.rst to add a column for
> the vhost PMD]

[dpdk-dev] [PATCH v2 3/3] vhost: fix vq realloc at numa_realloc

2016-03-07 Thread Loftus, Ciara
> 
> vq is allocated in pairs, hence we should do pair reallocation
> at numa_realloc() as well; otherwise an error like the following
> occurs during NUMA reallocation:
> 
> VHOST_CONFIG: reallocate vq from 0 to 1 node
> PANIC in rte_free():
> Fatal error: Invalid memory
> 
> The reason we don't catch it is that numa_realloc() takes no
> effect when RTE_LIBRTE_VHOST_NUMA is not enabled, which is the
> default case.
> 
> Fixes: e049ca6d10e0 ("vhost-user: prepare multiple queue setup")
> 
> Signed-off-by: Yuanhan Liu 
> Acked-by: Huawei Xie 
> ---
>  lib/librte_vhost/virtio-net.c | 13 +++--
>  1 file changed, 11 insertions(+), 2 deletions(-)
> 
> diff --git a/lib/librte_vhost/virtio-net.c b/lib/librte_vhost/virtio-net.c
> index 1566c93..7469312 100644
> --- a/lib/librte_vhost/virtio-net.c
> +++ b/lib/librte_vhost/virtio-net.c
> @@ -445,6 +445,13 @@ numa_realloc(struct virtio_net *dev, int index)
>   struct vhost_virtqueue *old_vq, *vq;
>   int ret;
> 
> + /*
> +  * vq is allocated on pairs, we should try to do realloc
> +  * on first queue of one queue pair only.
> +  */
> + if (index % VIRTIO_QNUM != 0)
> + return dev;
> +
>   old_dev = dev;
>   vq = old_vq = dev->virtqueue[index];
> 
> @@ -461,11 +468,12 @@ numa_realloc(struct virtio_net *dev, int index)
>   if (oldnode != newnode) {
>   RTE_LOG(INFO, VHOST_CONFIG,
>   "reallocate vq from %d to %d node\n", oldnode,
> newnode);
> - vq = rte_malloc_socket(NULL, sizeof(*vq), 0, newnode);
> + vq = rte_malloc_socket(NULL, sizeof(*vq) * VIRTIO_QNUM,
> 0,
> +newnode);
>   if (!vq)
>   return dev;
> 
> - memcpy(vq, old_vq, sizeof(*vq));
> + memcpy(vq, old_vq, sizeof(*vq) * VIRTIO_QNUM);
>   rte_free(old_vq);
>   }
> 
> @@ -491,6 +499,7 @@ numa_realloc(struct virtio_net *dev, int index)
> 
>  out:
>   dev->virtqueue[index] = vq;
> + dev->virtqueue[index + 1] = vq + 1;
>   vhost_devices[dev->device_fh] = dev;
> 
>   return dev;
> --
> 1.9.0

I encountered the " PANIC in rte_free():" error when using 
RTE_LIBRTE_VHOST_NUMA too, and applying this series resolved the issue. Thanks 
for the patches.

Tested-by: Ciara Loftus 

Thanks,
Ciara


[dpdk-dev] [ovs-dev] OVS with DPDK Meetup notes

2015-12-02 Thread Loftus, Ciara
> >
> > On Thu, Nov 26, 2015 at 05:56:08PM +, Traynor, Kevin wrote:
> > > Hi All,
> > >
> > > Just wanted to post some summary notes on the recent OVS with DPDK
> Meetup
> > we
> > > had after the OVS conference. Thanks to everyone for the often lively
> > discussion.
> > > I've collated and condensed Maryam's notes (Thank you Maryam) with
> my own.
> > > Corrections and additions are welcome.
> >
> > Thanks for having organized the event and for the good notes.
> >
> >
> > > Usability
> > > ==
> > > * Single binary for OVS/OVS with DPDK and static vs. dynamic linking
> > >   - Discussion around deployment and what the best model is.
> > >   - Flavio has posted a mail on this
> > >http://openvswitch.org/pipermail/dev/2015-November/062599.html
> >
> > Let us know if you find a performance difference between static vs
> > dynamic linking.  We might be able to accommodate both options in
> > the same spec, but it seems we should go with shared linking only
> > to keep it simple for now.
> >
> 
> Yes, will do. I seem to recall from when we looked at this on a previous
> project that it was a few hundred kpps, but it was a long time ago, so I'm
> not certain how many.
> 
> >
> > > Features
> > > 
> > > * Multiqueue vhost-user
> > >   - Looks really promising - will help us scale out performance to the VM.
> >
> > I see that the vhost PMD is moving along and, if it gets accepted, it
> > would be a nice cleanup for OVS.  Do you know if there is someone
> > working on this already?
> 
> I agree, it should simplify the code a lot. Ciara reviewed it and did a
> quick integration to see if the api would work. The patch was churning quite
> a bit, so we decided to hold off doing any more work with it for the time
> being.

Correct, the vHost PMD really cleans things up and removes the need for a lot 
of code in netdev-dpdk. The netdev_class for phy ports and vhost-user ports 
could be pretty much the same, except for the construct functions.

> 
> >
> > > * dpdkr/ivshmem
> > >   - Still useful. Check/Update documentation to ensure limitations are
> > clear.
> >
> > Yeah, same thing here.
> >
> > Thanks,
> > fbl



[dpdk-dev] [PATCH 2/3] vhost: Add callback and private data for vhost PMD

2015-10-30 Thread Loftus, Ciara
> 
> These variables are needed to be able to manage one of virtio devices
> using both vhost library APIs and vhost PMD.
> For example, if vhost PMD uses current callback handler and private data
> provided by vhost library, A DPDK application that links vhost library
> cannot use some of vhost library APIs. To avoid it, callback and private
> data for vhost PMD are needed.
> 
> Signed-off-by: Tetsuya Mukawa 
> ---
>  lib/librte_vhost/rte_vhost_version.map|  6 +++
>  lib/librte_vhost/rte_virtio_net.h |  3 ++
>  lib/librte_vhost/vhost_user/virtio-net-user.c | 13 +++
>  lib/librte_vhost/virtio-net.c | 56 
> +--
>  lib/librte_vhost/virtio-net.h |  4 +-
>  5 files changed, 70 insertions(+), 12 deletions(-)
> 
> diff --git a/lib/librte_vhost/rte_vhost_version.map
> b/lib/librte_vhost/rte_vhost_version.map
> index 3d8709e..00a9ce5 100644
> --- a/lib/librte_vhost/rte_vhost_version.map
> +++ b/lib/librte_vhost/rte_vhost_version.map
> @@ -20,3 +20,9 @@ DPDK_2.1 {
>   rte_vhost_driver_unregister;
> 
>  } DPDK_2.0;
> +
> +DPDK_2.2 {
> + global:
> +
> + rte_vhost_driver_pmd_callback_register;
> +} DPDK_2.1;
> diff --git a/lib/librte_vhost/rte_virtio_net.h
> b/lib/librte_vhost/rte_virtio_net.h
> index 426a70d..08e77af 100644
> --- a/lib/librte_vhost/rte_virtio_net.h
> +++ b/lib/librte_vhost/rte_virtio_net.h
> @@ -106,6 +106,7 @@ struct virtio_net {
>   charifname[IF_NAME_SZ]; /**< Name of the tap
> device or socket path. */
>   uint32_tvirt_qp_nb; /**< number of queue pair
> we have allocated */
>   void*priv;  /**< private context */
> + void*pmd_priv;  /**< private context for
> vhost PMD */
>   struct vhost_virtqueue
>   *virtqueue[VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX];/**< Contains
> all virtqueue information. */
>  } __rte_cache_aligned;
> 
> @@ -202,6 +203,8 @@ int rte_vhost_driver_unregister(const char
> *dev_name);
> 
>  /* Register callbacks. */
>  int rte_vhost_driver_callback_register(struct virtio_net_device_ops const *
> const);
> +/* Register callbacks for vhost PMD (Only for internal). */
> +int rte_vhost_driver_pmd_callback_register(struct virtio_net_device_ops
> const * const);
>  /* Start vhost driver session blocking loop. */
>  int rte_vhost_driver_session_start(void);
> 
> diff --git a/lib/librte_vhost/vhost_user/virtio-net-user.c
> b/lib/librte_vhost/vhost_user/virtio-net-user.c
> index 3e8dfea..dad083b 100644
> --- a/lib/librte_vhost/vhost_user/virtio-net-user.c
> +++ b/lib/librte_vhost/vhost_user/virtio-net-user.c
> @@ -111,7 +111,7 @@ user_set_mem_table(struct vhost_device_ctx ctx,
> struct VhostUserMsg *pmsg)
> 
>   /* Remove from the data plane. */
>   if (dev->flags & VIRTIO_DEV_RUNNING)
> - notify_ops->destroy_device(dev);
> + notify_destroy_device(dev);
> 
>   if (dev->mem) {
>   free_mem_region(dev);
> @@ -272,7 +272,7 @@ user_set_vring_kick(struct vhost_device_ctx ctx,
> struct VhostUserMsg *pmsg)
> 
>   if (virtio_is_ready(dev) &&
>   !(dev->flags & VIRTIO_DEV_RUNNING))
> - notify_ops->new_device(dev);
> + notify_new_device(dev);
>  }
> 
>  /*
> @@ -307,7 +307,7 @@ user_get_vring_base(struct vhost_device_ctx ctx,
>   if ((dev->flags & VIRTIO_DEV_RUNNING) &&
>   (dev->virtqueue[base_idx + VIRTIO_RXQ]->kickfd ==
> -1) &&
>   (dev->virtqueue[base_idx + VIRTIO_TXQ]->kickfd ==
> -1))
> - notify_ops->destroy_device(dev);
> + notify_destroy_device(dev);
> 
>   return 0;
>  }
> @@ -328,10 +328,7 @@ user_set_vring_enable(struct vhost_device_ctx ctx,
>   "set queue enable: %d to qp idx: %d\n",
>   enable, state->index);
> 
> - if (notify_ops->vring_state_changed) {
> - notify_ops->vring_state_changed(dev, base_idx /
> VIRTIO_QNUM,
> - enable);
> - }
> + notify_vring_state_changed(dev, base_idx / VIRTIO_QNUM,
> enable);
> 
>   dev->virtqueue[base_idx + VIRTIO_RXQ]->enabled = enable;
>   dev->virtqueue[base_idx + VIRTIO_TXQ]->enabled = enable;
> @@ -345,7 +342,7 @@ user_destroy_device(struct vhost_device_ctx ctx)
>   struct virtio_net *dev = get_device(ctx);
> 
>   if (dev && (dev->flags & VIRTIO_DEV_RUNNING))
> - notify_ops->destroy_device(dev);
> + notify_destroy_device(dev);
> 
>   if (dev && dev->mem) {
>   free_mem_region(dev);
> diff --git a/lib/librte_vhost/virtio-net.c b/lib/librte_vhost/virtio-net.c
> index ee2e84d..de5d8ff 100644
> --- a/lib/librte_vhost/virtio-net.c
> +++ b/lib/librte_vhost/virtio-net.c
> @@ -65,6 +65,8 @@ struct virtio_net_config_ll {
> 
>  /* device ops to add/remove device to/from data core. */
>  struct virtio_net_device_ops const 

[dpdk-dev] [RFC PATCH v2] vhost: Add VHOST PMD

2015-10-20 Thread Loftus, Ciara
> 
> On 2015/09/24 2:47, Loftus, Ciara wrote:
> >> The patch introduces a new PMD. This PMD is implemented as thin
> wrapper
> >> of librte_vhost. It means librte_vhost is also needed to compile the PMD.
> >> The PMD can have 'iface' parameter like below to specify a path to
> connect
> >> to a virtio-net device.
> >>
> >> $ ./testpmd -c f -n 4 --vdev 'eth_vhost0,iface=/tmp/sock0' -- -i
> >>
> >> To connect above testpmd, here is qemu command example.
> >>
> >> $ qemu-system-x86_64 \
> >> 
> >> -chardev socket,id=chr0,path=/tmp/sock0 \
> >> -netdev vhost-user,id=net0,chardev=chr0,vhostforce \
> >> -device virtio-net-pci,netdev=net0
> >>
> >> Signed-off-by: Tetsuya Mukawa 
> >> ---
> >>  config/common_linuxapp  |   6 +
> >>  drivers/net/Makefile|   4 +
> >>  drivers/net/vhost/Makefile  |  61 +++
> >>  drivers/net/vhost/rte_eth_vhost.c   | 640
> >> 
> >>  drivers/net/vhost/rte_pmd_vhost_version.map |   4 +
> >>  mk/rte.app.mk   |   8 +-
> >>  6 files changed, 722 insertions(+), 1 deletion(-)
> >>  create mode 100644 drivers/net/vhost/Makefile
> >>  create mode 100644 drivers/net/vhost/rte_eth_vhost.c
> >>  create mode 100644 drivers/net/vhost/rte_pmd_vhost_version.map
> >>
> >> +struct pmd_internal {
> >> +  TAILQ_ENTRY(pmd_internal) next;
> >> +  char *dev_name;
> >> +  char *iface_name;
> >> +  unsigned nb_rx_queues;
> >> +  unsigned nb_tx_queues;
> >> +  rte_atomic16_t xfer;
> > Is this flag just used to indicate the state of the virtio_net device?
> > Ie. if =0 then virtio_dev=NULL and if =1 then virtio_net !=NULL & the
> VIRTIO_DEV_RUNNING flag is set?
> 
> Hi Ciara,
> 
> I am sorry for the very late reply.
> 
> Yes, it is. Probably we can optimize it more.
> I will change this implementation a bit in the next patches.
> Could you please check it?
Of course, thanks.

> 
> >> +
> >> +static uint16_t
> >> +eth_vhost_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
> >> +{
> >> +  struct vhost_queue *r = q;
> >> +  uint16_t i, nb_tx = 0;
> >> +
> >> +  if (unlikely(r->internal == NULL))
> >> +  return 0;
> >> +
> >> +  if (unlikely(rte_atomic16_read(&r->internal->xfer) == 0))
> >> +  return 0;
> >> +
> >> +  rte_atomic16_set(&r->tx_executing, 1);
> >> +
> >> +  if (unlikely(rte_atomic16_read(&r->internal->xfer) == 0))
> >> +  goto out;
> >> +
> >> +  nb_tx = (uint16_t)rte_vhost_enqueue_burst(r->device,
> >> +  VIRTIO_RXQ, bufs, nb_bufs);
> >> +
> >> +  rte_atomic64_add(&(r->tx_pkts), nb_tx);
> >> +  rte_atomic64_add(&(r->err_pkts), nb_bufs - nb_tx);
> >> +
> >> +  for (i = 0; likely(i < nb_tx); i++)
> >> +  rte_pktmbuf_free(bufs[i]);
> > We may not always want to free these mbufs. For example, if a call is made
> to rte_eth_tx_burst with buffers from another (non DPDK) source, they may
> not be ours to free.
> 
> Sorry, I am not sure what type of buffer you want to transfer.
> 
> This is a PMD that wraps librte_vhost.
> And I guess other PMDs cannot handle buffers from another, non-DPDK
> source either.
> Should we take care of such buffers?
> 
> I have also checked the af_packet PMD.
> It seems the tx function of the af_packet PMD just frees the mbufs.

For example, if using the PMD with an application that receives buffers from
another source, e.g. a virtual switch receiving packets from an interface
using the kernel driver.
I see that af_packet also frees the mbufs. I've checked the ixgbe and ring
PMDs though, and they don't seem to free the buffers, although I may have
missed something; the code for these is rather large and I am unfamiliar with
most of it. If I am correct though, I wonder whether this behaviour should
vary from PMD to PMD?
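
For reference, a sketch of the ethdev convention at issue (port/queue ids
illustrative): once rte_eth_tx_burst() accepts an mbuf, the PMD owns it and
is responsible for eventually freeing it; the caller frees only the
rejected tail.

#include <rte_ethdev.h>
#include <rte_mbuf.h>

static void
send_burst(uint8_t port_id, struct rte_mbuf **bufs, uint16_t nb_bufs)
{
	uint16_t i;
	uint16_t sent = rte_eth_tx_burst(port_id, 0, bufs, nb_bufs);

	/* Accepted mbufs now belong to the PMD (a HW PMD frees them on
	 * descriptor completion; this RFC frees them immediately). The
	 * caller cleans up only what was not accepted. */
	for (i = sent; i < nb_bufs; i++)
		rte_pktmbuf_free(bufs[i]);
}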
> 
> >> +
> >> +
> >> +  eth_dev = rte_eth_dev_allocated(internal->dev_name);
> >> +  if (eth_dev == NULL) {
> >> +  RTE_LOG(INFO, PMD, "failuer to find ethdev\n");
> > Typo: Failure. Same for the destroy_device function
> 
> Thanks, I will fix it in next patches.
> 
> >> +  return -1;
> >> +  }
> >> +
> >> +  internal = eth_dev->data->dev_private;
> >> +
> >> +  for 

[dpdk-dev] [RFC PATCH v2] vhost: Add VHOST PMD

2015-10-19 Thread Loftus, Ciara
> On 2015/10/16 21:52, Bruce Richardson wrote:
> > On Mon, Aug 31, 2015 at 12:55:26PM +0900, Tetsuya Mukawa wrote:
> >> The patch introduces a new PMD. This PMD is implemented as thin
> wrapper
> >> of librte_vhost. It means librte_vhost is also needed to compile the PMD.
> >> The PMD can have 'iface' parameter like below to specify a path to
> connect
> >> to a virtio-net device.
> >>
> >> $ ./testpmd -c f -n 4 --vdev 'eth_vhost0,iface=/tmp/sock0' -- -i
> >>
> >> To connect above testpmd, here is qemu command example.
> >>
> >> $ qemu-system-x86_64 \
> >> 
> >> -chardev socket,id=chr0,path=/tmp/sock0 \
> >> -netdev vhost-user,id=net0,chardev=chr0,vhostforce \
> >> -device virtio-net-pci,netdev=net0
> >>
> >> Signed-off-by: Tetsuya Mukawa 
> > With this PMD in place, is there any need to keep the existing vhost
> > library around as a separate entity? Can the existing library be
> > subsumed/converted into a standard PMD?
> >
> > /Bruce
> 
> Hi Bruce,
> 
> I am concerned about whether the PMD has all the features of
> librte_vhost, because librte_vhost provides more features and freedom
> than the ethdev API provides.
> In some cases, a user needs to choose a limited implementation without
> librte_vhost.
> I am going to eliminate such cases while implementing the PMD.
> But I don't have a strong belief that we can remove librte_vhost now.
> 
> So how about keeping the current separation in the next DPDK?
> I guess people will try to replace librte_vhost with the vhost PMD,
> because apparently using ethdev APIs will be useful in many cases.
> And we will get feedback like "the vhost PMD needs to support this
> usage".
> (Or we will not get feedback, but that's also OK.)
> Then, we will be able to merge librte_vhost and the vhost PMD.

I agree with the above. One of the concerns I had when reviewing the patch
was that the PMD removes some freedom that is available with the library,
e.g. the ability to implement the new_device and destroy_device callbacks. If
using the PMD you are constrained to the implementations of these in the PMD
driver, but if using librte_vhost you can implement your own with whatever
functionality you like - a good example of this can be seen in the vhost
sample app.
On the other hand, the PMD is useful in that it removes a lot of complexity
for the user and may work for some more general use cases. So I would be in
favour of having both options available too.

Ciara

> 
> Thanks,
> Tetsuya


[dpdk-dev] [RFC PATCH v2] vhost: Add VHOST PMD

2015-09-23 Thread Loftus, Ciara
> The patch introduces a new PMD. This PMD is implemented as a thin
> wrapper of librte_vhost. It means librte_vhost is also needed to
> compile the PMD.
> The PMD can have an 'iface' parameter like below to specify a path to
> connect to a virtio-net device.
> 
> $ ./testpmd -c f -n 4 --vdev 'eth_vhost0,iface=/tmp/sock0' -- -i
> 
> To connect to the above testpmd, here is a qemu command example.
> 
> $ qemu-system-x86_64 \
> 
> -chardev socket,id=chr0,path=/tmp/sock0 \
> -netdev vhost-user,id=net0,chardev=chr0,vhostforce \
> -device virtio-net-pci,netdev=net0
> 
> Signed-off-by: Tetsuya Mukawa 
> ---
>  config/common_linuxapp  |   6 +
>  drivers/net/Makefile|   4 +
>  drivers/net/vhost/Makefile  |  61 +++
>  drivers/net/vhost/rte_eth_vhost.c   | 640
> 
>  drivers/net/vhost/rte_pmd_vhost_version.map |   4 +
>  mk/rte.app.mk   |   8 +-
>  6 files changed, 722 insertions(+), 1 deletion(-)
>  create mode 100644 drivers/net/vhost/Makefile
>  create mode 100644 drivers/net/vhost/rte_eth_vhost.c
>  create mode 100644 drivers/net/vhost/rte_pmd_vhost_version.map
> 
> diff --git a/config/common_linuxapp b/config/common_linuxapp
> index 0de43d5..7310240 100644
> --- a/config/common_linuxapp
> +++ b/config/common_linuxapp
> @@ -446,6 +446,12 @@ CONFIG_RTE_LIBRTE_VHOST_NUMA=n
>  CONFIG_RTE_LIBRTE_VHOST_DEBUG=n
> 
>  #
> +# Compile vhost PMD
> +# To compile, CONFIG_RTE_LIBRTE_VHOST should be enabled.
> +#
> +CONFIG_RTE_LIBRTE_PMD_VHOST=y
> +
> +#
>  #Compile Xen domain0 support
>  #
>  CONFIG_RTE_LIBRTE_XEN_DOM0=n
> diff --git a/drivers/net/Makefile b/drivers/net/Makefile
> index 5ebf963..e46a38e 100644
> --- a/drivers/net/Makefile
> +++ b/drivers/net/Makefile
> @@ -49,5 +49,9 @@ DIRS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += virtio
>  DIRS-$(CONFIG_RTE_LIBRTE_VMXNET3_PMD) += vmxnet3
>  DIRS-$(CONFIG_RTE_LIBRTE_PMD_XENVIRT) += xenvirt
> 
> +ifeq ($(CONFIG_RTE_LIBRTE_VHOST),y)
> +DIRS-$(CONFIG_RTE_LIBRTE_PMD_VHOST) += vhost
> +endif # $(CONFIG_RTE_LIBRTE_VHOST)
> +
>  include $(RTE_SDK)/mk/rte.sharelib.mk
>  include $(RTE_SDK)/mk/rte.subdir.mk
> diff --git a/drivers/net/vhost/Makefile b/drivers/net/vhost/Makefile
> new file mode 100644
> index 000..018edde
> --- /dev/null
> +++ b/drivers/net/vhost/Makefile
> @@ -0,0 +1,61 @@
> +#   BSD LICENSE
> +#
> +#   Copyright (c) 2010-2015 Intel Corporation.
> +#   All rights reserved.
> +#
> +#   Redistribution and use in source and binary forms, with or without
> +#   modification, are permitted provided that the following conditions
> +#   are met:
> +#
> +# * Redistributions of source code must retain the above copyright
> +#   notice, this list of conditions and the following disclaimer.
> +# * Redistributions in binary form must reproduce the above copyright
> +#   notice, this list of conditions and the following disclaimer in
> +#   the documentation and/or other materials provided with the
> +#   distribution.
> +# * Neither the name of Intel corporation nor the names of its
> +#   contributors may be used to endorse or promote products derived
> +#   from this software without specific prior written permission.
> +#
> +#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
> CONTRIBUTORS
> +#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT
> NOT
> +#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
> FITNESS FOR
> +#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
> COPYRIGHT
> +#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
> INCIDENTAL,
> +#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
> NOT
> +#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
> OF USE,
> +#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
> AND ON ANY
> +#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
> TORT
> +#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
> THE USE
> +#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
> DAMAGE.
> +
> +include $(RTE_SDK)/mk/rte.vars.mk
> +
> +#
> +# library name
> +#
> +LIB = librte_pmd_vhost.a
> +
> +CFLAGS += -O3
> +CFLAGS += $(WERROR_FLAGS)
> +
> +EXPORT_MAP := rte_pmd_vhost_version.map
> +
> +LIBABIVER := 1
> +
> +#
> +# all source are stored in SRCS-y
> +#
> +SRCS-$(CONFIG_RTE_LIBRTE_PMD_VHOST) += rte_eth_vhost.c
> +
> +#
> +# Export include files
> +#
> +SYMLINK-y-include +=
> +
> +# this lib depends upon:
> +DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_VHOST) += lib/librte_mbuf
> +DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_VHOST) += lib/librte_ether
> +DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_VHOST) += lib/librte_kvargs
> +
> +include $(RTE_SDK)/mk/rte.lib.mk
> diff --git a/drivers/net/vhost/rte_eth_vhost.c
> b/drivers/net/vhost/rte_eth_vhost.c
> new file mode 100644
> index 000..679e893
> --- /dev/null
> +++ b/drivers/net/vhost/rte_eth_vhost.c
> @@ -0,0 +1,640 @@

[dpdk-dev] [RFC PATCH] vhost: Add VHOST PMD

2015-09-17 Thread Loftus, Ciara
> 
> On 2015/09/16 1:27, Loftus, Ciara wrote:
> >> +
> >> +static int
> >> +rte_pmd_vhost_devinit(const char *name, const char *params)
> >> +{
> >> +  struct rte_kvargs *kvlist = NULL;
> >> +  int ret = 0;
> >> +  int index;
> >> +  char *iface_name;
> >> +
> >> +  RTE_LOG(INFO, PMD, "Initializing pmd_vhost for %s\n", name);
> >> +
> >> +  kvlist = rte_kvargs_parse(params, valid_arguments);
> >> +  if (kvlist == NULL)
> >> +  return -1;
> >> +
> >> +  if (strlen(name) < strlen("eth_vhost"))
> >> +  return -1;
> >> +
> >> +  index = strtol(name + strlen("eth_vhost"), NULL, 0);
> >> +  if (errno == ERANGE)
> >> +  return -1;
> >> +
> >> +  if (rte_kvargs_count(kvlist, ETH_VHOST_IFACE_ARG) == 1) {
> >> +  ret = rte_kvargs_process(kvlist, ETH_VHOST_IFACE_ARG,
> >> +  &open_iface, &iface_name);
> >> +  if (ret < 0)
> >> +  goto out_free;
> >> +
> >> +  eth_dev_vhost_create(name, index, iface_name,
> >> rte_socket_id());
> >> +  }
> >> +
> >> +out_free:
> >> +  rte_kvargs_free(kvlist);
> >> +  return ret;
> >> +}
> >> +
> > This suggests to me that vHost ports will only be available/created
> > if one supplies the " --vdev 'eth_vhost0,iface=...' " options when
> > launching the application. There seems to be no option available to
> > add vHost ports on-the-fly after the init process. One would have to
> > restart the application with different parameters in order to modify
> > the vHost port configuration. Is this correct?
> 
> Hi Ciara,
> 
> Thanks for checking, and for the description.
> We can attach and detach a port created by the vhost PMD using the Port
> Hotplug functionality.
> 
> example)
> ./testpmd -c f -n 4 -- -i
> testpmd> port attach eth_vhost0,iface=/tmp/aaa
> 
> Does this fit your case?
> 
> Thanks,
> Tetsuya

Hi,

Thanks for your reply. I wasn't aware of the hotplug functionality but this 
should work for this use case. Thanks!
I will continue to review the remainder of the patch and reply if I have any 
further feedback.

Ciara
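
For reference, the programmatic equivalent of the testpmd hotplug example
above, as a sketch (assuming the rte_eth_dev_attach() API available at the
time; the devargs string is illustrative):

#include <rte_ethdev.h>

/* Attach a vhost PMD port at runtime, like
 * "port attach eth_vhost0,iface=/tmp/aaa" in testpmd. */
static int
attach_vhost_port(uint8_t *port_id)
{
	return rte_eth_dev_attach("eth_vhost0,iface=/tmp/aaa", port_id);
}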

> 
> > If so, this pmd implementation will not work with Open vSwitch. OVS
> > relies on the ability to call the rte_vhost_driver_register function
> > at any point in the lifetime of the application, in order to create
> > new vHost ports and subsequently register/create the sockets. Being
> > bound to the selection chosen on the command line when launching the
> > application is not suitable for OVS.
> >
> > Thanks,
> > Ciara



[dpdk-dev] [RFC PATCH] vhost: Add VHOST PMD

2015-09-15 Thread Loftus, Ciara
> +
> +static int
> +rte_pmd_vhost_devinit(const char *name, const char *params)
> +{
> + struct rte_kvargs *kvlist = NULL;
> + int ret = 0;
> + int index;
> + char *iface_name;
> +
> + RTE_LOG(INFO, PMD, "Initializing pmd_vhost for %s\n", name);
> +
> + kvlist = rte_kvargs_parse(params, valid_arguments);
> + if (kvlist == NULL)
> + return -1;
> +
> + if (strlen(name) < strlen("eth_vhost"))
> + return -1;
> +
> + index = strtol(name + strlen("eth_vhost"), NULL, 0);
> + if (errno == ERANGE)
> + return -1;
> +
> + if (rte_kvargs_count(kvlist, ETH_VHOST_IFACE_ARG) == 1) {
> + ret = rte_kvargs_process(kvlist, ETH_VHOST_IFACE_ARG,
> + &open_iface, &iface_name);
> + if (ret < 0)
> + goto out_free;
> +
> + eth_dev_vhost_create(name, index, iface_name,
> rte_socket_id());
> + }
> +
> +out_free:
> + rte_kvargs_free(kvlist);
> + return ret;
> +}
> +

This suggests to me that vHost ports will only be available/created if one 
supplies the " --vdev 'eth_vhost0,iface=...' " options when launching the 
application. There seems to be no option available to add vHost ports 
on-the-fly after the init process. One would have to restart the application 
with different parameters in order to modify the vHost port configuration. Is 
this correct?

If so, this pmd implementation will not work with Open vSwitch. OVS relies on 
the ability to call the rte_vhost_driver_register function at any point in the 
lifetime of the application, in order to create new vHost ports and 
subsequently register/create the sockets. Being bound to the selection chosen 
on the command line when launching the application is not suitable for OVS.

Thanks,
Ciara


[dpdk-dev] [PATCH v2] vhost: provide vhost API to unregister vhost unix domain socket

2015-06-05 Thread Loftus, Ciara


> -Original Message-
> From: Xie, Huawei
> Sent: Friday, June 05, 2015 4:26 AM
> To: dev at dpdk.org
> Cc: Loftus, Ciara; Xie, Huawei; Sun, Peng A
> Subject: [PATCH v2] vhost: provide vhost API to unregister vhost unix domain
> socket
> 
> rte_vhost_driver_unregister will remove the listenfd from the event
> list, and then close it.
> 
> Signed-off-by: Huawei Xie 
> Signed-off-by: Peng Sun 
> ---
>  lib/librte_vhost/rte_virtio_net.h|  3 ++
>  lib/librte_vhost/vhost_cuse/vhost-net-cdev.c |  9 
>  lib/librte_vhost/vhost_user/vhost-net-user.c | 68
> +++-
>  lib/librte_vhost/vhost_user/vhost-net-user.h |  2 +-
>  4 files changed, 69 insertions(+), 13 deletions(-)
> 
> diff --git a/lib/librte_vhost/rte_virtio_net.h
> b/lib/librte_vhost/rte_virtio_net.h
> index 5d38185..5630fbc 100644
> --- a/lib/librte_vhost/rte_virtio_net.h
> +++ b/lib/librte_vhost/rte_virtio_net.h
> @@ -188,6 +188,9 @@ int rte_vhost_enable_guest_notification(struct
> virtio_net *dev, uint16_t queue_i
>  /* Register vhost driver. dev_name could be different for multiple instance
> support. */
>  int rte_vhost_driver_register(const char *dev_name);
> 
> +/* Unregister vhost driver. This is only meaningful to vhost user. */
> +int rte_vhost_driver_unregister(const char *dev_name);
> +
>  /* Register callbacks. */
>  int rte_vhost_driver_callback_register(struct virtio_net_device_ops const *
> const);
>  /* Start vhost driver session blocking loop. */
> diff --git a/lib/librte_vhost/vhost_cuse/vhost-net-cdev.c
> b/lib/librte_vhost/vhost_cuse/vhost-net-cdev.c
> index 6b68abf..1ae7c49 100644
> --- a/lib/librte_vhost/vhost_cuse/vhost-net-cdev.c
> +++ b/lib/librte_vhost/vhost_cuse/vhost-net-cdev.c
> @@ -405,6 +405,15 @@ rte_vhost_driver_register(const char *dev_name)
>  }
> 
>  /**
> + * An empty function for unregister
> + */
> +int
> +rte_vhost_driver_unregister(const char *dev_name __rte_unused)
> +{
> + return 0;
> +}
> +
> +/**
>   * The CUSE session is launched allowing the application to receive open,
>   * release and ioctl calls.
>   */
> diff --git a/lib/librte_vhost/vhost_user/vhost-net-user.c
> b/lib/librte_vhost/vhost_user/vhost-net-user.c
> index 31f1215..87a4711 100644
> --- a/lib/librte_vhost/vhost_user/vhost-net-user.c
> +++ b/lib/librte_vhost/vhost_user/vhost-net-user.c
> @@ -66,6 +66,8 @@ struct connfd_ctx {
>  struct _vhost_server {
>   struct vhost_server *server[MAX_VHOST_SERVER];
>   struct fdset fdset;
> + int vserver_cnt;
> + pthread_mutex_t server_mutex;
>  };
> 
>  static struct _vhost_server g_vhost_server = {
> @@ -74,10 +76,10 @@ static struct _vhost_server g_vhost_server = {
>   .fd_mutex = PTHREAD_MUTEX_INITIALIZER,
>   .num = 0
>   },
> + .vserver_cnt = 0,
> + .server_mutex = PTHREAD_MUTEX_INITIALIZER,
>  };
> 
> -static int vserver_idx;
> -
>  static const char *vhost_message_str[VHOST_USER_MAX] = {
>   [VHOST_USER_NONE] = "VHOST_USER_NONE",
>   [VHOST_USER_GET_FEATURES] = "VHOST_USER_GET_FEATURES",
> @@ -427,7 +429,6 @@ vserver_message_handler(int connfd, void *dat, int
> *remove)
>   }
>  }
> 
> -
>  /**
>   * Creates and initialise the vhost server.
>   */
> @@ -436,34 +437,77 @@ rte_vhost_driver_register(const char *path)
>  {
>   struct vhost_server *vserver;
> 
> - if (vserver_idx == 0)
> + pthread_mutex_lock(&g_vhost_server.server_mutex);
> + if (ops == NULL)
>   ops = get_virtio_net_callbacks();
> - if (vserver_idx == MAX_VHOST_SERVER)
> +
> + if (g_vhost_server.vserver_cnt == MAX_VHOST_SERVER) {
> + RTE_LOG(ERR, VHOST_CONFIG,
> + "error: the number of servers reaches maximum\n");
> + pthread_mutex_unlock(&g_vhost_server.server_mutex);
>   return -1;
> + }
> 
>   vserver = calloc(sizeof(struct vhost_server), 1);
> - if (vserver == NULL)
> + if (vserver == NULL) {
> + pthread_mutex_unlock(&g_vhost_server.server_mutex);
>   return -1;
> -
> - unlink(path);
> + }
> 
>   vserver->listenfd = uds_socket(path);
>   if (vserver->listenfd < 0) {
>   free(vserver);
> + pthread_mutex_unlock(&g_vhost_server.server_mutex);
>   return -1;
>   }
> - vserver->path = path;
> +
> + vserver->path = strdup(path);
> 
>   fdset_add(&g_vhost_server.fdset, vserver->listenfd,
> - vserver_new_vq_conn, NULL,
> - vserver);
> + vserver_new
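
For context, a minimal sketch of the lifecycle this new API enables (socket
path illustrative; rte_vhost_driver_register() took only a path at the time):

#include <rte_virtio_net.h>

/* Create a vhost-user port at runtime, use it, then remove it: unregister
 * removes the listen fd from the event list and closes it. */
static void
port_lifecycle(void)
{
	rte_vhost_driver_register("/tmp/vhost0");
	/* ... port carries traffic ... */
	rte_vhost_driver_unregister("/tmp/vhost0");
}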

[dpdk-dev] [PATCH] vhost: provide vhost API to unregister vhost unix domain socket

2015-06-03 Thread Loftus, Ciara
> -Original Message-
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Huawei Xie
> Sent: Tuesday, June 02, 2015 2:50 AM
> To: dev at dpdk.org
> Cc: Sun, Peng A
> Subject: [dpdk-dev] [PATCH] vhost: provide vhost API to unregister vhost
> unix domain socket
> 
> rte_vhost_driver_unregister will remove the listenfd from the event
> list, and then close it.
> 
> Signed-off-by: Huawei Xie 
> Signed-off-by: Peng Sun 
> ---
>  lib/librte_vhost/rte_virtio_net.h|  3 ++
>  lib/librte_vhost/vhost_cuse/vhost-net-cdev.c |  9 
>  lib/librte_vhost/vhost_user/vhost-net-user.c | 70
> +++-
>  lib/librte_vhost/vhost_user/vhost-net-user.h |  2 +-
>  4 files changed, 71 insertions(+), 13 deletions(-)
> 
> diff --git a/lib/librte_vhost/rte_virtio_net.h
> b/lib/librte_vhost/rte_virtio_net.h
> index 5d38185..5630fbc 100644
> --- a/lib/librte_vhost/rte_virtio_net.h
> +++ b/lib/librte_vhost/rte_virtio_net.h
> @@ -188,6 +188,9 @@ int rte_vhost_enable_guest_notification(struct
> virtio_net *dev, uint16_t queue_i
>  /* Register vhost driver. dev_name could be different for multiple instance
> support. */
>  int rte_vhost_driver_register(const char *dev_name);
> 
> +/* Unregister vhost driver. This is only meaningful to vhost user. */
> +int rte_vhost_driver_unregister(const char *dev_name);
> +
>  /* Register callbacks. */
>  int rte_vhost_driver_callback_register(struct virtio_net_device_ops const *
> const);
>  /* Start vhost driver session blocking loop. */
> diff --git a/lib/librte_vhost/vhost_cuse/vhost-net-cdev.c
> b/lib/librte_vhost/vhost_cuse/vhost-net-cdev.c
> index 6b68abf..1ae7c49 100644
> --- a/lib/librte_vhost/vhost_cuse/vhost-net-cdev.c
> +++ b/lib/librte_vhost/vhost_cuse/vhost-net-cdev.c
> @@ -405,6 +405,15 @@ rte_vhost_driver_register(const char *dev_name)
>  }
> 
>  /**
> + * An empty function for unregister
> + */
> +int
> +rte_vhost_driver_unregister(const char *dev_name __rte_unused)
> +{
> + return 0;
> +}
> +
> +/**
>   * The CUSE session is launched allowing the application to receive open,
>   * release and ioctl calls.
>   */
> diff --git a/lib/librte_vhost/vhost_user/vhost-net-user.c
> b/lib/librte_vhost/vhost_user/vhost-net-user.c
> index 31f1215..dff46ee 100644
> --- a/lib/librte_vhost/vhost_user/vhost-net-user.c
> +++ b/lib/librte_vhost/vhost_user/vhost-net-user.c
> @@ -66,6 +66,8 @@ struct connfd_ctx {
>  struct _vhost_server {
>   struct vhost_server *server[MAX_VHOST_SERVER];
>   struct fdset fdset;
> + int vserver_cnt;
> + pthread_mutex_t server_mutex;
>  };
> 
>  static struct _vhost_server g_vhost_server = {
> @@ -74,10 +76,10 @@ static struct _vhost_server g_vhost_server = {
>   .fd_mutex = PTHREAD_MUTEX_INITIALIZER,
>   .num = 0
>   },
> + .vserver_cnt = 0,
> + .server_mutex = PTHREAD_MUTEX_INITIALIZER,
>  };
> 
> -static int vserver_idx;
> -
>  static const char *vhost_message_str[VHOST_USER_MAX] = {
>   [VHOST_USER_NONE] = "VHOST_USER_NONE",
>   [VHOST_USER_GET_FEATURES] = "VHOST_USER_GET_FEATURES",
> @@ -427,7 +429,6 @@ vserver_message_handler(int connfd, void *dat, int
> *remove)
>   }
>  }
> 
> -
>  /**
>   * Creates and initialise the vhost server.
>   */
> @@ -436,34 +437,79 @@ rte_vhost_driver_register(const char *path)
>  {
>   struct vhost_server *vserver;
> 
> - if (vserver_idx == 0)
> + pthread_mutex_lock(&g_vhost_server.server_mutex);
> + if (ops == NULL)
>   ops = get_virtio_net_callbacks();
> - if (vserver_idx == MAX_VHOST_SERVER)
> +
> + if (g_vhost_server.vserver_cnt == MAX_VHOST_SERVER) {
> + RTE_LOG(ERR, VHOST_CONFIG,
> + "error: the number of servers reaches maximum\n");
> + pthread_mutex_unlock(&g_vhost_server.server_mutex);
>   return -1;
> + }
> 
>   vserver = calloc(sizeof(struct vhost_server), 1);
> - if (vserver == NULL)
> + if (vserver == NULL) {
> + pthread_mutex_unlock(&g_vhost_server.server_mutex);
>   return -1;
> -
> - unlink(path);
> + }
> 
>   vserver->listenfd = uds_socket(path);
>   if (vserver->listenfd < 0) {
>   free(vserver);
> + pthread_mutex_unlock(&g_vhost_server.server_mutex);
>   return -1;
>   }
> - vserver->path = path;
> +
> + vserver->path = strdup(path);
> 
>   fdset_add(&g_vhost_server.fdset, vserver->listenfd,
> - vserver_new_vq_conn, NULL,
> - vserver);
> + vserver_new_vq_conn, NULL, vserver);
> 
> - g_vhost_server.server[vserver_idx++] = vserver;
> + g_vhost_server.server[g_vhost_server.vserver_cnt++] = vserver;
> + pthread_mutex_unlock(&g_vhost_server.server_mutex);
> 
>   return 0;
>  }
> 
> 
> +/**
> + * Unregister the specified vhost server
> + */
> +int
> +rte_vhost_driver_unregister(const char *path)
> +{
> + int i;
> + int count;
> +
> +