Re: [dpdk-dev] [PATCH v2 3/3] net/ifcvf: add ifcvf driver

2018-03-23 Thread Wang, Xiao W
Hi Thomas,

> -Original Message-
> From: Thomas Monjalon [mailto:tho...@monjalon.net]
> Sent: Thursday, March 22, 2018 4:52 AM
> To: Wang, Xiao W ; Xu, Rosen 
> Cc: dev@dpdk.org; maxime.coque...@redhat.com; y...@fridaylinux.org; Wang,
> Zhihong ; Bie, Tiwei ; Chen,
> Junjie J ; Daly, Dan ; Liang,
> Cunming ; Burakov, Anatoly
> ; gaetan.ri...@6wind.com
> Subject: Re: [dpdk-dev] [PATCH v2 3/3] net/ifcvf: add ifcvf driver
> 
> 21/03/2018 14:21, Xiao Wang:
> > The ifcvf driver uses a vdev as a control domain to manage the ifc VFs that
> > belong to it. It registers vDPA device ops to the vhost lib to enable these
> > VFs to be used as vhost data path accelerators.
> 
> Not everybody works at Intel.
> Please explain what ifcvf means and what a control domain is.

OK, and I will add a document.
> 
> > Live migration feature is supported by ifc VF and this driver enables
> > it based on vhost lib.
> >
> > Because vDPA driver needs to set up MSI-X vector to interrupt the guest,
> > only vfio-pci is supported currently.
> >
> > Signed-off-by: Xiao Wang 
> > Signed-off-by: Rosen Xu 
> > ---
> > v2:
> > - Rebase on Zhihong's vDPA v3 patch set.
> > ---
> >  config/common_base  |6 +
> >  config/common_linuxapp  |1 +
> >  drivers/net/Makefile|1 +
> >  drivers/net/ifcvf/Makefile  |   40 +
> >  drivers/net/ifcvf/base/ifcvf.c  |  329 
> >  drivers/net/ifcvf/base/ifcvf.h  |  156 
> >  drivers/net/ifcvf/base/ifcvf_osdep.h|   52 ++
> >  drivers/net/ifcvf/ifcvf_ethdev.c| 1240
> +++
> >  drivers/net/ifcvf/rte_ifcvf_version.map |4 +
> >  mk/rte.app.mk   |1 +
> 
> This feature needs to be explained and documented.
> It will be helpful to understand the mechanism and to have a good review.
> Please do not merge it until there is a good documentation.
> 

Will add a doc with more details.

BRs,
Xiao





Re: [dpdk-dev] [PATCH v2 3/3] net/ifcvf: add ifcvf driver

2018-03-23 Thread Wang, Xiao W
Hi Maxime,

> -Original Message-
> From: Maxime Coquelin [mailto:maxime.coque...@redhat.com]
> Sent: Thursday, March 22, 2018 4:58 AM
> To: Wang, Xiao W ; y...@fridaylinux.org
> Cc: dev@dpdk.org; Wang, Zhihong ; Bie, Tiwei
> ; Chen, Junjie J ; Xu, Rosen
> ; Daly, Dan ; Liang, Cunming
> ; Burakov, Anatoly ;
> gaetan.ri...@6wind.com
> Subject: Re: [PATCH v2 3/3] net/ifcvf: add ifcvf driver
> 
> 
> 
> On 03/21/2018 02:21 PM, Xiao Wang wrote:
> > The ifcvf driver uses a vdev as a control domain to manage the ifc VFs that
> > belong to it. It registers vDPA device ops to the vhost lib to enable these
> > VFs to be used as vhost data path accelerators.
> >
> > Live migration feature is supported by ifc VF and this driver enables
> > it based on vhost lib.
> >
> > Because vDPA driver needs to set up MSI-X vector to interrupt the guest,
> > only vfio-pci is supported currently.
> >
> > Signed-off-by: Xiao Wang 
> > Signed-off-by: Rosen Xu 
> > ---
> > v2:
> > - Rebase on Zhihong's vDPA v3 patch set.
> > ---
> >   config/common_base  |6 +
> >   config/common_linuxapp  |1 +
> >   drivers/net/Makefile|1 +
> >   drivers/net/ifcvf/Makefile  |   40 +
> >   drivers/net/ifcvf/base/ifcvf.c  |  329 
> >   drivers/net/ifcvf/base/ifcvf.h  |  156 
> >   drivers/net/ifcvf/base/ifcvf_osdep.h|   52 ++
> >   drivers/net/ifcvf/ifcvf_ethdev.c| 1240
> +++
> >   drivers/net/ifcvf/rte_ifcvf_version.map |4 +
> >   mk/rte.app.mk   |1 +
> >   10 files changed, 1830 insertions(+)
> >   create mode 100644 drivers/net/ifcvf/Makefile
> >   create mode 100644 drivers/net/ifcvf/base/ifcvf.c
> >   create mode 100644 drivers/net/ifcvf/base/ifcvf.h
> >   create mode 100644 drivers/net/ifcvf/base/ifcvf_osdep.h
> >   create mode 100644 drivers/net/ifcvf/ifcvf_ethdev.c
> >   create mode 100644 drivers/net/ifcvf/rte_ifcvf_version.map
> >
> 
> ...
> 
> > +static int
> > +eth_dev_ifcvf_create(struct rte_vdev_device *dev,
> > +   struct rte_pci_addr *pci_addr, int devices)
> > +{
> > +   const char *name = rte_vdev_device_name(dev);
> > +   struct rte_eth_dev *eth_dev = NULL;
> > +   struct ether_addr *eth_addr = NULL;
> > +   struct ifcvf_internal *internal = NULL;
> > +   struct internal_list *list = NULL;
> > +   struct rte_eth_dev_data *data = NULL;
> > +   struct rte_pci_addr pf_addr = *pci_addr;
> > +   int i;
> > +
> > +   list = rte_zmalloc_socket(name, sizeof(*list), 0,
> > +   dev->device.numa_node);
> > +   if (list == NULL)
> > +   goto error;
> > +
> > +   /* reserve an ethdev entry */
> > +   eth_dev = rte_eth_vdev_allocate(dev, sizeof(*internal));
> > +   if (eth_dev == NULL)
> > +   goto error;
> > +
> > +   eth_addr = rte_zmalloc_socket(name, sizeof(*eth_addr), 0,
> > +   dev->device.numa_node);
> > +   if (eth_addr == NULL)
> > +   goto error;
> > +
> > +   *eth_addr = base_eth_addr;
> > +   eth_addr->addr_bytes[5] = eth_dev->data->port_id;
> > +
> > +   internal = eth_dev->data->dev_private;
> > +   internal->dev_name = strdup(name);
> > +   if (internal->dev_name == NULL)
> > +   goto error;
> > +
> > +   internal->eng_addr.pci_addr = *pci_addr;
> > +   for (i = 0; i < devices; i++) {
> > +   pf_addr.domain = pci_addr->domain;
> > +   pf_addr.bus = pci_addr->bus;
> > +   pf_addr.devid = pci_addr->devid + (i + 1) / 8;
> > +   pf_addr.function = pci_addr->function + (i + 1) % 8;
> > +   internal->vf_info[i].pdev.addr = pf_addr;
> > +   rte_spinlock_init(&internal->vf_info[i].lock);
> > +   }
> > +   internal->max_devices = devices;
> > +
> > +   list->eth_dev = eth_dev;
> > +   pthread_mutex_lock(&internal_list_lock);
> > +   TAILQ_INSERT_TAIL(&internal_list, list, next);
> > +   pthread_mutex_unlock(&internal_list_lock);
> > +
> > +   data = eth_dev->data;
> > +   data->nb_rx_queues = IFCVF_MAX_QUEUES;
> > +   data->nb_tx_queues = IFCVF_MAX_QUEUES;
> > +   data->dev_link = vdpa_link;
> > +   data->mac_addrs = eth_addr;
> 
> We might want one ethernet device per VF, as for example you set
> dev_link.link_status to UP as soon as a VF is configured, and DOWN
> when a single VF is removed.

Ideally it will be one representor port per VF; each representor port
has its own link_status. I will integrate the port representor when it's ready.
For now I will remove the vdev's ethdev registration, and add it back when we
need to implement flow APIs on the vdev.

> 
> > +   data->dev_flags = RTE_ETH_DEV_INTR_LSC;
> > +   eth_dev->dev_ops = &ops;
> > +
> > +   /* assign rx and tx ops, could be used as vDPA fallback */
> > +   eth_dev->rx_pkt_burst = eth_ifcvf_rx;
> > +   eth_dev->tx_pkt_burst = eth_ifcvf_tx;
> > +
> > +   if (rte_vdpa_register_engine(vdpa_ifcvf_driver.name,
> > +   &internal->eng_addr) < 0)
> > +   goto error;
> > +
> > + 

Re: [dpdk-dev] [PATCH v2 3/3] net/ifcvf: add ifcvf driver

2018-03-22 Thread Wang, Xiao W
Hi Ferruh,

> -Original Message-
> From: Yigit, Ferruh
> Sent: Thursday, March 22, 2018 4:51 PM
> To: Wang, Xiao W ; maxime.coque...@redhat.com;
> y...@fridaylinux.org
> Cc: dev@dpdk.org; Wang, Zhihong ; Bie, Tiwei
> ; Chen, Junjie J ; Xu, Rosen
> ; Daly, Dan ; Liang, Cunming
> ; Burakov, Anatoly ;
> gaetan.ri...@6wind.com
> Subject: Re: [dpdk-dev] [PATCH v2 3/3] net/ifcvf: add ifcvf driver
> 
> On 3/21/2018 1:21 PM, Xiao Wang wrote:
> > The ifcvf driver uses a vdev as a control domain to manage the ifc VFs that
> > belong to it. It registers vDPA device ops to the vhost lib to enable these
> > VFs to be used as vhost data path accelerators.
> >
> > Live migration feature is supported by ifc VF and this driver enables
> > it based on vhost lib.
> >
> > Because vDPA driver needs to set up MSI-X vector to interrupt the guest,
> > only vfio-pci is supported currently.
> >
> > Signed-off-by: Xiao Wang 
> > Signed-off-by: Rosen Xu 
> > ---
> > v2:
> > - Rebase on Zhihong's vDPA v3 patch set.
> > ---
> >  config/common_base  |6 +
> >  config/common_linuxapp  |1 +
> >  drivers/net/Makefile|1 +
> >  drivers/net/ifcvf/Makefile  |   40 +
> >  drivers/net/ifcvf/base/ifcvf.c  |  329 
> >  drivers/net/ifcvf/base/ifcvf.h  |  156 
> >  drivers/net/ifcvf/base/ifcvf_osdep.h|   52 ++
> >  drivers/net/ifcvf/ifcvf_ethdev.c| 1240
> +++
> >  drivers/net/ifcvf/rte_ifcvf_version.map |4 +
> >  mk/rte.app.mk   |1 +
> 
> Need .ini file to represent driver features.
> Also it is good to add driver documentation and a note into release note to
> announce new driver.

Will do.

> 
> >  10 files changed, 1830 insertions(+)
> >  create mode 100644 drivers/net/ifcvf/Makefile
> >  create mode 100644 drivers/net/ifcvf/base/ifcvf.c
> >  create mode 100644 drivers/net/ifcvf/base/ifcvf.h
> >  create mode 100644 drivers/net/ifcvf/base/ifcvf_osdep.h
> >  create mode 100644 drivers/net/ifcvf/ifcvf_ethdev.c
> >  create mode 100644 drivers/net/ifcvf/rte_ifcvf_version.map
> >
> > diff --git a/config/common_base b/config/common_base
> > index ad03cf433..06fce1ebf 100644
> > --- a/config/common_base
> > +++ b/config/common_base
> > @@ -791,6 +791,12 @@ CONFIG_RTE_LIBRTE_VHOST_DEBUG=n
> >  #
> >  CONFIG_RTE_LIBRTE_PMD_VHOST=n
> >
> > +#
> > +# Compile IFCVF driver
> > +# To compile, CONFIG_RTE_LIBRTE_VHOST should be enabled.
> > +#
> > +CONFIG_RTE_LIBRTE_IFCVF=n
> > +
> >  #
> >  # Compile the test application
> >  #
> > diff --git a/config/common_linuxapp b/config/common_linuxapp
> > index ff98f2355..358d00468 100644
> > --- a/config/common_linuxapp
> > +++ b/config/common_linuxapp
> > @@ -15,6 +15,7 @@ CONFIG_RTE_LIBRTE_PMD_KNI=y
> >  CONFIG_RTE_LIBRTE_VHOST=y
> >  CONFIG_RTE_LIBRTE_VHOST_NUMA=y
> >  CONFIG_RTE_LIBRTE_PMD_VHOST=y
> > +CONFIG_RTE_LIBRTE_IFCVF=y
> 
> Current syntax for PMD config options:
> Virtual ones: CONFIG_RTE_LIBRTE_PMD_XXX
> Physical ones: CONFIG_RTE_LIBRTE_XXX_PMD
> 
> The Virtual / Physical difference was most probably not done intentionally,
> but that is what it is right now.
> 
> Is "PMD" not added intentionally to the config option?

I think the vDPA driver is not a polling-mode driver, so I didn't put "PMD"
here. Do you think CONFIG_RTE_LIBRTE_VDPA_IFCVF is better?

> 
> And what is the config time dependency of the driver, I assume VHOST is one
> of
> them but are there more?

This dependency is described in drivers/net/Makefile; CONFIG_RTE_EAL_VFIO is
another one, and I will add it.

> 
> >  CONFIG_RTE_LIBRTE_PMD_AF_PACKET=y
> >  CONFIG_RTE_LIBRTE_PMD_TAP=y
> >  CONFIG_RTE_LIBRTE_AVP_PMD=y
> > diff --git a/drivers/net/Makefile b/drivers/net/Makefile
> > index e1127326b..496acf2d2 100644
> > --- a/drivers/net/Makefile
> > +++ b/drivers/net/Makefile
> > @@ -53,6 +53,7 @@ endif # $(CONFIG_RTE_LIBRTE_SCHED)
> >
> >  ifeq ($(CONFIG_RTE_LIBRTE_VHOST),y)
> >  DIRS-$(CONFIG_RTE_LIBRTE_PMD_VHOST) += vhost
> > +DIRS-$(CONFIG_RTE_LIBRTE_IFCVF) += ifcvf
> 
> Since this is mainly a vDPA driver, does it make sense to put it under
> drivers/net/virtio/vdpa/ifcvf?
> 
> When there are more vDPA drivers they can go into drivers/net/virtio/vdpa/*

vDPA is for vhost offloading/acceleration; the devices can be quite different
from virtio, they just need to be virtio ring compatible, and the usage model
is quite different from virt
Re: [dpdk-dev] [PATCH v2 3/3] net/ifcvf: add ifcvf driver

2018-03-22 Thread Ferruh Yigit
On 3/21/2018 1:21 PM, Xiao Wang wrote:
> The ifcvf driver uses a vdev as a control domain to manage the ifc VFs that
> belong to it. It registers vDPA device ops to the vhost lib to enable these
> VFs to be used as vhost data path accelerators.
> 
> Live migration feature is supported by ifc VF and this driver enables
> it based on vhost lib.
> 
> Because vDPA driver needs to set up MSI-X vector to interrupt the guest,
> only vfio-pci is supported currently.
> 
> Signed-off-by: Xiao Wang 
> Signed-off-by: Rosen Xu 
> ---
> v2:
> - Rebase on Zhihong's vDPA v3 patch set.
> ---
>  config/common_base  |6 +
>  config/common_linuxapp  |1 +
>  drivers/net/Makefile|1 +
>  drivers/net/ifcvf/Makefile  |   40 +
>  drivers/net/ifcvf/base/ifcvf.c  |  329 
>  drivers/net/ifcvf/base/ifcvf.h  |  156 
>  drivers/net/ifcvf/base/ifcvf_osdep.h|   52 ++
>  drivers/net/ifcvf/ifcvf_ethdev.c| 1240 
> +++
>  drivers/net/ifcvf/rte_ifcvf_version.map |4 +
>  mk/rte.app.mk   |1 +

Need .ini file to represent driver features.
Also it is good to add driver documentation and a note into release note to
announce new driver.

>  10 files changed, 1830 insertions(+)
>  create mode 100644 drivers/net/ifcvf/Makefile
>  create mode 100644 drivers/net/ifcvf/base/ifcvf.c
>  create mode 100644 drivers/net/ifcvf/base/ifcvf.h
>  create mode 100644 drivers/net/ifcvf/base/ifcvf_osdep.h
>  create mode 100644 drivers/net/ifcvf/ifcvf_ethdev.c
>  create mode 100644 drivers/net/ifcvf/rte_ifcvf_version.map
> 
> diff --git a/config/common_base b/config/common_base
> index ad03cf433..06fce1ebf 100644
> --- a/config/common_base
> +++ b/config/common_base
> @@ -791,6 +791,12 @@ CONFIG_RTE_LIBRTE_VHOST_DEBUG=n
>  #
>  CONFIG_RTE_LIBRTE_PMD_VHOST=n
>  
> +#
> +# Compile IFCVF driver
> +# To compile, CONFIG_RTE_LIBRTE_VHOST should be enabled.
> +#
> +CONFIG_RTE_LIBRTE_IFCVF=n
> +
>  #
>  # Compile the test application
>  #
> diff --git a/config/common_linuxapp b/config/common_linuxapp
> index ff98f2355..358d00468 100644
> --- a/config/common_linuxapp
> +++ b/config/common_linuxapp
> @@ -15,6 +15,7 @@ CONFIG_RTE_LIBRTE_PMD_KNI=y
>  CONFIG_RTE_LIBRTE_VHOST=y
>  CONFIG_RTE_LIBRTE_VHOST_NUMA=y
>  CONFIG_RTE_LIBRTE_PMD_VHOST=y
> +CONFIG_RTE_LIBRTE_IFCVF=y

Current syntax for PMD config options:
Virtual ones: CONFIG_RTE_LIBRTE_PMD_XXX
Physical ones: CONFIG_RTE_LIBRTE_XXX_PMD

The Virtual / Physical difference was most probably not done intentionally,
but that is what it is right now.

Is "PMD" not added intentionally to the config option?

And what is the config time dependency of the driver, I assume VHOST is one of
them but are there more?

>  CONFIG_RTE_LIBRTE_PMD_AF_PACKET=y
>  CONFIG_RTE_LIBRTE_PMD_TAP=y
>  CONFIG_RTE_LIBRTE_AVP_PMD=y
> diff --git a/drivers/net/Makefile b/drivers/net/Makefile
> index e1127326b..496acf2d2 100644
> --- a/drivers/net/Makefile
> +++ b/drivers/net/Makefile
> @@ -53,6 +53,7 @@ endif # $(CONFIG_RTE_LIBRTE_SCHED)
>  
>  ifeq ($(CONFIG_RTE_LIBRTE_VHOST),y)
>  DIRS-$(CONFIG_RTE_LIBRTE_PMD_VHOST) += vhost
> +DIRS-$(CONFIG_RTE_LIBRTE_IFCVF) += ifcvf

Since this is mainly a vDPA driver, does it make sense to put it under
drivers/net/virtio/vdpa/ifcvf?

When there are more vDPA drivers they can go into drivers/net/virtio/vdpa/*

Combining this with the not-registering-an-ethdev comment below, the virtual
driver can register itself as vdpa_ifcvf:
RTE_PMD_REGISTER_VDEV(vdpa_ifcvf, ifcvf_drv);

>  endif # $(CONFIG_RTE_LIBRTE_VHOST)
>  
>  ifeq ($(CONFIG_RTE_LIBRTE_MRVL_PMD),y)
> diff --git a/drivers/net/ifcvf/Makefile b/drivers/net/ifcvf/Makefile
> new file mode 100644
> index 0..f3670cdf2
> --- /dev/null
> +++ b/drivers/net/ifcvf/Makefile
> @@ -0,0 +1,40 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2018 Intel Corporation
> +
> +include $(RTE_SDK)/mk/rte.vars.mk
> +
> +#
> +# library name
> +#
> +LIB = librte_ifcvf.a
> +
> +LDLIBS += -lpthread
> +LDLIBS += -lrte_eal -lrte_mempool -lrte_pci
> +LDLIBS += -lrte_ethdev -lrte_net -lrte_kvargs -lrte_vhost
> +LDLIBS += -lrte_bus_vdev -lrte_bus_pci
> +
> +CFLAGS += -O3
> +CFLAGS += $(WERROR_FLAGS)
> +CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal
> +CFLAGS += -I$(RTE_SDK)/drivers/bus/pci/linux
> +CFLAGS += -DALLOW_EXPERIMENTAL_API
> +
> +#
> +# Add extra flags for base driver source files to disable warnings in them
> +#
> +BASE_DRIVER_OBJS=$(sort $(patsubst %.c,%.o,$(notdir $(wildcard 
> $(SRCDIR)/base/*.c
> +$(foreach obj, $(BASE_DRIVER_OBJS), $(eval 
> CFLAGS_$(obj)+=$(CFLAGS_BASE_DRIVER)))

It seems no CFLAGS_BASE_DRIVER defined yet, above lines can be removed for now.

> +
> +VPATH += $(SRCDIR)/base
> +
> +EXPORT_MAP := rte_ifcvf_version.map
> +
> +LIBABIVER := 1
> +
> +#
> +# all source are stored in SRCS-y
> +#
> +SRCS-$(CONFIG_RTE_LIBRTE_PMD_VHOST) += ifcvf_ethdev.c
> +SRCS-$(CONFIG_RTE_LIBRTE_PMD_VHOST) += ifcvf.c

Is it intenti

Re: [dpdk-dev] [PATCH v2 3/3] net/ifcvf: add ifcvf driver

2018-03-21 Thread Maxime Coquelin



On 03/21/2018 02:21 PM, Xiao Wang wrote:

The ifcvf driver uses a vdev as a control domain to manage the ifc VFs that
belong to it. It registers vDPA device ops to the vhost lib to enable these
VFs to be used as vhost data path accelerators.

Live migration feature is supported by ifc VF and this driver enables
it based on vhost lib.

Because vDPA driver needs to set up MSI-X vector to interrupt the guest,
only vfio-pci is supported currently.

Signed-off-by: Xiao Wang 
Signed-off-by: Rosen Xu 
---
v2:
- Rebase on Zhihong's vDPA v3 patch set.
---
  config/common_base  |6 +
  config/common_linuxapp  |1 +
  drivers/net/Makefile|1 +
  drivers/net/ifcvf/Makefile  |   40 +
  drivers/net/ifcvf/base/ifcvf.c  |  329 
  drivers/net/ifcvf/base/ifcvf.h  |  156 
  drivers/net/ifcvf/base/ifcvf_osdep.h|   52 ++
  drivers/net/ifcvf/ifcvf_ethdev.c| 1240 +++
  drivers/net/ifcvf/rte_ifcvf_version.map |4 +
  mk/rte.app.mk   |1 +
  10 files changed, 1830 insertions(+)
  create mode 100644 drivers/net/ifcvf/Makefile
  create mode 100644 drivers/net/ifcvf/base/ifcvf.c
  create mode 100644 drivers/net/ifcvf/base/ifcvf.h
  create mode 100644 drivers/net/ifcvf/base/ifcvf_osdep.h
  create mode 100644 drivers/net/ifcvf/ifcvf_ethdev.c
  create mode 100644 drivers/net/ifcvf/rte_ifcvf_version.map



...


+static int
+eth_dev_ifcvf_create(struct rte_vdev_device *dev,
+   struct rte_pci_addr *pci_addr, int devices)
+{
+   const char *name = rte_vdev_device_name(dev);
+   struct rte_eth_dev *eth_dev = NULL;
+   struct ether_addr *eth_addr = NULL;
+   struct ifcvf_internal *internal = NULL;
+   struct internal_list *list = NULL;
+   struct rte_eth_dev_data *data = NULL;
+   struct rte_pci_addr pf_addr = *pci_addr;
+   int i;
+
+   list = rte_zmalloc_socket(name, sizeof(*list), 0,
+   dev->device.numa_node);
+   if (list == NULL)
+   goto error;
+
+   /* reserve an ethdev entry */
+   eth_dev = rte_eth_vdev_allocate(dev, sizeof(*internal));
+   if (eth_dev == NULL)
+   goto error;
+
+   eth_addr = rte_zmalloc_socket(name, sizeof(*eth_addr), 0,
+   dev->device.numa_node);
+   if (eth_addr == NULL)
+   goto error;
+
+   *eth_addr = base_eth_addr;
+   eth_addr->addr_bytes[5] = eth_dev->data->port_id;
+
+   internal = eth_dev->data->dev_private;
+   internal->dev_name = strdup(name);
+   if (internal->dev_name == NULL)
+   goto error;
+
+   internal->eng_addr.pci_addr = *pci_addr;
+   for (i = 0; i < devices; i++) {
+   pf_addr.domain = pci_addr->domain;
+   pf_addr.bus = pci_addr->bus;
+   pf_addr.devid = pci_addr->devid + (i + 1) / 8;
+   pf_addr.function = pci_addr->function + (i + 1) % 8;
+   internal->vf_info[i].pdev.addr = pf_addr;
+   rte_spinlock_init(&internal->vf_info[i].lock);
+   }
+   internal->max_devices = devices;
+
+   list->eth_dev = eth_dev;
+   pthread_mutex_lock(&internal_list_lock);
+   TAILQ_INSERT_TAIL(&internal_list, list, next);
+   pthread_mutex_unlock(&internal_list_lock);
+
+   data = eth_dev->data;
+   data->nb_rx_queues = IFCVF_MAX_QUEUES;
+   data->nb_tx_queues = IFCVF_MAX_QUEUES;
+   data->dev_link = vdpa_link;
+   data->mac_addrs = eth_addr;


We might want one ethernet device per VF, as for example you set
dev_link.link_status to UP as soon as a VF is configured, and DOWN
when a single VF is removed.


+   data->dev_flags = RTE_ETH_DEV_INTR_LSC;
+   eth_dev->dev_ops = &ops;
+
+   /* assign rx and tx ops, could be used as vDPA fallback */
+   eth_dev->rx_pkt_burst = eth_ifcvf_rx;
+   eth_dev->tx_pkt_burst = eth_ifcvf_tx;
+
+   if (rte_vdpa_register_engine(vdpa_ifcvf_driver.name,
+   &internal->eng_addr) < 0)
+   goto error;
+
+   return 0;
+
+error:
+   rte_free(list);
+   rte_free(eth_addr);
+   if (internal && internal->dev_name)
+   free(internal->dev_name);
+   rte_free(internal);
+   if (eth_dev)
+   rte_eth_dev_release_port(eth_dev);
+
+   return -1;
+}
+
+static int
+get_pci_addr(const char *key __rte_unused, const char *value, void *extra_args)
+{
+   if (value == NULL || extra_args == NULL)
+   return -1;
+
+   return rte_pci_addr_parse(value, extra_args);
+}
+
+static inline int
+open_int(const char *key __rte_unused, const char *value, void *extra_args)
+{
+   uint16_t *n = extra_args;
+
+   if (value == NULL || extra_args == NULL)
+   return -EINVAL;
+
+   *n = (uint16_t)strtoul(value, NULL, 0);
+   if (*n == USHRT_MAX && errno == ERANGE)
+   return -1;
+
+   

Re: [dpdk-dev] [PATCH v2 3/3] net/ifcvf: add ifcvf driver

2018-03-21 Thread Thomas Monjalon
21/03/2018 14:21, Xiao Wang:
> The ifcvf driver uses a vdev as a control domain to manage the ifc VFs that
> belong to it. It registers vDPA device ops to the vhost lib to enable these
> VFs to be used as vhost data path accelerators.

Not everybody works at Intel.
Please explain what ifcvf means and what a control domain is.

> Live migration feature is supported by ifc VF and this driver enables
> it based on vhost lib.
> 
> Because vDPA driver needs to set up MSI-X vector to interrupt the guest,
> only vfio-pci is supported currently.
> 
> Signed-off-by: Xiao Wang 
> Signed-off-by: Rosen Xu 
> ---
> v2:
> - Rebase on Zhihong's vDPA v3 patch set.
> ---
>  config/common_base  |6 +
>  config/common_linuxapp  |1 +
>  drivers/net/Makefile|1 +
>  drivers/net/ifcvf/Makefile  |   40 +
>  drivers/net/ifcvf/base/ifcvf.c  |  329 
>  drivers/net/ifcvf/base/ifcvf.h  |  156 
>  drivers/net/ifcvf/base/ifcvf_osdep.h|   52 ++
>  drivers/net/ifcvf/ifcvf_ethdev.c| 1240 
> +++
>  drivers/net/ifcvf/rte_ifcvf_version.map |4 +
>  mk/rte.app.mk   |1 +

This feature needs to be explained and documented.
It will be helpful to understand the mechanism and to have a good review.
Please do not merge it until there is a good documentation.