Re: [dpdk-users] attach/detach on secondary process

2017-12-13 Thread Thomas Monjalon
13/12/2017 22:10, Stephen Hemminger:
> On Wed, 13 Dec 2017 22:00:48 +0100
> Thomas Monjalon  wrote:
> 
> > 13/12/2017 18:09, Stephen Hemminger:
> > > Many DPDK drivers require that setup and initialization be done by
> > > the primary process. This is mostly to avoid dealing with concurrency
> > > since there can be multiple secondary processes.
> > 
> > I think we should consider this limitation as a bug.
> > We must allow a secondary process to initialize a device.
> > The race in device creation must be fixed.
> > 
> 
> Secondary processes should be able to do setup.
> But it is up to the application not to do it concurrently from multiple
> processes.

Yes, there can be synchronization between processes.
But I think it is safer to fix the device creation race in ethdev.
Note that I am not talking about configuration concurrency,
but just about the race in probing.
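As a rough illustration of the application-level synchronization Stephen
mentions (and not a fix for the probing race discussed above), here is a
minimal sketch that serializes rte_eth_dev_attach() calls across processes
with an advisory file lock; the lock-file path is a made-up example and the
API is as in DPDK 17.11:

/* Sketch only: serialize attach calls across primary/secondary processes
 * with an advisory file lock.  The lock-file path is hypothetical. */
#include <errno.h>
#include <fcntl.h>
#include <sys/file.h>
#include <unistd.h>

#include <rte_ethdev.h>

static int
attach_serialized(const char *devargs, uint16_t *port_id)
{
	int fd = open("/var/run/myapp-attach.lock", O_CREAT | O_RDWR, 0600);
	int ret;

	if (fd < 0)
		return -errno;
	flock(fd, LOCK_EX);	/* block until this process owns the lock */
	ret = rte_eth_dev_attach(devargs, port_id);
	flock(fd, LOCK_UN);
	close(fd);
	return ret;
}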


Re: [dpdk-users] attach/detach on secondary process

2017-12-13 Thread Stephen Hemminger
On Wed, 13 Dec 2017 22:00:48 +0100
Thomas Monjalon  wrote:

> 13/12/2017 18:09, Stephen Hemminger:
> > Many DPDK drivers require that setup and initialization be done by
> > the primary process. This is mostly to avoid dealing with concurrency since
> > there can be multiple secondary processes.  
> 
> I think we should consider this limitation as a bug.
> We must allow a secondary process to initialize a device.
> The race in device creation must be fixed.
> 

Secondary processes should be able to do setup.
But it is up to the application not to do it concurrently from multiple
processes.


Re: [dpdk-users] attach/detach on secondary process

2017-12-13 Thread Thomas Monjalon
13/12/2017 18:09, Stephen Hemminger:
> Many DPDK drivers require that setup and initialization be done by
> the primary process. This is mostly to avoid dealing with concurrency since
> there can be multiple secondary processes.

I think we should consider this limitation as a bug.
We must allow a secondary process to initialize a device.
The race in device creation must be fixed.



[dpdk-users] lpm trie

2017-12-13 Thread Pragash Vijayaragavan
Hi,

Can someone let me know which trie algorithm is used in DPDK for LPM?
Is it a simple trie, without any level or path compression, or a different
kind of trie?


I saw this code:

struct rte_lpm_tbl_entry {
	uint32_t depth       :6;
	uint32_t valid_group :1;
	uint32_t valid       :1;
	uint32_t next_hop    :24;
};
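For what it is worth, that entry layout looks like the DIR-24-8-style
tbl24/tbl8 split described in the DPDK Programmer's Guide for the LPM
library, rather than a path-compressed trie. Below is only a hedged sketch
of how such an entry could be consumed during a lookup; the field semantics
and the 256-entry group size are approximations, not the library's actual
internals (see lib/librte_lpm for those):

/* Hedged sketch of a two-level (tbl24 + tbl8) longest-prefix-match lookup,
 * assuming a DIR-24-8-style layout. */
#include <stdint.h>

#define SKETCH_NO_ROUTE		UINT32_MAX
#define SKETCH_TBL8_GROUP_SIZE	256	/* assumed group size */

static uint32_t
lookup_sketch(const struct rte_lpm_tbl_entry *tbl24,
	      const struct rte_lpm_tbl_entry *tbl8, uint32_t ip)
{
	/* First level: index by the top 24 bits of the address. */
	struct rte_lpm_tbl_entry e = tbl24[ip >> 8];

	if (!e.valid)
		return SKETCH_NO_ROUTE;

	/* If valid_group is set, next_hop is read as a tbl8 group index and
	 * the low 8 bits of the address select the entry in that group. */
	if (e.valid_group) {
		e = tbl8[e.next_hop * SKETCH_TBL8_GROUP_SIZE + (ip & 0xff)];
		if (!e.valid)
			return SKETCH_NO_ROUTE;
	}

	return e.next_hop;
}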


Thanks,

Pragash Vijayaragavan
Grad Student at Rochester Institute of Technology
email : pxv3...@rit.edu
ph : 585 764 4662


Re: [dpdk-users] attach/detach on secondary process

2017-12-13 Thread Stephen Hemminger
On Wed, 13 Dec 2017 17:58:53 +0100
Ricardo Roldan  wrote:

> Hi,
> 
> We have a multi-process application and we need to support 
> attaching/detaching of ports. We are using the 17.11 version with the 
> Intel x520 (ixgbe driver) and virtio.
> 
> At the time we initialize our processes, no devices are bound to the
> DPDK drivers, so we initialize all processes (primary and secondaries)
> with 0 ports.
> 
> This seems to work fine on the primary process, but on the secondary
> processes we see some problems. In the following paragraphs I describe
> the procedure used to attach/detach interfaces with DPDK.
> 
> For the attach procedure (all processes initially have no devices 
> attached):
> 
> - Bind the devices we want to attach to the DPDK driver (with the script 
> dpdk-devbind, from external process)
> 
> - Primary process: Call rte_eth_dev_attach
> 
> - Primary process: Configure ports using ...
> 
> - Secondary processes: Call rte_eth_dev_attach
> 
> 
> Start to send/receive packets from all processes.
> 
> 
> For the detach procedure:
> 
> - Secondary processes: For each port, call rte_eth_dev_stop(port), 
> rte_eth_dev_close(port) and rte_eth_dev_detach(port, dev).
> 
> - Primary process: After the secondary processes have detached all their
> ports, for each port call rte_eth_dev_stop(port),
> rte_eth_dev_close(port) and rte_eth_dev_detach(port, dev).
> 
> - Bind the device to the original Linux driver (with the script 
> dpdk-devbind, from external process)
> 
> 
> With this approach we have noticed that when the secondary processes
> call rte_dev_detach an error occurs, because it calls the remove
> operation, which ends up calling eth_ixgbe_dev_uninit, which returns
> -EPERM (because it does not allow a non-primary process to uninitialize
> the driver).
> 
> 
> Therefore, the port attach never works again on the secondary processes 
> as the function rte_eal_hotplug_add fails because it cannot find the 
> device.
> 
> 
> dev = bus->find_device(NULL, cmp_detached_dev_name, devname);
>      if (dev == NULL) {
>      RTE_LOG(ERR, EAL, "Cannot find unplugged device (%s)\n",
>      devname);
>      ret = -ENODEV;
>      goto err_devarg;
>      }
> 
> 
> This happens because, in order to find unplugged devices, the function
> cmp_detached_dev_name checks whether the driver pointer is set and skips
> the device if it is, and the detach procedure never sets dev->driver
> back to NULL.
> 
> static int cmp_detached_dev_name(const struct rte_device *dev,
>      const void *_name)
> {
>      const char *name = _name;
> 
>      /* skip attached devices */
>      RTE_LOG(ERR, EAL, "cmp_detached_dev_name dev %p name %s driver %p"
>      " search %s\n",
>      dev, dev->name, dev->driver, name);
>      if (dev->driver != NULL)
>      return 1;
> 
>      return strcmp(dev->name, name);
> }
> 
> To fix this behavior we have made the following changes to the DPDK code.
>
> First, to prevent cmp_detached_dev_name from failing, rte_eal_dev_detach
> now sets dev->driver to NULL.
> 
> diff --git a/lib/librte_eal/common/eal_common_dev.c 
> b/lib/librte_eal/common/eal_common_dev.c
> 
> index dda8f5835..9a363dcf7 100644
> --- a/lib/librte_eal/common/eal_common_dev.c
> +++ b/lib/librte_eal/common/eal_common_dev.c
> @@ -114,6 +114,7 @@ int rte_eal_dev_detach(struct rte_device *dev)
>      if (ret)
>      RTE_LOG(ERR, EAL, "Driver cannot detach the device (%s)\n",
>      dev->name);
> +   dev->driver = NULL;
>      return ret;
>   }
> 
> 
> Then, in the rte_eth_dev_pci_generic_remove function, the call to
> dev_uninit no longer treats -EPERM as an error, because when detaching a
> port some drivers return 0 while others return -EPERM to indicate that
> they were called from a secondary process.
> 
> diff --git a/lib/librte_ether/rte_ethdev_pci.h 
> b/lib/librte_ether/rte_ethdev_pci.h
> @@ -184,7 +184,7 @@ rte_eth_dev_pci_generic_remove(struct rte_pci_device 
> *pci_dev,
> 
>      if (dev_uninit) {
>      ret = dev_uninit(eth_dev);
> -   if (ret)
> +   if (ret && ret != -EPERM)
>      return ret;
>      }
> 
> Finally, in the rte_eth_dev_pci_release function, the fields in the
> shared memory region are only reset when called from the primary process.
> 
> diff --git a/lib/librte_ether/rte_ethdev_pci.h b/lib/librte_ether/rte_ethdev_pci.h
> index 722075e09..a79188fbf 100644
> --- a/lib/librte_ether/rte_ethdev_pci.h
> +++ b/lib/librte_ether/rte_ethdev_pci.h
> @@ -125,16 +125,16 @@ rte_eth_dev_pci_release(struct rte_eth_dev *eth_dev)
>      /* free ether device */
>      rte_eth_dev_release_port(eth_dev);
> 
> -   if (rte_eal_process_type() == RTE_PROC_PRIMARY)
> +   if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
> 

[dpdk-users] attach/detach on secondary process

2017-12-13 Thread Ricardo Roldan

Hi,

We have a multi-process application and we need to support 
attaching/detaching of ports. We are using the 17.11 version with the 
Intel x520 (ixgbe driver) and virtio.


At the time we initialize our processes, no devices are bound to the
DPDK drivers, so we initialize all processes (primary and secondaries)
with 0 ports.


This seems to work fine on the primary process, but on the secondary
processes we see some problems. In the following paragraphs I describe
the procedure used to attach/detach interfaces with DPDK.


For the attach procedure (all processes initially have no devices 
attached):


- Bind the devices we want to attach to the DPDK driver (with the
dpdk-devbind script, from an external process)


- Primary process: Call rte_eth_dev_attach

- Primary process: Configure ports using ...

- Secondary processes: Call rte_eth_dev_attach


Start to send/receive packets from all processes.


For the detach procedure:

- Secondary processes: For each port, call rte_eth_dev_stop(port), 
rte_eth_dev_close(port) and rte_eth_dev_detach(port, dev).


- Primary process: After the secondary processes have detached all their
ports, for each port call rte_eth_dev_stop(port),
rte_eth_dev_close(port) and rte_eth_dev_detach(port, dev).


- Bind the device back to the original Linux driver (with the dpdk-devbind
script, from an external process); a sketch of this call sequence follows
below.
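To make the attach and detach procedures above concrete, here is a minimal
sketch of the per-port call sequence against the DPDK 17.11 hotplug API;
the devargs string and the buffer size are illustrative, and error handling
is reduced to the minimum:

/* Sketch of the per-port sequence described above (DPDK 17.11 API).
 * Attach: run in the primary first, then in every secondary.
 * Detach: run in every secondary first, then in the primary. */
#include <rte_ethdev.h>

static int
attach_port(const char *devargs, uint16_t *port_id)
{
	/* e.g. devargs = "0000:03:00.0", after dpdk-devbind has bound it */
	return rte_eth_dev_attach(devargs, port_id);
}

static int
detach_port(uint16_t port_id)
{
	char devname[64];	/* buffer size chosen arbitrarily for the sketch */

	rte_eth_dev_stop(port_id);
	rte_eth_dev_close(port_id);
	return rte_eth_dev_detach(port_id, devname);
}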



With this approach we have noticed that when the secondary processes
call rte_dev_detach an error occurs, because it calls the remove
operation, which ends up calling eth_ixgbe_dev_uninit, which returns
-EPERM (because it does not allow a non-primary process to uninitialize
the driver).



Therefore, the port attach never works again on the secondary processes 
as the function rte_eal_hotplug_add fails because it cannot find the 
device.



dev = bus->find_device(NULL, cmp_detached_dev_name, devname);
    if (dev == NULL) {
    RTE_LOG(ERR, EAL, "Cannot find unplugged device (%s)\n",
    devname);
    ret = -ENODEV;
    goto err_devarg;
    }


This happens because, in order to find unplugged devices, the function
cmp_detached_dev_name checks whether the driver pointer is set and skips
the device if it is, and the detach procedure never sets dev->driver
back to NULL.


static int cmp_detached_dev_name(const struct rte_device *dev,
    const void *_name)
{
    const char *name = _name;

    /* skip attached devices */
    RTE_LOG(ERR, EAL, "cmp_detached_dev_name dev %p name %s driver %p"
    " search %s\n",
    dev, dev->name, dev->driver, name);
    if (dev->driver != NULL)
    return 1;

    return strcmp(dev->name, name);
}

To fix this behavior we have made the following changes to the DPDK code.

First, to prevent cmp_detached_dev_name from failing, rte_eal_dev_detach
now sets dev->driver to NULL.


diff --git a/lib/librte_eal/common/eal_common_dev.c 
b/lib/librte_eal/common/eal_common_dev.c


index dda8f5835..9a363dcf7 100644
--- a/lib/librte_eal/common/eal_common_dev.c
+++ b/lib/librte_eal/common/eal_common_dev.c
@@ -114,6 +114,7 @@ int rte_eal_dev_detach(struct rte_device *dev)
    if (ret)
    RTE_LOG(ERR, EAL, "Driver cannot detach the device (%s)\n",
    dev->name);
+   dev->driver = NULL;
    return ret;
 }


Then, in the rte_eth_dev_pci_generic_remove function, the call to
dev_uninit no longer treats -EPERM as an error, because when detaching a
port some drivers return 0 while others return -EPERM to indicate that
they were called from a secondary process.


diff --git a/lib/librte_ether/rte_ethdev_pci.h 
b/lib/librte_ether/rte_ethdev_pci.h
@@ -184,7 +184,7 @@ rte_eth_dev_pci_generic_remove(struct rte_pci_device 
*pci_dev,


    if (dev_uninit) {
    ret = dev_uninit(eth_dev);
-   if (ret)
+   if (ret && ret != -EPERM)
    return ret;
    }

Finally, in the rte_eth_dev_pci_release function, the fields in the
shared memory region are only reset when called from the primary process.


diff --git a/lib/librte_ether/rte_ethdev_pci.h b/lib/librte_ether/rte_ethdev_pci.h

index 722075e09..a79188fbf 100644
--- a/lib/librte_ether/rte_ethdev_pci.h
+++ b/lib/librte_ether/rte_ethdev_pci.h
@@ -125,16 +125,16 @@ rte_eth_dev_pci_release(struct rte_eth_dev *eth_dev)
    /* free ether device */
    rte_eth_dev_release_port(eth_dev);

-   if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+   if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
    rte_free(eth_dev->data->dev_private);
+   eth_dev->data->dev_private = NULL;

-   eth_dev->data->dev_private = NULL;
-
-   /*
-    * Secondary process will check the name to attach.
-    * Clear this field to avoid attaching a released ports.
-    */
-   eth_dev->data->name[0] = '\0';
+   /*
+    * Secondary 

Re: [dpdk-users] VF RSS availble in I350-T2?

2017-12-13 Thread ..
Hi, sorry, the "ignored" comment was a bit brash.

It's my first time posting, and I didn't see the email I sent come into my
inbox (I guess you don't get them sent to yourself), so I did wonder if it
posted OK. However, the list archives showed that it did.

Thanks
On Wed, 13 Dec 2017, 13:20 ..,  wrote:

> Hi Paul,
>
> No I didn't spot that.
>
> I guess my only option now is a 10 Gb card that supports it.
>
> Thanks.
>
> On Wed, 13 Dec 2017, 12:35 Paul Emmerich,  wrote:
>
>> Did you consult the datasheet? It says that the VF only supports one
>> queue.
>>
>> Paul
>>
>> > Am 12.12.2017 um 13:58 schrieb .. :
>> >
>> > I assume my message was ignored due to it not being related to dpdk
>> > software?
>> >
>> >
>> > On 11 December 2017 at 10:14, ..  wrote:
>> >
>> >> Hi,
>> >>
>> >> I have an intel I350-T2 which I use for SR-IOV, however, I am hitting
>> some
>> >> rx_dropped on the card when I start increasing traffic. (I have got
>> more
>> >> with the same software out of an identical bare metal system)
>> >>
>> >> I am using the Intel igb driver on Centos 7.2 (downloaded from Intel
>> not
>> >> the driver installed with Centos), so the RSS parameters amongst
>> others are
>> >> available to me
>> >>
>> >> This then led me to investigate the interrupts on the tx rx ring
>> buffers
>> >> and I noticed that the interface (vfs enabled) only had one tx/rx
>> queue. Its
>> >> distributed between   This is on the KVM Host
>> >>
>> >> CPU0   CPU1   CPU2   CPU3   CPU4
>> >> CPU5   CPU6   CPU7   CPU8
>> >> 100:  1 33137  0  0
>> >> 0  0  0  0 IR-PCI-MSI-edge  ens2f1
>> >> 101:   2224  0  0   6309 178807
>> >> 0  0  0  0 IR-PCI-MSI-edge
>> ens2f1-TxRx-0
>> >>
>> >> Looking at my standard nic ethernet ports I see 1 rx and 4 rx queues
>> >>
>> >> On the VM I only get one tx one rx queue ( I know all the interrupts
>> are
>> >> only using CPU0) but that is defined in our builds.
>> >>
>> >> egrep "CPU|ens11" /proc/interrupts
>> >>   CPU0   CPU1   CPU2   CPU3   CPU4
>> >> CPU5   CPU6   CPU7
>> >> 34:  715885552  0  0  0  0
>> >> 0  0  0  0   PCI-MSI-edge
>> ens11-tx-0
>> >> 35:  559402399  0  0  0  0
>> >> 0  0  0  0   PCI-MSI-edge
>> ens11-rx-0
>> >>
>> >> I activated RSS in my card, and can set it; however, if I use the param
>> >> max_vfs=n then it defaults back to 1 rx 1 tx queue per nic port
>> >>
>> >> [  392.833410] igb :07:00.0: Using MSI-X interrupts. 1 rx
>> queue(s), 1
>> >> tx queue(s)
>> >> [  393.035408] igb :07:00.1: Using MSI-X interrupts. 1 rx
>> queue(s), 1
>> >> tx queue(s)
>> >>
>> >> I have been reading some of the dpdk older posts and see that VF RSS is
>> >> implemented in some cards, does anybody know if its available in this
>> card
>> >> (from reading it only seemed the 10GB cards)
>> >>
>> >> One of my plans aside from trying to create more RSS per VM is to add
>> more
>> >> CPUS to the VM that are not isolated so that the rx and tx queues can
>> >> distribute their load a bit to see if this helps.
>> >>
>> >> Also is it worth investigating the VMDq options, however I understand
>> this
>> >> to be less useful than SR-IOV which works well for me with KVM.
>> >>
>> >>
>> >> Thanks in advance,
>> >>
>> >> Rolando
>> >>
>>
>> --
>> Chair of Network Architectures and Services
>> Department of Informatics
>> Technical University of Munich
>> Boltzmannstr. 3
>> 85748 Garching bei München, Germany
>>
>>
>>
>>
>>


Re: [dpdk-users] VF RSS availble in I350-T2?

2017-12-13 Thread ..
Hi Paul,

No I didn't spot that.

I guess my only option now is a 10 Gb card that supports it.

Thanks.

On Wed, 13 Dec 2017, 12:35 Paul Emmerich,  wrote:

> Did you consult the datasheet? It says that the VF only supports one queue.
>
> Paul
>
> > Am 12.12.2017 um 13:58 schrieb .. :
> >
> > I assume my message was ignored due to it not being related to dpdk
> > software?
> >
> >
> > On 11 December 2017 at 10:14, ..  wrote:
> >
> >> Hi,
> >>
> >> I have an intel I350-T2 which I use for SR-IOV, however, I am hitting
> some
> >> rx_dropped on the card when I start increasing traffic. (I have got more
> >> with the same software out of an identical bare metal system)
> >>
> >> I am using the Intel igb driver on Centos 7.2 (downloaded from Intel not
> >> the driver installed with Centos), so the RSS parameters amongst others
> are
> >> available to me
> >>
> >> This then led me to investigate the interrupts on the tx rx ring buffers
> >> and I noticed that the interface (vfs enabled) only had one tx/rx queue.
> Its
> >> distributed between   This is on the KVM Host
> >>
> >> CPU0   CPU1   CPU2   CPU3   CPU4
> >> CPU5   CPU6   CPU7   CPU8
> >> 100:  1 33137  0  0
> >> 0  0  0  0 IR-PCI-MSI-edge  ens2f1
> >> 101:   2224  0  0   6309 178807
> >> 0  0  0  0 IR-PCI-MSI-edge
> ens2f1-TxRx-0
> >>
> >> Looking at my standard nic ethernet ports I see 1 rx and 4 rx queues
> >>
> >> On the VM I only get one tx one rx queue ( I know all the interrupts are
> >> only using CPU0) but that is defined in our builds.
> >>
> >> egrep "CPU|ens11" /proc/interrupts
> >>   CPU0   CPU1   CPU2   CPU3   CPU4
> >> CPU5   CPU6   CPU7
> >> 34:  715885552  0  0  0  0
> >> 0  0  0  0   PCI-MSI-edge
> ens11-tx-0
> >> 35:  559402399  0  0  0  0
> >> 0  0  0  0   PCI-MSI-edge
> ens11-rx-0
> >>
> >> I activated RSS in my card, and can set it; however, if I use the param
> >> max_vfs=n then it defaults back to 1 rx 1 tx queue per nic port
> >>
> >> [  392.833410] igb :07:00.0: Using MSI-X interrupts. 1 rx queue(s),
> 1
> >> tx queue(s)
> >> [  393.035408] igb :07:00.1: Using MSI-X interrupts. 1 rx queue(s),
> 1
> >> tx queue(s)
> >>
> >> I have been reading some of the dpdk older posts and see that VF RSS is
> >> implemented in some cards, does anybody know if its available in this
> card
> >> (from reading it only seemed the 10GB cards)
> >>
> >> One of my plans aside from trying to create more RSS per VM is to add
> more
> >> CPUS to the VM that are not isolated so that the rx and tx queues can
> >> distribute their load a bit to see if this helps.
> >>
> >> Also is it worth investigating the VMDq options, however I understand
> this
> >> to be less useful than SR-IOV which works well for me with KVM.
> >>
> >>
> >> Thanks in advance,
> >>
> >> Rolando
> >>
>
> --
> Chair of Network Architectures and Services
> Department of Informatics
> Technical University of Munich
> Boltzmannstr. 3
> 85748 Garching bei München, Germany
>
>
>
>
>


Re: [dpdk-users] VF RSS availble in I350-T2?

2017-12-13 Thread Paul Emmerich
Did you consult the datasheet? It says that the VF only supports one queue.

Paul

> Am 12.12.2017 um 13:58 schrieb .. :
> 
> I assume my message was ignored due to it not being related to dpdk
> software?
> 
> 
> On 11 December 2017 at 10:14, ..  wrote:
> 
>> Hi,
>> 
>> I have an intel I350-T2 which I use for SR-IOV, however, I am hitting some
>> rx_dropped on the card when I start increasing traffic. (I have got more
>> with the same software out of an identical bare metal system)
>> 
>> I am using the Intel igb driver on Centos 7.2 (downloaded from Intel not
>> the driver installed with Centos), so the RSS parameters amongst others are
>> available to me
>> 
>> This then led me to investigate the interrupts on the tx rx ring buffers
>> and I noticed that the interface (vfs enabled) only had one tx/rx queue. Its
>> distributed between   This is on the KVM Host
>> 
>> CPU0   CPU1   CPU2   CPU3   CPU4
>> CPU5   CPU6   CPU7   CPU8
>> 100:  1 33137  0  0
>> 0  0  0  0 IR-PCI-MSI-edge  ens2f1
>> 101:   2224  0  0   6309 178807
>> 0  0  0  0 IR-PCI-MSI-edge  ens2f1-TxRx-0
>> 
>> Looking at my standard nic ethernet ports I see 1 rx and 4 rx queues
>> 
>> On the VM I only get one tx one rx queue ( I know all the interrupts are
>> only using CPU0) but that is defined in our builds.
>> 
>> egrep "CPU|ens11" /proc/interrupts
>>   CPU0   CPU1   CPU2   CPU3   CPU4
>> CPU5   CPU6   CPU7
>> 34:  715885552  0  0  0  0
>> 0  0  0  0   PCI-MSI-edge  ens11-tx-0
>> 35:  559402399  0  0  0  0
>> 0  0  0  0   PCI-MSI-edge  ens11-rx-0
>> 
>> I activated RSS in my card, and can set it; however, if I use the param
>> max_vfs=n then it defaults back to 1 rx 1 tx queue per nic port
>> 
>> [  392.833410] igb :07:00.0: Using MSI-X interrupts. 1 rx queue(s), 1
>> tx queue(s)
>> [  393.035408] igb :07:00.1: Using MSI-X interrupts. 1 rx queue(s), 1
>> tx queue(s)
>> 
>> I have been reading some of the dpdk older posts and see that VF RSS is
>> implemented in some cards, does anybody know if its available in this card
>> (from reading it only seemed the 10GB cards)
>> 
>> One of my plans aside from trying to create more RSS per VM is to add more
>> CPUS to the VM that are not isolated so that the rx and tx queues can
>> distribute their load a bit to see if this helps.
>> 
>> Also is it worth investigating the VMDq options, however I understand this
>> to be less useful than SR-IOV which works well for me with KVM.
>> 
>> 
>> Thanks in advance,
>> 
>> Rolando
>> 

-- 
Chair of Network Architectures and Services
Department of Informatics
Technical University of Munich
Boltzmannstr. 3
85748 Garching bei München, Germany 
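For readers wondering what the DPDK side of "activating RSS" looks like,
here is a hedged sketch of requesting RSS over several Rx queues; the queue
counts and hash fields are arbitrary example values, and per the datasheet
point above an I350 VF will still end up with a single queue pair:

/* Sketch: ask DPDK to spread Rx traffic over several queues with RSS.
 * Per-queue setup (rte_eth_rx_queue_setup() etc.) is omitted. */
#include <rte_ethdev.h>

static int
configure_rss(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_conf conf = {
		.rxmode = { .mq_mode = ETH_MQ_RX_RSS },
		.rx_adv_conf = {
			.rss_conf = { .rss_hf = ETH_RSS_IP | ETH_RSS_UDP },
		},
	};

	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}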






Re: [dpdk-users] VF RSS availble in I350-T2?

2017-12-13 Thread Thomas Monjalon
12/12/2017 13:58, ..:
> I assume my message was ignored due to it not being related to dpdk
> software?

It is ignored because people have not read it or are not experts in
this hardware.
I am CC'ing the maintainer of igb/e1000.


> On 11 December 2017 at 10:14, ..  wrote:
> 
> > Hi,
> >
> > I have an intel I350-T2 which I use for SR-IOV, however, I am hitting some
> > rx_dropped on the card when I start increasing traffic. (I have got more
> > with the same software out of an identical bare metal system)
> >
> > I am using the Intel igb driver on Centos 7.2 (downloaded from Intel not
> > the driver installed with Centos), so the RSS parameters amongst others are
> > available to me
> >
> > This then led me to investigate the interrupts on the tx rx ring buffers
> > and I noticed that the interface (vfs enabled) only had one tx/rx queue. Its
> > distributed between   This is on the KVM Host
> >
> >  CPU0   CPU1   CPU2   CPU3   CPU4
> > CPU5   CPU6   CPU7   CPU8
> >  100:  1 33137  0  0
> > 0  0  0  0 IR-PCI-MSI-edge  ens2f1
> >  101:   2224  0  0   6309 178807
> > 0  0  0  0 IR-PCI-MSI-edge  ens2f1-TxRx-0
> >
> > Looking at my standard nic ethernet ports I see 1 rx and 4 rx queues
> >
> > On the VM I only get one tx one rx queue ( I know all the interrupts are
> > only using CPU0) but that is defined in our builds.
> >
> > egrep "CPU|ens11" /proc/interrupts
> >CPU0   CPU1   CPU2   CPU3   CPU4
> > CPU5   CPU6   CPU7
> >  34:  715885552  0  0  0  0
> > 0  0  0  0   PCI-MSI-edge  ens11-tx-0
> >  35:  559402399  0  0  0  0
> > 0  0  0  0   PCI-MSI-edge  ens11-rx-0
> >
> > I activated RSS in my card, and can set it; however, if I use the param
> > max_vfs=n then it defaults back to 1 rx 1 tx queue per nic port
> >
> > [  392.833410] igb :07:00.0: Using MSI-X interrupts. 1 rx queue(s), 1
> > tx queue(s)
> > [  393.035408] igb :07:00.1: Using MSI-X interrupts. 1 rx queue(s), 1
> > tx queue(s)
> >
> > I have been reading some of the dpdk older posts and see that VF RSS is
> > implemented in some cards, does anybody know if its available in this card
> > (from reading it only seemed the 10GB cards)
> >
> > One of my plans aside from trying to create more RSS per VM is to add more
> > CPUS to the VM that are not isolated so that the rx and tx queues can
> > distribute their load a bit to see if this helps.
> >
> > Also is it worth investigating the VMDq options, however I understand this
> > to be less useful than SR-IOV which works well for me with KVM.
> >
> >
> > Thanks in advance,
> >
> > Rolando
> >
> 





Re: [dpdk-users] DPDK Performance tips

2017-12-13 Thread Thomas Monjalon
13/12/2017 09:14, Anand Prasad:
> Hi DPDK team,
> Can anyone please share tips to get better DPDK performance? I have tried to
> run DPDK test applications on various PC configurations with different CPU
> speeds, RAM sizes/speeds, and PCIe x16 2nd- and 3rd-generation connections,
> but I don't get very consistent results.
> The same test application does not behave the same on two PCs with the same
> hardware configuration. But the general observation is that performance is
> better on an Intel motherboard with a 3rd-generation PCIe slot. When tried on
> a Gigabyte motherboard (even with higher CPU and RAM speeds), performance was
> very poor.
> The performance issue I am facing is packet drops on the Rx side.
> Of two PCs with exactly the same hardware configuration, one drops packets
> after a few hours, but on the other I don't observe packet drops.
> I would highly appreciate a quick response.
> Regards, Anand Prasad

This is very complicated because it really depends on the hardware.
Managing performance requires a very good knowledge of the hardware.

You can find some basic advices in this guide for some Intel hardware:
http://dpdk.org/doc/guides/linux_gsg/nic_perf_intel_platform.html
You will also find some information in the driver guides. Example:
http://dpdk.org/doc/guides/nics/mlx5.html#performance-tuning
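As a hedged first step for the Rx-drop question itself (generic advice, not
taken from the guides above): check which counter grows with
rte_eth_stats_get(). A growing imissed usually means the NIC dropped
packets because the Rx ring filled up, while rx_nombuf points at an
exhausted mbuf pool. The port id below is just an example.

/* Sketch: dump the Rx-drop related counters for one port so the drop can
 * be attributed (ring full vs. mbuf pool exhausted vs. receive errors). */
#include <inttypes.h>
#include <stdio.h>

#include <rte_ethdev.h>

static void
print_rx_drop_counters(uint16_t port_id)
{
	struct rte_eth_stats st;

	if (rte_eth_stats_get(port_id, &st) != 0)
		return;

	printf("port %u: ipackets=%" PRIu64 " imissed=%" PRIu64
	       " ierrors=%" PRIu64 " rx_nombuf=%" PRIu64 "\n",
	       port_id, st.ipackets, st.imissed, st.ierrors, st.rx_nombuf);
}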



[dpdk-users] DPDK Performance tips

2017-12-13 Thread Anand Prasad
Hi DPDK team,
Can anyone please share tips to get better DPDK performance? I have tried to
run DPDK test applications on various PC configurations with different CPU
speeds, RAM sizes/speeds, and PCIe x16 2nd- and 3rd-generation connections,
but I don't get very consistent results.
The same test application does not behave the same on two PCs with the same
hardware configuration. But the general observation is that performance is
better on an Intel motherboard with a 3rd-generation PCIe slot. When tried on
a Gigabyte motherboard (even with higher CPU and RAM speeds), performance was
very poor.
The performance issue I am facing is packet drops on the Rx side.
Of two PCs with exactly the same hardware configuration, one drops packets
after a few hours, but on the other I don't observe packet drops.
I would highly appreciate a quick response.
Regards, Anand Prasad