Re: [lng-odp] unexpected packet handling behavior with dpdk pktio support

2018-02-07 Thread Elo, Matias (Nokia - FI/Espoo)
Did you try with the latest netmap master branch code? That seemed to work for 
me.

-Matias

Re: [lng-odp] unexpected packet handling behavior with dpdk pktio support

2018-02-07 Thread Ilias Apalodimas
netmap is not detaching the driver the way DPDK does.

I am guessing you'll have to edit the kernel driver if that config option is not available via sysfs/ethtool.


Regards
Ilias

Re: [lng-odp] unexpected packet handling behavior with dpdk pktio support

2018-02-07 Thread gyanesh patra
Is it possible to fix this for netmap too in a similar fashion?

P Gyanesh Kumar Patra

Re: [lng-odp] unexpected packet handling behavior with dpdk pktio support

2018-02-07 Thread Elo, Matias (Nokia - FI/Espoo)
The PR is now available: https://github.com/Linaro/odp/pull/458

-Matias

Re: [lng-odp] unexpected packet handling behavior with dpdk pktio support

2018-02-07 Thread gyanesh patra
This patch works on Intel X540-AT2 NICs too.

P Gyanesh Kumar Patra

Re: [lng-odp] unexpected packet handling behavior with dpdk pktio support

2018-02-07 Thread Bill Fischofer
Thanks, Matias. Please open a bug for this and reference it in the fix.

Re: [lng-odp] unexpected packet handling behavior with dpdk pktio support

2018-02-07 Thread Elo, Matias (Nokia - FI/Espoo)
Hi,

I actually just figured out the problem. On e.g. Niantic NICs, rte_eth_rxconf.rx_drop_en has to be enabled for the NIC to continue working properly when not all RX queues are being emptied. The following patch fixes the problem for me:

diff --git a/platform/linux-generic/pktio/dpdk.c b/platform/linux-generic/pktio/dpdk.c
index bd6920e..fc535e3 100644
--- a/platform/linux-generic/pktio/dpdk.c
+++ b/platform/linux-generic/pktio/dpdk.c
@@ -1402,6 +1402,7 @@ static int dpdk_open(odp_pktio_t id ODP_UNUSED,
 
 static int dpdk_start(pktio_entry_t *pktio_entry)
 {
+   struct rte_eth_dev_info dev_info;
pkt_dpdk_t *pkt_dpdk = &pktio_entry->s.pkt_dpdk;
uint8_t port_id = pkt_dpdk->port_id;
int ret;
@@ -1420,7 +1421,6 @@ static int dpdk_start(pktio_entry_t *pktio_entry)
}
/* Init TX queues */
for (i = 0; i < pktio_entry->s.num_out_queue; i++) {
-   struct rte_eth_dev_info dev_info;
const struct rte_eth_txconf *txconf = NULL;
int ip_ena  = pktio_entry->s.config.pktout.bit.ipv4_chksum_ena;
int udp_ena = pktio_entry->s.config.pktout.bit.udp_chksum_ena;
@@ -1470,9 +1470,14 @@ static int dpdk_start(pktio_entry_t *pktio_entry)
}
/* Init RX queues */
for (i = 0; i < pktio_entry->s.num_in_queue; i++) {
+   struct rte_eth_rxconf *rxconf = NULL;
+
+   rte_eth_dev_info_get(port_id, &dev_info);
+   rxconf = &dev_info.default_rxconf;
+   rxconf->rx_drop_en = 1;
ret = rte_eth_rx_queue_setup(port_id, i, DPDK_NM_RX_DESC,
 rte_eth_dev_socket_id(port_id),
-NULL, pkt_dpdk->pkt_pool);
+rxconf, pkt_dpdk->pkt_pool);
if (ret < 0) {
ODP_ERR("Queue setup failed: err=%d, port=%" PRIu8 "\n",
ret, port_id);

I'll test it a bit more for performance effects and then send a fix PR.

-Matias
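
For reference, below is a minimal standalone DPDK sketch of the same idea. It is not the ODP code above: the helper name and its parameters are invented for illustration, and it assumes the caller has already done the usual rte_eal_init()/rte_eth_dev_configure()/mempool setup.

#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_mempool.h>

/* Hypothetical helper: set up one RX queue with rx_drop_en enabled so the
 * NIC drops packets when this queue's descriptors run out instead of
 * stalling the whole port (the 82599/Niantic behaviour discussed above). */
static int setup_rx_queue_with_drop(uint16_t port_id, uint16_t queue_id,
                                    uint16_t nb_desc, struct rte_mempool *pool)
{
        struct rte_eth_dev_info dev_info;
        struct rte_eth_rxconf rxconf;

        /* Start from the driver's default RX queue configuration */
        rte_eth_dev_info_get(port_id, &dev_info);
        rxconf = dev_info.default_rxconf;

        /* Drop on this queue when it is full, instead of blocking RX */
        rxconf.rx_drop_en = 1;

        return rte_eth_rx_queue_setup(port_id, queue_id, nb_desc,
                                      rte_eth_dev_socket_id(port_id),
                                      &rxconf, pool);
}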



Re: [lng-odp] unexpected packet handling behavior with dpdk pktio support

2018-02-07 Thread Elo, Matias (Nokia - FI/Espoo)
I'm currently trying to figure out what's happening. I'll report back when I 
find out something.

-Matias


Re: [lng-odp] unexpected packet handling behavior with dpdk pktio support

2018-02-07 Thread gyanesh patra
Do you have any theory for the issue on the 82599 (Niantic) NIC, and why it might be working on the Intel XL710 (Fortville)? Can I identify new hardware without this issue by looking at the datasheets/specs?
Thanks for the insight.

P Gyanesh Kumar Patra

Re: [lng-odp] unexpected packet handling behavior with dpdk pktio support

2018-02-07 Thread Elo, Matias (Nokia - FI/Espoo)
I was unable to reproduce this with the Intel XL710 (Fortville), but with the 82599 (Niantic) l2fwd operates as you have described. This may be a NIC HW limitation, since the same issue is also observed with the netmap pktio.

-Matias


Re: [lng-odp] unexpected packet handling behavior with dpdk pktio support

2018-02-07 Thread gyanesh patra
Thanks for the info. I verified this with both ODP 1.16 and ODP 1.17, with the same behavior.
The traffic consists of different MAC and IP addresses.
Without the busy loop, I could see that all the threads were receiving packets, so I think packet distribution is not the issue. In our case, we are sending packets at the line rate of a 10G interface. That might be causing this behaviour.

If I can provide any other info, let me know.

Thanks

Gyanesh

Re: [lng-odp] unexpected packet handling behavior with dpdk pktio support

2018-02-06 Thread Elo, Matias (Nokia - FI/Espoo)
Hi Gyanesh,

I tested the patch on my system and everything seems to work as expected. Based on the log, you're not running the latest code (v1.17.0), but I doubt that is the issue here.

What kind of test traffic are you using? The l2fwd example uses IPv4 addresses and UDP ports to do the input hashing. If the test packets are identical, they will all end up in the same input queue, which would explain what you are seeing.

-Matias
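
To make the hashing point concrete: receive-side hashing maps a packet's flow fields (e.g. IPv4 addresses and UDP ports) to a queue index, so identical packets always land in the same queue and only one worker sees traffic. The toy function below is only an illustration of that principle, not ODP's or the NIC's actual hash (real NICs typically use a Toeplitz RSS hash).

#include <stdint.h>

/* Toy flow hash: maps an IPv4/UDP 4-tuple to an RX queue index.
 * Identical tuples always produce the same index; varying the
 * addresses/ports spreads packets across queues. */
static uint32_t flow_to_queue(uint32_t src_ip, uint32_t dst_ip,
                              uint16_t src_port, uint16_t dst_port,
                              uint32_t num_queues)
{
        uint32_t h = 2166136261u;      /* FNV-1a style mixing */

        h = (h ^ src_ip)   * 16777619u;
        h = (h ^ dst_ip)   * 16777619u;
        h = (h ^ src_port) * 16777619u;
        h = (h ^ dst_port) * 16777619u;

        return h % num_queues;
}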


> On 6 Feb 2018, at 19:00, gyanesh patra  wrote:
> 
> Hi,
> I tried with netmap, dpdk and dpdk with zero-copy enabled. All of them have 
> the same behaviour. I also tried with (200*2048) as packet pool size without 
> any success.
> I am attaching the patch for test/performance/odp_l2fwd example here to 
> demonstrate the behaviour. Also find the output of the example below:
> 
> root@india:~/pktio/odp_ipv6/test/performance# ./odp_l2fwd -i 0,1
> HW time counter freq: 2094954892 hz
> 
> PKTIO: initialized loop interface.
> PKTIO: initialized dpdk pktio, use export ODP_PKTIO_DISABLE_DPDK=1 to disable.
> PKTIO: initialized pcap interface.
> PKTIO: initialized ipc interface.
> PKTIO: initialized socket mmap, use export ODP_PKTIO_DISABLE_SOCKET_MMAP=1 to 
> disable.
> PKTIO: initialized socket mmsg,use export ODP_PKTIO_DISABLE_SOCKET_MMSG=1 to 
> disable.
> 
> ODP system info
> ---
> ODP API version: 1.16.0
> ODP impl name:   "odp-linux"
> CPU model:   Intel(R) Xeon(R) CPU E5-2620 v2
> CPU freq (hz):   26
> Cache line size: 64
> CPU count:   12
> 
> 
> CPU features supported:
> SSE3 PCLMULQDQ DTES64 MONITOR DS_CPL VMX SMX EIST TM2 SSSE3 CMPXCHG16B XTPR 
> PDCM PCID DCA SSE4_1 SSE4_2 X2APIC POPCNT TSC_DEADLINE AES XSAVE OSXSAVE AVX 
> F16C RDRAND FPU VME DE PSE TSC MSR PAE MCE CX8 APIC SEP MTRR PGE MCA CMOV PAT 
> PSE36 CLFSH DS ACPI MMX FXSR SSE SSE2 SS HTT TM PBE DIGTEMP ARAT PLN ECMD PTM 
> MPERF_APERF_MSR ENERGY_EFF FSGSBASE BMI2 LAHF_SAHF SYSCALL XD 1GB_PG RDTSCP 
> EM64T INVTSC 
> 
> CPU features NOT supported:
> CNXT_ID FMA MOVBE PSN TRBOBST ACNT2 BMI1 HLE AVX2 SMEP ERMS INVPCID RTM 
> AVX512F LZCNT 
> 
> Running ODP appl: "odp_l2fwd"
> -
> IF-count:2
> Using IFs:   0 1
> Mode:PKTIN_DIRECT, PKTOUT_DIRECT
> 
> num worker threads: 10
> first CPU:  2
> cpu mask:   0xFFC
> 
> 
> Pool info
> -
>   pool0
>   namepacket pool
>   pool type   packet
>   pool shm11
>   user area shm   0
>   num 8192
>   align   64
>   headroom128
>   seg len 8064
>   max data len65536
>   tailroom0
>   block size  8896
>   uarea size  0
>   shm size73196288
>   base addr   0x7f566940
>   uarea shm size  0
>   uarea base addr (nil)
> 
> EAL: Detected 12 lcore(s)
> EAL: No free hugepages reported in hugepages-1048576kB
> EAL: Probing VFIO support...
> EAL: PCI device :03:00.0 on NUMA socket 0
> EAL:   probe driver: 8086:10fb net_ixgbe
> EAL: PCI device :03:00.1 on NUMA socket 0
> EAL:   probe driver: 8086:10fb net_ixgbe
> EAL: PCI device :05:00.0 on NUMA socket 0
> EAL:   probe driver: 8086:1528 net_ixgbe
> EAL: PCI device :05:00.1 on NUMA socket 0
> EAL:   probe driver: 8086:1528 net_ixgbe
> EAL: PCI device :0a:00.0 on NUMA socket 0
> EAL:   probe driver: 8086:1521 net_e1000_igb
> EAL: PCI device :0a:00.1 on NUMA socket 0
> EAL:   probe driver: 8086:1521 net_e1000_igb
> EAL: PCI device :0c:00.0 on NUMA socket 0
> EAL:   probe driver: 8086:10d3 net_e1000_em
> created pktio 1, dev: 0, drv: dpdk
> created 5 input and 5 output queues on (0)
> created pktio 2, dev: 1, drv: dpdk
> created 5 input and 5 output queues on (1)
> 
> Queue binding (indexes)
> ---
> worker 0
>   rx: pktio 0, queue 0
>   tx: pktio 1, queue 0
> worker 1
>   rx: pktio 1, queue 0
>   tx: pktio 0, queue 0
> worker 2
>   rx: pktio 0, queue 1
>   tx: pktio 1, queue 1
> worker 3
>   rx: pktio 1, queue 1
>   tx: pktio 0, queue 1
> worker 4
>   rx: pktio 0, queue 2
>   tx: pktio 1, queue 2
> worker 5
>   rx: pktio 1, queue 2
>   tx: pktio 0, queue 2
> worker 6
>   rx: pktio 0, queue 3
>   tx: pktio 1, queue 3
> worker 7
>   rx: pktio 1, queue 3
>   tx: pktio 0, queue 3
> worker 8
>   rx: pktio 0, queue 4
>   tx: pktio 1, queue 4
> worker 9
>   rx: pktio 1, queue 4
>   tx: pktio 0, queue 4
> 
> 
> Port config
> 
> Port 0 (0)
>   rx workers 5
>   tx workers 5
>   rx queues 5
>   tx queues 5
> Port 1 (1)
>   rx workers 5
>   tx workers 5
>   rx queues 5
>   tx queues 5
> 
> [01] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
> [02] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
> [03] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
> [04] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
> [05] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
> [06] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
> [07] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
> [08] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT

Re: [lng-odp] unexpected packet handling behavior with dpdk pktio support

2018-02-06 Thread gyanesh patra
We are using Intel NICs: X540-AT2 (10G)



P Gyanesh Kumar Patra

On Tue, Feb 6, 2018 at 3:08 PM, Ilias Apalodimas <ilias.apalodi...@linaro.org> wrote:

> Hello,
>
> Haven't seen any reference to the hardware you are using, sorry if I
> missed it. What kind of NIC are you using for the tests?
>
> Regards
> Ilias
> > [05] num pktios 1, 

Re: [lng-odp] unexpected packet handling behavior with dpdk pktio support

2018-02-06 Thread Ilias Apalodimas
Hello,

I haven't seen any reference to the hardware you are using, sorry if I
missed it. What kind of NIC are you using for the tests?

Regards
Ilias


Re: [lng-odp] unexpected packet handling behavior with dpdk pktio support

2018-02-06 Thread gyanesh patra
Hi,
I tried with netmap, dpdk, and dpdk with zero-copy enabled. All of them show
the same behaviour. I also tried with (200*2048) as the packet pool size,
without any success.
I am attaching a patch for the test/performance/odp_l2fwd example here to
demonstrate the behaviour. Also find the output of the example below:

root@india:~/pktio/odp_ipv6/test/performance# ./odp_l2fwd -i 0,1
HW time counter freq: 2094954892 hz

PKTIO: initialized loop interface.
PKTIO: initialized dpdk pktio, use export ODP_PKTIO_DISABLE_DPDK=1 to
disable.
PKTIO: initialized pcap interface.
PKTIO: initialized ipc interface.
PKTIO: initialized socket mmap, use export ODP_PKTIO_DISABLE_SOCKET_MMAP=1
to disable.
PKTIO: initialized socket mmsg,use export ODP_PKTIO_DISABLE_SOCKET_MMSG=1
to disable.

ODP system info
---
ODP API version: 1.16.0
ODP impl name:   "odp-linux"
CPU model:   Intel(R) Xeon(R) CPU E5-2620 v2
CPU freq (hz):   26
Cache line size: 64
CPU count:   12


CPU features supported:
SSE3 PCLMULQDQ DTES64 MONITOR DS_CPL VMX SMX EIST TM2 SSSE3 CMPXCHG16B XTPR
PDCM PCID DCA SSE4_1 SSE4_2 X2APIC POPCNT TSC_DEADLINE AES XSAVE OSXSAVE
AVX F16C RDRAND FPU VME DE PSE TSC MSR PAE MCE CX8 APIC SEP MTRR PGE MCA
CMOV PAT PSE36 CLFSH DS ACPI MMX FXSR SSE SSE2 SS HTT TM PBE DIGTEMP ARAT
PLN ECMD PTM MPERF_APERF_MSR ENERGY_EFF FSGSBASE BMI2 LAHF_SAHF SYSCALL XD
1GB_PG RDTSCP EM64T INVTSC

CPU features NOT supported:
CNXT_ID FMA MOVBE PSN TRBOBST ACNT2 BMI1 HLE AVX2 SMEP ERMS INVPCID RTM
AVX512F LZCNT

Running ODP appl: "odp_l2fwd"
-
IF-count:    2
Using IFs:   0 1
Mode:        PKTIN_DIRECT, PKTOUT_DIRECT

num worker threads: 10
first CPU:  2
cpu mask:   0xFFC


Pool info
-
  pool            0
  name            packet pool
  pool type       packet
  pool shm        11
  user area shm   0
  num             8192
  align           64
  headroom        128
  seg len         8064
  max data len    65536
  tailroom        0
  block size      8896
  uarea size      0
  shm size        73196288
  base addr       0x7f566940
  uarea shm size  0
  uarea base addr (nil)

EAL: Detected 12 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 8086:10fb net_ixgbe
EAL: PCI device 0000:03:00.1 on NUMA socket 0
EAL:   probe driver: 8086:10fb net_ixgbe
EAL: PCI device 0000:05:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1528 net_ixgbe
EAL: PCI device 0000:05:00.1 on NUMA socket 0
EAL:   probe driver: 8086:1528 net_ixgbe
EAL: PCI device 0000:0a:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1521 net_e1000_igb
EAL: PCI device 0000:0a:00.1 on NUMA socket 0
EAL:   probe driver: 8086:1521 net_e1000_igb
EAL: PCI device 0000:0c:00.0 on NUMA socket 0
EAL:   probe driver: 8086:10d3 net_e1000_em
created pktio 1, dev: 0, drv: dpdk
created 5 input and 5 output queues on (0)
created pktio 2, dev: 1, drv: dpdk
created 5 input and 5 output queues on (1)

Queue binding (indexes)
---
worker 0
  rx: pktio 0, queue 0
  tx: pktio 1, queue 0
worker 1
  rx: pktio 1, queue 0
  tx: pktio 0, queue 0
worker 2
  rx: pktio 0, queue 1
  tx: pktio 1, queue 1
worker 3
  rx: pktio 1, queue 1
  tx: pktio 0, queue 1
worker 4
  rx: pktio 0, queue 2
  tx: pktio 1, queue 2
worker 5
  rx: pktio 1, queue 2
  tx: pktio 0, queue 2
worker 6
  rx: pktio 0, queue 3
  tx: pktio 1, queue 3
worker 7
  rx: pktio 1, queue 3
  tx: pktio 0, queue 3
worker 8
  rx: pktio 0, queue 4
  tx: pktio 1, queue 4
worker 9
  rx: pktio 1, queue 4
  tx: pktio 0, queue 4


Port config

Port 0 (0)
  rx workers 5
  tx workers 5
  rx queues 5
  tx queues 5
Port 1 (1)
  rx workers 5
  tx workers 5
  rx queues 5
  tx queues 5

[01] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
[02] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
[03] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
[04] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
[05] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
[06] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
[07] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
[08] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
[09] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
[10] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
0 pps, 0 max pps,  0 rx drops, 0 tx drops
0 pps, 0 max pps,  0 rx drops, 0 tx drops
0 pps, 0 max pps,  0 rx drops, 0 tx drops
0 pps, 0 max pps,  0 rx drops, 0 tx drops
0 pps, 0 max pps,  0 rx drops, 0 tx drops
1396 pps, 1396 max pps,  0 rx drops, 0 tx drops
0 pps, 1396 max pps,  0 rx drops, 0 tx drops
0 pps, 1396 max pps,  0 rx drops, 0 tx drops
0 pps, 1396 max pps,  0 rx drops, 0 tx drops
0 pps, 1396 max pps,  0 rx drops, 0 tx drops
0 pps, 1396 max pps,  0 rx drops, 0 tx drops
0 pps, 1396 max pps,  0 rx drops, 0 tx drops
0 pps, 1396 max pps,  0 rx drops, 0 tx drops
0 pps, 1396 max pps,  0 rx drops, 0 tx drops
0 pps, 1396 max pps,  0 rx drops, 0 tx drops
^C0 pps, 1396 max pps,  0 rx drops, 0 tx drops
TEST RESULT: 1396 

Re: [lng-odp] unexpected packet handling behavior with dpdk pktio support

2018-02-06 Thread gyanesh patra
Hi Bogdan,
Yes, I agree, it looks like that. But I thought that if we don't receive
packets (odp_pktin_recv()), the packets would simply be dropped at the NIC
queues once the RX buffers fill up. In that scenario, the other queue should
continue to work. Maybe I am not aware of the ODP side of the implementation.

In any case, is this the expected behaviour?

Can we disable Rx or Tx on a specific queue instead of the whole PKTIO? More
importantly, how much can we do at run time instead of bringing down the
pktio entirely?
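
Concretely, what I would like to avoid is having to do something like the
following every time (just a sketch with a made-up helper name, and assuming
the implementation lets a stopped pktio be reconfigured):

#include <odp_api.h>

/* Hypothetical sketch: as far as I can tell the RX queue count can only be
 * changed while the pktio is stopped, there is no per-queue enable/disable. */
static int shrink_to_one_rx_queue(odp_pktio_t pktio)
{
	odp_pktin_queue_param_t param;

	if (odp_pktio_stop(pktio))          /* stops RX/TX on every queue */
		return -1;

	odp_pktin_queue_param_init(&param);
	param.num_queues = 1;               /* e.g. go from 2 queues to 1 */

	if (odp_pktin_queue_config(pktio, &param))
		return -1;

	return odp_pktio_start(pktio);      /* resume with the new layout */
}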

Thanks,

P Gyanesh Kumar Patra



Re: [lng-odp] unexpected packet handling behavior with dpdk pktio support

2018-02-06 Thread Bogdan Pricope
The explanation may be related to RSS.

The DPDK pktio uses RSS: traffic is hashed and sent to a specific queue. You
have two RX queues (pktin) that are polled with odp_pktin_recv(). If you stop
polling on one queue (put one of the threads into a busy loop or sleep()), it
does not mean that the other queue will take the entire traffic: I do not know
DPDK that well, but I suspect that a number of packets are held on that pktin
and the pool gets exhausted.
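
For reference, the RX side being discussed is set up roughly like this in ODP
(a sketch with made-up names, assuming ODP_PKTIN_MODE_DIRECT and ignoring
error handling):

#include <odp_api.h>

#define BURST 32

/* Sketch: two direct RX queues with the hash enabled, so the NIC spreads
 * flows across them; each worker polls its own queue handle. */
static void setup_two_hashed_rx_queues(odp_pktio_t pktio)
{
	odp_pktin_queue_param_t pktin_param;
	odp_pktin_queue_t inq[2];
	odp_packet_t pkts[BURST];

	odp_pktin_queue_param_init(&pktin_param);
	pktin_param.num_queues  = 2;               /* one pktin per worker   */
	pktin_param.hash_enable = 1;               /* RSS-style flow hashing */
	pktin_param.hash_proto.proto.ipv4_udp = 1; /* hash on IPv4/UDP tuple */

	odp_pktin_queue_config(pktio, &pktin_param);
	odp_pktin_queue(pktio, inq, 2);            /* fetch both queue handles */

	/* Worker 0 then polls only inq[0]; if it stops calling this, the
	 * flows hashed to inq[0] back up instead of moving to inq[1]. */
	(void)odp_pktin_recv(inq[0], pkts, BURST);
}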

/B



Re: [lng-odp] unexpected packet handling behavior with dpdk pktio support

2018-02-06 Thread Elo, Matias (Nokia - FI/Espoo)



Using a too-small packet pool can also cause symptoms like this, so you could
try increasing the packet pool size.
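
For example, something along these lines (the numbers are placeholders only,
not a recommendation):

#include <odp_api.h>

/* Sketch: a larger packet pool. The right size depends on the RX ring depth,
 * queue count and burst size. */
static odp_pool_t create_bigger_pool(void)
{
	odp_pool_param_t params;

	odp_pool_param_init(&params);
	params.type        = ODP_POOL_PACKET;
	params.pkt.num     = 32 * 1024;   /* buffers in the pool           */
	params.pkt.len     = 1536;        /* guaranteed packet data length */
	params.pkt.seg_len = 1536;        /* first-segment length hint     */

	return odp_pool_create("packet pool", &params);
}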

-Matias
 

Re: [lng-odp] unexpected packet handling behavior with dpdk pktio support

2018-02-06 Thread Elo, Matias (Nokia - FI/Espoo)



Hi Gyanesh,

Could you please share example code which reproduces this issue? Does this
also happen if you enable the zero-copy DPDK pktio (--enable-dpdk-zero-copy)?

The socket-mmap pktio doesn't support multi-queue (MQ), so a comparison to
that doesn't say much. The netmap pktio does support MQ.

Regards,
Matias



Re: [lng-odp] unexpected packet handling behavior with dpdk pktio support

2018-02-05 Thread Bill Fischofer
Thanks, Gyanesh, that does sound like a bug. +cc Matias: Can you comment on
this?



[lng-odp] unexpected packet handling behavior with dpdk pktio support

2018-02-05 Thread gyanesh patra
I am testing an l2fwd use-case. I am executing the use-case with two CPUs and
two interfaces.
One interface with 2 Rx queues receives packets using 2 threads on 2
associated CPUs. Both threads forward the packets over the 2nd interface,
which also has 2 Tx queues mapped to the same 2 CPUs. I am sending packets
from an external packet generator and confirmed that both queues are
receiving packets.
When I run odp_pktin_recv() on both queues, the packet forwarding works fine.
But if I put a sleep() or a busy loop in place of odp_pktin_recv() on one
thread, then the other thread stops receiving packets. If I replace the sleep
with odp_pktin_recv(), both queues start receiving packets again. I
encountered this problem with the DPDK pktio support on ODP 1.16 and ODP 1.17.
With socket-mmap it works fine. Is this expected behavior or a potential bug?
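
Each worker thread essentially runs the loop below (a simplified sketch, not
my exact code):

#include <odp_api.h>

#define MAX_BURST 32

/* Simplified per-worker loop: each thread owns one RX queue (inq) on the
 * first interface and one TX queue (outq) on the second one. */
static void worker_loop(odp_pktin_queue_t inq, odp_pktout_queue_t outq)
{
	odp_packet_t pkts[MAX_BURST];

	while (1) {
		int num = odp_pktin_recv(inq, pkts, MAX_BURST);

		if (num <= 0)
			continue;

		int sent = odp_pktout_send(outq, pkts, num);

		if (sent < 0)
			sent = 0;
		if (sent < num)   /* drop whatever could not be sent */
			odp_packet_free_multi(&pkts[sent], num - sent);

		/* Replacing the odp_pktin_recv() call above with sleep(1) in
		 * only ONE thread is what makes the other thread stop
		 * receiving packets. */
	}
}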

Thanks & Regards,
Gyanesh Patra
PhD Candidate
Unicamp University