Re: [lng-odp] Suspected SPAM - Re: odp with dpdk pktio gives error with larger packets - 'Segmented buffers not supported'

2018-10-18 Thread Elo, Matias (Nokia - FI/Espoo)
Thanks! I think I figured out the problem. Some DPDK NICs require that the
buffer length is at least 2kB + headroom to avoid segmenting standard Ethernet
frames. This PR should fix the issue: https://github.com/Linaro/odp/pull/731 .
Please let me know if this fixes your problem.
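
For reference, a minimal sketch of the pool setup this implies (the 2200 value
below is the one that worked in this thread, not an official constant):

    odp_pool_param_t params;
    odp_pool_t pool;

    odp_pool_param_init(&params);
    params.type        = ODP_POOL_PACKET;
    params.pkt.len     = 1518; /* largest expected frame */
    params.pkt.seg_len = 2200; /* >= 2kB + headroom, see above */
    params.pkt.num     = 8192; /* illustrative pool size */
    pool = odp_pool_create("pkt_pool", &params);
    if (pool == ODP_POOL_INVALID)
        return -1; /* pool creation failed */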

-Matias


On 17 Oct 2018, at 10:23, Elo, Matias (Nokia - FI/Espoo) 
<matias@nokia.com> wrote:

Hi Gyanesh,

Could you please provide some additional information about your system (ODP & 
DPDK versions, NICs)? I’m using DPDK (v17.11.4) zero-copy pktio with the latest 
ODP master branch code and I’m unable to reproduce this issue.

Regards,
Matias

On 16 Oct 2018, at 23:44, gyanesh patra 
<pgyanesh.pa...@gmail.com> wrote:

Hi Maxim,
Increasing the POOL_SEG_LEN worked, but I am not sure how to calculate the
necessary value to use. I was using the values from the odp_l2fwd example
before, but now I needed to increase it up to 2200 for it to work.
Is there any guideline on how to calculate this value? And does it have
any impact on performance?

Regarding the examples, I tried with odp_l2fwd_simple and odp_switch and
faced the same problem. But in my case the "odp_l2fwd" example never receives
any packets, hence I have not been able to test it. If you can give any
input regarding this, it will be helpful too.
Thanks for your help.

Regards,
P Gyanesh Kumar Patra


On Tue, Oct 16, 2018 at 3:36 PM Maxim Uvarov 
<maxim.uva...@linaro.org>
wrote:

DPDK, like ODP, can have packets which are not in physically contiguous
memory, i.e. a packet can be split over several memory segments. That is not
supported by the current code, which is why you get this warning. I think we
have a dpdk pktio validation test and it works with large packets. But for
that you need to be sure that you created the pool with the right parameters.
In your case POOL_SEG_LEN has to be increased.
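
As a quick sanity check you can also see at runtime whether a received packet
ended up segmented, e.g. with the standard packet API (sketch):

    /* A packet spanning more than one segment is what triggers the
     * zero-copy warning above. */
    if (odp_packet_num_segs(pkt) > 1)
        printf("packet of %u bytes spans %i segments\n",
               odp_packet_len(pkt), odp_packet_num_segs(pkt));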

Also you can try more featured example: ./test/performance/odp_l2fwd

Best Regards,
Maxim.


On Tue, 16 Oct 2018 at 20:49, gyanesh patra 
<pgyanesh.pa...@gmail.com>
wrote:

Hi,
I am facing a problem while using the ODP master branch with DPDK pktio &
zero-pkt-copy, as below:

ODP/bin/# ./odp_l2fwd_simple ./odp_l2fwd_simple

pktio/dpdk.c:851:mbuf_to_pkt_zero():Segmented buffers not supported
pktio/dpdk.c:851:mbuf_to_pkt_zero():Segmented buffers not supported
pktio/dpdk.c:851:mbuf_to_pkt_zero():Segmented buffers not supported
pktio/dpdk.c:851:mbuf_to_pkt_zero():Segmented buffers not supported

This error is present for dpdk pktio only. It appears with larger packet
sizes like 1518 bytes and 1280 bytes, but everything works fine with
1024 bytes and smaller packets.

I have verified that the packets have the IP don't-fragment flag set, and
Wireshark doesn't show any abnormality in the pcap.
Is it broken, or do we need to specify some extra flags?

I am on:
commit 570758a22fd0d6e2b2a73eb8ed0a8360a5b0ef32
Author: Matias Elo <matias@nokia.com>
Date:   Tue Oct 2 14:13:35 2018 +0300
 linux-gen: ring: allocate global data from shm


Thanks,
P Gyanesh Kumar Patra






Re: [lng-odp] odp with dpdk pktio gives error with larger packets - 'Segmented buffers not supported'

2018-10-17 Thread Elo, Matias (Nokia - FI/Espoo)
Hi Gyanesh,

Could you please provide some additional information about your system (ODP & 
DPDK versions, NICs)? I’m using DPDK (v17.11.4) zero-copy pktio with the latest 
ODP master branch code and I’m unable to reproduce this issue.

Regards,
Matias

> On 16 Oct 2018, at 23:44, gyanesh patra  wrote:
> 
> Hi Maxim,
> Increasing the POOL_SEG_LEN worked, but I am not sure how to calculate the
> necessary value to use. I was using the values from the odp_l2fwd example
> before, but now I needed to increase it up to 2200 for it to work.
> Is there any guideline on how to calculate this value? And does it have
> any impact on performance?
> 
> Regarding the examples, I tried with odp_l2fwd_simple and odp_switch and
> faced the same problem. But in my case the "odp_l2fwd" example never receives
> any packets, hence I have not been able to test it. If you can give any
> input regarding this, it will be helpful too.
> Thanks for your help.
> 
> Regards,
> P Gyanesh Kumar Patra
> 
> 
> On Tue, Oct 16, 2018 at 3:36 PM Maxim Uvarov 
> wrote:
> 
>> DPDK, like ODP, can have packets which are not in physically contiguous
>> memory, i.e. a packet can be split over several memory segments. That is not
>> supported by the current code, which is why you get this warning. I think we
>> have a dpdk pktio validation test and it works with large packets. But for
>> that you need to be sure that you created the pool with the right parameters.
>> In your case POOL_SEG_LEN has to be increased.
>> 
>> Also you can try more featured example: ./test/performance/odp_l2fwd
>> 
>> Best Regards,
>> Maxim.
>> 
>> 
>> On Tue, 16 Oct 2018 at 20:49, gyanesh patra 
>> wrote:
>> 
>>> Hi,
>>> I am facing a problem while using the ODP master branch with DPDK pktio &
>>> zero-pkt-copy, as below:
>>> 
>>> ODP/bin/# ./odp_l2fwd_simple ./odp_l2fwd_simple
>>> 
>>> pktio/dpdk.c:851:mbuf_to_pkt_zero():Segmented buffers not supported
>>> pktio/dpdk.c:851:mbuf_to_pkt_zero():Segmented buffers not supported
>>> pktio/dpdk.c:851:mbuf_to_pkt_zero():Segmented buffers not supported
>>> pktio/dpdk.c:851:mbuf_to_pkt_zero():Segmented buffers not supported
>>> 
>>> This error is present for dpdk pktio only. It appears with larger packet
>>> sizes like 1518 bytes and 1280 bytes, but everything works fine with
>>> 1024 bytes and smaller packets.
>>> 
>>> I have verified that the packets have the IP don't-fragment flag set, and
>>> Wireshark doesn't show any abnormality in the pcap.
>>> Is it broken, or do we need to specify some extra flags?
>>> 
>>> I am on:
>>> commit 570758a22fd0d6e2b2a73eb8ed0a8360a5b0ef32
>>> Author: Matias Elo 
>>> Date:   Tue Oct 2 14:13:35 2018 +0300
>>>   linux-gen: ring: allocate global data from shm
>>> 
>>> 
>>> Thanks,
>>> P Gyanesh Kumar Patra
>>> 
>> 



Re: [lng-odp] Get Physical address

2018-10-15 Thread Elo, Matias (Nokia - FI/Espoo)
Hi Liron,


Is there an external or internal API to retrieve the physical address of a
packet?

At least for now we haven’t seen a use case for such a function.

I would like to use the linux-generic memory management implementation with 
hugepages.
In DPDK there is :
/**
* A macro that returns the IO address that points to the start of the
* data in the mbuf
*
*/
#define rte_pktmbuf_iova(m)

The rte_pktmbuf_iova() macro returns an address based on a predefined IOVA
pointer plus an offset. The function doing the actual virtual-to-physical
conversion is rte_mem_virt2iova() (or rte_mem_virt2phy()). At a quick glance
this function seems quite generic and you could potentially port it to your
code.
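
A rough sketch of how such a helper could look on top of the public APIs
(pkt_data_iova() is a hypothetical name, not an existing ODP function):

    #include <odp_api.h>
    #include <rte_memory.h>

    /* Translate the virtual address of the packet data to an IO address
     * using DPDK's lookup. */
    static rte_iova_t pkt_data_iova(odp_packet_t pkt)
    {
        return rte_mem_virt2iova(odp_packet_data(pkt));
    }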

Regards,
Matias



Re: [lng-odp] odph_cuckoo_table_create not working with more than 8192 capacity

2018-08-03 Thread Elo, Matias (Nokia - FI/Espoo)



> On 3 Aug 2018, at 13:39, Daniel Feferman  wrote:
> 
> Hi all,
> 
> Thanks for the feedback. While 8192 seems a small amount, I think 1 million
> should be sufficient for my application. What is the expected effect of
> setting this to the max value? Are there consequences for memory
> requirements, performance, etc.?

Hi,

ODP linux-generic queues are implemented using rings which store 32-bit
values. By default there are 1024 queues (ODP_CONFIG_QUEUES).

So if my math is correct: 
1024 * 8192 * 4 ~= 33.6MB
1024 * 1 048 576 * 4 ~= 4.3GB

Assuming you have enough memory the performance impact should be negligible.

-Matias



Re: [lng-odp] odph_cuckoo_table_create not working with more than 8192 capacity

2018-08-03 Thread Elo, Matias (Nokia - FI/Espoo)
Hi Daniel,

The cuckoo table implementation internally uses plain queues, which by default
have a limited size of 8192, as you have noticed. You can increase this by
changing 'queue_basic.max_queue_size' in config/odp-linux-generic.conf. The
maximum supported value is currently 1 048 576. After modifying the config you
have to either set the ODP_CONFIG_FILE environment variable or do
'make clean && make'.
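
For example (sketch; the entry below goes into the queue_basic section of the
config file):

    queue_basic: {
        max_queue_size = 1048576
    }

and then run with e.g.:

    ODP_CONFIG_FILE=./my-odp.conf ./my_app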

Would this table be large enough for your use case? If not, we have to think
about updating the cuckoo table implementation.

Regards,
Matias


> On 2 Aug 2018, at 21:31, Daniel Feferman  wrote:
> 
> Hi all,
> 
> I was using odph_cuckoo_table_create on version 19 and it seems the
> capacity field can only take up to 8192; beyond that the function returns
> NULL (not able to create). Since capacity has type uint32_t I was not
> expecting this behavior. Am I doing something wrong? Or is the function
> really limited to just 8192?
> 
> Best,
> Daniel



Re: [lng-odp] Suspected SPAM - Re: latency calulation with netmap pkt i/o fails with oversized packet debug msg

2018-07-27 Thread Elo, Matias (Nokia - FI/Espoo)


>> On 26 Jul 2018, at 21:24, gyanesh patra  wrote:
>> 
>> I verified the throughput over the link with/without this debug message.
>> With DEBUG message: 10-15 Mbps
>> without DEBUG message: 1500 Mbps
>> 

This number still seems quite low. I ran a quick test on my development server
(Xeon E5-2697 v3 @ 2.60GHz, XL710 NICs) and measured 3.8 Gbps.

For optimal performance you should build ODP without ABI compatibility 
(--disable-abi-compat) to enable inlining. In case of netmap pktio, both netmap 
module and modified driver should be loaded, and NIC flow control should be 
disabled.
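
E.g. something like the following (interface and driver names are just
examples for an ixgbe NIC):

    ./configure --disable-abi-compat
    make

    # load the netmap module and the netmap-patched NIC driver
    insmod netmap.ko
    insmod ixgbe.ko

    # disable flow control on the test interface
    ethtool -A eth2 rx off tx off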

Regards,
Matias



Re: [lng-odp] latency calulation with netmap pkt i/o fails with oversized packet debug msg

2018-07-27 Thread Elo, Matias (Nokia - FI/Espoo)



> On 26 Jul 2018, at 21:24, gyanesh patra  wrote:
> 
> I verified the throughput over the link with/without this debug message.
> With DEBUG message: 10-15 Mbps
> without DEBUG message: 1500 Mbps
> 
> Due to this debug message on stdout, the throughput drops to a minimum
> and the latency can't be calculated properly either.
> Should I just remove the debug message from the netmap.c file? Does it serve
> any purpose?
> 

Now that I look at it, this debug message is definitely in the wrong place. For
your testing you can simply remove the line. I'll post a patch fixing this.
Thanks for reporting this!


-Matias



Re: [lng-odp] latency calulation with netmap pkt i/o fails with oversized packet debug msg

2018-07-26 Thread Elo, Matias (Nokia - FI/Espoo)



> On 25 Jul 2018, at 17:11, Maxim Uvarov  wrote:
> 
> At a quick look it seems the MTU is not set correctly in open(). Can you try
> this patch:
> 
> diff --git a/platform/linux-generic/pktio/netmap.c 
> b/platform/linux-generic/pktio/netmap.c
> index 0da2b7a..d4db0af 100644
> --- a/platform/linux-generic/pktio/netmap.c
> +++ b/platform/linux-generic/pktio/netmap.c
> @@ -539,6 +539,7 @@ static int netmap_open(odp_pktio_t id ODP_UNUSED, 
> pktio_entry_t *pktio_entry,
> goto error;
> }
> pkt_nm->mtu = (mtu < buf_size) ? mtu : buf_size;
> +   pkt_priv(pktio_entry)->mtu = pkt_nm->mtu;


pkt_netmap_t *pkt_nm = pkt_priv(pktio_entry), so this is unnecessary.


>> 
>> 
>> Is this a know issue or am i missing something?
>> 


As far as I can see, the problem is caused by reading the interface MTU
incorrectly or by netmap using unusually small buffers (assuming MoonGen sends
packets smaller than the MTU). The following patch should help debug the issue.

-Matias

diff --git a/platform/linux-generic/pktio/netmap.c 
b/platform/linux-generic/pktio/netmap.c
index 0da2b7afd..3e0a17542 100644
--- a/platform/linux-generic/pktio/netmap.c
+++ b/platform/linux-generic/pktio/netmap.c
@@ -538,6 +538,10 @@ static int netmap_open(odp_pktio_t id ODP_UNUSED, 
pktio_entry_t *pktio_entry,
ODP_ERR("Unable to read interface MTU\n");
goto error;
}
+
+   ODP_DBG("MTU: %" PRIu32 "\n", mtu);
+   ODP_DBG("NM buf_size: %" PRIu32 "\n", buf_size);
+
pkt_nm->mtu = (mtu < buf_size) ? mtu : buf_size;
 
/* Check if RSS is supported. If not, set 'max_input_queues' to 1. */




Re: [lng-odp] Bug 3657

2018-04-12 Thread Elo, Matias (Nokia - FI/Espoo)


> On 12 Apr 2018, at 16:12, gyanesh patra  wrote:
> 
> Thanks for helping with this issue. It would be a good idea if we could
> mention this somewhere in the README or DEPENDENCY file.

PRs are welcome ;)

> Also, for ODP_PKTIO_DPDK_PARAMS, should "-m" or "--socket-mem" be used going
> forward?
> 

DPDK documentation states that '--socket-mem' should be preferred, so I'll go 
with that.
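
I.e. the earlier example becomes:

sudo ODP_PKTIO_DPDK_PARAMS="--socket-mem 512,512" ./odp_l2fwd -c 1 -i 0,1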

-Matias



Re: [lng-odp] Bug 3657

2018-04-12 Thread Elo, Matias (Nokia - FI/Espoo)
Thanks for testing this! You can use ODP_PKTIO_DPDK_PARAMS to override the
default options. The patch still needs some fixes for ARM platforms, but it
should be merged to the master repo soon. There should be no performance
impact.

-Matias

> On 12 Apr 2018, at 15:55, gyanesh patra <pgyanesh.pa...@gmail.com> wrote:
> 
> I tried this trick and it worked on the odp-dpdk repository.
> 
> What will be the preferred method? 
>  - ODP_PKTIO_DPDK_PARAMS="-m 512,512" 
>  - the patch you mentioned.
> 
> Thanks & Regards,
> 
> P Gyanesh Kumar Patra
> 
> On Thu, Apr 12, 2018 at 4:42 AM, Elo, Matias (Nokia - FI/Espoo) 
> <matias@nokia.com> wrote:
> Hi,
> 
> I may have figured out the issue here. Currently, the ODP DPDK pktio
> implementation configures DPDK to allocate memory only for socket 0.
> 
> Could you please try running ODP again with environment variable 
> ODP_PKTIO_DPDK_PARAMS="-m 512,512" set.
> 
> E.g.
> sudo ODP_PKTIO_DPDK_PARAMS="-m 512,512"  ./odp_l2fwd -c 1 -i 0,1
> 
> 
> If this doesn't help you could test this code change:
> 
> diff --git a/platform/linux-generic/pktio/dpdk.c 
> b/platform/linux-generic/pktio/dpdk.c
> index 7bccab8..2b8b8e4 100644
> --- a/platform/linux-generic/pktio/dpdk.c
> +++ b/platform/linux-generic/pktio/dpdk.c
> @@ -1120,7 +1120,8 @@ static int dpdk_pktio_init(void)
> return -1;
> }
> 
> -   mem_str_len = snprintf(NULL, 0, "%d", DPDK_MEMORY_MB);
> +   mem_str_len = snprintf(NULL, 0, "%d,%d", DPDK_MEMORY_MB,
> +  DPDK_MEMORY_MB);
> 
> cmdline = getenv("ODP_PKTIO_DPDK_PARAMS");
> if (cmdline == NULL)
> @@ -1133,8 +1134,8 @@ static int dpdk_pktio_init(void)
> char full_cmd[cmd_len];
> 
> /* first argument is facility log, simply bind it to odpdpdk for 
> now.*/
> -   cmd_len = snprintf(full_cmd, cmd_len, "odpdpdk -c %s -m %d %s",
> -  mask_str, DPDK_MEMORY_MB, cmdline);
> +   cmd_len = snprintf(full_cmd, cmd_len, "odpdpdk -c %s -m %d,%d %s",
> +  mask_str, DPDK_MEMORY_MB, DPDK_MEMORY_MB, cmdline);
> 
> for (i = 0, dpdk_argc = 1; i < cmd_len; ++i) {
> if (isspace(full_cmd[i]))
> 
> 
> -Matias
> 
> 
> > On 10 Apr 2018, at 21:37, gyanesh patra <pgyanesh.pa...@gmail.com> wrote:
> >
> > Hi Matias,
> >
> > The Mellanox interfaces are mapped to Numa Node 1. (device id: 81:00.x)
> > We have free hugepages on both Node0 and Node1 as identified below.
> >
> >   ​root# cat 
> > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/free_hugepages
> >77
> >   root# cat 
> > /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/free_hugepages
> >83
> >
> > The ODP application is using CPU/lcore associated with numa Node1 too.
> > I have tried with the dpdk-17.11.1 version too without success.
> > The issue may be somewhere else.
> >
> > Regarding the usage of 2M pages ​ (1024 x 2M pages):
> >  - I unmounted the 1G hugepages and then set 1024x2M pages using 
> > dpdk-setup.sh scripts.
> >  - But with this setup failed with the same error as before.
> >
> > Let me know if there is any other option we can try.
> >
> > ​Thanks,​
> > P Gyanesh Kumar Patra
> >
> > On Thu, Mar 29, 2018 at 4:46 AM, Elo, Matias (Nokia - FI/Espoo) 
> > <matias@nokia.com> wrote:
> > A second thing to try. Since you seem to have a NUMA  system, the ODP 
> > application should be run on the same NUMA socket as the NIC (e.g. using 
> > taskset if necessary). In case of different sockets, both sockets should 
> > have huge pages mapped.
> >
> > -Matias
> >
> > > On 29 Mar 2018, at 10:00, Elo, Matias (Nokia - FI/Espoo) 
> > > <matias@nokia.com> wrote:
> > >
> > > Hi Gyanesh,
> > >
> > > It seems you are using 1G huge pages. Have you tried using 2M pages​​ 
> > > (1024 x 2M pages should be enough)? As Bill noted, this seems like a 
> > > memory related issue.
> > >
> > > -Matias
> > >
> > >
> > >> On 28 Mar 2018, at 18:15, gyanesh patra <pgyanesh.pa...@gmail.com> wrote:
> > >>
> > >> Yes, it is.
> > >> The error is the same. I did replied that the only difference I see is 
> > >> with Ubuntu version and different minor version of mellanox driver.
> > >>
> > >> On Wed, Mar 28, 2018, 07:29 Bill Fischofer <bi

Re: [lng-odp] Bug 3657

2018-04-12 Thread Elo, Matias (Nokia - FI/Espoo)

This patch should hopefully fix the bug: 
https://github.com/matiaselo/odp/commit/c32baeb1796636adfd12fd3f785e10929984ccc3

It would be great if you could verify that the patch works, since I cannot
reproduce the original issue on my test system.

-Matias


> On 12 Apr 2018, at 10:53, Elo, Matias (Nokia - FI/Espoo) 
> <matias@nokia.com> wrote:
> 
> Still one more thing, the argument '-m' should be replaced with 
> '--socket-mem'.
> 
> 
>> On 12 Apr 2018, at 10:42, Elo, Matias (Nokia - FI/Espoo) 
>> <matias@nokia.com> wrote:
>> 
>> Hi,
>> 
>> I may have figured out the issue here. Currently, the ODP DPDK pktio
>> implementation configures DPDK to allocate memory only for socket 0.
>> 
>> Could you please try running ODP again with environment variable 
>> ODP_PKTIO_DPDK_PARAMS="-m 512,512" set.
>> 
>> E.g.
>> sudo ODP_PKTIO_DPDK_PARAMS="-m 512,512"  ./odp_l2fwd -c 1 -i 0,1
>> 
>> 
>> If this doesn't help you could test this code change:
>> 
>> diff --git a/platform/linux-generic/pktio/dpdk.c 
>> b/platform/linux-generic/pktio/dpdk.c
>> index 7bccab8..2b8b8e4 100644
>> --- a/platform/linux-generic/pktio/dpdk.c
>> +++ b/platform/linux-generic/pktio/dpdk.c
>> @@ -1120,7 +1120,8 @@ static int dpdk_pktio_init(void)
>>   return -1;
>>   }
>> 
>> -   mem_str_len = snprintf(NULL, 0, "%d", DPDK_MEMORY_MB);
>> +   mem_str_len = snprintf(NULL, 0, "%d,%d", DPDK_MEMORY_MB,
>> +  DPDK_MEMORY_MB);
>> 
>>   cmdline = getenv("ODP_PKTIO_DPDK_PARAMS");
>>   if (cmdline == NULL)
>> @@ -1133,8 +1134,8 @@ static int dpdk_pktio_init(void)
>>   char full_cmd[cmd_len];
>> 
>>   /* first argument is facility log, simply bind it to odpdpdk for now.*/
>> -   cmd_len = snprintf(full_cmd, cmd_len, "odpdpdk -c %s -m %d %s",
>> -  mask_str, DPDK_MEMORY_MB, cmdline);
>> +   cmd_len = snprintf(full_cmd, cmd_len, "odpdpdk -c %s -m %d,%d %s",
>> +  mask_str, DPDK_MEMORY_MB, DPDK_MEMORY_MB, 
>> cmdline);
>> 
>>   for (i = 0, dpdk_argc = 1; i < cmd_len; ++i) {
>>   if (isspace(full_cmd[i]))
>> 
>> 
>> -Matias
>> 
>> 
>>> On 10 Apr 2018, at 21:37, gyanesh patra <pgyanesh.pa...@gmail.com> wrote:
>>> 
>>> Hi Matias,
>>> 
>>> The Mellanox interfaces are mapped to Numa Node 1. (device id: 81:00.x)
>>> We have free hugepages on both Node0 and Node1 as identified below.
>>> 
>>> ​root# cat 
>>> /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/free_hugepages 
>>>  77
>>> root# cat 
>>> /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/free_hugepages 
>>>  83
>>> 
>>> The ODP application is using CPU/lcore associated with numa Node1 too.
>>> I have tried with the dpdk-17.11.1 version too without success.
>>> The issue may be somewhere else.
>>> 
>>> Regarding the usage of 2M pages ​ (1024 x 2M pages):
>>> - I unmounted the 1G hugepages and then set 1024x2M pages using 
>>> dpdk-setup.sh scripts.
>>> - But with this setup failed with the same error as before.
>>> 
>>> Let me know if there is any other option we can try.
>>> 
>>> ​Thanks,​
>>> P Gyanesh Kumar Patra
>>> 
>>> On Thu, Mar 29, 2018 at 4:46 AM, Elo, Matias (Nokia - FI/Espoo) 
>>> <matias@nokia.com> wrote:
>>> A second thing to try. Since you seem to have a NUMA  system, the ODP 
>>> application should be run on the same NUMA socket as the NIC (e.g. using 
>>> taskset if necessary). In case of different sockets, both sockets should 
>>> have huge pages mapped.
>>> 
>>> -Matias
>>> 
>>>> On 29 Mar 2018, at 10:00, Elo, Matias (Nokia - FI/Espoo) 
>>>> <matias@nokia.com> wrote:
>>>> 
>>>> Hi Gyanesh,
>>>> 
>>>> It seems you are using 1G huge pages. Have you tried using 2M pages​​ 
>>>> (1024 x 2M pages should be enough)? As Bill noted, this seems like a 
>>>> memory related issue.
>>>> 
>>>> -Matias
>>>> 
>>>> 
>>>>> On 28 Mar 2018, at 18:15, gyanesh patra <pgyanesh.pa...@gmail.com> wrote:
>>>>> 
>>>>> Yes, it is.
>>>>> The error is the same. I did replied th

Re: [lng-odp] Suspected SPAM - Re: Suspected SPAM - Re: Bug 3657

2018-04-12 Thread Elo, Matias (Nokia - FI/Espoo)
Still one more thing, the argument '-m' should be replaced with '--socket-mem'.


> On 12 Apr 2018, at 10:42, Elo, Matias (Nokia - FI/Espoo) 
> <matias@nokia.com> wrote:
> 
> Hi,
> 
> I may have figured out the issue here. Currently, the ODP DPDK pktio
> implementation configures DPDK to allocate memory only for socket 0.
> 
> Could you please try running ODP again with environment variable 
> ODP_PKTIO_DPDK_PARAMS="-m 512,512" set.
> 
> E.g.
> sudo ODP_PKTIO_DPDK_PARAMS="-m 512,512"  ./odp_l2fwd -c 1 -i 0,1
> 
> 
> If this doesn't help you could test this code change:
> 
> diff --git a/platform/linux-generic/pktio/dpdk.c 
> b/platform/linux-generic/pktio/dpdk.c
> index 7bccab8..2b8b8e4 100644
> --- a/platform/linux-generic/pktio/dpdk.c
> +++ b/platform/linux-generic/pktio/dpdk.c
> @@ -1120,7 +1120,8 @@ static int dpdk_pktio_init(void)
>return -1;
>}
> 
> -   mem_str_len = snprintf(NULL, 0, "%d", DPDK_MEMORY_MB);
> +   mem_str_len = snprintf(NULL, 0, "%d,%d", DPDK_MEMORY_MB,
> +  DPDK_MEMORY_MB);
> 
>cmdline = getenv("ODP_PKTIO_DPDK_PARAMS");
>if (cmdline == NULL)
> @@ -1133,8 +1134,8 @@ static int dpdk_pktio_init(void)
>char full_cmd[cmd_len];
> 
>/* first argument is facility log, simply bind it to odpdpdk for now.*/
> -   cmd_len = snprintf(full_cmd, cmd_len, "odpdpdk -c %s -m %d %s",
> -  mask_str, DPDK_MEMORY_MB, cmdline);
> +   cmd_len = snprintf(full_cmd, cmd_len, "odpdpdk -c %s -m %d,%d %s",
> +  mask_str, DPDK_MEMORY_MB, DPDK_MEMORY_MB, cmdline);
> 
>for (i = 0, dpdk_argc = 1; i < cmd_len; ++i) {
>if (isspace(full_cmd[i]))
> 
> 
> -Matias
> 
> 
>> On 10 Apr 2018, at 21:37, gyanesh patra <pgyanesh.pa...@gmail.com> wrote:
>> 
>> Hi Matias,
>> 
>> The Mellanox interfaces are mapped to Numa Node 1. (device id: 81:00.x)
>> We have free hugepages on both Node0 and Node1 as identified below.
>> 
>>  ​root# cat 
>> /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/free_hugepages 
>>   77
>>  root# cat 
>> /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/free_hugepages 
>>   83
>> 
>> The ODP application is using CPU/lcore associated with numa Node1 too.
>> I have tried with the dpdk-17.11.1 version too without success.
>> The issue may be somewhere else.
>> 
>> Regarding the usage of 2M pages ​ (1024 x 2M pages):
>> - I unmounted the 1G hugepages and then set 1024x2M pages using 
>> dpdk-setup.sh scripts.
>> - But with this setup failed with the same error as before.
>> 
>> Let me know if there is any other option we can try.
>> 
>> ​Thanks,​
>> P Gyanesh Kumar Patra
>> 
>> On Thu, Mar 29, 2018 at 4:46 AM, Elo, Matias (Nokia - FI/Espoo) 
>> <matias@nokia.com> wrote:
>> A second thing to try. Since you seem to have a NUMA  system, the ODP 
>> application should be run on the same NUMA socket as the NIC (e.g. using 
>> taskset if necessary). In case of different sockets, both sockets should 
>> have huge pages mapped.
>> 
>> -Matias
>> 
>>> On 29 Mar 2018, at 10:00, Elo, Matias (Nokia - FI/Espoo) 
>>> <matias@nokia.com> wrote:
>>> 
>>> Hi Gyanesh,
>>> 
>>> It seems you are using 1G huge pages. Have you tried using 2M pages​​ (1024 
>>> x 2M pages should be enough)? As Bill noted, this seems like a memory 
>>> related issue.
>>> 
>>> -Matias
>>> 
>>> 
>>>> On 28 Mar 2018, at 18:15, gyanesh patra <pgyanesh.pa...@gmail.com> wrote:
>>>> 
>>>> Yes, it is.
>>>> The error is the same. I did reply that the only difference I see is
>>>> the Ubuntu version and a different minor version of the Mellanox driver.
>>>> 
>>>> On Wed, Mar 28, 2018, 07:29 Bill Fischofer <bill.fischo...@linaro.org> 
>>>> wrote:
>>>> Thanks for the update. Sounds like you're already using DPDK 17.11?
>>>> What about Mellanox driver level? Is the failure the same as you
>>>> originally reported?
>>>> 
>>>> From the reported error:
>>>> 
>>>> pktio/dpdk.c:1538:dpdk_start():Queue setup failed: err=-12, port=0
>>>> odp_l2fwd.c:1671:main():Error: unable to start 0
>>>> 
>>>> This is a DPDK PMD driver error reported by rte_eth_rx_qu

Re: [lng-odp] Suspected SPAM - Re: Bug 3657

2018-04-12 Thread Elo, Matias (Nokia - FI/Espoo)
Hi,

I may have figured out the issue here. Currently, the ODP DPDK pktio
implementation configures DPDK to allocate memory only for socket 0.

Could you please try running ODP again with environment variable 
ODP_PKTIO_DPDK_PARAMS="-m 512,512" set.

E.g.
sudo ODP_PKTIO_DPDK_PARAMS="-m 512,512"  ./odp_l2fwd -c 1 -i 0,1


If this doesn't help you could test this code change:

diff --git a/platform/linux-generic/pktio/dpdk.c 
b/platform/linux-generic/pktio/dpdk.c
index 7bccab8..2b8b8e4 100644
--- a/platform/linux-generic/pktio/dpdk.c
+++ b/platform/linux-generic/pktio/dpdk.c
@@ -1120,7 +1120,8 @@ static int dpdk_pktio_init(void)
return -1;
}
 
-   mem_str_len = snprintf(NULL, 0, "%d", DPDK_MEMORY_MB);
+   mem_str_len = snprintf(NULL, 0, "%d,%d", DPDK_MEMORY_MB,
+  DPDK_MEMORY_MB);
 
cmdline = getenv("ODP_PKTIO_DPDK_PARAMS");
if (cmdline == NULL)
@@ -1133,8 +1134,8 @@ static int dpdk_pktio_init(void)
char full_cmd[cmd_len];
 
/* first argument is facility log, simply bind it to odpdpdk for now.*/
-   cmd_len = snprintf(full_cmd, cmd_len, "odpdpdk -c %s -m %d %s",
-  mask_str, DPDK_MEMORY_MB, cmdline);
+   cmd_len = snprintf(full_cmd, cmd_len, "odpdpdk -c %s -m %d,%d %s",
+  mask_str, DPDK_MEMORY_MB, DPDK_MEMORY_MB, cmdline);
 
for (i = 0, dpdk_argc = 1; i < cmd_len; ++i) {
if (isspace(full_cmd[i]))


-Matias


> On 10 Apr 2018, at 21:37, gyanesh patra <pgyanesh.pa...@gmail.com> wrote:
> 
> Hi Matias,
> 
> The Mellanox interfaces are mapped to Numa Node 1. (device id: 81:00.x)
> We have free hugepages on both Node0 and Node1 as identified below.
> 
>   ​root# cat 
> /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/free_hugepages 
>77
>   root# cat 
> /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/free_hugepages 
>83
> 
> The ODP application is using CPU/lcore associated with numa Node1 too.
> I have tried with the dpdk-17.11.1 version too without success.
> The issue may be somewhere else.
> 
> Regarding the usage of 2M pages ​ (1024 x 2M pages):
>  - I unmounted the 1G hugepages and then set 1024x2M pages using 
> dpdk-setup.sh scripts.
>  - But with this setup failed with the same error as before.
> 
> Let me know if there is any other option we can try.
> 
> ​Thanks,​
> P Gyanesh Kumar Patra
> 
> On Thu, Mar 29, 2018 at 4:46 AM, Elo, Matias (Nokia - FI/Espoo) 
> <matias@nokia.com> wrote:
> A second thing to try. Since you seem to have a NUMA  system, the ODP 
> application should be run on the same NUMA socket as the NIC (e.g. using 
> taskset if necessary). In case of different sockets, both sockets should have 
> huge pages mapped.
> 
> -Matias
> 
> > On 29 Mar 2018, at 10:00, Elo, Matias (Nokia - FI/Espoo) 
> > <matias@nokia.com> wrote:
> >
> > Hi Gyanesh,
> >
> > It seems you are using 1G huge pages. Have you tried using 2M pages​​ (1024 
> > x 2M pages should be enough)? As Bill noted, this seems like a memory 
> > related issue.
> >
> > -Matias
> >
> >
> >> On 28 Mar 2018, at 18:15, gyanesh patra <pgyanesh.pa...@gmail.com> wrote:
> >>
> >> Yes, it is.
> >> The error is the same. I did reply that the only difference I see is
> >> the Ubuntu version and a different minor version of the Mellanox driver.
> >>
> >> On Wed, Mar 28, 2018, 07:29 Bill Fischofer <bill.fischo...@linaro.org> 
> >> wrote:
> >> Thanks for the update. Sounds like you're already using DPDK 17.11?
> >> What about Mellanox driver level? Is the failure the same as you
> >> originally reported?
> >>
> >> From the reported error:
> >>
> >> pktio/dpdk.c:1538:dpdk_start():Queue setup failed: err=-12, port=0
> >> odp_l2fwd.c:1671:main():Error: unable to start 0
> >>
> >> This is a DPDK PMD driver error reported by rte_eth_rx_queue_setup().
> >> In the Mellanox PMD (drivers/net/mlx5/mlx5_rxq.c) this is the
> >> mlx5_rx_queue_setup() routine. The relevant code seems to be this:
> >>
> >> if (rxq != NULL) {
> >>DEBUG("%p: reusing already allocated queue index %u (%p)",
> >>  (void *)dev, idx, (void *)rxq);
> >>if (priv->started) {
> >>priv_unlock(priv);
> >>return -EEXIST;
> >>}
> >>(*priv->rxqs)[idx] = NULL;
> >>rxq_cleanup(rxq_ctrl);
> 

Re: [lng-odp] Bug 3657

2018-04-12 Thread Elo, Matias (Nokia - FI/Espoo)
Hi,

Have you tested the latest odp-dpdk code? It uses a different shm
implementation, so at least we could rule that one out.

-Matias


> On 10 Apr 2018, at 21:37, gyanesh patra <pgyanesh.pa...@gmail.com> wrote:
> 
> Hi Matias,
> 
> The Mellanox interfaces are mapped to Numa Node 1. (device id: 81:00.x)
> We have free hugepages on both Node0 and Node1 as identified below.
> 
>   ​root# cat 
> /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/free_hugepages 
>77
>   root# cat 
> /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/free_hugepages 
>83
> 
> The ODP application is using CPU/lcore associated with numa Node1 too.
> I have tried with the dpdk-17.11.1 version too without success.
> The issue may be somewhere else.
> 
> Regarding the usage of 2M pages ​ (1024 x 2M pages):
>  - I unmounted the 1G hugepages and then set 1024x2M pages using 
> dpdk-setup.sh scripts.
>  - But with this setup failed with the same error as before.
> 
> Let me know if there is any other option we can try.
> 
> ​Thanks,​
> P Gyanesh Kumar Patra
> 
> On Thu, Mar 29, 2018 at 4:46 AM, Elo, Matias (Nokia - FI/Espoo) 
> <matias@nokia.com> wrote:
> A second thing to try. Since you seem to have a NUMA  system, the ODP 
> application should be run on the same NUMA socket as the NIC (e.g. using 
> taskset if necessary). In case of different sockets, both sockets should have 
> huge pages mapped.
> 
> -Matias
> 
> > On 29 Mar 2018, at 10:00, Elo, Matias (Nokia - FI/Espoo) 
> > <matias@nokia.com> wrote:
> >
> > Hi Gyanesh,
> >
> > It seems you are using 1G huge pages. Have you tried using 2M pages​​ (1024 
> > x 2M pages should be enough)? As Bill noted, this seems like a memory 
> > related issue.
> >
> > -Matias
> >
> >
> >> On 28 Mar 2018, at 18:15, gyanesh patra <pgyanesh.pa...@gmail.com> wrote:
> >>
> >> Yes, it is.
> >> The error is the same. I did reply that the only difference I see is
> >> the Ubuntu version and a different minor version of the Mellanox driver.
> >>
> >> On Wed, Mar 28, 2018, 07:29 Bill Fischofer <bill.fischo...@linaro.org> 
> >> wrote:
> >> Thanks for the update. Sounds like you're already using DPDK 17.11?
> >> What about Mellanox driver level? Is the failure the same as you
> >> originally reported?
> >>
> >> From the reported error:
> >>
> >> pktio/dpdk.c:1538:dpdk_start():Queue setup failed: err=-12, port=0
> >> odp_l2fwd.c:1671:main():Error: unable to start 0
> >>
> >> This is a DPDK PMD driver error reported by rte_eth_rx_queue_setup().
> >> In the Mellanox PMD (drivers/net/mlx5/mlx5_rxq.c) this is the
> >> mlx5_rx_queue_setup() routine. The relevant code seems to be this:
> >>
> >> if (rxq != NULL) {
> >>DEBUG("%p: reusing already allocated queue index %u (%p)",
> >>  (void *)dev, idx, (void *)rxq);
> >>if (priv->started) {
> >>priv_unlock(priv);
> >>return -EEXIST;
> >>}
> >>(*priv->rxqs)[idx] = NULL;
> >>rxq_cleanup(rxq_ctrl);
> >>/* Resize if rxq size is changed. */
> >>if (rxq_ctrl->rxq.elts_n != log2above(desc)) {
> >>rxq_ctrl = rte_realloc(rxq_ctrl,
> >>  sizeof(*rxq_ctrl) +
> >>  (desc + desc_pad) *
> >>  sizeof(struct rte_mbuf *),
> >>  RTE_CACHE_LINE_SIZE);
> >>if (!rxq_ctrl) {
> >>ERROR("%p: unable to reallocate queue index %u",
> >>  (void *)dev, idx);
> >>  priv_unlock(priv);
> >>  return -ENOMEM;
> >>   }
> >>}
> >> } else {
> >>rxq_ctrl = rte_calloc_socket("RXQ", 1, sizeof(*rxq_ctrl) +
> >>(desc + desc_pad) *
> >> sizeof(struct rte_mbuf 
> >> *),
> >> 0, socket);
> >>if (rxq_ctrl == NULL) {
> >> ERROR("%p: unable to allocate queue index %u",
> >>  

Re: [lng-odp] Suspected SPAM - Re: Bug 3657

2018-03-29 Thread Elo, Matias (Nokia - FI/Espoo)
A second thing to try. Since you seem to have a NUMA system, the ODP
application should be run on the same NUMA socket as the NIC (e.g. using
taskset if necessary). In case of different sockets, both sockets should have
huge pages mapped.
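
E.g. something like the following (the core list and node number are just
examples, pick the ones matching your NIC's node):

taskset -c 8-15 ./odp_l2fwd -c 4 -i 0,1

or, to bind memory allocations to the same node as well:

numactl --cpunodebind=1 --membind=1 ./odp_l2fwd -c 4 -i 0,1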

-Matias

> On 29 Mar 2018, at 10:00, Elo, Matias (Nokia - FI/Espoo) 
> <matias@nokia.com> wrote:
> 
> Hi Gyanesh,
> 
> It seems you are using 1G huge pages. Have you tried using 2M pages (1024 x 
> 2M pages should be enough)? As Bill noted, this seems like a memory related 
> issue.
> 
> -Matias
> 
> 
>> On 28 Mar 2018, at 18:15, gyanesh patra <pgyanesh.pa...@gmail.com> wrote:
>> 
>> Yes, it is.
>> The error is the same. I did reply that the only difference I see is
>> the Ubuntu version and a different minor version of the Mellanox driver.
>> 
>> On Wed, Mar 28, 2018, 07:29 Bill Fischofer <bill.fischo...@linaro.org> wrote:
>> Thanks for the update. Sounds like you're already using DPDK 17.11?
>> What about Mellanox driver level? Is the failure the same as you
>> originally reported?
>> 
>> From the reported error:
>> 
>> pktio/dpdk.c:1538:dpdk_start():Queue setup failed: err=-12, port=0
>> odp_l2fwd.c:1671:main():Error: unable to start 0
>> 
>> This is a DPDK PMD driver error reported by rte_eth_rx_queue_setup().
>> In the Mellanox PMD (drivers/net/mlx5/mlx5_rxq.c) this is the
>> mlx5_rx_queue_setup() routine. The relevant code seems to be this:
>> 
>> if (rxq != NULL) {
>>     DEBUG("%p: reusing already allocated queue index %u (%p)",
>>           (void *)dev, idx, (void *)rxq);
>>     if (priv->started) {
>>         priv_unlock(priv);
>>         return -EEXIST;
>>     }
>>     (*priv->rxqs)[idx] = NULL;
>>     rxq_cleanup(rxq_ctrl);
>>     /* Resize if rxq size is changed. */
>>     if (rxq_ctrl->rxq.elts_n != log2above(desc)) {
>>         rxq_ctrl = rte_realloc(rxq_ctrl,
>>                                sizeof(*rxq_ctrl) +
>>                                (desc + desc_pad) *
>>                                sizeof(struct rte_mbuf *),
>>                                RTE_CACHE_LINE_SIZE);
>>         if (!rxq_ctrl) {
>>             ERROR("%p: unable to reallocate queue index %u",
>>                   (void *)dev, idx);
>>             priv_unlock(priv);
>>             return -ENOMEM;
>>         }
>>     }
>> } else {
>>     rxq_ctrl = rte_calloc_socket("RXQ", 1, sizeof(*rxq_ctrl) +
>>                                  (desc + desc_pad) *
>>                                  sizeof(struct rte_mbuf *),
>>                                  0, socket);
>>     if (rxq_ctrl == NULL) {
>>         ERROR("%p: unable to allocate queue index %u",
>>               (void *)dev, idx);
>>         priv_unlock(priv);
>>         return -ENOMEM;
>>     }
>> }
>> 
>> The reported -12 error code is -ENOMEM so I'd say the issue is some
>> sort of memory allocation failure.
>> 
>> 
>> On Wed, Mar 28, 2018 at 8:43 AM, gyanesh patra <pgyanesh.pa...@gmail.com> 
>> wrote:
>>> Hi Bill,
>>> I tried with Matias' suggestions but without success.
>>> 
>>> P Gyanesh Kumar Patra
>>> 
>>> On Mon, Mar 26, 2018 at 4:16 PM, Bill Fischofer <bill.fischo...@linaro.org>
>>> wrote:
>>>> 
>>>> Hi Gyanesh,
>>>> 
>>>> Have you had a chance to look at
>>>> https://bugs.linaro.org/show_bug.cgi?id=3657 and see if Matias' suggestions
>>>> are helpful to you?
>>>> 
>>>> Thanks,
>>>> 
>>>> Regards,
>>>> Bill
>>> 
>>> 
> 



Re: [lng-odp] Bug 3657

2018-03-29 Thread Elo, Matias (Nokia - FI/Espoo)
Hi Gyanesh,

It seems you are using 1G huge pages. Have you tried using 2M pages (1024 x 2M 
pages should be enough)? As Bill noted, this seems like a memory related issue.
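
E.g. as root (sketch):

echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge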

-Matias


> On 28 Mar 2018, at 18:15, gyanesh patra  wrote:
> 
> Yes, it is.
> The error is the same. I did reply that the only difference I see is
> the Ubuntu version and a different minor version of the Mellanox driver.
> 
> On Wed, Mar 28, 2018, 07:29 Bill Fischofer  wrote:
> Thanks for the update. Sounds like you're already using DPDK 17.11?
> What about Mellanox driver level? Is the failure the same as you
> originally reported?
> 
> From the reported error:
> 
> pktio/dpdk.c:1538:dpdk_start():Queue setup failed: err=-12, port=0
> odp_l2fwd.c:1671:main():Error: unable to start 0
> 
> This is a DPDK PMD driver error reported by rte_eth_rx_queue_setup().
> In the Mellanox PMD (drivers/net/mlx5/mlx5_rxq.c) this is the
> mlx5_rx_queue_setup() routine. The relevant code seems to be this:
> 
> if (rxq != NULL) {
>     DEBUG("%p: reusing already allocated queue index %u (%p)",
>           (void *)dev, idx, (void *)rxq);
>     if (priv->started) {
>         priv_unlock(priv);
>         return -EEXIST;
>     }
>     (*priv->rxqs)[idx] = NULL;
>     rxq_cleanup(rxq_ctrl);
>     /* Resize if rxq size is changed. */
>     if (rxq_ctrl->rxq.elts_n != log2above(desc)) {
>         rxq_ctrl = rte_realloc(rxq_ctrl,
>                                sizeof(*rxq_ctrl) +
>                                (desc + desc_pad) *
>                                sizeof(struct rte_mbuf *),
>                                RTE_CACHE_LINE_SIZE);
>         if (!rxq_ctrl) {
>             ERROR("%p: unable to reallocate queue index %u",
>                   (void *)dev, idx);
>             priv_unlock(priv);
>             return -ENOMEM;
>         }
>     }
> } else {
>     rxq_ctrl = rte_calloc_socket("RXQ", 1, sizeof(*rxq_ctrl) +
>                                  (desc + desc_pad) *
>                                  sizeof(struct rte_mbuf *),
>                                  0, socket);
>     if (rxq_ctrl == NULL) {
>         ERROR("%p: unable to allocate queue index %u",
>               (void *)dev, idx);
>         priv_unlock(priv);
>         return -ENOMEM;
>     }
> }
> 
> The reported -12 error code is -ENOMEM so I'd say the issue is some
> sort of memory allocation failure.
> 
> 
> On Wed, Mar 28, 2018 at 8:43 AM, gyanesh patra  
> wrote:
> > Hi Bill,
> > I tried with Matias' suggestions but without success.
> >
> > P Gyanesh Kumar Patra
> >
> > On Mon, Mar 26, 2018 at 4:16 PM, Bill Fischofer 
> > wrote:
> >>
> >> Hi Gyanesh,
> >>
> >> Have you had a chance to look at
> >> https://bugs.linaro.org/show_bug.cgi?id=3657 and see if Matias' suggestions
> >> are helpful to you?
> >>
> >> Thanks,
> >>
> >> Regards,
> >> Bill
> >
> >



Re: [lng-odp] lng-odp Digest, Vol 48, Issue 37

2018-03-20 Thread Elo, Matias (Nokia - FI/Espoo)
Hi,

I’m traveling this week, but here are a couple of things you could try. The
experimental DPDK zero-copy mode probably doesn’t work with Mellanox NICs, so
it should be disabled. If you haven’t already tested DPDK 17.11.1, it
includes additional fixes for Mellanox NICs.
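
The zero-copy mode is a configure time option in odp-linux (assumption: it was
originally enabled with --enable-dpdk-zero-copy), so disabling it means
rebuilding without that flag, e.g.:

./configure --with-dpdk-path=$DPDK_DIR
make clean && make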

-Matias

On 18 Mar 2018, at 7.55, gyanesh patra wrote:

Hi Matias,
Thanks for the patch to compile ODP with MLX drivers.
Finally, I got to try out the patch, but it is not working for me. I am still
getting the same error while running 'test/performance/odp_l2fwd'.

My configuration details are :
Driver:
MLNX_OFED_LINUX-4.2-1.0.0.0 (OFED-4.2-1.0.0)
Interface:
:81:00.0 'MT27700 Family [ConnectX-4]' if=enp129s0f0 drv=mlx5_core unused=
System:
Ubuntu 16.04 x86_64 4.4.0-116-generic
DPDK: 17.11
odp-linux :  1.18.0.1

I can see two differences in our configuration:
Ubuntu 16 (ours) vs Ubuntu 17 (yours)
MLNX_OFED: OFED-4.2-1.0.0 (ours) vs OFED-4.2-1.2.0.0 (yours)

Do you think this might be causing the issue? If any other details are needed
to debug, I can provide them.

Thanks,
P Gyanesh Kumar Patra

Message: 2
Date: Tue, 13 Mar 2018 07:05:10 +
From: bugzilla-dae...@bugs.linaro.org
To: lng-odp@lists.linaro.org
Subject: [lng-odp] [Bug 3657] PktIO does not work with Mellanox
Interfaces
Message-ID:

<010001621e2d570a-9dbbd3f1-0755-42a1-90cf-a70d852eb079-000...@email.amazonses.com>

Content-Type: text/plain; charset="UTF-8"

https://bugs.linaro.org/show_bug.cgi?id=3657

--- Comment #4 from Matias Elo ---
Hi,

The Mellanox PMD drivers (mlx5) have received quite a few fixes since DPDK
v17.08. I would suggest trying DPDK v17.11 as we are moving to that version
soon anyway.

I tested some Mellanox NICs in our lab (ConnectX-4 Lx) and they work properly
with odp-linux using DPDK v17.11 and Mellanox OFED 4.2
(MLNX_OFED_LINUX-4.2-1.2.0.0-ubuntu17.10-x86_64).

The following patch was required to add the necessary libraries.

diff --git a/m4/odp_dpdk.m4 b/m4/odp_dpdk.m4
index 0050fc4b..b144b23d 100644
--- a/m4/odp_dpdk.m4
+++ b/m4/odp_dpdk.m4
@@ -9,6 +9,7 @@ cur_driver=`basename "$filename" .a | sed -e 's/^lib//'`
 AS_VAR_APPEND([DPDK_PMDS], [-l$cur_driver,])
 AS_CASE([$cur_driver],
 [rte_pmd_nfp], [AS_VAR_APPEND([DPDK_LIBS], [" -lm"])],
+[rte_pmd_mlx5], [AS_VAR_APPEND([DPDK_LIBS], [" -libverbs -lmlx5"])],
 [rte_pmd_pcap], [AS_VAR_APPEND([DPDK_LIBS], [" -lpcap"])],
 [rte_pmd_openssl], [AS_VAR_APPEND([DPDK_LIBS], [" -lcrypto"])])
 done


Regards,
Matias



Re: [lng-odp] unexpected packet handling behavior with dpdk pktio support

2018-02-07 Thread Elo, Matias (Nokia - FI/Espoo)
Did you try with the latest netmap master branch code? That seemed to work for 
me.

-Matias

On 7 Feb 2018, at 17.32, gyanesh patra 
<pgyanesh.pa...@gmail.com> wrote:

Is it possible to fix this for netmap too, in a similar fashion?

P Gyanesh Kumar Patra

On Wed, Feb 7, 2018 at 1:19 PM, Elo, Matias (Nokia - FI/Espoo) 
<matias@nokia.com> wrote:
The PR is now available: https://github.com/Linaro/odp/pull/458

-Matias

> On 7 Feb 2018, at 15:31, gyanesh patra 
> <pgyanesh.pa...@gmail.com> wrote:
>
> This patch works on Intel X540-AT2 NICs too.
>
> P Gyanesh Kumar Patra
>
> On Wed, Feb 7, 2018 at 11:28 AM, Bill Fischofer 
> <bill.fischo...@linaro.org> wrote:
> Thanks, Matias. Please open a bug for this and reference it in the fix.
>
> On Wed, Feb 7, 2018 at 6:36 AM, Elo, Matias (Nokia - FI/Espoo) 
> <matias@nokia.com> wrote:
> Hi,
>
> I actually just figured out the problem. For e.g. Niantic NICs the 
> rte_eth_rxconf.rx_drop_en has to be enabled for the NIC to continue working 
> properly when all RX queues are not emptied. The following patch fixes the 
> problem for me:
>
> diff --git a/platform/linux-generic/pktio/dpdk.c 
> b/platform/linux-generic/pktio/dpdk.c
> index bd6920e..fc535e3 100644
> --- a/platform/linux-generic/pktio/dpdk.c
> +++ b/platform/linux-generic/pktio/dpdk.c
> @@ -1402,6 +1402,7 @@ static int dpdk_open(odp_pktio_t id ODP_UNUSED,
>
>  static int dpdk_start(pktio_entry_t *pktio_entry)
>  {
> +   struct rte_eth_dev_info dev_info;
> pkt_dpdk_t *pkt_dpdk = &pktio_entry->s.pkt_dpdk;
> uint8_t port_id = pkt_dpdk->port_id;
> int ret;
> @@ -1420,7 +1421,6 @@ static int dpdk_start(pktio_entry_t *pktio_entry)
> }
> /* Init TX queues */
> for (i = 0; i < pktio_entry->s.num_out_queue; i++) {
> -   struct rte_eth_dev_info dev_info;
> const struct rte_eth_txconf *txconf = NULL;
> int ip_ena  = 
> pktio_entry->s.config.pktout.bit.ipv4_chksum_ena;
> int udp_ena = pktio_entry->s.config.pktout.bit.udp_chksum_ena;
> @@ -1470,9 +1470,14 @@ static int dpdk_start(pktio_entry_t *pktio_entry)
> }
> /* Init RX queues */
> for (i = 0; i < pktio_entry->s.num_in_queue; i++) {
> +   struct rte_eth_rxconf *rxconf = NULL;
> +
> +   rte_eth_dev_info_get(port_id, &dev_info);
> +   rxconf = &dev_info.default_rxconf;
> +   rxconf->rx_drop_en = 1;
> ret = rte_eth_rx_queue_setup(port_id, i, DPDK_NM_RX_DESC,
>  rte_eth_dev_socket_id(port_id),
> -NULL, pkt_dpdk->pkt_pool);
> +rxconf, pkt_dpdk->pkt_pool);
> if (ret < 0) {
> ODP_ERR("Queue setup failed: err=%d, port=%" PRIu8 
> "\n",
> ret, port_id);
>
> I'll test it a bit more for performance effects and then send a fix PR.
>
> -Matias
>
>
>
> > On 7 Feb 2018, at 14:18, gyanesh patra 
> > <pgyanesh.pa...@gmail.com> wrote:
> >
> > Thank you.
> > I am curious what might be the reason.
> >
> > P Gyanesh Kumar Patra
> >
> > On Wed, Feb 7, 2018 at 9:51 AM, Elo, Matias (Nokia - FI/Espoo) 
> > <matias@nokia.com> wrote:
> > I'm currently trying to figure out what's happening. I'll report back when 
> > I find out something.
> >
> > -Matias
> >
> >
> > > On 7 Feb 2018, at 13:44, gyanesh patra 
> > > <pgyanesh.pa...@gmail.com> wrote:
> > >
> > > Do you have any theory for the issue in 82599 (Niantic) NIC and why it 
> > > might be working in Intel XL710 (Fortville)? Can i identify a new 
> > > hardware without this issue by looking at their datasheet/specs?
> > > Thanks for the insight.
> > >
> > > P Gyanesh Kumar Patra
> > >
> > > On Wed, Feb 7, 2018 at 9:12 AM, Elo, Matias (Nokia - FI/Espoo) 
> > > <matias@nokia.com> wrote:
> > > I was unable to reproduce this with Intel XL710 (Fortville) but with 
> > > 82599 (Niantic) l2fwd operates as you have described. This may be a NIC 
> > > HW limitation since the same issue i

Re: [lng-odp] unexpected packet handling behavior with dpdk pktio support

2018-02-07 Thread Elo, Matias (Nokia - FI/Espoo)
The PR is now available: https://github.com/Linaro/odp/pull/458

-Matias

> On 7 Feb 2018, at 15:31, gyanesh patra <pgyanesh.pa...@gmail.com> wrote:
> 
> This patch works on Intel X540-AT2 NICs too.
> 
> P Gyanesh Kumar Patra
> 
> On Wed, Feb 7, 2018 at 11:28 AM, Bill Fischofer <bill.fischo...@linaro.org> 
> wrote:
> Thanks, Matias. Please open a bug for this and reference it in the fix.
> 
> On Wed, Feb 7, 2018 at 6:36 AM, Elo, Matias (Nokia - FI/Espoo) 
> <matias@nokia.com> wrote:
> Hi,
> 
> I actually just figured out the problem. For e.g. Niantic NICs the 
> rte_eth_rxconf.rx_drop_en has to be enabled for the NIC to continue working 
> properly when all RX queues are not emptied. The following patch fixes the 
> problem for me:
> 
> diff --git a/platform/linux-generic/pktio/dpdk.c 
> b/platform/linux-generic/pktio/dpdk.c
> index bd6920e..fc535e3 100644
> --- a/platform/linux-generic/pktio/dpdk.c
> +++ b/platform/linux-generic/pktio/dpdk.c
> @@ -1402,6 +1402,7 @@ static int dpdk_open(odp_pktio_t id ODP_UNUSED,
> 
>  static int dpdk_start(pktio_entry_t *pktio_entry)
>  {
> +   struct rte_eth_dev_info dev_info;
> pkt_dpdk_t *pkt_dpdk = &pktio_entry->s.pkt_dpdk;
> uint8_t port_id = pkt_dpdk->port_id;
> int ret;
> @@ -1420,7 +1421,6 @@ static int dpdk_start(pktio_entry_t *pktio_entry)
> }
> /* Init TX queues */
> for (i = 0; i < pktio_entry->s.num_out_queue; i++) {
> -   struct rte_eth_dev_info dev_info;
> const struct rte_eth_txconf *txconf = NULL;
> int ip_ena  = 
> pktio_entry->s.config.pktout.bit.ipv4_chksum_ena;
> int udp_ena = pktio_entry->s.config.pktout.bit.udp_chksum_ena;
> @@ -1470,9 +1470,14 @@ static int dpdk_start(pktio_entry_t *pktio_entry)
> }
> /* Init RX queues */
> for (i = 0; i < pktio_entry->s.num_in_queue; i++) {
> +   struct rte_eth_rxconf *rxconf = NULL;
> +
> +   rte_eth_dev_info_get(port_id, &dev_info);
> +   rxconf = &dev_info.default_rxconf;
> +   rxconf->rx_drop_en = 1;
> ret = rte_eth_rx_queue_setup(port_id, i, DPDK_NM_RX_DESC,
>  rte_eth_dev_socket_id(port_id),
> -NULL, pkt_dpdk->pkt_pool);
> +rxconf, pkt_dpdk->pkt_pool);
> if (ret < 0) {
> ODP_ERR("Queue setup failed: err=%d, port=%" PRIu8 
> "\n",
> ret, port_id);
> 
> I'll test it a bit more for performance effects and then send a fix PR.
> 
> -Matias
> 
> 
> 
> > On 7 Feb 2018, at 14:18, gyanesh patra <pgyanesh.pa...@gmail.com> wrote:
> >
> > Thank you.
> > I am curious what might be the reason.
> >
> > P Gyanesh Kumar Patra
> >
> > On Wed, Feb 7, 2018 at 9:51 AM, Elo, Matias (Nokia - FI/Espoo) 
> > <matias@nokia.com> wrote:
> > I'm currently trying to figure out what's happening. I'll report back when 
> > I find out something.
> >
> > -Matias
> >
> >
> > > On 7 Feb 2018, at 13:44, gyanesh patra <pgyanesh.pa...@gmail.com> wrote:
> > >
> > > Do you have any theory for the issue in 82599 (Niantic) NIC and why it 
> > > might be working in Intel XL710 (Fortville)? Can i identify a new 
> > > hardware without this issue by looking at their datasheet/specs?
> > > Thanks for the insight.
> > >
> > > P Gyanesh Kumar Patra
> > >
> > > On Wed, Feb 7, 2018 at 9:12 AM, Elo, Matias (Nokia - FI/Espoo) 
> > > <matias@nokia.com> wrote:
> > > I was unable to reproduce this with Intel XL710 (Fortville) but with 
> > > 82599 (Niantic) l2fwd operates as you have described. This may be a NIC 
> > > HW limitation since the same issue is also observed with netmap pktio.
> > >
> > > -Matias
> > >
> > >
> > > > On 7 Feb 2018, at 11:14, gyanesh patra <pgyanesh.pa...@gmail.com> wrote:
> > > >
> > > > Thanks for the info. I verified this with both odp 1.16 and odp 1.17
> > > > with the same behavior.
> > > > The traffic consists of different MAC and IP addresses.
> > > > Without the busy loop, I could see that all the threads were receiving
> > > > packets, so I think packet distribution is not an issue. In our case, we
> > > > are sending packets at the line rate of a 10G interface. That might be
> > > > causing

Re: [lng-odp] unexpected packet handling behavior with dpdk pktio support

2018-02-07 Thread Elo, Matias (Nokia - FI/Espoo)
Hi,

I actually just figured out the problem. For e.g. Niantic NICs,
rte_eth_rxconf.rx_drop_en has to be enabled for the NIC to continue working
properly when all RX queues are not emptied. The following patch fixes the
problem for me:

diff --git a/platform/linux-generic/pktio/dpdk.c 
b/platform/linux-generic/pktio/dpdk.c
index bd6920e..fc535e3 100644
--- a/platform/linux-generic/pktio/dpdk.c
+++ b/platform/linux-generic/pktio/dpdk.c
@@ -1402,6 +1402,7 @@ static int dpdk_open(odp_pktio_t id ODP_UNUSED,
 
 static int dpdk_start(pktio_entry_t *pktio_entry)
 {
+   struct rte_eth_dev_info dev_info;
pkt_dpdk_t *pkt_dpdk = &pktio_entry->s.pkt_dpdk;
uint8_t port_id = pkt_dpdk->port_id;
int ret;
@@ -1420,7 +1421,6 @@ static int dpdk_start(pktio_entry_t *pktio_entry)
}
/* Init TX queues */
for (i = 0; i < pktio_entry->s.num_out_queue; i++) {
-   struct rte_eth_dev_info dev_info;
const struct rte_eth_txconf *txconf = NULL;
int ip_ena  = pktio_entry->s.config.pktout.bit.ipv4_chksum_ena;
int udp_ena = pktio_entry->s.config.pktout.bit.udp_chksum_ena;
@@ -1470,9 +1470,14 @@ static int dpdk_start(pktio_entry_t *pktio_entry)
}
/* Init RX queues */
for (i = 0; i < pktio_entry->s.num_in_queue; i++) {
+   struct rte_eth_rxconf *rxconf = NULL;
+
+   rte_eth_dev_info_get(port_id, &dev_info);
+   rxconf = &dev_info.default_rxconf;
+   rxconf->rx_drop_en = 1;
ret = rte_eth_rx_queue_setup(port_id, i, DPDK_NM_RX_DESC,
 rte_eth_dev_socket_id(port_id),
-NULL, pkt_dpdk->pkt_pool);
+rxconf, pkt_dpdk->pkt_pool);
if (ret < 0) {
ODP_ERR("Queue setup failed: err=%d, port=%" PRIu8 "\n",
ret, port_id);

I'll test it a bit more for performance effects and then send a fix PR.

-Matias



> On 7 Feb 2018, at 14:18, gyanesh patra <pgyanesh.pa...@gmail.com> wrote:
> 
> Thank you.
> I am curious what might be the reason.
> 
> P Gyanesh Kumar Patra
> 
> On Wed, Feb 7, 2018 at 9:51 AM, Elo, Matias (Nokia - FI/Espoo) 
> <matias@nokia.com> wrote:
> I'm currently trying to figure out what's happening. I'll report back when I 
> find out something.
> 
> -Matias
> 
> 
> > On 7 Feb 2018, at 13:44, gyanesh patra <pgyanesh.pa...@gmail.com> wrote:
> >
> > Do you have any theory for the issue in 82599 (Niantic) NIC and why it 
> > might be working in Intel XL710 (Fortville)? Can i identify a new hardware 
> > without this issue by looking at their datasheet/specs?
> > Thanks for the insight.
> >
> > P Gyanesh Kumar Patra
> >
> > On Wed, Feb 7, 2018 at 9:12 AM, Elo, Matias (Nokia - FI/Espoo) 
> > <matias@nokia.com> wrote:
> > I was unable to reproduce this with Intel XL710 (Fortville) but with 82599 
> > (Niantic) l2fwd operates as you have described. This may be a NIC HW 
> > limitation since the same issue is also observed with netmap pktio.
> >
> > -Matias
> >
> >
> > > On 7 Feb 2018, at 11:14, gyanesh patra <pgyanesh.pa...@gmail.com> wrote:
> > >
> > > Thanks for the info. I verified this with both odp 1.16 and odp 1.17 with
> > > the same behavior.
> > > The traffic consists of different MAC and IP addresses.
> > > Without the busy loop, I could see that all the threads were receiving
> > > packets, so I think packet distribution is not an issue. In our case, we
> > > are sending packets at the line rate of a 10G interface. That might be
> > > causing this behaviour.
> > > If I can provide any other info, let me know.
> > >
> > > Thanks
> > >
> > > Gyanesh
> > >
> > > On Wed, Feb 7, 2018, 05:15 Elo, Matias (Nokia - FI/Espoo) 
> > > <matias@nokia.com> wrote:
> > > Hi Gyanesh,
> > >
> > > I tested the patch on my system and everything seems to work as expected. 
> > > Based on the log you're not running the latest code (v1.17.0) but I doubt 
> > > that is the issue here.
> > >
> > > What kind of test traffic are you using? The l2fwd example uses IPv4 
> > > addresses and UDP ports to do the input hashing. If test packets are 
> > > identical they will all end up in the same input queue, which would 
> > > explain what you are seeing.
> > >
> > > -Matias
> > >
> > >
> > > > On 6 Feb 2018, at 19:00, gyanesh patra <pgyanesh.pa...@gmail.com

Re: [lng-odp] unexpected packet handling behavior with dpdk pktio support

2018-02-07 Thread Elo, Matias (Nokia - FI/Espoo)
I'm currently trying to figure out what's happening. I'll report back when I 
find out something.

-Matias


> On 7 Feb 2018, at 13:44, gyanesh patra <pgyanesh.pa...@gmail.com> wrote:
> 
> Do you have any theory for the issue in 82599 (Niantic) NIC and why it might 
> be working in Intel XL710 (Fortville)? Can i identify a new hardware without 
> this issue by looking at their datasheet/specs?
> Thanks for the insight.
> 
> P Gyanesh Kumar Patra
> 
> On Wed, Feb 7, 2018 at 9:12 AM, Elo, Matias (Nokia - FI/Espoo) 
> <matias@nokia.com> wrote:
> I was unable to reproduce this with Intel XL710 (Fortville) but with 82599 
> (Niantic) l2fwd operates as you have described. This may be a NIC HW 
> limitation since the same issue is also observed with netmap pktio.
> 
> -Matias
> 
> 
> > On 7 Feb 2018, at 11:14, gyanesh patra <pgyanesh.pa...@gmail.com> wrote:
> >
> > Thanks for the info. I verified this with both odp 1.16 and odp 1.17 with
> > the same behavior.
> > The traffic consists of different MAC and IP addresses.
> > Without the busy loop, I could see that all the threads were receiving
> > packets, so I think packet distribution is not an issue. In our case, we
> > are sending packets at the line rate of a 10G interface. That might be
> > causing this behaviour.
> > If I can provide any other info, let me know.
> >
> > Thanks
> >
> > Gyanesh
> >
> > On Wed, Feb 7, 2018, 05:15 Elo, Matias (Nokia - FI/Espoo) 
> > <matias@nokia.com> wrote:
> > Hi Gyanesh,
> >
> > I tested the patch on my system and everything seems to work as expected. 
> > Based on the log you're not running the latest code (v1.17.0) but I doubt 
> > that is the issue here.
> >
> > What kind of test traffic are you using? The l2fwd example uses IPv4 
> > addresses and UDP ports to do the input hashing. If test packets are 
> > identical they will all end up in the same input queue, which would explain 
> > what you are seeing.
> >
> > -Matias
> >
> >
> > > On 6 Feb 2018, at 19:00, gyanesh patra <pgyanesh.pa...@gmail.com> wrote:
> > >
> > > Hi,
> > > I tried with netmap, dpdk and dpdk with zero-copy enabled. All of them 
> > > have the same behaviour. I also tried with (200*2048) as packet pool size 
> > > without any success.
> > > I am attaching the patch for test/performance/odp_l2fwd example here to 
> > > demonstrate the behaviour. Also find the output of the example below:
> > >
> > > root@india:~/pktio/odp_ipv6/test/performance# ./odp_l2fwd -i 0,1
> > > HW time counter freq: 2094954892 hz
> > >
> > > PKTIO: initialized loop interface.
> > > PKTIO: initialized dpdk pktio, use export ODP_PKTIO_DISABLE_DPDK=1 to 
> > > disable.
> > > PKTIO: initialized pcap interface.
> > > PKTIO: initialized ipc interface.
> > > PKTIO: initialized socket mmap, use export 
> > > ODP_PKTIO_DISABLE_SOCKET_MMAP=1 to disable.
> > > PKTIO: initialized socket mmsg,use export ODP_PKTIO_DISABLE_SOCKET_MMSG=1 
> > > to disable.
> > >
> > > ODP system info
> > > ---
> > > ODP API version: 1.16.0
> > > ODP impl name:   "odp-linux"
> > > CPU model:   Intel(R) Xeon(R) CPU E5-2620 v2
> > > CPU freq (hz):   26
> > > Cache line size: 64
> > > CPU count:   12
> > >
> > >
> > > CPU features supported:
> > > SSE3 PCLMULQDQ DTES64 MONITOR DS_CPL VMX SMX EIST TM2 SSSE3 CMPXCHG16B 
> > > XTPR PDCM PCID DCA SSE4_1 SSE4_2 X2APIC POPCNT TSC_DEADLINE AES XSAVE 
> > > OSXSAVE AVX F16C RDRAND FPU VME DE PSE TSC MSR PAE MCE CX8 APIC SEP MTRR 
> > > PGE MCA CMOV PAT PSE36 CLFSH DS ACPI MMX FXSR SSE SSE2 SS HTT TM PBE 
> > > DIGTEMP ARAT PLN ECMD PTM MPERF_APERF_MSR ENERGY_EFF FSGSBASE BMI2 
> > > LAHF_SAHF SYSCALL XD 1GB_PG RDTSCP EM64T INVTSC
> > >
> > > CPU features NOT supported:
> > > CNXT_ID FMA MOVBE PSN TRBOBST ACNT2 BMI1 HLE AVX2 SMEP ERMS INVPCID RTM 
> > > AVX512F LZCNT
> > >
> > > Running ODP appl: "odp_l2fwd"
> > > -
> > > IF-count:2
> > > Using IFs:   0 1
> > > Mode:PKTIN_DIRECT, PKTOUT_DIRECT
> > >
> > > num worker threads: 10
> > > first CPU:  2
> > > cpu mask:   0xFFC
> > >
> > >
> > > Pool info
> > > -
> > >   pool0
> > >   namepacket pool
> >

Re: [lng-odp] unexpected packet handling behavior with dpdk pktio support

2018-02-07 Thread Elo, Matias (Nokia - FI/Espoo)
I was unable to reproduce this with Intel XL710 (Fortville) but with 82599 
(Niantic) l2fwd operates as you have described. This may be a NIC HW limitation 
since the same issue is also observed with netmap pktio.

-Matias
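
As the quoted exchange below notes, l2fwd hashes input packets to queues based
on IPv4 addresses and UDP ports, so a test generator has to vary the flows to
exercise every RX queue. A minimal, hypothetical sketch of such a generator
tweak (udp_hdr, num_pkts and num_flows are illustrative names, not from the
l2fwd code):

/* Vary the UDP source port per packet so input hashing can spread the
 * flows across all RX queues. udp_hdr[i] is assumed to point at each
 * generated packet's UDP header (odph_udphdr_t from odp/helper). */
for (int i = 0; i < num_pkts; i++) {
    odph_udphdr_t *udp = udp_hdr[i];

    udp->src_port = odp_cpu_to_be_16(1024 + (i % num_flows));
    udp->chksum = 0; /* 0 = UDP checksum not used over IPv4 */
}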


> On 7 Feb 2018, at 11:14, gyanesh patra <pgyanesh.pa...@gmail.com> wrote:
> 
> Thanks for the info. I verified this with both odp 1.16 and odp 1.17 with 
> the same behavior.
> The traffic consists of different MAC and IP addresses. 
> Without the busy loop, I could see that all the threads were receiving 
> packets. So I think packet distribution is not an issue. In our case, we are 
> sending packets at the line rate of a 10G interface. That might be causing this 
> behaviour. 
> If I can provide any other info, let me know.
> 
> Thanks
> 
> Gyanesh
> 
> On Wed, Feb 7, 2018, 05:15 Elo, Matias (Nokia - FI/Espoo) 
> <matias@nokia.com> wrote:
> Hi Gyanesh,
> 
> I tested the patch on my system and everything seems to work as expected. 
> Based on the log you're not running the latest code (v1.17.0) but I doubt 
> that is the issue here.
> 
> What kind of test traffic are you using? The l2fwd example uses IPv4 
> addresses and UDP ports to do the input hashing. If test packets are 
> identical they will all end up in the same input queue, which would explain 
> what you are seeing.
> 
> -Matias
> 
> 
> > On 6 Feb 2018, at 19:00, gyanesh patra <pgyanesh.pa...@gmail.com> wrote:
> >
> > Hi,
> > I tried with netmap, dpdk and dpdk with zero-copy enabled. All of them have 
> > the same behaviour. I also tried with (200*2048) as packet pool size 
> > without any success.
> > I am attaching the patch for test/performance/odp_l2fwd example here to 
> > demonstrate the behaviour. Also find the output of the example below:
> >
> > root@india:~/pktio/odp_ipv6/test/performance# ./odp_l2fwd -i 0,1
> > HW time counter freq: 2094954892 hz
> >
> > PKTIO: initialized loop interface.
> > PKTIO: initialized dpdk pktio, use export ODP_PKTIO_DISABLE_DPDK=1 to 
> > disable.
> > PKTIO: initialized pcap interface.
> > PKTIO: initialized ipc interface.
> > PKTIO: initialized socket mmap, use export ODP_PKTIO_DISABLE_SOCKET_MMAP=1 
> > to disable.
> > PKTIO: initialized socket mmsg,use export ODP_PKTIO_DISABLE_SOCKET_MMSG=1 
> > to disable.
> >
> > ODP system info
> > ---
> > ODP API version: 1.16.0
> > ODP impl name:   "odp-linux"
> > CPU model:   Intel(R) Xeon(R) CPU E5-2620 v2
> > CPU freq (hz):   26
> > Cache line size: 64
> > CPU count:   12
> >
> >
> > CPU features supported:
> > SSE3 PCLMULQDQ DTES64 MONITOR DS_CPL VMX SMX EIST TM2 SSSE3 CMPXCHG16B XTPR 
> > PDCM PCID DCA SSE4_1 SSE4_2 X2APIC POPCNT TSC_DEADLINE AES XSAVE OSXSAVE 
> > AVX F16C RDRAND FPU VME DE PSE TSC MSR PAE MCE CX8 APIC SEP MTRR PGE MCA 
> > CMOV PAT PSE36 CLFSH DS ACPI MMX FXSR SSE SSE2 SS HTT TM PBE DIGTEMP ARAT 
> > PLN ECMD PTM MPERF_APERF_MSR ENERGY_EFF FSGSBASE BMI2 LAHF_SAHF SYSCALL XD 
> > 1GB_PG RDTSCP EM64T INVTSC
> >
> > CPU features NOT supported:
> > CNXT_ID FMA MOVBE PSN TRBOBST ACNT2 BMI1 HLE AVX2 SMEP ERMS INVPCID RTM 
> > AVX512F LZCNT
> >
> > Running ODP appl: "odp_l2fwd"
> > -
> > IF-count:2
> > Using IFs:   0 1
> > Mode:PKTIN_DIRECT, PKTOUT_DIRECT
> >
> > num worker threads: 10
> > first CPU:  2
> > cpu mask:   0xFFC
> >
> >
> > Pool info
> > -
> >   pool0
> >   namepacket pool
> >   pool type   packet
> >   pool shm11
> >   user area shm   0
> >   num 8192
> >   align   64
> >   headroom128
> >   seg len 8064
> >   max data len65536
> >   tailroom0
> >   block size  8896
> >   uarea size  0
> >   shm size73196288
> >   base addr   0x7f566940
> >   uarea shm size  0
> >   uarea base addr (nil)
> >
> > EAL: Detected 12 lcore(s)
> > EAL: No free hugepages reported in hugepages-1048576kB
> > EAL: Probing VFIO support...
> > EAL: PCI device :03:00.0 on NUMA socket 0
> > EAL:   probe driver: 8086:10fb net_ixgbe
> > EAL: PCI device :03:00.1 on NUMA socket 0
> > EAL:   probe driver: 8086:10fb net_ixgbe
> > EAL: PCI device :05:00.0 on NUMA socket 0
> > EAL:   probe driver: 8086:1528 net_ixgbe
> > EAL: PCI device :05:00.1 on NUMA socket 0
> > EAL:   probe driver: 8086:1528 net_ixgbe
> 

Re: [lng-odp] unexpected packet handling behavior with dpdk pktio support

2018-02-06 Thread Elo, Matias (Nokia - FI/Espoo)
>   tx workers 5
>   rx queues 5
>   tx queues 5
> 
> [01] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
> [02] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
> [03] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
> [04] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
> [05] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
> [06] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
> [07] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
> [08] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
> [09] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
> [10] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
> 0 pps, 0 max pps,  0 rx drops, 0 tx drops
> 0 pps, 0 max pps,  0 rx drops, 0 tx drops
> 0 pps, 0 max pps,  0 rx drops, 0 tx drops
> 0 pps, 0 max pps,  0 rx drops, 0 tx drops
> 0 pps, 0 max pps,  0 rx drops, 0 tx drops
> 1396 pps, 1396 max pps,  0 rx drops, 0 tx drops
> 0 pps, 1396 max pps,  0 rx drops, 0 tx drops
> 0 pps, 1396 max pps,  0 rx drops, 0 tx drops
> 0 pps, 1396 max pps,  0 rx drops, 0 tx drops
> 0 pps, 1396 max pps,  0 rx drops, 0 tx drops
> 0 pps, 1396 max pps,  0 rx drops, 0 tx drops
> 0 pps, 1396 max pps,  0 rx drops, 0 tx drops
> 0 pps, 1396 max pps,  0 rx drops, 0 tx drops
> 0 pps, 1396 max pps,  0 rx drops, 0 tx drops
> 0 pps, 1396 max pps,  0 rx drops, 0 tx drops
> ^C0 pps, 1396 max pps,  0 rx drops, 0 tx drops
> TEST RESULT: 1396 maximum packets per second.
> 
> 
> 
> P Gyanesh Kumar Patra
> 
> On Tue, Feb 6, 2018 at 9:55 AM, Elo, Matias (Nokia - FI/Espoo) 
> <matias@nokia.com> wrote:
> 
> 
> > On 5 Feb 2018, at 19:42, Bill Fischofer <bill.fischo...@linaro.org> wrote:
> >
> > Thanks, Gyanesh, that does sound like a bug. +cc Matias: Can you comment on 
> > this?
> >
> > On Mon, Feb 5, 2018 at 5:09 AM, gyanesh patra <pgyanesh.pa...@gmail.com> 
> > wrote:
> > I am testing an l2fwd use-case. I am executing the use-case with two
> > CPUs & two interfaces.
> > One interface with 2 Rx queues receives pkts using 2 threads with 2
> > associated CPUs. Both the
> > threads can forward the packet over the 2nd interface which also has 2 Tx
> > queues mapped to
> > 2 CPUs. I am sending packets from an external packet generator and
> > confirmed that both
> > queues are receiving packets.
> > When I run odp_pktin_recv() on both the queues, the packet
> > forwarding works fine. But if I put a sleep() or add a busy loop instead
> > of odp_pktin_recv()
> > on one thread, then the other thread stops receiving packets. If I
> > replace the sleep with odp_pktin_recv(), both the queues start receiving
> > packets again. I encountered this problem on the DPDK pktio support on
> > ODP 1.16 and ODP 1.17.
> > On socket-mmap it works fine. Is it expected behavior or a potential bug?
> >
> 
> 
> Hi Gyanesh,
> 
> Could you please share an example code which produces this issue? Does this 
> happen also if you enable zero-copy dpdk pktio (--enable-dpdk-zero-copy)?
> 
> Socket-mmap pktio doesn't support MQ, so comparison to that doesn't make much 
> sense. Netmap pktio supports MQ.
> 
> Regards,
> Matias
> 
> 
> 



Re: [lng-odp] unexpected packet handling behavior with dpdk pktio support

2018-02-06 Thread Elo, Matias (Nokia - FI/Espoo)


> On 6 Feb 2018, at 13:55, Elo, Matias (Nokia - FI/Espoo) 
> <matias@nokia.com> wrote:
> 
> 
> 
>> On 5 Feb 2018, at 19:42, Bill Fischofer <bill.fischo...@linaro.org> wrote:
>> 
>> Thanks, Gyanesh, that does sound like a bug. +cc Matias: Can you comment on 
>> this?
>> 
>> On Mon, Feb 5, 2018 at 5:09 AM, gyanesh patra <pgyanesh.pa...@gmail.com> 
>> wrote:
>> I am testing an l2fwd use-case. I am executing the use-case with two
>> CPUs & two interfaces.
>> One interface with 2 Rx queues receives pkts using 2 threads with 2
>> associated CPUs. Both the
>> threads can forward the packet over the 2nd interface which also has 2 Tx
>> queues mapped to
>> 2 CPUs. I am sending packets from an external packet generator and
>> confirmed that both
>> queues are receiving packets.
>> When I run odp_pktin_recv() on both the queues, the packet
>> forwarding works fine. But if I put a sleep() or add a busy loop instead
>> of odp_pktin_recv()
>> on one thread, then the other thread stops receiving packets. If I
>> replace the sleep with odp_pktin_recv(), both the queues start receiving
>> packets again. I encountered this problem on the DPDK pktio support on
>> ODP 1.16 and ODP 1.17.
>> On socket-mmap it works fine. Is it expected behavior or a potential bug?
>> 
> 
> 
> Hi Gyanesh,
> 
> Could you please share an example code which produces this issue? Does this 
> happen also if you enable zero-copy dpdk pktio (--enable-dpdk-zero-copy)? 
> 
> Socket-mmap pktio doesn't support MQ, so comparison to that doesn't make much 
> sense. Netmap pktio supports MQ.
> 
> Regards,
> Matias
> 

Using a too-small packet pool can also cause symptoms like this, so you could try 
increasing the packet pool size.

-Matias
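
For reference, creating a packet pool with explicit sizing looks roughly like
this (a minimal sketch; the numbers are illustrative, not a recommendation):

#include <odp_api.h>

static odp_pool_t create_pkt_pool(void)
{
    odp_pool_param_t params;

    odp_pool_param_init(&params);
    params.type        = ODP_POOL_TYPE_PACKET;
    params.pkt.num     = 8192; /* illustrative: keep above the worst-case in-flight count */
    params.pkt.len     = 1536; /* minimum packet length the pool must support */
    params.pkt.seg_len = 1536; /* first-segment length; large enough to avoid segmentation */

    /* Returns ODP_POOL_INVALID on failure (e.g. out of shm/huge pages) */
    return odp_pool_create("packet pool", &params);
}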
 

Re: [lng-odp] Compilation flags for release build and performance evaluation

2018-02-06 Thread Elo, Matias (Nokia - FI/Espoo)


> On 31 Jan 2018, at 21:41, gyanesh patra  wrote:
> 
> Hi,
> I am curious if there are any specific flags available for ODP for release
> builds or performance evaluation?
> 

Hi,

For performance evaluation you probably want to set '--disable-abi-compat' to 
enable function inlining.

-Matias
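
For example (on top of whatever other options your setup needs):

$ ./configure --disable-abi-compat
$ make -j $(nproc)

With ABI compatibility disabled, the fast-path functions can be inlined
instead of going through ABI-compatible function calls.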




Re: [lng-odp] unexpected packet handling behavior with dpdk pktio support

2018-02-06 Thread Elo, Matias (Nokia - FI/Espoo)


> On 5 Feb 2018, at 19:42, Bill Fischofer  wrote:
> 
> Thanks, Gyanesh, that does sound like a bug. +cc Matias: Can you comment on 
> this?
> 
> On Mon, Feb 5, 2018 at 5:09 AM, gyanesh patra  
> wrote:
> I am testing an l2fwd use-case. I am executing the use-case with two
> CPUs & two interfaces.
> One interface with 2 Rx queues receives pkts using 2 threads with 2
> associated CPUs. Both the
> threads can forward the packet over the 2nd interface which also has 2 Tx
> queues mapped to
> 2 CPUs. I am sending packets from an external packet generator and
> confirmed that both
> queues are receiving packets.
> When I run odp_pktin_recv() on both the queues, the packet
> forwarding works fine. But if I put a sleep() or add a busy loop instead
> of odp_pktin_recv()
> on one thread, then the other thread stops receiving packets. If I
> replace the sleep with odp_pktin_recv(), both the queues start receiving
> packets again. I encountered this problem on the DPDK pktio support on
> ODP 1.16 and ODP 1.17.
> On socket-mmap it works fine. Is it expected behavior or a potential bug?
> 


Hi Gyanesh,

Could you please share an example code which produces this issue? Does this 
happen also if you enable zero-copy dpdk pktio (--enable-dpdk-zero-copy)? 

Socket-mmap pktio doesn't support MQ, so comparison to that doesn't make much 
sense. Netmap pktio supports MQ.

Regards,
Matias
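
A minimal sketch of the reported scenario, for reference (assumes two
direct-mode pktin queues are already opened; pktin_queue, BURST and
forward_packets() are illustrative names):

/* Worker A: normal direct-mode receive loop on queue 0 */
while (running) {
    odp_packet_t pkts[BURST];
    int n = odp_pktin_recv(pktin_queue[0], pkts, BURST);

    if (n > 0)
        forward_packets(pkts, n); /* illustrative forwarding step */
}

/* Worker B: per the report, replacing the receive call on queue 1 with a
 * sleep or busy loop stalls Worker A's reception as well. */
while (running)
    sleep(1);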



Re: [lng-odp] odp dpdk

2017-12-05 Thread Elo, Matias (Nokia - FI/Espoo)
When I tested enabling HW checksum with Fortville NICs (i40e), the slower driver 
path alone caused a ~20% throughput drop in the l2fwd test. This was without 
actually calculating the checksums; I simply forced the slower driver path (no 
vectorization).

-Matias
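
For context, the checksum offloads under discussion are enabled per pktio
roughly as follows (a sketch; assumes the interface capability actually
reports checksum support, and the config is applied before odp_pktio_start()):

odp_pktio_config_t config;

odp_pktio_config_init(&config);
config.pktin.bit.ipv4_chksum      = 1; /* validate IPv4 header checksum on RX */
config.pktin.bit.udp_chksum       = 1; /* validate UDP checksum on RX */
config.pktout.bit.ipv4_chksum_ena = 1; /* allow per-packet IPv4 checksum insert on TX */
config.pktout.bit.udp_chksum_ena  = 1; /* allow per-packet UDP checksum insert on TX */

if (odp_pktio_config(pktio, &config))
    printf("checksum offload not supported\n");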


> On 5 Dec 2017, at 8:59, Bogdan Pricope  wrote:
> 
> On the RX side this is a kind-of expected result, since it uses scheduler mode.
> 
> On TX side there is this drop from 10 mpps to 7.69 mpps that is unexpected.
> 
> So Petri, when you said:
> "DPDK uses less optimized driver code (on Intel NICs at least) when
> any of the L4 checksum offloads is enabled."
> 
> you were referring to this kind of drop in performance?
> 
> There is that 'folklore' that SW csum is faster on small packets while
> HW csum is faster on bigger packets. Do you have this kind of data?
> 
> Anyway, for this particular case (odp_generator), since UDP
> header/payload is not changing during the test (for now), csum is
> calculated only once at the beginning of the test: so we are comparing
> HW IPv4 + HW UDP csum vs. SW IPv4 csum, yet the difference in
> performance is huge...
> 
> 
> On 4 December 2017 at 20:37, Maxim Uvarov  wrote:
>> I added isolcpus and mounted huge pages; TX became more stable at 7.6M. But
>> anyway it's better to test performance for this PR, because the previous
>> speed was 10M.
>> 
>> Maxim.
>> 
>> On 12/04/17 19:42, Honnappa Nagarahalli wrote:
>>> Can you run with Linux-DPDK in ODP 2.0?
>>> 
>>> On 4 December 2017 at 09:54, Maxim Uvarov  wrote:
 after clean patches apply and fix in run scripts I made it run.
 
 But results is really bad. --enable-dpdk-zero-copy
 
 TX rate is:
 7673155 pps
 
 RX rate is:
 5989846 pps
 
 
 Before patch PR 313 TX was 10M pps.
 
 I re-ran the task and TX is 3.3M pps. All tests are single core, so
 something strange happens in Lava or this PR.
 
 Maxim.
 
 
 On 12/04/17 17:03, Bogdan Pricope wrote:
> On TX (https://lng.validation.linaro.org/scheduler/job/23252.0) I see:
> 
> ODP_REPO='https://github.com/muvarov/odp'
> ODP_BRANCH='api-next'
> 
> 
> On RX (https://lng.validation.linaro.org/scheduler/job/23252.1) I see:
> 
> ODP_REPO='https://github.com/muvarov/odp'
> ODP_BRANCH='devel/api-next_shsum'
> 
> 
> or are you referring to other test?
> 
> 
> On 4 December 2017 at 15:53, Maxim Uvarov  wrote:
>> 
>> 
>> On 4 December 2017 at 15:11, Bogdan Pricope 
>> wrote:
>>> 
>>> You need to put 313 on TX side (not RX).
>> 
>> 
>> 
>> both rx and tx have patches from 313. l2fwd works on recv side. Generator
>> does not work.
>> 
>> Maxim.
>> 
>> 
>>> 
>>> 
>>> On 4 December 2017 at 13:19, Savolainen, Petri (Nokia - FI/Espoo)
>>>  wrote:
 Is the DPDK version 17.08 ? Other versions might not work properly.
 
 
 
 -Petri
 
 
 
 From: Maxim Uvarov [mailto:maxim.uva...@linaro.org]
 Sent: Monday, December 04, 2017 1:10 PM
 To: Savolainen, Petri (Nokia - FI/Espoo) 
 Cc: Bogdan Pricope ; lng-odp-forward
 
 
 
 Subject: Re: [lng-odp] odp dpdk
 
 
 
 313 does not work also:
 
 https://lng.validation.linaro.org/scheduler/job/23242.1
 
 I will replace RX side to l2fwd and see that will be there.
 
 Maxim.
 
 
 
 
 
 On 4 December 2017 at 13:46, Savolainen, Petri (Nokia - FI/Espoo)
  wrote:
 
 Maxim, try https://github.com/Linaro/odp/pull/313 It has been tested to
 fix
 checksum insert for 10/40GE Intel NICs.
 
 -Petri
 
 
> -Original Message-
> From: lng-odp [mailto:lng-odp-boun...@lists.linaro.org] On Behalf Of
> Bogdan Pricope
> Sent: Monday, December 04, 2017 12:21 PM
> To: Maxim Uvarov 
> Cc: lng-odp-forward 
> Subject: Re: [lng-odp] odp dpdk
> 
> I suspect this is actually caused by csum issue in TX side: on RX,
> socket pktio does not validate csum (and accept the packets) but on
> dpdk pktio the csum is validated and packets are dropped.
> 
> I am not seeing this in my setup because default txq_flags for igb
> driver (1G interface) is
> 
> .txq_flags = 0
> 
> while for ixgbe (10G interface) is:
> 
> .txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS |

Re: [lng-odp] issues with usage of mellanox 100G NICs with ODP & ODP-DPDK

2017-11-09 Thread Elo, Matias (Nokia - FI/Espoo)
Hi Gyanesh,

Pretty much the same steps should also work with odp linux-generic. The main 
difference is the configure script: with linux-generic you use the 
'--with-dpdk-path=' option and optionally the --enable-dpdk-zero-copy 
flag. The supported DPDK version is v17.08.

-Matias
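
For example (a sketch; <dpdk-install-dir> is a placeholder for your DPDK build
directory, and LDFLAGS=-libverbs matches the Mellanox setup mentioned below):

$ ./configure --with-dpdk-path=<dpdk-install-dir> --enable-dpdk-zero-copy LDFLAGS=-libverbs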

> On 9 Nov 2017, at 10:34, gyanesh patra  wrote:
> 
> Hi Maxim,
> Thanks for the help. I managed to figure out the configuration error and it
> works fine for "ODP-DPDK". The MLX5 pmd was not included properly.
> 
> But regarding the "ODP" repo (not odp-dpdk), do I need to follow any steps to
> be able to use MLX?
> 
> 
> P Gyanesh Kumar Patra
> 
> On Wed, Nov 8, 2017 at 7:56 PM, Maxim Uvarov 
> wrote:
> 
>> On 11/08/17 19:32, gyanesh patra wrote:
>>> I am not sure what you mean. Can you please elaborate?
>>> 
>>> As i mentioned before I am able to run dpdk examples. Hence the drivers
>>> are available and working fine.
>>> I configured ODP & ODP-DPDK with "LDFLAGS=-libverbs" and compiled to
>>> work with mellanox. I followed the same while compiling dpdk too.
>>> 
>>> Is there anything i am missing?
>>> 
>>> P Gyanesh Kumar Patra
>> 
>> 
>> in general if CONFIG_RTE_LIBRTE_MLX5_PMD=y was specified then it has to
>> work. I think we did test only with ixgbe. But in general it's common code.
>> 
>> "Unable to init any I/O type." means it it called all open for all pktio
>> in list here:
>> ./platform/linux-generic/pktio/io_ops.c
>> 
>> and setup_pkt_dpdk() failed for some reason.
>> 
>> I do not like the allocation errors in your log.
>> 
>> Try to compile ODP with --enable-debug-print --enable-debug it will make
>> ODP_DBG() macro work and it will be visible why it does not opens pktio.
>> 
>> Maxim
>> 
>> 
>>> 
>>> On Wed, Nov 8, 2017 at 5:22 PM, Maxim Uvarov >> > wrote:
>>> 
>>>is Mellanox pmd compiled in?
>>> 
>>>Maxim.
>>> 
>>>On 11/08/17 17:58, gyanesh patra wrote:
 Hi,
 I am trying to run ODP & ODP-DPDK examples on our server with
>>>mellanox 100G
 NICs. I am using the odp_l2fwd example. While running the example,
>>>I am
 facing some issues.
 -> When I run "ODP" example using the if names given by kernel as
 arguments, I am not getting enough throughput (the value is very
>> low).
 -> And when I try "ODP-DPDK" example using port ID as "0,1", it
>> can't
 create pktio. Whereas I am able to run the examples from "DPDK"
 repo with portID "0,1" for the same mellanox NICs. I tried running
>>>with
 "81:00.0,81:00.1" and also with if-names too without any success.
>>>Adding
 the whitelist using ODP_PLATFORM_PARAMS doesn't help either.
 
 Am I missing any steps to use mellanox NICs? OR is there a
>>>different method
 to specify the device details to create pktio?
 I am providing the output of "odp_l2fwd" examples for ODP and
>> ODP-DPDK
 repository here.
 
 The NICs being used:
 
 :81:00.0 'MT27700 Family [ConnectX-4]' if=enp129s0f0
>> drv=mlx5_core
 unused=
 :81:00.1 'MT27700 Family [ConnectX-4]' if=enp129s0f1
>> drv=mlx5_core
 unused=
 
 ODP l2fwd example run details:
 --
 root@ubuntu:/home/ubuntu/odp/test/performance# ./odp_l2fwd -i
 enp129s0f0,enp129s0f1
 HW time counter freq: 2399999886 hz
 
 _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate
>> memory
 _ishm.c:880:_odp_ishm_reserve():No huge pages, fall back to normal
>>>pages.
 check: /proc/sys/vm/nr_hugepages.
 _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate
>> memory
 _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate
>> memory
 _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate
>> memory
 PKTIO: initialized loop interface.
 PKTIO: initialized pcap interface.
 PKTIO: initialized ipc interface.
 PKTIO: initialized socket mmap, use export
>>>ODP_PKTIO_DISABLE_SOCKET_MMAP=1
 to disable.
 PKTIO: initialized socket mmsg,use export
>>>ODP_PKTIO_DISABLE_SOCKET_MMSG=1
 to disable.
 _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate
>> memory
 _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate
>> memory
 _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate
>> memory
 _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate
>> memory
 
 ODP system info
 ---
 ODP API version: 1.15.0
 ODP impl name:   "odp-linux"
 CPU model:   Intel(R) Xeon(R) CPU E5-2680 v4
 CPU freq (hz):   33
 Cache line size: 64
 CPU count:   56
 
 
 CPU features supported:
 SSE3 PCLMULQDQ DTES64 MONITOR DS_CPL VMX SMX EIST TM2 SSSE3 FMA
>>>CMPXCHG16B
 XTPR PDCM PCID DCA SSE4_1 SSE4_2 X2APIC MOVBE POPCNT TSC_DEADLINE
>>>AES XSAVE
 OSXSAVE AVX F16C RDRAND 

Re: [lng-odp] Static ODP library and DPDK drivers

2017-10-24 Thread Elo, Matias (Nokia - FI/Espoo)


> On 24 Oct 2017, at 14:11, Dmitry Eremin-Solenikov 
> <dmitry.ereminsoleni...@linaro.org> wrote:
> 
> Hello,
> 
> On 24/10/17 14:02, Elo, Matias (Nokia - FI/Espoo) wrote:
>> Hi Dmitry,
>> 
>> Currently, when odp is configured with '--disable-shared' flag, dpdk drivers 
>> are not included in the resulting libodp-linux.a library (doesn't include 
>> any dpdk driver symbols) and hence an application using this library 
>> doesn't find any dpdk devices. However, if the flag is not used, the drivers 
>> are included in the shared libodp-linux.so and everything works as expected. 
>> Do you have any ideas how this problem could be fixed?
> 
> Neither does it include openssl, pcap or any other dependency. Use
> pkg-config to generate full list of required flags for linking. It will
> include all DPDK libraries, if ODP is configured to use DPDK.

OK, thanks!

-Matias
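
For static builds the full link line can then be generated along these lines
(assuming the installed pkg-config file is named libodp-linux, as in odp-linux):

$ pkg-config --static --cflags --libs libodp-linux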

> 
> -- 
> With best wishes
> Dmitry



[lng-odp] Static ODP library and DPDK drivers

2017-10-24 Thread Elo, Matias (Nokia - FI/Espoo)
Hi Dmitry,

Currently, when odp is configured with the '--disable-shared' flag, dpdk drivers 
are not included in the resulting libodp-linux.a library (it doesn't include any 
dpdk driver symbols) and hence an application using this library doesn't find 
any dpdk devices. However, if the flag is not used, the drivers are included in 
the shared libodp-linux.so and everything works as expected. Do you have any 
ideas how this problem could be fixed?

Regards,
Matias



[lng-odp] Test

2017-10-17 Thread Elo, Matias (Nokia - FI/Espoo)
Testing spam filter.


Re: [lng-odp] [PATCH 0/6] dpdk pktio: enable hardware checksum support

2017-07-21 Thread Elo, Matias (Nokia - FI/Espoo)
Thanks! This patch set is probably going to require rebasing since the dpdk 
zero-copy patch set is now merged.

-Matias

> On 21 Jul 2017, at 10:54, Bogdan Pricope <bogdan.pric...@linaro.org> wrote:
> 
> Have a nice vacation, Matias!!
> We will have time for this in the autumn...
> 
> On 21 July 2017 at 10:36, Maxim Uvarov <maxim.uva...@linaro.org> wrote:
>> ok, don't worry, have a good time!
>> 
>> On 21 July 2017 at 08:53, Elo, Matias (Nokia - FI/Espoo) <
>> matias@nokia.com> wrote:
>> 
>>> I'm starting my vacation today and have more acute things on my plate, so
>>> unfortunately I don't have time to review the patch for a couple of weeks.
>>> 
>>> -Matias
>>> 
>>>> On 20 Jul 2017, at 23:16, Maxim Uvarov <maxim.uva...@linaro.org> wrote:
>>>> 
>>>> Krishna, Matias, please review dpdk changes.
>>>> 
>>>> Maxim.
>>>> 
>>>> On 07/19/17 16:35, Bogdan Pricope wrote:
>>>>> Ping?
>>>>> 
>>>>> We still want this for odp-linux or we should implement it on odp-dpdk
>>>>> only (as soon as repository is updated)?
>>>>> 
>>>>> /B
>>>>> 
>>>>> On 20 June 2017 at 12:20, Bogdan Pricope <bogdan.pric...@linaro.org>
>>> wrote:
>>>>>> Ping?
>>>>>> 
>>>>>> On 31 May 2017 at 17:40, Bogdan Pricope <bogdan.pric...@linaro.org>
>>> wrote:
>>>>>>> Add HW checksum calculation/validation support for dpdk pktio.
>>>>>>> No UDP/TCP HW checksum calculation/validation support for:
>>>>>>> - IPv4 fragments
>>>>>>> - IPv6 packets with extension headers (including fragments)
>>>>>>> 
>>>>>>> Bogdan Pricope (6):
>>>>>>> Initialize pktio configuration structure
>>>>>>> dpdk: retrieve offload capabilities
>>>>>>> dpdk: enable per pktio RX IP/UDP/TCP checksum offload
>>>>>>> dpdk: RX - process checksum validation offload flags
>>>>>>> dpdk: TX - set checksum calculation offload flags
>>>>>>> examples: generator: update odp_generator to use HW checksum
>>>>>>>   capabilities
>>>>>>> 
>>>>>>> example/generator/odp_generator.c  | 107 ++---
>>>>>>> platform/linux-generic/odp_packet_io.c |   2 +
>>>>>>> platform/linux-generic/pktio/dpdk.c| 203
>>> -
>>>>>>> 3 files changed, 293 insertions(+), 19 deletions(-)
>>>>>>> 
>>>>>>> --
>>>>>>> 1.9.1
>>>>>>> 
>>>> 
>>> 
>>> 



[lng-odp] Monarch_lts tagging

2017-07-21 Thread Elo, Matias (Nokia - FI/Espoo)
Hi Bill,

I took a look at the patches in monarch_lts and as far as I can see the only 
fix missing, from the issues our guys have reported, is the "linux-gen: pktio: 
fix valgrind warnings" patch. This patch applies without conflicts on top the 
current monarch_lts branch.

-Matias



Re: [lng-odp] [PATCH 0/6] dpdk pktio: enable hardware checksum support

2017-07-20 Thread Elo, Matias (Nokia - FI/Espoo)
I'm starting my vacation today and have more acute things on my plate, so 
unfortunately I don't have time to review the patch for a couple of weeks.

-Matias

> On 20 Jul 2017, at 23:16, Maxim Uvarov  wrote:
> 
> Krishna, Matias, please review dpdk changes.
> 
> Maxim.
> 
> On 07/19/17 16:35, Bogdan Pricope wrote:
>> Ping?
>> 
>> We still want this for odp-linux or we should implement it on odp-dpdk
>> only (as soon as repository is updated)?
>> 
>> /B
>> 
>> On 20 June 2017 at 12:20, Bogdan Pricope  wrote:
>>> Ping?
>>> 
>>> On 31 May 2017 at 17:40, Bogdan Pricope  wrote:
 Add HW checksum calculation/validation support for dpdk pktio.
 No UDP/TCP HW checksum calculation/validation support for:
 - IPv4 fragments
 - IPv6 packets with extension headers (including fragments)
 
 Bogdan Pricope (6):
  Initialize pktio configuration structure
  dpdk: retrieve offload capabilities
  dpdk: enable per pktio RX IP/UDP/TCP checksum offload
  dpdk: RX - process checksum validation offload flags
  dpdk: TX - set checksum calculation offload flags
  examples: generator: update odp_generator to use HW checksum
capabilities
 
 example/generator/odp_generator.c  | 107 ++---
 platform/linux-generic/odp_packet_io.c |   2 +
 platform/linux-generic/pktio/dpdk.c| 203 
 -
 3 files changed, 293 insertions(+), 19 deletions(-)
 
 --
 1.9.1
 
> 



Re: [lng-odp] [PATCHv3] linux-gen: scheduler: clean up odp_scheduler_if.h

2017-07-19 Thread Elo, Matias (Nokia - FI/Espoo)

> On 19 Jul 2017, at 6:25, Honnappa Nagarahalli 
> <honnappa.nagaraha...@linaro.org> wrote:
> 
> On 18 July 2017 at 06:37, Elo, Matias (Nokia - FI/Espoo)
> <matias@nokia.com> wrote:
>> 
>>> On 18 Jul 2017, at 6:58, Honnappa Nagarahalli 
>>> <honnappa.nagaraha...@linaro.org> wrote:
>>> 
>>> On 17 July 2017 at 04:23, Elo, Matias (Nokia - FI/Espoo)
>>> <matias@nokia.com> wrote:
>>>> Does this patch fix some real problem? At least for me it only makes the 
>>>> scheduler interface harder to follow by spreading the functions into 
>>>> multiple headers.
>>> 
>>> I have said this again and again. odp_schedule_if.h is a scheduler
>>> interface file. i.e this file is supposed to contain
>>> services/functions provided by scheduler to other components in ODP
>>> (similar to what has been done in odp_queue_if.h - appreciate if a
>>> closer attention is paid to this). Now, this file contains functions
>>> provided by packet I/O (among other things). Appreciate if you could
>>> explain why this file should contain these functions?
>>> 
>>> Also, Petri has understood what this patch does, can you check with him?
>> 
>> These functions are used by the schedulers to interface with other ODP
>> components, so the scheduler_if.h is a logical place to define them. When
>> implementing a new scheduler it's helpful to see all available  functions 
>> from one
>> place. I'm not fundamentally against this patch, but it's the task of the 
>> patch
>> submitter to justify why a change is needed, not the other way around.
>> 
>> Petri was originally opposed to moving these functions into xyz_internal.h 
>> headers,
>> and only approved moving the functions into xyz_if.h files if it must be 
>> done.
>> 
>> I'm just trying to understand why this change is necessary. A patch like this
>> would be a lot easier to justify if it was sent as a part of the patch set 
>> which requires
>> this change. Without that, a more comprehensive commit log would be helpful.
> 
> Any suggestions on the commit message?
> 
> Does adding the following sentence help?
> 
> "odp_schedule_if.h is the scheduler interface file. i.e this file is
> supposed to contain services/functions provided by scheduler to other
> components in ODP"

Based on your older message the motivation for moving these functions is that 
they are 
related to the default scheduler and queue implementations. This would be a 
good point to
mention.

> 
>> 
>> The naming of the odp_packet_io_if.h is now a bit confusing as we already 
>> have the
>> pktio_if_ops_t struct in odp_packet_io_internal.h, which is the actual pktio 
>> interface.
> 
> I definitely agree with you on this. That's how v1 of the patch was.
> It is changed after Petri's suggestion to move it to
> odp_packet_io_if.h.

In a previous email Petri suggested filename odp_queue_sched_if.h. 
Extrapolating from this, odp_packet_io_if.h should be named 
odp_packet_io_sched_if.h.




Re: [lng-odp] [PATCHv3] linux-gen: scheduler: clean up odp_scheduler_if.h

2017-07-18 Thread Elo, Matias (Nokia - FI/Espoo)

> On 18 Jul 2017, at 6:58, Honnappa Nagarahalli 
> <honnappa.nagaraha...@linaro.org> wrote:
> 
> On 17 July 2017 at 04:23, Elo, Matias (Nokia - FI/Espoo)
> <matias@nokia.com> wrote:
>> Does this patch fix some real problem? At least for me it only makes the 
>> scheduler interface harder to follow by spreading the functions into 
>> multiple headers.
> 
> I have said this again and again. odp_schedule_if.h is a scheduler
> interface file. i.e this file is supposed to contain
> services/functions provided by scheduler to other components in ODP
> (similar to what has been done in odp_queue_if.h - appreciate if a
> closer attention is paid to this). Now, this file contains functions
> provided by packet I/O (among other things). Appreciate if you could
> explain why this file should contain these functions?
> 
> Also, Petri has understood what this patch does, can you check with him?

These functions are used by the schedulers to interface with other ODP
components, so the scheduler_if.h is a logical place to define them. When
implementing a new scheduler it's helpful to see all available  functions from 
one
place. I'm not fundamentally against this patch, but it's the task of the patch
submitter to justify why a change is needed, not the other way around.

Petri was originally opposed to moving these functions into xyz_internal.h 
headers,
and only approved moving the functions into xyz_if.h files if it must be done.

I'm just trying to understand why this change is necessary. A patch like this
would be a lot easier to justify if it was sent as a part of the patch set 
which requires
this change. Without that, a more comprehensive commit log would be helpful.

The naming of the odp_packet_io_if.h is now a bit confusing as we already have 
the
pktio_if_ops_t struct in odp_packet_io_internal.h, which is the actual pktio 
interface.

-Matias



Re: [lng-odp] [PATCHv3] linux-gen: scheduler: clean up odp_scheduler_if.h

2017-07-17 Thread Elo, Matias (Nokia - FI/Espoo)
Does this patch fix some real problem? At least for me it only makes the 
scheduler interface harder to follow by spreading the functions into multiple 
headers.

-Matias


> On 17 Jul 2017, at 10:26, Joyce Kong  wrote:
> 
> The modular scheduler interface in odp_schedule_if.h includes functions from
> pktio and queue. It needs to be cleaned up.
> 
> Signed-off-by: Joyce Kong 
> ---
> platform/linux-generic/Makefile.am |  2 ++
> platform/linux-generic/include/odp_packet_io_if.h  | 23 +
> .../linux-generic/include/odp_queue_sched_if.h | 24 ++
> platform/linux-generic/include/odp_schedule_if.h   |  9 
> platform/linux-generic/odp_packet_io.c |  1 +
> platform/linux-generic/odp_queue.c |  1 +
> platform/linux-generic/odp_schedule.c  |  2 ++
> platform/linux-generic/odp_schedule_iquery.c   |  2 ++
> platform/linux-generic/odp_schedule_sp.c   |  2 ++
> 9 files changed, 57 insertions(+), 9 deletions(-)
> create mode 100644 platform/linux-generic/include/odp_packet_io_if.h
> create mode 100644 platform/linux-generic/include/odp_queue_sched_if.h
> 
> diff --git a/platform/linux-generic/Makefile.am 
> b/platform/linux-generic/Makefile.am
> index 26eba28..5295abb 100644
> --- a/platform/linux-generic/Makefile.am
> +++ b/platform/linux-generic/Makefile.am
> @@ -150,6 +150,7 @@ noinst_HEADERS = \
> ${srcdir}/include/odp_packet_io_internal.h \
> ${srcdir}/include/odp_packet_io_ipc_internal.h \
> ${srcdir}/include/odp_packet_io_ring_internal.h \
> +   ${srcdir}/include/odp_packet_io_if.h \
> ${srcdir}/include/odp_packet_netmap.h \
> ${srcdir}/include/odp_packet_dpdk.h \
> ${srcdir}/include/odp_packet_socket.h \
> @@ -160,6 +161,7 @@ noinst_HEADERS = \
> ${srcdir}/include/odp_queue_internal.h \
> ${srcdir}/include/odp_ring_internal.h \
> ${srcdir}/include/odp_queue_if.h \
> +   ${srcdir}/include/odp_queue_sched_if.h \
> ${srcdir}/include/odp_schedule_if.h \
> ${srcdir}/include/odp_sorted_list_internal.h \
> ${srcdir}/include/odp_shm_internal.h \
> diff --git a/platform/linux-generic/include/odp_packet_io_if.h 
> b/platform/linux-generic/include/odp_packet_io_if.h
> new file mode 100644
> index 000..e574f22
> --- /dev/null
> +++ b/platform/linux-generic/include/odp_packet_io_if.h
> @@ -0,0 +1,23 @@
> +/* Copyright (c) 2017, ARM Limited
> + * All rights reserved.
> + *
> + *SPDX-License-Identifier:   BSD-3-Clause
> + */
> +
> +#ifndef ODP_PACKET_IO_IF_H_
> +#define ODP_PACKET_IO_IF_H_
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +/* Interface for the scheduler */
> +int sched_cb_pktin_poll(int pktio_index, int num_queue, int index[]);
> +void sched_cb_pktio_stop_finalize(int pktio_index);
> +int sched_cb_num_pktio(void);
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif
> diff --git a/platform/linux-generic/include/odp_queue_sched_if.h 
> b/platform/linux-generic/include/odp_queue_sched_if.h
> new file mode 100644
> index 000..4a301f4
> --- /dev/null
> +++ b/platform/linux-generic/include/odp_queue_sched_if.h
> @@ -0,0 +1,24 @@
> +/* Copyright (c) 2017, ARM Limited
> + * All rights reserved.
> + *
> + *SPDX-License-Identifier:   BSD-3-Clause
> + */
> +
> +#ifndef ODP_QUEUE_SCHED_IF_H_
> +#define ODP_QUEUE_SCHED_IF_H_
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +/* Interface for the scheduler */
> +odp_queue_t sched_cb_queue_handle(uint32_t queue_index);
> +void sched_cb_queue_destroy_finalize(uint32_t queue_index);
> +int sched_cb_queue_deq_multi(uint32_t queue_index, odp_event_t ev[], int 
> num);
> +int sched_cb_queue_empty(uint32_t queue_index);
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif
> diff --git a/platform/linux-generic/include/odp_schedule_if.h 
> b/platform/linux-generic/include/odp_schedule_if.h
> index 4cd8c3e..9a1f3ff 100644
> --- a/platform/linux-generic/include/odp_schedule_if.h
> +++ b/platform/linux-generic/include/odp_schedule_if.h
> @@ -64,15 +64,6 @@ typedef struct schedule_fn_t {
> /* Interface towards the scheduler */
> extern const schedule_fn_t *sched_fn;
> 
> -/* Interface for the scheduler */
> -int sched_cb_pktin_poll(int pktio_index, int num_queue, int index[]);
> -void sched_cb_pktio_stop_finalize(int pktio_index);
> -int sched_cb_num_pktio(void);
> -odp_queue_t sched_cb_queue_handle(uint32_t queue_index);
> -void sched_cb_queue_destroy_finalize(uint32_t queue_index);
> -int sched_cb_queue_deq_multi(uint32_t queue_index, odp_event_t ev[], int 
> num);
> -int sched_cb_queue_empty(uint32_t queue_index);
> -
> /* API functions */
> typedef struct {
>   uint64_t (*schedule_wait_time)(uint64_t);
> diff --git a/platform/linux-generic/odp_packet_io.c 
> 

Re: [lng-odp] [PATCH 7/7] linux-gen: dpdk: enable zero-copy operation

2017-07-10 Thread Elo, Matias (Nokia - FI/Espoo)
> 
> For travis, please first see how the result matrix looks:
> 
> https://travis-ci.org/Linaro/odp/builds/251051276?utm_source=github_status_medium=notification
> 
> 
> So you need separate:
>- stage: test
>  env: TEST=distcheck
> 
> entry.
> Like clone entry "TEST=coverage" and put your options there.
> 
> 
> If you only need to provide one option, then you just need to add one line
> here:
> 
> env:
>- CONF=""
>- CONF="--disable-abi-compat"
>- CONF="--enable-schedule-sp"
>- CONF="--enable-schedule-iquery"
>- CONF="--enable-schedule-scalable"
>   - CONF="--enable-dpdk-zero-copy"
> 
> 
> and it will be tested with clang/gcc and what we have. You can test how
> it works on your private github repo.
> 

OK, thanks for the info.

-Matias



Re: [lng-odp] [PATCH 7/7] linux-gen: dpdk: enable zero-copy operation

2017-07-10 Thread Elo, Matias (Nokia - FI/Espoo)

> On 10 Jul 2017, at 12:17, Krishna Garapati <balakrishna.garap...@linaro.org> 
> wrote:
> 
> 
> 
> On 10 July 2017 at 10:13, Elo, Matias (Nokia - FI/Espoo) 
> <matias@nokia.com> wrote:
> 
> > +   pkt_pool = mbuf_pool_create(pkt_dpdk->pool_name,
> > +   pool_entry->num, cache,
> > +   pool_entry->max_seg_len 
> > +
> > +   CONFIG_PACKET_HEADROOM,
> > +   pkt_dpdk);
> > instead of passing the whole pkt_dpdk struct, can you just pass odp_pool_t 
> > & calculate just data_room in the dequeue_bulk ?. This way the pool 
> > configuration will be more clearer & less dependent on pkt_dpdk.
> >
> 
> The value of 'data_room' is constant, so it doesn't make sense to recalculate 
> it in the fast path every time pool_dequeue_bulk() is called. 
> mbuf_pool_create() is only used by the dpdk pktio, so the dependency is not a 
> problem.
> I was actually referring to storing whole pkt_dpdk_t in to the 
> "rte_mempool_set_ops_byname".
> I see that it would be clear if we  just store odp_pool_t & store or 
> calculate the data_room part some other means. I agree that it's not good to 
> calculate data_room in the fast path.
> 

Using odp_pool_t as the only argument would indeed be clearer. The problem is 
that the only way to pass arguments to pool_dequeue_bulk() is through the 
rte_mempool struct, and storing the 'data_room' there would not be any better.

I originally intended to keep the mbuf_pool_create() arguments as close as 
possible to the rte_pktmbuf_pool_create() function but one option would be to 
change mbuf_pool_create() to use pool_entry* as argument.

-> static struct rte_mempool *mbuf_pool_create(const char *name, pool_t 
*pool_entry)

--> rte_mempool.pool_data = odp_pool_t
--> rte_mempool.pool_config = pool_entry *
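
With that mapping the custom dequeue can recover the pool entry directly,
roughly like this (a sketch of the idea, not final code):

static int pool_dequeue_bulk(struct rte_mempool *mp, void **obj_table,
                             unsigned num)
{
    /* pool_config was set to the ODP pool entry at pool creation */
    pool_t *pool_entry = (pool_t *)mp->pool_config;

    /* ... dequeue 'num' packets from the ODP pool into obj_table ... */
    return 0; /* 0 on success, <0 if not enough objects available */
}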



-Matias



Re: [lng-odp] [PATCH 7/7] linux-gen: dpdk: enable zero-copy operation

2017-07-10 Thread Elo, Matias (Nokia - FI/Espoo)

> +   pkt_pool = mbuf_pool_create(pkt_dpdk->pool_name,
> +   pool_entry->num, cache,
> +   pool_entry->max_seg_len +
> +   CONFIG_PACKET_HEADROOM,
> +   pkt_dpdk);
> instead of passing the whole pkt_dpdk struct, can you just pass odp_pool_t & 
> calculate just data_room in the dequeue_bulk ?. This way the pool 
> configuration will be more clearer & less dependent on pkt_dpdk.
> 

The value of 'data_room' is constant, so it doesn't make sense to recalculate 
it in the fast path every time pool_dequeue_bulk() is called. 
mbuf_pool_create() is only used by the dpdk pktio, so the dependency is not a 
problem.

-Matias



Re: [lng-odp] [PATCH 7/7] linux-gen: dpdk: enable zero-copy operation

2017-07-07 Thread Elo, Matias (Nokia - FI/Espoo)

> On 7 Jul 2017, at 0:21, Maxim Uvarov  wrote:
> 
> On 07/03/17 15:01, Matias Elo wrote:
>> +zero_copy=0
>> +AC_ARG_ENABLE([dpdk-zero-copy],
>> +[  --enable-dpdk-zero-copy  enable experimental zero-copy DPDK pktio 
>> mode],
>> +[if test x$enableval = xyes; then
>> +zero_copy=1
>> +fi])
>> +
> 
> please add corresponding check to his to .travis.yaml
> 
> Maxim.

This seems to require some major changes to the travis configuration file. The 
zero-copy dpdk pktio is enabled by adding the '--enable-dpdk-zero-copy' configure 
flag. So, to test both modes, ODP would have to be configured, built, and tested 
twice.

I've pretty much zero experience working with Travis, but to me it looks like 
the minimum change would be to repeat the lines 118-122:

- ./bootstrap
- ./configure --prefix=$HOME/odp-install  --enable-test-cpp 
--enable-test-vald --enable-test-helper --enable-test-perf --enable-user-guides 
--enable-test-perf-proc --enable-test-example 
--with-dpdk-path=`pwd`/dpdk/${TARGET} --with-netmap-path=`pwd`/netmap $CONF
- make -j $(nproc)
- sudo LD_LIBRARY_PATH="/usr/local/lib:$LD_LIBRARY_PATH" make check
- make install

Then do 'make clean' and repeat the process adding the 
'--enable-dpdk-zero-copy' flag.

Then there is the code coverage test which is another problem. I've no way of 
testing this script, so I don't feel that comfortable doing any major changes. 
As you are much more experienced working with Travis, would it be possible for 
you to do the necessary modifications?  

-Matias



Re: [lng-odp] [PATCH 7/7] linux-gen: dpdk: enable zero-copy operation

2017-07-07 Thread Elo, Matias (Nokia - FI/Espoo)

> On 7 Jul 2017, at 0:21, Maxim Uvarov  wrote:
> 
> On 07/03/17 15:01, Matias Elo wrote:
>> +zero_copy=0
>> +AC_ARG_ENABLE([dpdk-zero-copy],
>> +[  --enable-dpdk-zero-copy  enable experimental zero-copy DPDK pktio 
>> mode],
>> +[if test x$enableval = xyes; then
>> +zero_copy=1
>> +fi])
>> +
> 
> please add corresponding check to his to .travis.yaml
> 
> Maxim.


OK, will do.

Btw. is Dmitry's "Rework the way ODP links with other libraries" patch set 
going to be merged soon? My patch set has a small conflict with it, so it's 
probably better for me to wait until that one is merged before I send V2.

-Matias



Re: [lng-odp] [PATCH 7/7] linux-gen: dpdk: enable zero-copy operation

2017-07-06 Thread Elo, Matias (Nokia - FI/Espoo)

> On 6 Jul 2017, at 6:56, Honnappa Nagarahalli 
>  wrote:
> 
> On 3 July 2017 at 07:01, Matias Elo  wrote:
>> Implements experimental zero-copy mode for DPDK pktio. This can be enabled
>> with additional '--enable-dpdk-zero-copy' configure flag.
>> 
>> This feature has been put behind an extra configure flag as it doesn't
>> entirely adhere to the DPDK API and may behave unexpectedly with untested
>> DPDK NIC drivers. Zero-copy operation has been tested with pcap, ixgbe, and
>> i40e drivers.
>> 
> 
> Can you elaborate more on this? Which parts do not adhere to DPDK APIs?
> 

Sure, DPDK documentation states that after calling rte_mempool_create_empty() 
the
user should call rte_mempool_populate_*() to add memory chunks to the pool. 
These
functions either allocate the required memory or take a contiguous memory block
as an argument. This memory block is then divided into DPDK memory elements. This 
doesn't
work when we want to pass individual standard ODP packets to DPDK. So I simply
never call the populate functions. This is the part not adhering to the DPDK 
API.

In the zero-copy patch DPDK mempool operations are mapped to custom functions
(pool_enqueue(), pool_dequeue_bulk()...) which interface directly with the ODP 
packet
pool, so the DPDK mempool doesn't actually require the memory chunks suggested 
by
the populate functions.  
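
The mapping itself goes through the standard DPDK mempool-ops hooks, roughly
like this (a sketch; only pool_enqueue() and pool_dequeue_bulk() are named
above, the other callback names here are assumptions):

static struct rte_mempool_ops odp_pool_ops = {
    .name      = "odp_pool",
    .alloc     = pool_alloc,        /* assumed ODP-side callbacks */
    .free      = pool_free,
    .enqueue   = pool_enqueue,
    .dequeue   = pool_dequeue_bulk,
    .get_count = pool_get_count,
};

MEMPOOL_REGISTER_OPS(odp_pool_ops);

/* ... and after rte_mempool_create_empty(), instead of populating: */
rte_mempool_set_ops_byname(mp, "odp_pool", pool_entry);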


-Matias









Re: [lng-odp] [API-NEXT PATCH] api: system_info: add function for fetching all supported huge page sizes

2017-07-05 Thread Elo, Matias (Nokia - FI/Espoo)

> On 5 Jul 2017, at 10:09, Maxim Uvarov  wrote:
> 
> Matias,
> 
> I would change it from unsigned. That allows to reuse on variable for all 
> return code.
> 
> int ret;
> 
> re t=  odp_init_global()
> if (ret) ..
> ret = odp_packet()...
> if (ret)
> ret = odp_sys_huge_page_size_all()  <- here in your case we will need 
> additional cast to unsigned 
> if (ret) ...
> return ret;
> 

OK, I'll change the return value and 'num' param to int in V2.

-Matias
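
For reference, the intended calling pattern (a sketch using the int signature
agreed above; needs <stdio.h> and <inttypes.h>):

#define MAX_NUM 8 /* application's choice */

uint64_t sizes[MAX_NUM];
int num = odp_sys_huge_page_size_all(sizes, MAX_NUM);

if (num == 0)
    printf("no huge pages\n");

/* only min(num, MAX_NUM) entries have been written; num > MAX_NUM means
 * more sizes exist than fit in the array */
for (int i = 0; i < num && i < MAX_NUM; i++)
    printf("huge page size: %" PRIu64 " bytes\n", sizes[i]);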




Re: [lng-odp] [API-NEXT PATCH] api: system_info: add function for fetching all supported huge page sizes

2017-07-03 Thread Elo, Matias (Nokia - FI/Espoo)

> On 3 Jul 2017, at 15:54, Bill Fischofer <bill.fischo...@linaro.org> wrote:
> 
> On Mon, Jul 3, 2017 at 7:40 AM, Elo, Matias (Nokia - FI/Espoo)
> <matias@nokia.com> wrote:
>> Ping.
> 
> Is the rest of the patch (implementation, validation test updates, doc
> updates) in preparation? The API changes have already been reviewed by
> both Petri and me.
> 


Maxim had previously some issues with the API change, but if everything is now 
OK I can
do the actual implementation.

-Matias



Re: [lng-odp] [API-NEXT PATCH] api: system_info: add function for fetching all supported huge page sizes

2017-07-03 Thread Elo, Matias (Nokia - FI/Espoo)
Ping.


> On 5 Jun 2017, at 10:11, Elo, Matias (Nokia - FI/Espoo) 
> <matias@nokia.com> wrote:
> 
> 
>> On 2 Jun 2017, at 16:48, Maxim Uvarov <maxim.uva...@linaro.org> wrote:
>> 
>> On 06/02/17 12:44, Elo, Matias (Nokia - FI/Espoo) wrote:
>>>>> 
>>>>> /**
>>>>> + * System huge page sizes in bytes
>>>>> + *
>>>>> + * Returns the number of huge page sizes supported by the system. 
>>>>> Outputs up to
>>>>> + * 'num' sizes when the 'size' array pointer is not NULL. If return 
>>>>> value is
>>>>> + * larger than 'num', there are more supported sizes than the function 
>>>>> was
>>>>> + * allowed to output. If return value (N) is less than 'num', only sizes
>>>>> + * [0 ... N-1] have been written. Returned values are ordered from 
>>>>> smallest to
>>>>> + * largest.
>>>>> + *
>>>>> + * @param[out] size Points to an array of huge page sizes for output
>>>>> + * @param  num  Maximum number of huge page sizes to output
>>>>> + *
>>>>> + * @return Number of supported huge page sizes
>>>>> + * @retval 0 on no huge pages
>>>>> + */
>>>>> +unsigned odp_sys_huge_page_size_all(uint64_t size[], unsigned num);
>>>>> +
>>>> 
>>>> I think it has to be int. -1 on error, 0 - no hp, > 0 pages.
>>>> For linux it might be similar to getpagesizes()
>>>> https://linux.die.net/man/3/getpagesizes
>>>> """
>>>> if pagesizes is NULL and n_elem is 0, then the number of pages the
>>>> system supports is returned. Otherwise, pagesizes is filled with at most
>>>> n_elem page sizes.
>>>> """
>>>> 
>>> 
>>> getpagesizes() returns -1 in case of invalid function arguments. 
>>> odp_sys_huge_page_size_all() is documented so that the application cannot 
>>> pass invalid arguments. So an internal error would be the only possibility. 
>>> I don't see this to be likely as the function is only reading system info.
>>> 
>>> Adding -1 return value would also increase application complexity as the 
>>> error return value would require special handling from application.
>>> 
>> 
>> We have to be consistent with all odp api functions. We do not have
>> unsigned function, they are int.
> 
> We do have odp_pktio_max_index(), which returns unsigned, and anyway this 
> shouldn't be a reason not to use otherwise valid return value. Regarding 
> consistency, not returning -1 follows the same return value style as the rest 
> of the functions in system_info.h (odp_sys_huge_page_size(), 
> odp_sys_page_size(), odp_sys_cache_line_size()).
> 
>> This function is not fast path so
>> additional check is ok. And -1 can be returned on permission error to
>> assess /proc or /sys files for example or any other internal failure.
> 
> From the application's point of view the outcome is still the same (no huge pages) 
> and returning -1 would make this function inconsistent with the other 
> functions in this module as noted above.
> 
> -Matias
> 
> 



Re: [lng-odp] [PATCH v4 3/9] linux-gen: stop poisoning CPPFLAGS/LDFLAGS with DPDK flags

2017-07-03 Thread Elo, Matias (Nokia - FI/Espoo)

> On 3 Jul 2017, at 15:24, Bill Fischofer  wrote:
> 
> On Mon, Jul 3, 2017 at 7:04 AM, Dmitry Eremin-Solenikov
>  wrote:
>> On 03.07.2017 13:34, Savolainen, Petri (Nokia - FI/Espoo) wrote:
 diff --git a/test/Makefile.inc b/test/Makefile.inc
 index 1ef2a92c..bf31b374 100644
 --- a/test/Makefile.inc
 +++ b/test/Makefile.inc
 @@ -4,7 +4,7 @@ LIB   = $(top_builddir)/lib
 #in the following line, the libs using the symbols should come before
 #the libs containing them! The includer is given a chance to add things
 #before libodp by setting PRE_LDADD before the inclusion.
 -LDADD = $(PRE_LDADD) $(LIB)/libodphelper.la $(LIB)/libodp-linux.la
 +LDADD = $(PRE_LDADD) $(LIB)/libodphelper.la $(LIB)/libodp-linux.la
 $(DPDK_PMDS)
>>> 
>>> Application using ODP should only need to add dependency to ODP and helper 
>>> libs. It's not scalable if (all) applications need to know which (all) libs 
>>> an ODP implementation may use internally.
>> 
>> Applications using shared library don't need to know, what are ODP
>> dependencies. Deps will be pulled in using .so DT_NEEDED. Static linking
>> requires knowledge of all dependencies. Usually this will be handled by
>> pkg-config (See Libs.private) or libtool (which also usually handles
>> such configuration). Unfortunately DPDK PMDs do not fit into libtool
>> scheme because of the way they are linked. Libtool doesn't understand
>> whole -Wl,--whole-archive,... scheme, so it won't include it into
>> dependencies list. Another possibility would be to create source file,
>> which pulls in all PMDs detected by configure and link with just -ldpdk.
> 
> Didn't Matias post some patches a while back to use --whole-archive
> for this purpose? See http://patches.opendataplane.org/patch/8237/

This patch only removed the need to reference the drivers in source code. 
Instead, the build system automatically links all PMDs with the '--whole-archive' 
flag.

-Matias



Re: [lng-odp] Suspected SPAM - Re: [API-NEXT PATCH] linux-gen: queue: clean up after modular interface

2017-06-16 Thread Elo, Matias (Nokia - FI/Espoo)

> On 16 Jun 2017, at 11:03, Peltonen, Janne (Nokia - FI/Espoo) 
>  wrote:
> 
> 
> Honnappa Nagarahalli wrote:
>> On 12 June 2017 at 06:11, Petri Savolainen  
>> wrote:
>>> Clean up function and parameter naming after modular interface
>>> patch. Queue_t type is referred as "queue internal": queue_int or
>>> q_int. Term "handle" is reserved for API level handles (e.g.
>>> odp_queue_t, odp_pktio_t, etc) through out linux-gen implementation.
>>> 
>> 
>> "queue_t" type should be referred to as "handle_int". "handle_int" is
>> clearly different from "handle".
>> If we look at the definition of "queue_t":
>> 
>> typedef struct { char dummy; } _queue_t;
>> typedef _queue_t *queue_t;
>> 
>> it is nothing but a definition of a handle. Why should it be called
>> with some other name and create confusion? Just like how odp_queue_t
>> is an abstract type, queue_t is also an abstract type. Just like how
>> odp_queue_t is a handle, queue_t is also a handle, albeit a handle
>> towards internal components.
> 
> I do not see how calling variables of type queue_t handles instead
> of queue_int or q_int makes the call any clearer or less confusing.
> If the term handle is reserved for ODP API level handles, then I
> suppose this code should adhere to that. And 'handle_int' is not
> very descriptive as a variable name anyway.
> 
>>> +static inline queue_entry_t *handle_to_qentry(odp_queue_t handle)
>> 
>> Why is there a need to add this function? We already have
>> 'queue_from_ext' and 'qentry_from_int' which are a must to implement.
>> The functionality provided by 'handle_to_qentry' can be achieved from
>> these two functions. 'handle_to_qentry' is adding another layer of
>> wrapper. This adds to code complexity.
> 
> There is a need to convert from handle to queue entry in quite many
> places in the code. Having a function for that makes perfect sense
> since it reduces code duplication and simplifies all the call sites
> that no longer need to know how the conversion is done.
> 
> This is also how the code was before you changed it (unnecessarily,
> one might think), so this change merely brings back the old code
> structure (with a naming change).
> 
>>> static odp_queue_type_t queue_type(odp_queue_t handle)
>>> {
>>> -   return qentry_from_int(queue_from_ext(handle))->s.type;
>>> +   return handle_to_qentry(handle)->s.type;
>> 
>> No need to introduce another function.
>> qentry_from_int(queue_from_ext(handle)) clearly shows the current
>> status that there exists an internal handle. handle_to_qentry(handle)
>> hides that fact and makes code less readable. This comment applies to
>> all the instances of similar change.
> 
> Hiding is good. Only handle_to_qentry() needs to know that the
> conversion is (currently) done through queue_t. I would argue that
> the code is more readable with handle_to_qentry() than with having
> to read the same conversion code all the time. An if the code ever
> changes so that the conversion is better done in another way, having
> handle_to_qentry() avoids the need to change all its call sites.
> 
>   Janne
> 
> 

I agree on all points with Janne.

-Matias



Re: [lng-odp] [API-NEXT PATCH] linux-gen: queue: clean up after modular interface

2017-06-16 Thread Elo, Matias (Nokia - FI/Espoo)

>>   void *buf_hdr[], int num, int 
>> *ret);
>> typedef int (*schedule_init_global_fn_t)(void);
>> typedef int (*schedule_term_global_fn_t)(void);
>> diff --git a/platform/linux-generic/odp_queue.c 
>> b/platform/linux-generic/odp_queue.c
>> index 3e18f578..19945584 100644
>> --- a/platform/linux-generic/odp_queue.c
>> +++ b/platform/linux-generic/odp_queue.c
>> @@ -35,20 +35,22 @@
>> #include 
>> #include 
>> 
>> +static int queue_init(queue_entry_t *queue, const char *name,
>> + const odp_queue_param_t *param);
>> +
> 
> This is unnecessary for this patch. Don't waste reviewer's time with
> unwanted changes. Unwanted changes have been discussed in the context
> of other patches and clearly agreed that they should not be done. In
> fact, such changes have been reversed.

This prototype has been added here to remove the need for the following function
prototypes: _queue_enq, _queue_deq, _queue_enq_multi, _queue_deq_multi. There
are already matching typedefs in odp_queue_if.h and "re-prototyping" them here 
is
unnecessary and makes maintaining the code more complex.


>> +{
>> +   uint32_t queue_id;
>> 
>> -static int _queue_enq_multi(queue_t handle, odp_buffer_hdr_t *buf_hdr[],
>> -   int num);
>> -static int _queue_deq_multi(queue_t handle, odp_buffer_hdr_t *buf_hdr[],
>> -   int num);
>> +   queue_id = queue_to_id(handle);
>> +   return get_qentry(queue_id);
>> +}
>> 
>> static inline odp_queue_t queue_from_id(uint32_t queue_id)
>> {
>> @@ -70,50 +72,6 @@ queue_entry_t *get_qentry(uint32_t queue_id)
>>return _tbl->queue[queue_id];
>> }
>> 
>> -static int queue_init(queue_entry_t *queue, const char *name,
>> - const odp_queue_param_t *param)
>> -{
>> -   if (name == NULL) {
>> -   queue->s.name[0] = 0;
>> -   } else {
>> -   strncpy(queue->s.name, name, ODP_QUEUE_NAME_LEN - 1);
>> -   queue->s.name[ODP_QUEUE_NAME_LEN - 1] = 0;
>> -   }
>> -   memcpy(>s.param, param, sizeof(odp_queue_param_t));
>> -   if (queue->s.param.sched.lock_count > sched_fn->max_ordered_locks())
>> -   return -1;
>> -
>> -   if (param->type == ODP_QUEUE_TYPE_SCHED) {
>> -   queue->s.param.deq_mode = ODP_QUEUE_OP_DISABLED;
>> -
>> -   if (param->sched.sync == ODP_SCHED_SYNC_ORDERED) {
>> -   unsigned i;
>> -
>> -   odp_atomic_init_u64(>s.ordered.ctx, 0);
>> -   odp_atomic_init_u64(>s.ordered.next_ctx, 0);
>> -
>> -   for (i = 0; i < queue->s.param.sched.lock_count; i++)
>> -   
>> odp_atomic_init_u64(>s.ordered.lock[i],
>> -   0);
>> -   }
>> -   }
>> -   queue->s.type = queue->s.param.type;
>> -
>> -   queue->s.enqueue = _queue_enq;
>> -   queue->s.dequeue = _queue_deq;
>> -   queue->s.enqueue_multi = _queue_enq_multi;
>> -   queue->s.dequeue_multi = _queue_deq_multi;
>> -
>> -   queue->s.pktin = PKTIN_INVALID;
>> -   queue->s.pktout = PKTOUT_INVALID;
>> -
>> -   queue->s.head = NULL;
>> -   queue->s.tail = NULL;
>> -
>> -   return 0;
>> -}
>> -
>> -
> 
> Unnecessary change

Comment above.


>> 
>> -static int _queue_enq_multi(queue_t handle, odp_buffer_hdr_t *buf_hdr[],
>> -   int num)
>> +static int queue_int_enq_multi(queue_t q_int, odp_buffer_hdr_t *buf_hdr[],
>> +  int num)
> 
> No need to introduce another naming convention. The rest of the code
> in ODP follows the convention of starting the function names with '_'.
> For ex: take a look at odp_packet_io.c file.

Naming conventions may be file-specific, and the '_' convention is clearly
in the minority.

>> 
>> 
>> -static odp_buffer_hdr_t *_queue_deq(queue_t handle)
>> +static odp_buffer_hdr_t *queue_int_deq(queue_t q_int)
> 
> No need to introduce a new naming convention.

Comment above.

>> 
>> +static int queue_init(queue_entry_t *queue, const char *name,
>> + const odp_queue_param_t *param)
>> +{
>> +   if (name == NULL) {
>> +   queue->s.name[0] = 0;
>> +   } else {
>> +   strncpy(queue->s.name, name, ODP_QUEUE_NAME_LEN - 1);
>> +   queue->s.name[ODP_QUEUE_NAME_LEN - 1] = 0;
>> +   }
>> +   memcpy(>s.param, param, sizeof(odp_queue_param_t));
>> +   if (queue->s.param.sched.lock_count > sched_fn->max_ordered_locks())
>> +   return -1;
>> +
>> +   if (param->type == ODP_QUEUE_TYPE_SCHED) {
>> +   queue->s.param.deq_mode = ODP_QUEUE_OP_DISABLED;
>> +
>> +   if (param->sched.sync == ODP_SCHED_SYNC_ORDERED) {
>> +   unsigned i;
>> +
>> +   odp_atomic_init_u64(&queue->s.ordered.ctx, 0);
>> +   

Re: [lng-odp] [PATCH 1/2] linux-gen: socket: remove limits for maximum RX/TX burst size

2017-06-14 Thread Elo, Matias (Nokia - FI/Espoo)

> On 13 Jun 2017, at 16:57, Dmitry Eremin-Solenikov 
> <dmitry.ereminsoleni...@linaro.org> wrote:
> 
> On 13.06.2017 16:16, Elo, Matias (Nokia - FI/Espoo) wrote:
>> 
>>> On 13 Jun 2017, at 16:00, Bill Fischofer <bill.fischo...@linaro.org> wrote:
>>> 
>>> Is the bug reported here detected by the validation or one of the
>>> performance tests? Does this now show the issue fixed?
>> 
>> The pktio validation test uses burst size of 4 so it doesn't detect this 
>> issue. This fix can be tested for example with odp_l2fwd by increasing the 
>> MAX_PKT_BURST define value.
>> 
> 
> Can you extend the testsuite adding a test for this bug?


Sure, I can do that. If there are no issues in these two patches could they be 
merged as is, so they can be a part of the upcoming release? I would then send 
the validation test improvement as a separate patch.

-Matias



Re: [lng-odp] [PATCH 1/2] linux-gen: socket: remove limits for maximum RX/TX burst size

2017-06-13 Thread Elo, Matias (Nokia - FI/Espoo)

> On 13 Jun 2017, at 16:00, Bill Fischofer  wrote:
> 
> Is the bug reported here detected by the validation or one of the
> performance tests? Does this now show the issue fixed?

The pktio validation test uses burst size of 4 so it doesn't detect this issue. 
This fix can be tested for example with odp_l2fwd by increasing the 
MAX_PKT_BURST define value.
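
For illustration, the tweak described above would look like the following
(value assumed for this sketch; odp_l2fwd used a small default burst in this
era):

/* In test/common_plat/performance/odp_l2fwd.c (illustrative) */
#define MAX_PKT_BURST 256	/* raise past the old socket pktio limit of 32 */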


> 
> Changes look reasonable and don't cause any obvious issues when
> applied, so for this series:
> 
> Reviewed-and-tested-by: Bill Fischofer 

Thanks!



Re: [lng-odp] [PATCH v1 0/9] Rework the way ODP links with other libraries

2017-06-06 Thread Elo, Matias (Nokia - FI/Espoo)

The m4 branch works without problems, thanks!

-Matias

> On 5 Jun 2017, at 17:03, Dmitry Eremin-Solenikov 
> <dmitry.ereminsoleni...@linaro.org> wrote:
> 
> On 05.06.2017 17:00, Elo, Matias (Nokia - FI/Espoo) wrote:
>> I already left the office but I'll test it first thing in the morning. 
> 
> Thank you!
> 
>> 
>> -Matias
>> 
>>> On 5 Jun 2017, at 16.51, Dmitry Eremin-Solenikov 
>>> <dmitry.ereminsoleni...@linaro.org> wrote:
>>> 
>>>> On 05.06.2017 16:34, Elo, Matias (Nokia - FI/Espoo) wrote:
>>>> 
>>>>> On 5 Jun 2017, at 16:24, Dmitry Eremin-Solenikov 
>>>>> <dmitry.ereminsoleni...@linaro.org> wrote:
>>>>> 
>>>>> On 05.06.2017 16:22, Elo, Matias (Nokia - FI/Espoo) wrote:
>>>>>> 
>>>>>>> On 5 Jun 2017, at 16:11, Dmitry Eremin-Solenikov 
>>>>>>> <dmitry.ereminsoleni...@linaro.org> wrote:
>>>>>>> 
>>>>>>> $ nm test/common_plat/performance/odp_l2fwd | grep vdrv
>>>>>>> 001e7fd0 T rte_eal_vdrv_register
>>>>>>> 001e8000 T rte_eal_vdrv_unregister
>>>>>>> 0001bfe0 t vdrvinitfn_cryptodev_null_pmd_drv
>>>>>>> 0001bcf0 t vdrvinitfn_pmd_af_packet_drv
>>>>>>> 0001bd40 t vdrvinitfn_pmd_bond_drv
>>>>>>> 0001bfb0 t vdrvinitfn_pmd_null_drv
>>>>>>> 0001c010 t vdrvinitfn_pmd_pcap_drv
>>>>>>> 0001c080 t vdrvinitfn_pmd_ring_drv
>>>>>>> 0001c0d0 t vdrvinitfn_pmd_tap_drv
>>>>>>> 0001c100 t vdrvinitfn_pmd_vhost_drv
>>>>>>> 0001c150 t vdrvinitfn_virtio_user_driver
>>>>>> 
>>>>>> For me:
>>>>>> 
>>>>>> $ ./configure --enable-test-perf --enable-test-vald --enable-test-cpp 
>>>>>> --enable-test-example --enable-test-helper --enable-helper-linux 
>>>>>> --with-cunit-path=/home/NNN/CUnitHome 
>>>>>> --with-netmap-path=/home/NNN/dev/netmap.git 
>>>>>> --with-dpdk-path=/home/NNN/dev/dpdk.git/x86_64-native-linuxapp-gcc 
>>>>>> --prefix=/home/NNN/odp_install
>>>>> 
>>>>> Just out of curiosity, could you please do make distclean, clean rebuild
>>>>> with V=1 and then send me
>>>>> - the rebuild log.
>>>>> - config.log
>>>>> - lib/libodp-linux.la
>>>> 
>>>> Here you go.
>>> 
>>> Pushed updated m4 branch. Could you please check it?
>>> 
>>> 
>>> -- 
>>> With best wishes
>>> Dmitry
> 
> 
> -- 
> With best wishes
> Dmitry



Re: [lng-odp] [PATCH v1 0/9] Rework the way ODP links with other libraries

2017-06-05 Thread Elo, Matias (Nokia - FI/Espoo)

> On 5 Jun 2017, at 16:04, Dmitry Eremin-Solenikov 
> <dmitry.ereminsoleni...@linaro.org> wrote:
> 
> On 05.06.2017 15:27, Elo, Matias (Nokia - FI/Espoo) wrote:
>> Hi,
>> 
>> It seems that after this patch set the dpdk pmd drivers are not properly 
>> linked anymore. The build succeeds without errors but at runtime no dpdk 
>> devices are found.
> 
> Which example/test fails? I tried my best to test linking with PMD drivers.
> 
> -- 

I tested odp_l2fwd with Fortville NICs (i40e).

-Matias




Re: [lng-odp] [API-NEXT PATCH] api: system_info: add function for fetching all supported huge page sizes

2017-06-05 Thread Elo, Matias (Nokia - FI/Espoo)

> On 2 Jun 2017, at 16:48, Maxim Uvarov <maxim.uva...@linaro.org> wrote:
> 
> On 06/02/17 12:44, Elo, Matias (Nokia - FI/Espoo) wrote:
>>>> 
>>>> /**
>>>> + * System huge page sizes in bytes
>>>> + *
>>>> + * Returns the number of huge page sizes supported by the system. Outputs 
>>>> up to
>>>> + * 'num' sizes when the 'size' array pointer is not NULL. If return value 
>>>> is
>>>> + * larger than 'num', there are more supported sizes than the function was
>>>> + * allowed to output. If return value (N) is less than 'num', only sizes
>>>> + * [0 ... N-1] have been written. Returned values are ordered from 
>>>> smallest to
>>>> + * largest.
>>>> + *
>>>> + * @param[out] size Points to an array of huge page sizes for output
>>>> + * @param  num  Maximum number of huge page sizes to output
>>>> + *
>>>> + * @return Number of supported huge page sizes
>>>> + * @retval 0 on no huge pages
>>>> + */
>>>> +unsigned odp_sys_huge_page_size_all(uint64_t size[], unsigned num);
>>>> +
>>> 
>>> I think it has to be int. -1 on error, 0 - no hp, > 0 pages.
>>> For linux it might be similar to getpagesizes()
>>> https://linux.die.net/man/3/getpagesizes
>>> """
>>> if pagesizes is NULL and n_elem is 0, then the number of pages the
>>> system supports is returned. Otherwise, pagesizes is filled with at most
>>> n_elem page sizes.
>>> """
>>> 
>> 
getpagesizes() returns -1 in the case of invalid function arguments.
>> odp_sys_huge_page_size_all() is documented so that the application cannot 
>> pass invalid arguments. So an internal error would be the only possibility. 
>> I don't see this to be likely as the function is only reading system info.
>> 
>> Adding -1 return value would also increase application complexity as the 
>> error return value would require special handling from application.
>> 
> 
> We have to be consistent with all odp api functions. We do not have
> unsigned function, they are int.

We do have odp_pktio_max_index(), which returns unsigned, and anyway this 
shouldn't be a reason not to use otherwise valid return value. Regarding 
consistency, not returning -1 follows the same return value style as the rest 
of the functions in system_info.h (odp_sys_huge_page_size(), 
odp_sys_page_size(), odp_sys_cache_line_size()).

> This function is not fast path so
> additional check is ok. And -1 can be returned on permission error to
> assess /proc or /sys files for example or any other internal failure.

From the application point of view the outcome is still the same (no huge pages)
and returning -1 would make this function inconsistent with the other
functions in this module, as noted above.

-Matias




Re: [lng-odp] [API-NEXT PATCH] api: system_info: add function for fetching all supported huge page sizes

2017-06-02 Thread Elo, Matias (Nokia - FI/Espoo)
>> 
>> /**
>> + * System huge page sizes in bytes
>> + *
>> + * Returns the number of huge page sizes supported by the system. Outputs 
>> up to
>> + * 'num' sizes when the 'size' array pointer is not NULL. If return value is
>> + * larger than 'num', there are more supported sizes than the function was
>> + * allowed to output. If return value (N) is less than 'num', only sizes
>> + * [0 ... N-1] have been written. Returned values are ordered from smallest 
>> to
>> + * largest.
>> + *
>> + * @param[out] size Points to an array of huge page sizes for output
>> + * @param  num  Maximum number of huge page sizes to output
>> + *
>> + * @return Number of supported huge page sizes
>> + * @retval 0 on no huge pages
>> + */
>> +unsigned odp_sys_huge_page_size_all(uint64_t size[], unsigned num);
>> +
> 
> I think it has to be int. -1 on error, 0 - no hp, > 0 pages.
> For linux it might be similar to getpagesizes()
> https://linux.die.net/man/3/getpagesizes
> """
> if pagesizes is NULL and n_elem is 0, then the number of pages the
> system supports is returned. Otherwise, pagesizes is filled with at most
> n_elem page sizes.
> """
> 

getpagesizes() returns -1 in the case of invalid function arguments.
odp_sys_huge_page_size_all() is documented so that the application cannot pass 
invalid arguments. So an internal error would be the only possibility. I don't 
see this to be likely as the function is only reading system info.

Adding -1 return value would also increase application complexity as the error 
return value would require special handling from application.

> 
> But why do we need this inside ODP? It time be reasonable to say that
> it's number of pages/sizes visible to current ODP instance (i.e. not the
> system global.)
> 

A system can simultaneously support multiple huge page sizes and an application
may, for example, make some alignment decisions based on this information. I
found this issue when implementing shm for odp-dpdk and trying to pass the
validation tests. This API change enables adding a proper test for
odp_shm_info_t.page_size.

-Matias




Re: [lng-odp] [API-NEXT PATCH] api: system_info: add function for fetching all supported huge page sizes

2017-06-01 Thread Elo, Matias (Nokia - FI/Espoo)
Thanks, will do.

-Matias

> On 1 Jun 2017, at 15:44, Bill Fischofer  wrote:
> 
> This still needs implementation, validation tests, and doc updates to
> be complete.
> 
> On Thu, Jun 1, 2017 at 5:52 AM, Matias Elo  wrote:
>> A system may simultaneously support multiple huge page sizes. Add a new API
>> function odp_sys_huge_page_size_all() which returns all supported page
>> sizes. odp_sys_huge_page_size() stays unmodified to maintain backward
>> compatibility.
>> 
>> Signed-off-by: Matias Elo 
> 
> Reviewed-by: Bill Fischofer 
> 
>> ---
>> include/odp/api/spec/system_info.h | 19 +++
>> 1 file changed, 19 insertions(+)
>> 
>> diff --git a/include/odp/api/spec/system_info.h 
>> b/include/odp/api/spec/system_info.h
>> index ca4dcdc..c41d3c5 100644
>> --- a/include/odp/api/spec/system_info.h
>> +++ b/include/odp/api/spec/system_info.h
>> @@ -27,10 +27,29 @@ extern "C" {
>>  * Default system huge page size in bytes
>>  *
>>  * @return Default huge page size in bytes
>> + * @retval 0 on no huge pages
>>  */
>> uint64_t odp_sys_huge_page_size(void);
>> 
>> /**
>> + * System huge page sizes in bytes
>> + *
>> + * Returns the number of huge page sizes supported by the system. Outputs 
>> up to
>> + * 'num' sizes when the 'size' array pointer is not NULL. If return value is
>> + * larger than 'num', there are more supported sizes than the function was
>> + * allowed to output. If return value (N) is less than 'num', only sizes
>> + * [0 ... N-1] have been written. Returned values are ordered from smallest 
>> to
>> + * largest.
>> + *
>> + * @param[out] size Points to an array of huge page sizes for output
>> + * @param  num  Maximum number of huge page sizes to output
>> + *
>> + * @return Number of supported huge page sizes
>> + * @retval 0 on no huge pages
>> + */
>> +unsigned odp_sys_huge_page_size_all(uint64_t size[], unsigned num);
>> +
>> +/**
>>  * Page size in bytes
>>  *
>>  * @return Page size in bytes
>> --
>> 2.7.4
>> 
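
A minimal usage sketch of the two-pass pattern described in the API text above
(application-side code, not part of the patch; the allocation and printing are
illustrative):

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <odp_api.h>

static void print_huge_page_sizes(void)
{
	uint64_t *sizes;
	unsigned num, i;

	/* First call: NULL array queries only the number of sizes */
	num = odp_sys_huge_page_size_all(NULL, 0);
	if (num == 0) {
		printf("no huge pages\n");
		return;
	}

	sizes = malloc(num * sizeof(uint64_t));
	if (sizes == NULL)
		return;

	/* Second call: fill at most 'num' sizes, ordered smallest to largest */
	num = odp_sys_huge_page_size_all(sizes, num);

	for (i = 0; i < num; i++)
		printf("huge page size: %" PRIu64 " bytes\n", sizes[i]);

	free(sizes);
}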



Re: [lng-odp] [API-NEXT PATCH] api: system_info: add function for fetching all supported huge page sizes

2017-06-01 Thread Elo, Matias (Nokia - FI/Espoo)

>>> + *
>>> + * @param[out] size Points to an array of huge page sizes for output
>>> + * @param  num  Maximum number of huge page sizes to output
>>> + *
>>> + * @return Number of supported huge page sizes
>>> + * @retval 0 on no huge pages
>>> + */
>>> +unsigned odp_sys_huge_page_size_all(uint64_t size[], unsigned num);
> 
> Would it make sense to have an odp_sys_huge_page_size_num() API that
> returned the number of huge page sizes available? As currently defined
> that would be a shorthand for odp_sys_huge_page_size_all() with num ==
> 0.

As you noted the same thing can already be done with this function. Since 
reading
the number of supported huge page sizes is not a fast path operation I don't see
much motivation for a separate function. Additionally, we already have similar 
API
functions e.g. odp_pktin_event_queue(), odp_pktout_queue().

-Matias



Re: [lng-odp] [PATCH] validation: shmem: fix odp_shm_info_t page_size test

2017-06-01 Thread Elo, Matias (Nokia - FI/Espoo)


> On 5 May 2017, at 17:19, Bill Fischofer  wrote:
> 
> 
> 
> On Fri, May 5, 2017 at 8:58 AM, Matias Elo  wrote:
> The old test wasn't valid, since a system may support multiple huge page
> sizes. As odp_sys_huge_page_size() returns currently only a single value
> more precise test than 'page_size != 0' cannot be performed.
> 
> Good point, but might a better approach be to expand what's returned to 
> include a list of supported sizes, or at least the range of those sizes?
> 

Hi Bill,

Could this patch be merged as a stopgap before the required API change is done? 
The problem is that odp-dpdk's (+ new native shm implementation) validation 
test fails on systems with default 2MB hugepages without this fix.

I can submit an API change proposal. 

 -Matias




Re: [lng-odp] [PATCH 0/6] dpdk pktio: enable hardware checksum support

2017-06-01 Thread Elo, Matias (Nokia - FI/Espoo)

> On 31 May 2017, at 17:40, Bogdan Pricope  wrote:
> 
> Add HW checksum calculation/validation support for dpdk pktio.
> No UDP/TCP HW checksum calculation/validation support for:
> - IPv4 fragments
> - IPv6 packets with extension headers (including fragments)
> 
> Bogdan Pricope (6):
>  Initialize pktio configuration structure
>  dpdk: retrieve offload capabilities
>  dpdk: enable per pktio RX IP/UDP/TCP checksum offload
>  dpdk: RX - process checksum validation offload flags
>  dpdk: TX - set checksum calculation offload flags
>  examples: generator: update odp_generator to use HW checksum
>capabilities
> 
> example/generator/odp_generator.c  | 107 ++---
> platform/linux-generic/odp_packet_io.c |   2 +
> platform/linux-generic/pktio/dpdk.c| 203 -
> 3 files changed, 293 insertions(+), 19 deletions(-)
> 
> -- 
> 1.9.1
> 


As I commented to the RFC, this patch set is missing RX packet parsing. Without
it being part of this set the next step would require removing the code added in
'dpdk: RX - process checksum validation offload flags'. The packet parsing 
function
could also be used by odp-dpdk.

-Matias



Re: [lng-odp] [API-NEXT PATCH v6 6/6] Add scalable scheduler

2017-06-01 Thread Elo, Matias (Nokia - FI/Espoo)

> On 31 May 2017, at 23:53, Bill Fischofer <bill.fischo...@linaro.org> wrote:
> 
> On Wed, May 31, 2017 at 8:12 AM, Elo, Matias (Nokia - FI/Espoo)
> <matias@nokia.com> wrote:
>> 
>>>>> What’s the purpose of calling ord_enq_multi() here? To save (stash)
>>>>> packets if the thread is out-of-order?
>>>>> And when the thread is in-order, it is re-enqueueing the packets which
>>>>> again will invoke pktout_enqueue/pktout_enq_multi but this time
>>>>> ord_enq_multi() will not save the packets, instead they will actually be
>>>>> transmitted by odp_pktout_send()?
>>>>> 
>>>> 
>>>> Since transmitting packets may fail, out-of-order packets cannot be
>>>> stashed here.
>>> You mean that the TX queue of the pktio might be full so not all packets
>>> will actually be enqueued for transmission.
>> 
>> Yep.
>> 
>>> This is an interesting case but is it a must to know how many packets are
>>> actually accepted? Packets can always be dropped without notice, the
>>> question is from which point this is acceptable. If packets enqueued onto
>>> a pktout (egress) queue are accepted, this means that they must also be
>>> put onto the driver TX queue (as done by odp_pktout_send)?
>>> 
>> 
>> Currently, the packet_io/queue APIs don't say anything about packets being
>> possibly dropped after successfully calling odp_queue_enq() to a pktout
>> event queue. So to be consistent with standard odp_queue_enq() operations I
>> think it is better to return the number of events actually accepted to the 
>> TX queue.
>> 
>> To have more leeway one option would be to modify the API documentation to
>> state that packets may still be dropped after a successful odp_queue_enq() 
>> call
>> before reaching the NIC. If the application would like to be sure that the
>> packets are actually sent, it should use odp_pktout_send() instead.
> 
> Ordered queues simply say that packets will be delivered to the next
> queue in the pipeline in the order they originated from their source
> queue. What happens after that depends on the attributes of the target
> queue. If the target queue is an exit point from the application, then
> this is outside of ODP's scope.

My point was that with stashing the application has no way of knowing if an
ordered pktout enqueue call actually succeeded. In the case of parallel and atomic
queues it does. So my question is, is this acceptable?




Re: [lng-odp] [API-NEXT PATCH v6 6/6] Add scalable scheduler

2017-05-31 Thread Elo, Matias (Nokia - FI/Espoo)

>>> What’s the purpose of calling ord_enq_multi() here? To save (stash)
>>> packets if the thread is out-of-order?
>>> And when the thread is in-order, it is re-enqueueing the packets which
>>> again will invoke pktout_enqueue/pktout_enq_multi but this time
>>> ord_enq_multi() will not save the packets, instead they will actually be
>>> transmitted by odp_pktout_send()?
>>> 
>> 
>> Since transmitting packets may fail, out-of-order packets cannot be
>> stashed here.
> You mean that the TX queue of the pktio might be full so not all packets
> will actually be enqueued for transmission.

Yep. 

> This is an interesting case but is it a must to know how many packets are
> actually accepted? Packets can always be dropped without notice, the
> question is from which point this is acceptable. If packets enqueued onto
> a pktout (egress) queue are accepted, this means that they must also be
> put onto the driver TX queue (as done by odp_pktout_send)?
> 

Currently, the packet_io/queue APIs don't say anything about packets being
possibly dropped after successfully calling odp_queue_enq() to a pktout
event queue. So to be consistent with standard odp_queue_enq() operations I
think it is better to return the number of events actually accepted to the TX 
queue.

To have more leeway one option would be to modify the API documentation to
state that packets may still be dropped after a successful odp_queue_enq() call
before reaching the NIC. If the application would like to be sure that the
packets are actually sent, it should use odp_pktout_send() instead.
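
An application-side illustration of the difference discussed above (a sketch;
variable names are illustrative). With odp_pktout_send() the caller sees
exactly how many packets the TX queue accepted and still owns the rest:

	int i, sent = odp_pktout_send(pktout, pkt_tbl, num);

	if (sent < 0)
		sent = 0;
	/* Unsent packets remain owned by the application */
	for (i = sent; i < num; i++)
		odp_packet_free(pkt_tbl[i]);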

-Matias



Re: [lng-odp] [API-NEXT PATCH v6 6/6] Add scalable scheduler

2017-05-31 Thread Elo, Matias (Nokia - FI/Espoo)

> On 31 May 2017, at 12:04, Ola Liljedahl  wrote:
> 
> 
> 
> On 31/05/2017, 10:38, "Peltonen, Janne (Nokia - FI/Espoo)"
>  wrote:
> 
>> 
>> 
>> Ola Liljedahl wrote:
>>> On 23/05/2017, 16:49, "Peltonen, Janne (Nokia - FI/Espoo)"
>>>  wrote:
>>> 
>>> 
 
> +static int ord_enq_multi(uint32_t queue_index, void *p_buf_hdr[],
> +  int num, int *ret)
> +{
> + (void)queue_index;
> + (void)p_buf_hdr;
> + (void)num;
> + (void)ret;
> + return 0;
> +}
 
 How is packet order maintained when enqueuing packets read from an
>>> ordered
 queue to a pktout queue? Matias' recent fix uses the ord_enq_multi
 scheduler
 function for that, but this version does not do any ordering. Or is the
 ordering guaranteed by some other means?
>>> The scalable scheduler standard queue enqueue function also handles
>>> ordered queues. odp_queue_scalable.c can refer to the same
>>> thread-specific
>>> data as odp_schedule_scalable.c so we don¹t need this internal
>>> interface.
>>> We could perhaps adapt the code to use this interface but I think this
>>> interface is just an artefact of the implementation of the default
>>> queues/scheduler.
>> 
>> The problem is that odp_pktout_queue_config() sets qentry->s.enqueue
>> to pktout_enqueue() and that does not have any of the scalable scheduler
>> specific magic that odp_queue_scalable.c:queue_enq{_multi}() has. So
>> ordering does not happen for pktout queues even if it works for other
>> queues, right?
> This must be a recent change, it doesn’t look like that in the working
> branch we are using.
> I see the code when changing to the master branch.
> The code in pktout_enqueue() does look like a hack:
>if (sched_fn->ord_enq_multi(qentry->s.index, (void **)buf_hdr,
> len, &nbr))
> A cast to “void **”???
> 
> What’s the purpose of calling ord_enq_multi() here? To save (stash)
> packets if the thread is out-of-order?
> And when the thread is in-order, it is re-enqueueing the packets which
> again will invoke pktout_enqueue/pktout_enq_multi but this time
> ord_enq_multi() will not save the packets, instead they will actually be
> transmitted by odp_pktout_send()?
> 

Since transmitting packets may fail, out-of-order packets cannot be stashed 
here.
With the current scheduler implementation sched_fn->ord_enq_multi() waits until
in-order and always returns 0 (in case of pktout queue). After this 
odp_pktout_send()
is called.

-Matias



Re: [lng-odp] [RFCv5] dpdk: enable hardware checksum support

2017-05-30 Thread Elo, Matias (Nokia - FI/Espoo)

> Also, I
> am considering putting this in odp-dpdk instead of odp-linux since it
> does not require API changes and odp-linux is not relevant for
> performance runs.
> 

This is a good idea. The changes can be ported back to linux-generic after the
necessary API changes are merged.

odp-dpdk probably requires a couple of patches before you start working on the
parser code. At least the API change 'api: pktio: add parser configuration' and
the implementation 'linux-gen: packet: remove lazy parsing' should be ported.


> It seems, with your changes you are favoring the "non HW csum" case - this
> should be the corner case. ODP is meant for performance and one would
> select a board with HW capabilities and would expect maximum
> performance for this case.

My objective is to minimise the overhead for the raw throughput case when
no checksumming etc. is enabled as this is almost always the baseline when
doing performance benchmarking.

-Matias



Re: [lng-odp] [RFCv5] dpdk: enable hardware checksum support

2017-05-29 Thread Elo, Matias (Nokia - FI/Espoo)
Hi Bogdan,

I still think rx checksum calculation should be combined with packet parsing 
but below are some comments regarding this RFC code.

-Matias

> 
> static int dpdk_open(odp_pktio_t id ODP_UNUSED,
> @@ -605,9 +666,11 @@ static inline int mbuf_to_pkt(pktio_entry_t *pktio_entry,
>   int nb_pkts = 0;
>   int alloc_len, num;
>   odp_pool_t pool = pktio_entry->s.pkt_dpdk.pool;
> + odp_pktin_config_opt_t *pktin_cfg;
> 
>   /* Allocate maximum sized packets */
>   alloc_len = pktio_entry->s.pkt_dpdk.data_room;
> + pktin_cfg = &pktio_entry->s.config.pktin;
> 
>   num = packet_alloc_multi(pool, alloc_len, pkt_table, mbuf_num);
>   if (num != mbuf_num) {
> @@ -658,6 +721,34 @@ static inline int mbuf_to_pkt(pktio_entry_t *pktio_entry,
>   if (mbuf->ol_flags & PKT_RX_RSS_HASH)
>   odp_packet_flow_hash_set(pkt, mbuf->hash.rss);
> 
> + if ((mbuf->packet_type & RTE_PTYPE_L3_IPV4) && /* covers IPv4, 
> IPv4_EXT, IPv4_EXT_UKN */
> + pktin_cfg->bit.ipv4_chksum &&
> + mbuf->ol_flags & PKT_RX_IP_CKSUM_BAD) {

pktin_cfg->bit.ipv4_chksum should be checked first.

> + if (pktin_cfg->bit.drop_ipv4_err) {
> + odp_packet_free(pkt);
> + continue;

The mbuf is never freed.

> + } else
> + pkt_hdr->p.error_flags.ip_err = 1;
> + }
> +
> + if ((mbuf->packet_type & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_UDP 
> &&
> + pktin_cfg->bit.udp_chksum &&
> + mbuf->ol_flags & PKT_RX_L4_CKSUM_BAD) {

pktin_cfg->bit.udp_chksum first.

> + if (pktin_cfg->bit.drop_udp_err) {
> + odp_packet_free(pkt);
> + continue;

The mbuf is never freed.

> + } else
> + pkt_hdr->p.error_flags.udp_err = 1;
> + } else if ((mbuf->packet_type & RTE_PTYPE_L4_MASK) == 
> RTE_PTYPE_L4_TCP &&
> + pktin_cfg->bit.tcp_chksum &&
> + mbuf->ol_flags & PKT_RX_L4_CKSUM_BAD) {

pktin_cfg->bit.tcp_chksum first.

> + if (pktin_cfg->bit.drop_tcp_err) {
> + odp_packet_free(pkt);
> + continue;

The mbuf is never freed.

> 
> +#define IP_VERSION   0x40
> +#define IP6_VERSION  0x60
> +
> +static int packet_parse(void *l3_hdr, uint8_t *l3_proto_v4, uint8_t 
> *l4_proto)
> +{
> + uint8_t l3_proto = (*(uint8_t *)l3_hdr & 0xf0);
> +

You can use _ODP_IPV4HDR_VER(), _ODP_IPV4, _ODP_IPV6 here.

> @@ -700,9 +841,45 @@ static inline int pkt_to_mbuf(pktio_entry_t *pktio_entry,
>   }
> 
>   /* Packet always fits in mbuf */
> - data = rte_pktmbuf_append(mbuf_table[i], pkt_len);
> + data = rte_pktmbuf_append(mbuf, pkt_len);
> +
> + odp_packet_copy_to_mem(pkt, 0, pkt_len, data);

You can check pktio_entry->s.config.pktout.all_bits here and do continue if no 
checksums
are enabled.

> + ipv4_chksum_pkt = l3_proto_v4 && ipv4_chksum_cfg;
> + udp_chksum_pkt = (l4_proto == IPPROTO_UDP) && udp_chksum_cfg;
> + tcp_chksum_pkt = (l4_proto == IPPROTO_TCP) && tcp_chksum_cfg;

Config checks first.



Re: [lng-odp] [RFCv4] dpdk: enable hardware checksum support

2017-05-24 Thread Elo, Matias (Nokia - FI/Espoo)

> On 24 May 2017, at 11:32, Bogdan Pricope  wrote:
> 
> Hi Matias,
> 
> Using ptypes reported by dpdk in parser was intended for another patch
> (next work after csum).
> 

Good, so we are on the same page. When implementing packet parsing you have to
move/reimplement this checksum code anyway, so it makes more sense to implement
both of them in the same patch / patch set.  

> I guess your test is a degradation test (due to new ifs) and you did
> not enable csum offloads / set flags on packets.

Yep, just standard l2fwd test without any offload flags.

> 
> What will be interesting to see:
> - in a generation or termination test (UDP), what will be
> degradation/gain with csum offload enabled
> - how degradation/gain is changing with bigger packets (256 bytes vs 64 bytes)

That would definitely be more interesting. I tried quickly enabling 
'ipv4_chksum'
and 'udp_chksum' flags on odp_l2fwd and the performance degradation was minimal
(~0.2%).

While testing this I noticed a small problem in the code:

> + ptype_cnt = rte_eth_dev_get_supported_ptypes(pkt_dpdk->port_id,
> + ptype_mask, ptypes, ptype_cnt);
> + for (i = 0; i < ptype_cnt; i++)
> + switch (ptypes[i]) {
> + case RTE_PTYPE_L3_IPV4:
> + ptype_l3_ipv4 = 1;
> + break;
> + case RTE_PTYPE_L4_TCP:
> + ptype_l4_tcp = 1;
> + break;
> + case RTE_PTYPE_L4_UDP:
> + ptype_l4_udp = 1;
> + break;
> + }
> + }

This doesn't work alone in all cases. For example, the Fortville NIC (i40e)
reports RTE_PTYPE_L3_IPV4_EXT_UNKNOWN but not RTE_PTYPE_L3_IPV4.
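
One possible way to handle this (a sketch reusing the RFC's variables):
compare each reported ptype under its layer mask so the IPv4 variants are
matched as well:

	for (i = 0; i < ptype_cnt; i++) {
		switch (ptypes[i] & RTE_PTYPE_L3_MASK) {
		case RTE_PTYPE_L3_IPV4:
		case RTE_PTYPE_L3_IPV4_EXT:
		case RTE_PTYPE_L3_IPV4_EXT_UNKNOWN:
			ptype_l3_ipv4 = 1;
			break;
		}

		switch (ptypes[i] & RTE_PTYPE_L4_MASK) {
		case RTE_PTYPE_L4_TCP:
			ptype_l4_tcp = 1;
			break;
		case RTE_PTYPE_L4_UDP:
			ptype_l4_udp = 1;
			break;
		}
	}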


-Matias



Re: [lng-odp] [RFCv4] dpdk: enable hardware checksum support

2017-05-24 Thread Elo, Matias (Nokia - FI/Espoo)
Hi Bogdan,

I ran a quick test with the patch and the overhead seems to be surprisingly
small at least on a Xeon cpu (E5-2697v3). However, I would still suggest making 
some changes to the code. More below.

-Matias



>
> @@ -605,9 +663,11 @@ static inline int mbuf_to_pkt(pktio_entry_t *pktio_entry,
>int nb_pkts = 0;
>int alloc_len, num;
>odp_pool_t pool = pktio_entry->s.pkt_dpdk.pool;
> + odp_pktin_config_opt_t *pktin_cfg;
>
>/* Allocate maximum sized packets */
>alloc_len = pktio_entry->s.pkt_dpdk.data_room;
> + pktin_cfg = &pktio_entry->s.config.pktin;
>
>num = packet_alloc_multi(pool, alloc_len, pkt_table, mbuf_num);
>if (num != mbuf_num) {
> @@ -658,6 +718,34 @@ static inline int mbuf_to_pkt(pktio_entry_t *pktio_entry,
>if (mbuf->ol_flags & PKT_RX_RSS_HASH)
>odp_packet_flow_hash_set(pkt, mbuf->hash.rss);
>
> + if ((mbuf->packet_type & RTE_PTYPE_L3_MASK) == 
> RTE_PTYPE_L3_IPV4 &&
> + pktin_cfg->bit.ipv4_chksum &&
> + mbuf->ol_flags & PKT_RX_IP_CKSUM_BAD) {
> + if (pktin_cfg->bit.drop_ipv4_err) {
> + odp_packet_free(pkt);
> + continue;
> + } else
> + pkt_hdr->p.error_flags.ip_err = 1;
> + }
> +
> + if ((mbuf->packet_type & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_UDP 
> &&
> + pktin_cfg->bit.udp_chksum &&
> + mbuf->ol_flags & PKT_RX_L4_CKSUM_BAD) {
> + if (pktin_cfg->bit.drop_udp_err) {
> + odp_packet_free(pkt);
> + continue;
> + } else
> + pkt_hdr->p.error_flags.udp_err = 1;
> + } else if ((mbuf->packet_type & RTE_PTYPE_L4_MASK) == 
> RTE_PTYPE_L4_TCP &&
> + pktin_cfg->bit.tcp_chksum &&
> + mbuf->ol_flags & PKT_RX_L4_CKSUM_BAD) {
> + if (pktin_cfg->bit.drop_tcp_err) {
> + odp_packet_free(pkt);
> + continue;
> + } else
> + pkt_hdr->p.error_flags.tcp_err = 1;
> + }
> +

Instead of doing packet parsing and checksum validation separately I would do 
both in one function. The api-next pktio code (should be merged to master) has 
a new configuration option 'odp_pktio_config_t.parser.layer', which selects the 
parsing level. packet_parse_layer() function is then used to parse the received 
packet up to the selected level.

So, instead of of calling packet_parse_layer() in dpdk pktio I would add a new 
dpdk specific implementation of this function. This way we can exploit all dpdk 
packet parsing features in addition to the checksum calculations. Also, by 
doing this you can remove most of the if() calls above. Enabling a higher 
protocol layer checksum calculation than the selected parsing level would be a 
user error (e.g. ODP_PKTIO_PARSER_LAYER_L2 and TCP checksum enabled).
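
For illustration, an application-side configuration consistent with that rule
could look like the following sketch (assuming the pktio capability checks
have already passed; the option names follow the api-next parser/checksum
configuration mentioned above):

static int config_rx_csum(odp_pktio_t pktio)
{
	odp_pktio_config_t config;

	odp_pktio_config_init(&config);
	/* Parse up to L4 so enabling L3/L4 checksum offloads is valid */
	config.parser.layer = ODP_PKTIO_PARSER_LAYER_L4;
	config.pktin.bit.ipv4_chksum = 1;
	config.pktin.bit.udp_chksum = 1;
	config.pktin.bit.tcp_chksum = 1;

	return odp_pktio_config(pktio, &config);
}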




(Attachment: rfc-patch-bech.xlsx)


Re: [lng-odp] [PATCH] linux-gen: sched: fix ordered enqueue to pktout queue

2017-05-23 Thread Elo, Matias (Nokia - FI/Espoo)

> On 23 May 2017, at 15:04, Peltonen, Janne (Nokia - FI/Espoo) 
>  wrote:
> 
> 
>> -Original Message-
>> From: lng-odp [mailto:lng-odp-boun...@lists.linaro.org] On Behalf Of Matias 
>> Elo
>> Sent: Monday, May 22, 2017 12:39 PM
>> To: lng-odp@lists.linaro.org
>> Subject: [lng-odp] [PATCH] linux-gen: sched: fix ordered enqueue to pktout 
>> queue
>> 
>> Make sure packet order is maintained if enqueueing packets from an ordered
>> queue.
>> 
>> Fixes https://bugs.linaro.org/show_bug.cgi?id=3002
>> 
>> Signed-off-by: Matias Elo 
>> ---
>> platform/linux-generic/odp_packet_io.c   | 8 
>> platform/linux-generic/odp_queue.c   | 1 +
>> platform/linux-generic/odp_schedule.c| 5 -
>> platform/linux-generic/odp_schedule_iquery.c | 5 -
>> 4 files changed, 17 insertions(+), 2 deletions(-)
>> 
>> diff --git a/platform/linux-generic/odp_packet_io.c b/platform/linux-
>> generic/odp_packet_io.c
>> index 98460a5..7e45c63 100644
>> --- a/platform/linux-generic/odp_packet_io.c
>> +++ b/platform/linux-generic/odp_packet_io.c
>> @@ -586,6 +586,10 @@ int pktout_enqueue(queue_entry_t *qentry, 
>> odp_buffer_hdr_t *buf_hdr)
>>  int len = 1;
>>  int nbr;
>> 
>> +if (sched_fn->ord_enq_multi(qentry->s.index, (void **)buf_hdr, len,
>> +&nbr))
>> +return nbr;
>> +
> 
> The return value is not right here. If the ord_enq_multi() returns 1,
> the packet was successfully enqueued and odp_queue_enq() should return 0,
> not 1.
> 
> After this patch the default scheduler always returns 0 in this case so
> the branch is never taken and the bug is not exposed.
> 
>>  nbr = odp_pktout_send(qentry->s.pktout, &pkt, len);
>>  return (nbr == len ? 0 : -1);
>> }


Good catch, will fix.

-Matias
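
For reference, a sketch of the corrected return value handling (declarations
follow the quoted diff; the buffer-to-packet conversion helper is
hypothetical):

int pktout_enqueue(queue_entry_t *qentry, odp_buffer_hdr_t *buf_hdr)
{
	odp_packet_t pkt = packet_from_buf_hdr(buf_hdr); /* hypothetical helper */
	int len = 1;
	int nbr;

	if (sched_fn->ord_enq_multi(qentry->s.index, (void **)buf_hdr, len,
				    &nbr))
		return nbr == len ? 0 : -1; /* translate count to 0/-1 */

	nbr = odp_pktout_send(qentry->s.pktout, &pkt, len);
	return nbr == len ? 0 : -1;
}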



Re: [lng-odp] [RFCv2] dpdk: enable hardware checksum support

2017-05-15 Thread Elo, Matias (Nokia - FI/Espoo)

> On 15 May 2017, at 9:42, Bogdan Pricope  wrote:
> 
> Signed-off-by: Bogdan Pricope 
> ---
> example/generator/odp_generator.c  | 102 
> platform/linux-generic/odp_packet_io.c |   2 +
> platform/linux-generic/pktio/dpdk.c| 117 +++--
> 3 files changed, 202 insertions(+), 19 deletions(-)
> 
> 
> diff --git a/platform/linux-generic/pktio/dpdk.c 
> b/platform/linux-generic/pktio/dpdk.c
> index 6ac89bd..8296908 100644
> --- a/platform/linux-generic/pktio/dpdk.c
> +++ b/platform/linux-generic/pktio/dpdk.c
> @@ -27,6 +27,9 @@
> #include 
> #include 
> #include 
> +#include 
> +#include 
> +#include 
> #include 
> 
> static int disable_pktio; /** !0 this pktio disabled, 0 enabled */
> @@ -189,6 +192,7 @@ static int dpdk_setup_port(pktio_entry_t *pktio_entry)
>   int ret;
> pkt_dpdk_t *pkt_dpdk = &pktio_entry->s.pkt_dpdk;
>   struct rte_eth_rss_conf rss_conf;
> + uint16_t hw_ip_checksum = 0;
> 
>   /* Always set some hash functions to enable DPDK RSS hash calculation */
>   if (pkt_dpdk->hash.all_bits == 0) {
> @@ -198,13 +202,18 @@ static int dpdk_setup_port(pktio_entry_t *pktio_entry)
> rss_conf_to_hash_proto(&rss_conf, &pkt_dpdk->hash);
>   }
> 
> + if (pktio_entry->s.config.pktin.bit.ipv4_chksum ||
> + pktio_entry->s.config.pktin.bit.udp_chksum ||
> + pktio_entry->s.config.pktin.bit.tcp_chksum)
> + hw_ip_checksum = 1;
> +
>   struct rte_eth_conf port_conf = {
>   .rxmode = {
>   .mq_mode = ETH_MQ_RX_RSS,
>   .max_rx_pkt_len = pkt_dpdk->data_room,
>   .split_hdr_size = 0,
>   .header_split   = 0,
> - .hw_ip_checksum = 0,
> + .hw_ip_checksum = hw_ip_checksum,
>   .hw_vlan_filter = 0,
>   .jumbo_frame= 1,
>   .hw_strip_crc   = 0,
> @@ -434,6 +443,22 @@ static void dpdk_init_capability(pktio_entry_t 
> *pktio_entry,
> + odp_pktio_config_init(&capa->config);
>   capa->config.pktin.bit.ts_all = 1;
>   capa->config.pktin.bit.ts_ptp = 1;
> + capa->config.pktin.bit.ipv4_chksum =
> + (dev_info->rx_offload_capa & DEV_RX_OFFLOAD_IPV4_CKSUM)? 1:0;
> + capa->config.pktin.bit.udp_chksum =
> + (dev_info->rx_offload_capa & DEV_RX_OFFLOAD_UDP_CKSUM)? 1:0;
> + capa->config.pktin.bit.tcp_chksum =
> + (dev_info->rx_offload_capa & DEV_RX_OFFLOAD_TCP_CKSUM)? 1:0;
> + capa->config.pktin.bit.drop_ipv4_err = 1;
> + capa->config.pktin.bit.drop_udp_err = 1;
> + capa->config.pktin.bit.drop_tcp_err = 1;
> +
> + capa->config.pktout.bit.ipv4_chksum =
> + (dev_info->tx_offload_capa & DEV_TX_OFFLOAD_IPV4_CKSUM)? 1:0;
> + capa->config.pktout.bit.udp_chksum =
> + (dev_info->tx_offload_capa & DEV_TX_OFFLOAD_UDP_CKSUM)? 1:0;
> + capa->config.pktout.bit.tcp_chksum=
> + (dev_info->tx_offload_capa & DEV_TX_OFFLOAD_TCP_CKSUM)? 1:0;
> }
> 
> static int dpdk_open(odp_pktio_t id ODP_UNUSED,
> @@ -605,9 +630,11 @@ static inline int mbuf_to_pkt(pktio_entry_t *pktio_entry,
>   int nb_pkts = 0;
>   int alloc_len, num;
>   odp_pool_t pool = pktio_entry->s.pkt_dpdk.pool;
> + odp_pktin_config_opt_t *pktin_cfg;
> 
>   /* Allocate maximum sized packets */
>   alloc_len = pktio_entry->s.pkt_dpdk.data_room;
> + pktin_cfg = &pktio_entry->s.config.pktin;
> 
>   num = packet_alloc_multi(pool, alloc_len, pkt_table, mbuf_num);
>   if (num != mbuf_num) {
> @@ -658,6 +685,34 @@ static inline int mbuf_to_pkt(pktio_entry_t *pktio_entry,
>   if (mbuf->ol_flags & PKT_RX_RSS_HASH)
>   odp_packet_flow_hash_set(pkt, mbuf->hash.rss);
> 
> + if (mbuf->packet_type & RTE_PTYPE_L3_IPV4 &&

Supported packet types vary amongst dpdk ethernet devices, so one may not 
assume that the mbuf->packet_type is always set correctly. 
rte_eth_dev_get_supported_ptypes() can be used to query supported types. In 
this case the comparison is probably fine if the particular rx checksum offload 
is supported.

As a whole this patch adds quite a few if statements to the rx fast path (even
when checksum calculation is not enabled) and I'm a bit worried about the 
performance penalty. Would it make sense to add a dpdk pktio specific packet 
parsing function? Checksum validations would be done in this function at the 
matching layer. This would also enable us to use all available dpdk packet 
parsing features (mainly the packet types).

-Matias




Re: [lng-odp] [API-NEXT PATCH v5 4/8] Add arch/ files

2017-05-15 Thread Elo, Matias (Nokia - FI/Espoo)

> On 13 May 2017, at 1:29, Honnappa Nagarahalli 
> <honnappa.nagaraha...@linaro.org> wrote:
> 
> On 10 May 2017 at 02:29, Elo, Matias (Nokia - FI/Espoo)
> <matias@nokia.com> wrote:
>> This may have been reported already, but 64bit ARM build is failing for me:
>> 
>> 
>> In file included from ../../platform/linux-generic/arch/arm/odp_cpu.h:59:0,
>> from ./include/odp_bitset.h:10,
>> from ./include/odp_schedule_scalable_ordered.h:14,
>> from ./include/odp_schedule_scalable.h:15,
>> from ./include/odp_queue_internal.h:36,
>> from ./include/odp_classification_datamodel.h:27,
>> from ./include/odp_packet_io_internal.h:23,
>> from pktio/io_ops.c:7:
>> ../../platform/linux-generic/arch/arm/odp_llsc.h: In function 'll8':
>> ../../platform/linux-generic/arch/arm/odp_llsc.h:114:3: error: implicit 
>> declaration of function 'ODP_ABORT' [-Werror=implicit-function-declaration]
>>   ODP_ABORT();
>>   ^
>> ../../platform/linux-generic/arch/arm/odp_llsc.h:114:3: error: nested extern 
>> declaration of 'ODP_ABORT' [-Werror=nested-externs]
>> cc1: all warnings being treated as errors
>> Makefile:1022: recipe for target 'pktio/io_ops.lo' failed
>> make[1]: *** [pktio/io_ops.lo] Error 1
>> make[1]: Leaving directory '/root/dev/odp.git/platform/linux-generic'
>> Makefile:506: recipe for target 'all-recursive' failed
>> make: *** [all-recursive] Error 1
>> 
>> 
>> System:
>> Marvell Armada 8040 (Cortex-A72) @ 1.3GHz
>> Ubuntu 16.04 - 4.4.8-armada-17.02.2-g4126e30
>> 
>> 
>> -Matias
>> 
>> 
> 
> Is this for clang/gcc?
> 


gcc 5.4.0.

$ ./configure --enable-schedule-scalable
...

opendataplane 1.14.0.0

ODP Library version:114:0:1
Helper Library version: 112:0:0

implementation_name:odp-linux
host:   aarch64-unknown-linux-gnu
ARCH_DIRarm
ARCH_ABIarm64-linux
with_platform:  linux-generic
helper_linux:   no
prefix: /usr/local
sysconfdir: ${prefix}/etc
libdir: ${exec_prefix}/lib
includedir: ${prefix}/include
testdir:
WITH_ARCH:  arm

cc: gcc
cc version: 5.4.0
cppflags:   
am_cppflags: -I/usr/local/include
am_cxxflags:-std=c++11
cflags: -g -O2 -I/usr/local/include
am_cflags:   -pthread  -DODP_SCHEDULE_SCALABLE 
-DIMPLEMENTATION_NAME=odp-linux -DODP_DEBUG_PRINT=0 -DODPH_DEBUG_PRINT=0 
-DODP_DEBUG=0 -W -Wall -Werror -Wstrict-prototypes -Wmissing-prototypes 
-Wmissing-declarations -Wold-style-definition -Wpointer-arith -Wcast-align 
-Wnested-externs -Wcast-qual -Wformat-nonliteral -Wformat-security -Wundef 
-Wwrite-strings -std=c99 
ldflags:
am_ldflags:   -pthread -lrt -ldl
libs:   -lrt -ldl -lcrypto   -L/usr/local/lib -lconfig
defs:   -DHAVE_CONFIG_H
static libraries:   yes
shared libraries:   yes
ABI compatible: yes
Deprecated APIs:no
cunit:  no
test_vald:  no
test_perf:  no
test_perf_proc: no
test_cpp:   no
test_helper:no
test_example:   no
user_guides:no





Re: [lng-odp] [PATCH] helper: tables: avoid invalid odp_shm_addr() calls

2017-05-11 Thread Elo, Matias (Nokia - FI/Espoo)

> The patch itself is ok, but it might be reasonable to rewrite the last chunk
> in a more readable way (since you already touched that chunk):
> 
> Instead of:
> 
> shm = odp_shm_lookup(name);
> if (shm != ODP_SHM_INVALID)
>         hash_tbl = (odph_hash_table_imp *)odp_shm_addr(shm);
> if (hash_tbl != NULL && strcmp(hash_tbl->name, name) == 0)
>         return (odph_table_t)hash_tbl;
> return NULL;
> }
> 
> Write:
> 
> shm = odp_shm_lookup(name);
> if (shm == ODP_SHM_INVALID)
>         return NULL;
>
> hash_tbl = (odph_hash_table_imp *)odp_shm_addr(shm);
> if (hash_tbl == NULL || strcmp(hash_tbl->name, name) != 0)
>         return NULL;
> return (odph_table_t)hash_tbl;
> }

True, however cleanup should be done in another patch. I'm not touching any 
other code in hashtable/lineartable. I found this bug while implementing native 
dpdk shm for odp-dpdk. 

-Matias



Re: [lng-odp] [API-NEXT PATCH v5 4/8] Add arch/ files

2017-05-10 Thread Elo, Matias (Nokia - FI/Espoo)
This may have been reported already, but 64bit ARM build is failing for me:


In file included from ../../platform/linux-generic/arch/arm/odp_cpu.h:59:0,
 from ./include/odp_bitset.h:10,
 from ./include/odp_schedule_scalable_ordered.h:14,
 from ./include/odp_schedule_scalable.h:15,
 from ./include/odp_queue_internal.h:36,
 from ./include/odp_classification_datamodel.h:27,
 from ./include/odp_packet_io_internal.h:23,
 from pktio/io_ops.c:7:
../../platform/linux-generic/arch/arm/odp_llsc.h: In function 'll8':
../../platform/linux-generic/arch/arm/odp_llsc.h:114:3: error: implicit 
declaration of function 'ODP_ABORT' [-Werror=implicit-function-declaration]
   ODP_ABORT();
   ^
../../platform/linux-generic/arch/arm/odp_llsc.h:114:3: error: nested extern 
declaration of 'ODP_ABORT' [-Werror=nested-externs]
cc1: all warnings being treated as errors
Makefile:1022: recipe for target 'pktio/io_ops.lo' failed
make[1]: *** [pktio/io_ops.lo] Error 1
make[1]: Leaving directory '/root/dev/odp.git/platform/linux-generic'
Makefile:506: recipe for target 'all-recursive' failed
make: *** [all-recursive] Error 1


System:
Marvell Armada 8040 (Cortex-A72) @ 1.3GHz
Ubuntu 16.04 - 4.4.8-armada-17.02.2-g4126e30


-Matias
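
(For what it's worth, the error indicates ODP_ABORT() is used without its
declaration in scope; a plausible fix, not confirmed in this thread, would be
including the debug header in odp_llsc.h:)

#include <odp_debug_internal.h>	/* declares ODP_ABORT() */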


> On 5 May 2017, at 7:34, Brian Brooks  wrote:
> 
> Signed-off-by: Brian Brooks 
> Signed-off-by: Ola Liljedahl 
> Reviewed-by: Honnappa Nagarahalli 
> ---
> platform/linux-generic/Makefile.am   |   2 +
> platform/linux-generic/arch/arm/odp_atomic.h | 210 +++
> platform/linux-generic/arch/arm/odp_cpu.h|  63 ++
> platform/linux-generic/arch/arm/odp_cpu_idling.h |  51 +
> platform/linux-generic/arch/arm/odp_llsc.h   | 249 +++
> platform/linux-generic/arch/default/odp_cpu.h|  10 +
> platform/linux-generic/arch/mips64/odp_cpu.h |  10 +
> platform/linux-generic/arch/powerpc/odp_cpu.h|  10 +
> platform/linux-generic/arch/x86/odp_cpu.h|  41 
> 9 files changed, 646 insertions(+)
> create mode 100644 platform/linux-generic/arch/arm/odp_atomic.h
> create mode 100644 platform/linux-generic/arch/arm/odp_cpu.h
> create mode 100644 platform/linux-generic/arch/arm/odp_cpu_idling.h
> create mode 100644 platform/linux-generic/arch/arm/odp_llsc.h
> create mode 100644 platform/linux-generic/arch/default/odp_cpu.h
> create mode 100644 platform/linux-generic/arch/mips64/odp_cpu.h
> create mode 100644 platform/linux-generic/arch/powerpc/odp_cpu.h
> create mode 100644 platform/linux-generic/arch/x86/odp_cpu.h
> 



Re: [lng-odp] [PATCH] validation: shmem: fix odp_shm_info_t page_size test

2017-05-08 Thread Elo, Matias (Nokia - FI/Espoo)

> On 5 May 2017, at 17:19, Bill Fischofer  wrote:
> 
> 
> 
> On Fri, May 5, 2017 at 8:58 AM, Matias Elo  wrote:
> The old test wasn't valid, since a system may support multiple huge page
> sizes. As odp_sys_huge_page_size() returns currently only a single value
> more precise test than 'page_size != 0' cannot be performed.
> 
> Good point, but might a better approach be to expand what's returned to 
> include a list of supported sizes, or at least the range of those sizes?
>  

Yes, testing for all supported sizes would be the best option. According to the 
kernel documentation, the /sys/kernel/mm/hugepages directory should include
subdirectories for all supported huge page sizes.
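
A rough sketch of that enumeration (plain Linux C, assuming the sysfs layout
above; subdirectory names look like "hugepages-2048kB", and readdir() order is
unsorted, so a real implementation would sort the result):

#include <dirent.h>
#include <stdint.h>
#include <stdio.h>

static int huge_page_sizes(uint64_t sizes[], int max_num)
{
	DIR *dir = opendir("/sys/kernel/mm/hugepages");
	struct dirent *entry;
	unsigned long kb;
	int num = 0;

	if (dir == NULL)
		return 0; /* no huge page support */

	while (num < max_num && (entry = readdir(dir)) != NULL) {
		/* Each supported size has a hugepages-<size>kB directory */
		if (sscanf(entry->d_name, "hugepages-%lukB", &kb) == 1)
			sizes[num++] = (uint64_t)kb * 1024;
	}

	closedir(dir);
	return num;
}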

However, I'm wondering if this is too platform specific and should we instead 
change the odp_sys_huge_page_size() API to return an array of supported huge 
page sizes? If so, this patch could be a stopgap until the API change is 
completed.

-Matias 



Re: [lng-odp] [API-NEXT PATCH 1/4] api: pool: add maximum packet counts to pool info

2017-04-27 Thread Elo, Matias (Nokia - FI/Espoo)

>> -   /** The number of packets that the pool must provide
>> -   that are packet length 'len' bytes or smaller.
>> -   The maximum value is defined by pool capability
>> -   pkt.max_num. */
>> +   /** The exact number of 'len' byte packets that the 
>> pool
>> +   must provide. The maximum value is defined by 
>> pool
>> +   capability pkt.max_num. Pool is empty after
>> +   allocating all the 'len' byte packets. Pool 
>> capacity
>> +   for other packet lengths may vary. See
>> +   odp_pool_info_t for details. */
>>uint32_t num;
> 
> This documentation says that the pool must be empty after allocating
> "num" packets of size "len" but in reality it is possible that
> implementation might do some round-off on the pool allocation for
> better optimisation and hence there could be some minor additional
> packets which might be available in the pool.

This same issue is also in the old spec version ("The number of packets
that the pool must provide that are packet length 'len' bytes or smaller"),
just not as clearly defined. We'll start updating the patch tomorrow and
take this problem into consideration (as well as other issues presented in
the earlier conversations).

>> 
>> +   /** Maximum number of packets of any length */
>> +   uint32_t max_num;
>> +
>> +   /** Maximum number of minimum length packets */
>> +   uint32_t num_min_len;
> 
> What is the difference between "num_min_len" and "max_num"? Both might
> be the same, since the maximum number of packets of any length will usually
> be the number of packets of minimum length.

In most cases max_num == num_min_len. However, in sub-pool implementations
this may not always be true.

-Matias



Re: [lng-odp] [PATCH 1/2] test: tm: add paths to find tm binary

2017-04-27 Thread Elo, Matias (Nokia - FI/Espoo)
This patch fixes the validation test failure. Couple minor comments below.

-Matias

> 
> From: Maxim Uvarov 
> 
> Use the same algorithm as pktio_run.sh to find paths in
> different cases (in tree build, out of tree build, distcheck
> and etc).
> Fixes:
> https://bugs.linaro.org/show_bug.cgi?id=2969
> 
> Signed-off-by: Maxim Uvarov 
> ---
> /** Email created from pull request 17 (muvarov:master_bug2969)
> ** https://github.com/Linaro/odp/pull/17
> ** Patch: https://github.com/Linaro/odp/pull/17.patch
> ** Base sha: 9b993a1531c94782b48292adff54a95de9d2be5c
> ** Merge commit sha: 597d9211c21adde3887c416ede815431ee0a175c
> **/
> .../validation/api/traffic_mngr/traffic_mngr.sh| 24 +++---
> 1 file changed, 21 insertions(+), 3 deletions(-)
> 
> diff --git a/test/common_plat/validation/api/traffic_mngr/traffic_mngr.sh 
> b/test/common_plat/validation/api/traffic_mngr/traffic_mngr.sh
> index a7d5416..5b0a4cb 100755
> --- a/test/common_plat/validation/api/traffic_mngr/traffic_mngr.sh
> +++ b/test/common_plat/validation/api/traffic_mngr/traffic_mngr.sh
> @@ -6,13 +6,31 @@
> # SPDX-License-Identifier:BSD-3-Clause
> #
> 
> -# directory where test binaries have been built
> -TEST_DIR="${TEST_DIR:-$(dirname $0)}"
> +# directories where pktio_main binary can be found:

This should be traffic_mngr_main.

> +# -in the validation when running standalone (./traffic_mngr) intree.

Is this relevant for tm? pktio has a separate pktio_run script.

> +# -in the current directory.
> +# running stand alone out of tree requires setting PATH
> +PATH=${TEST_DIR}/api/traffic_mngr:$PATH
> +PATH=$(dirname $0):$PATH
> +PATH=`pwd`:$PATH
> +PATH=$(dirname $0)/../../../../common_plat/validation/api/traffic_mngr:$PATH

The order of paths should match the order of comments above.

> +
> +traffic_mngr_main_path=$(which traffic_mngr_main${EXEEXT})
> +if [ -x "$traffic_mngr_main_path" ] ; then
> + echo "running with traffic_mngr_main: $traffic_mngr_run_path"
> +else
> + echo "cannot find traffic_mngr_main: please set you PATH for it."
> + echo $PWD
> + echo $PATH

These two echo lines could be dropped or they require some prefix text.

-Matias




Re: [lng-odp] [API-NEXT PATCH v4 3/8] pktio: loop: use handle instead of pointer to buffer

2017-04-26 Thread Elo, Matias (Nokia - FI/Espoo)

> On 19 Apr 2017, at 10:14, Brian Brooks  wrote:
> 
> Signed-off-by: Kevin Wang 
> ---
> platform/linux-generic/pktio/loop.c | 11 +--
> 1 file changed, 9 insertions(+), 2 deletions(-)
> 
> diff --git a/platform/linux-generic/pktio/loop.c 
> b/platform/linux-generic/pktio/loop.c
> index e9ad22ba..cbb15179 100644
> --- a/platform/linux-generic/pktio/loop.c
> +++ b/platform/linux-generic/pktio/loop.c
> @@ -80,11 +80,13 @@ static int loopback_recv(pktio_entry_t *pktio_entry, int 
> index ODP_UNUSED,
> 
>   for (i = 0; i < nbr; i++) {
>   uint32_t pkt_len;
> -
> +#ifdef ODP_SCHEDULE_SCALABLE
> + pkt = _odp_packet_from_buffer((odp_buffer_t)(hdr_tbl[i]));
> +#else
>   pkt = _odp_packet_from_buffer(odp_hdr_to_buf(hdr_tbl[i]));
> +#endif
>   pkt_len = odp_packet_len(pkt);
> 
> -

No #ifdef code please. Especially since the pktio should be completely 
independent
from the scheduler code.

>   if (pktio_cls_enabled(pktio_entry)) {
>   odp_packet_t new_pkt;
>   odp_pool_t new_pool;
> @@ -163,7 +165,12 @@ static int loopback_send(pktio_entry_t *pktio_entry, int 
> index ODP_UNUSED,
>   len = QUEUE_MULTI_MAX;
> 
>   for (i = 0; i < len; ++i) {
> +#ifdef ODP_SCHEDULE_SCALABLE
> + hdr_tbl[i] = (odp_buffer_hdr_t *)(uintptr_t)
> + _odp_packet_to_buffer(pkt_tbl[i]);
> +#else
>   hdr_tbl[i] = buf_hdl_to_hdr(_odp_packet_to_buffer(pkt_tbl[i]));
> +#endif
>   bytes += odp_packet_len(pkt_tbl[i]);
>   }
> 
> -- 
> 2.12.2
> 



Re: [lng-odp] [PATCH v5 1/3] validation: packet: increase test pool size

2017-04-13 Thread Elo, Matias (Nokia - FI/Espoo)
Ping.


> On 6 Apr 2017, at 14:41, Krishna Garapati  
> wrote:
> 
> for this patch series,
> 
> Reviewed-by: Balakrishna Garapati 
> 
> /Krishna
> 
> On 31 March 2017 at 14:18, Matias Elo  wrote:
> Previously packet_test_concatsplit() could fail on some pool
> implementations as the pool ran out of buffers. Increase default pools size
> and use capability to make sure the value is valid.
> 
> Signed-off-by: Matias Elo 
> ---
>  test/common_plat/validation/api/packet/packet.c | 7 ++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/test/common_plat/validation/api/packet/packet.c 
> b/test/common_plat/validation/api/packet/packet.c
> index 669122a..1997139 100644
> --- a/test/common_plat/validation/api/packet/packet.c
> +++ b/test/common_plat/validation/api/packet/packet.c
> @@ -13,6 +13,8 @@
>  #define PACKET_BUF_LEN ODP_CONFIG_PACKET_SEG_LEN_MIN
>  /* Reserve some tailroom for tests */
>  #define PACKET_TAILROOM_RESERVE  4
> +/* Number of packets in the test packet pool */
> +#define PACKET_POOL_NUM 300
> 
>  static odp_pool_t packet_pool, packet_pool_no_uarea, 
> packet_pool_double_uarea;
>  static uint32_t packet_len;
> @@ -109,6 +111,7 @@ int packet_suite_init(void)
> uint32_t udat_size;
> uint8_t data = 0;
> uint32_t i;
> +   uint32_t num = PACKET_POOL_NUM;
> 
> if (odp_pool_capability(&capa) < 0) {
> printf("pool_capability failed\n");
> @@ -130,13 +133,15 @@ int packet_suite_init(void)
> segmented_packet_len = capa.pkt.min_seg_len *
>capa.pkt.max_segs_per_pkt;
> }
> +   if (capa.pkt.max_num != 0 && capa.pkt.max_num < num)
> +   num = capa.pkt.max_num;
> 
> odp_pool_param_init(&params);
> 
> params.type   = ODP_POOL_PACKET;
> params.pkt.seg_len= capa.pkt.min_seg_len;
> params.pkt.len= capa.pkt.min_seg_len;
> -   params.pkt.num= 100;
> +   params.pkt.num= num;
> params.pkt.uarea_size = sizeof(struct udata_struct);
> 
> packet_pool = odp_pool_create("packet_pool", &params);
> --
> 2.7.4
> 
> 



Re: [lng-odp] [PATCH v2 1/3] linux-gen: add internal helper for reading system thread id

2017-04-13 Thread Elo, Matias (Nokia - FI/Espoo)
Ping.

> On 28 Mar 2017, at 10:41, Elo, Matias (Nokia - FI/Espoo) 
> <matias@nokia-bell-labs.com> wrote:
> 
> Ping.
> 
>> On 17 Mar 2017, at 14:16, Matias Elo <matias@nokia.com> wrote:
>> 
>> Signed-off-by: Matias Elo <matias@nokia.com>
>> ---
>> platform/linux-generic/Makefile.am   |  1 +
>> platform/linux-generic/include/odp_thread_internal.h | 20 
>> 
>> platform/linux-generic/odp_thread.c  | 10 ++
>> 3 files changed, 31 insertions(+)
>> create mode 100644 platform/linux-generic/include/odp_thread_internal.h
>> 
>> diff --git a/platform/linux-generic/Makefile.am 
>> b/platform/linux-generic/Makefile.am
>> index 056ba67..b2ae971 100644
>> --- a/platform/linux-generic/Makefile.am
>> +++ b/platform/linux-generic/Makefile.am
>> @@ -144,6 +144,7 @@ noinst_HEADERS = \
>>${srcdir}/include/odp_schedule_if.h \
>>${srcdir}/include/odp_sorted_list_internal.h \
>>${srcdir}/include/odp_shm_internal.h \
>> +  ${srcdir}/include/odp_thread_internal.h \
>>${srcdir}/include/odp_timer_internal.h \
>>${srcdir}/include/odp_timer_wheel_internal.h \
>>${srcdir}/include/odp_traffic_mngr_internal.h \
>> diff --git a/platform/linux-generic/include/odp_thread_internal.h 
>> b/platform/linux-generic/include/odp_thread_internal.h
>> new file mode 100644
>> index 000..9a8e482
>> --- /dev/null
>> +++ b/platform/linux-generic/include/odp_thread_internal.h
>> @@ -0,0 +1,20 @@
>> +/* Copyright (c) 2017, Linaro Limited
>> + * All rights reserved.
>> + *
>> + * SPDX-License-Identifier: BSD-3-Clause
>> + */
>> +
>> +#ifndef ODP_THREAD_INTERNAL_H_
>> +#define ODP_THREAD_INTERNAL_H_
>> +
>> +#ifdef __cplusplus
>> +extern "C" {
>> +#endif
>> +
>> +pid_t sys_thread_id(void);
>> +
>> +#ifdef __cplusplus
>> +}
>> +#endif
>> +
>> +#endif
>> diff --git a/platform/linux-generic/odp_thread.c 
>> b/platform/linux-generic/odp_thread.c
>> index 33a8a7f..e98fa7a 100644
>> --- a/platform/linux-generic/odp_thread.c
>> +++ b/platform/linux-generic/odp_thread.c
>> @@ -17,15 +17,19 @@
>> #include 
>> #include 
>> #include 
>> +#include 
>> 
>> #include 
>> #include 
>> #include 
>> +#include 
>> +#include 
>> 
>> typedef struct {
>>  int thr;
>>  int cpu;
>>  odp_thread_type_t type;
>> +pid_t sys_thr_id;
>> } thread_state_t;
>> 
>> 
>> @@ -135,6 +139,11 @@ static int free_id(int thr)
>>  return thread_globals->num;
>> }
>> 
>> +pid_t sys_thread_id(void)
>> +{
>> +return this_thread->sys_thr_id;
>> +}
>> +
>> int odp_thread_init_local(odp_thread_type_t type)
>> {
>>  int id;
>> @@ -159,6 +168,7 @@ int odp_thread_init_local(odp_thread_type_t type)
>>  thread_globals->thr[id].thr  = id;
>>  thread_globals->thr[id].cpu  = cpu;
>>  thread_globals->thr[id].type = type;
>> +thread_globals->thr[id].sys_thr_id = (pid_t)syscall(SYS_gettid);
>> 
>>  this_thread = &thread_globals->thr[id];
>> 
>> -- 
>> 2.7.4
>> 
> 



Re: [lng-odp] [API-NEXT PATCH 0/4] add maximum packet counts to pool info

2017-04-13 Thread Elo, Matias (Nokia - FI/Espoo)

> On 12 Apr 2017, at 22:52, Bill Fischofer  wrote:
> 
> On Wed, Apr 12, 2017 at 7:58 AM, Matias Elo  wrote:
>> On some packet pool implementations the number of available packets may vary
>> depending on the packet length (examples below). E.g. a packet pool may 
>> include
>> smaller sub-pools for different packet length ranges.
> 
> I'm not sure what the motivation is for this proposed change, but the
> pkt.num specified on odp_pool_create() sets the maximum number of
> packets that can be allocated from the pool, not the minimum. This is
> combined with the pkt.len field to say that pkt.num packets of len
> pkt.len can be allocated.

The need for this information originally came up when fixing an issue in
pool_create().  pool_create() didn't allocate enough packets if the requested
length (pkt.len) packets were segmented. As a result of this fix it is now
possible to allocate more than pkt.num packets from the pool, if the
requested packet length fits into a single segment (or less segments than
the original pkt.len).

The spec of pkt.num is changed to "The exact number of 'len' byte packets
that the pool must provide.".

>  If the application allocated packets larger
> than this size (up to pkt.max_len) then the actual number of packets
> that can be successfully allocated from the pool may be lower than
> pkt.num, but it will never be greater.

Depending on the pool implementation this may not always be the case
(not likely) and the ascii pictures in the cover letter tried to visualise this
possibility. For example, a pool may consists of smaller HW sub-pools for
different packet length ranges and these sub-pools may be different in size.

> 
> The reason for wanting the maximum number is to bound the number of
> "in flight" packets for QoS purposes. As a pool reaches this limit,
> RED and similar algorithms kick in to start dropping packets at RX. If
> there is no fixed maximum number of packets that makes QoS processing
> erratic.
> 

There is still a maximum and an application can query it with odp_pool_info()
(odp_pool_info_t.pkt.max_num). RED and similar algorithms work as before, since
there is no API for configuring them in ODP.
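
A hypothetical application-side check using the proposed field (pkt.max_num is
the odp_pool_info_t addition discussed in this series):

	odp_pool_info_t info;

	if (odp_pool_info(pool, &info) == 0)
		printf("max packets of any length: %u\n",
		       (unsigned)info.pkt.max_num);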

> We had previously floated the idea of allowing pool groups to be
> defined which would accommodate the practice where packets get
> assigned to different sub-pools based on their size, however this was
> seen to be overly complicated, especially on platforms that have
> hardware buffer managers.

True, this gets complicated fast. The updated odp_pool_info_t includes new
maximum packet counts to provide more information to the application
without going to the specifics of the underlying pool implementation.

> Do you have a specific use case that can't be handled by the current 
> structures?

Described above.


-Matias





Re: [lng-odp] [API-NEXT PATCH 2/4] linux-gen: socket: handle recv/send calls with large burst size

2017-04-12 Thread Elo, Matias (Nokia - FI/Espoo)

> On 12 Apr 2017, at 18:03, Bill Fischofer  wrote:
> 
> This patch seems orthogonal to the rest of this series. Shouldn't this
> be a separate patch?
> 

The pktio validation test fails without this fix.

The following packet pool patch in the series fixes an issue where the pool
didn't allocate enough packets if packets of the requested length
(params->pkt.len) were segmented. As a result of this fix it is now possible
to allocate more than params->pkt.num packets from the pool, if the requested
packet length fits into a single segment (or fewer segments than the original
params->pkt.len).

Now, the pktio validation test pktio_test_start_stop() tries to create up to
1000 packets and then send as many as it managed to allocate. After the
aforementioned change the test manages to allocate 64 packets from the pool
instead of the previous 32. The socket pktio max TX burst is set to 32, so the
test fails.
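
For background, the kind of loop needed to stay within the TX burst limit,
sketched with a hypothetical send_burst() helper (not the actual patch code):

    #define TX_MAX_BURST 32 /* socket pktio max tx burst */

    static int send_all(odp_pktio_t pktio, odp_packet_t pkt_tbl[], int num)
    {
        int sent = 0;

        while (sent < num) {
            int burst = num - sent;

            if (burst > TX_MAX_BURST)
                burst = TX_MAX_BURST;

            /* send_burst() may accept fewer packets than requested */
            int ret = send_burst(pktio, &pkt_tbl[sent], burst);

            if (ret <= 0)
                break; /* error: caller handles the unsent packets */
            sent += ret;
        }
        return sent;
    }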


-Matias



Re: [lng-odp] [API-NEXT PATCH v2 2/4] linux-gen: packet: remove lazy parsing

2017-04-05 Thread Elo, Matias (Nokia - FI/Espoo)



> On 4 Apr 2017, at 18:30, Maxim Uvarov  wrote:
> 
> breaks build:
> https://travis-ci.org/muvarov/odp/jobs/218496566
> 
> 

Hi Maxim,

I'm unable to reproduce this problem. Were the patches perhaps merged to the
wrong branch?

-Matias



Re: [lng-odp] [API-NEXT PATCH 1/4] linux-gen: packet: recognize ICMPv6 packets

2017-04-04 Thread Elo, Matias (Nokia - FI/Espoo)
Sure, I'll send rebased v2.

-Matias


> On 3 Apr 2017, at 22:38, Maxim Uvarov  wrote:
> 
> Matias can you please update patches and check them?
> 
> Maxim.
> 
> On 04/03/17 02:36, Bill Fischofer wrote:
>> For this series:
>> 
>> Reviewed-and-tested-by: Bill Fischofer 
>> 
>> On Wed, Mar 22, 2017 at 10:29 AM, Matias Elo  wrote:
>>> Signed-off-by: Matias Elo 
>>> ---
>>> example/generator/odp_generator.c | 4 ++--
>>> example/ipsec/odp_ipsec_stream.c  | 6 +++---
>>> helper/include/odp/helper/ip.h| 3 ++-
>>> platform/linux-generic/include/protocols/ip.h | 3 ++-
>>> platform/linux-generic/odp_packet.c   | 5 -
>>> 5 files changed, 13 insertions(+), 8 deletions(-)
>>> 
>>> diff --git a/example/generator/odp_generator.c 
>>> b/example/generator/odp_generator.c
>>> index 8062d87..1fd4899 100644
>>> --- a/example/generator/odp_generator.c
>>> +++ b/example/generator/odp_generator.c
>>> @@ -267,7 +267,7 @@ static odp_packet_t pack_icmp_pkt(odp_pool_t pool)
>>>ip->ver_ihl = ODPH_IPV4 << 4 | ODPH_IPV4HDR_IHL_MIN;
>>>ip->tot_len = odp_cpu_to_be_16(args->appl.payload + ODPH_ICMPHDR_LEN 
>>> +
>>>   ODPH_IPV4HDR_LEN);
>>> -   ip->proto = ODPH_IPPROTO_ICMP;
>>> +   ip->proto = ODPH_IPPROTO_ICMPv4;
>>>seq = odp_atomic_fetch_add_u64(, 1) % 0x;
>>>ip->id = odp_cpu_to_be_16(seq);
>>>ip->chksum = 0;
>>> @@ -483,7 +483,7 @@ static void print_pkts(int thr, odp_packet_t pkt_tbl[], 
>>> unsigned len)
>>>}
>>> 
>>>/* icmp */
>>> -   if (ip->proto == ODPH_IPPROTO_ICMP) {
>>> +   if (ip->proto == ODPH_IPPROTO_ICMPv4) {
>>>icmp = (odph_icmphdr_t *)(buf + offset);
>>>/* echo reply */
>>>if (icmp->type == ICMP_ECHOREPLY) {
>>> diff --git a/example/ipsec/odp_ipsec_stream.c 
>>> b/example/ipsec/odp_ipsec_stream.c
>>> index 428ec04..b9576ae 100644
>>> --- a/example/ipsec/odp_ipsec_stream.c
>>> +++ b/example/ipsec/odp_ipsec_stream.c
>>> @@ -219,7 +219,7 @@ odp_packet_t create_ipv4_packet(stream_db_entry_t 
>>> *stream,
>>>ip->src_addr = odp_cpu_to_be_32(entry->tun_src_ip);
>>>ip->dst_addr = odp_cpu_to_be_32(entry->tun_dst_ip);
>>>} else {
>>> -   ip->proto = ODPH_IPPROTO_ICMP;
>>> +   ip->proto = ODPH_IPPROTO_ICMPv4;
>>>ip->src_addr = odp_cpu_to_be_32(stream->src_ip);
>>>ip->dst_addr = odp_cpu_to_be_32(stream->dst_ip);
>>>}
>>> @@ -262,7 +262,7 @@ odp_packet_t create_ipv4_packet(stream_db_entry_t 
>>> *stream,
>>>inner_ip = (odph_ipv4hdr_t *)data;
>>>memset((char *)inner_ip, 0, sizeof(*inner_ip));
>>>inner_ip->ver_ihl = 0x45;
>>> -   inner_ip->proto = ODPH_IPPROTO_ICMP;
>>> +   inner_ip->proto = ODPH_IPPROTO_ICMPv4;
>>>inner_ip->id = odp_cpu_to_be_16(stream->id);
>>>inner_ip->ttl = 64;
>>>inner_ip->tos = 0;
>>> @@ -519,7 +519,7 @@ clear_packet:
>>>icmp = (odph_icmphdr_t *)(inner_ip + 1);
>>>data = (uint8_t *)icmp;
>>>} else {
>>> -   if (ODPH_IPPROTO_ICMP != ip->proto)
>>> +   if (ODPH_IPPROTO_ICMPv4 != ip->proto)
>>>return FALSE;
>>>icmp = (odph_icmphdr_t *)data;
>>>}
>>> diff --git a/helper/include/odp/helper/ip.h b/helper/include/odp/helper/ip.h
>>> index ba6e675..91776fa 100644
>>> --- a/helper/include/odp/helper/ip.h
>>> +++ b/helper/include/odp/helper/ip.h
>>> @@ -205,13 +205,14 @@ typedef struct ODP_PACKED {
>>>  * IP protocol values (IPv4:'proto' or IPv6:'next_hdr')
>>>  * @{*/
>>> #define ODPH_IPPROTO_HOPOPTS 0x00 /**< IPv6 hop-by-hop options */
>>> -#define ODPH_IPPROTO_ICMP 0x01 /**< Internet Control Message Protocol 
>>> (1) */
>>> +#define ODPH_IPPROTO_ICMPv4  0x01 /**< Internet Control Message Protocol 
>>> (1) */
>>> #define ODPH_IPPROTO_TCP 0x06 /**< Transmission Control Protocol (6) */
>>> #define ODPH_IPPROTO_UDP 0x11 /**< User Datagram Protocol (17) */
>>> #define ODPH_IPPROTO_ROUTE   0x2B /**< IPv6 Routing header (43) */
>>> #define ODPH_IPPROTO_FRAG 0x2C /**< IPv6 Fragment (44) */
>>> #define ODPH_IPPROTO_AH  0x33 /**< Authentication Header (51) */
>>> #define ODPH_IPPROTO_ESP 0x32 /**< Encapsulating Security Payload (50) 
>>> */
>>> +#define ODPH_IPPROTO_ICMPv6  0x3A /**< Internet Control Message Protocol 
>>> (58) */
>>> #define ODPH_IPPROTO_INVALID 0xFF /**< Reserved invalid by IANA */
>>> 
>>> /**@}*/
>>> diff --git a/platform/linux-generic/include/protocols/ip.h 
>>> b/platform/linux-generic/include/protocols/ip.h
>>> index 20041f1..2b34a75 100644
>>> --- a/platform/linux-generic/include/protocols/ip.h
>>> 

Re: [lng-odp] [PATCH] helper: iplookuptable: fix prefix_entry_t member order

2017-03-31 Thread Elo, Matias (Nokia - FI/Espoo)

> On 31 Mar 2017, at 16:54, Maxim Uvarov  wrote:
> 
> On 03/31/17 10:43, Matias Elo wrote:
>> Fixes https://bugs.linaro.org/show_bug.cgi?id=2910
>> 
> 
> Matias, please add some description here. The link to the problem is good,
> but people often read only the git logs.
> 
> Maxim.

Sure, fixed in v2.

-Matias



Re: [lng-odp] [PATCH] test: performance: lower the MAX_PKT_SIZE to 1518

2017-03-30 Thread Elo, Matias (Nokia - FI/Espoo)
Hi,

The patch I just submitted "test: bench_packet: fix headroom/tailroom test" 
should fix this problem.

-Matias


> On 30 Mar 2017, at 10:13, Krishna Garapati  
> wrote:
> 
> ping
> 
> On 24 March 2017 at 14:40, Balakrishna Garapati <
> balakrishna.garap...@linaro.org> wrote:
> 
>> "bench_packet_tailroom" test fails on odp-dpdk with the pkt size 2048
>> leaving no space for tailroom.
>> 
>> Signed-off-by: Balakrishna Garapati 
>> ---
>> test/common_plat/performance/odp_bench_packet.c | 4 ++--
>> 1 file changed, 2 insertions(+), 2 deletions(-)
>> 
>> diff --git a/test/common_plat/performance/odp_bench_packet.c
>> b/test/common_plat/performance/odp_bench_packet.c
>> index 7a3a004..6b73423 100644
>> --- a/test/common_plat/performance/odp_bench_packet.c
>> +++ b/test/common_plat/performance/odp_bench_packet.c
>> @@ -35,7 +35,7 @@
>> #define TEST_MIN_PKT_SIZE 64
>> 
>> /** Maximum test packet size */
>> -#define TEST_MAX_PKT_SIZE 2048
>> +#define TEST_MAX_PKT_SIZE 1518
>> 
>> /** Number of test runs per individual benchmark */
>> #define TEST_REPEAT_COUNT 1000
>> @@ -78,7 +78,7 @@ ODP_STATIC_ASSERT((TEST_ALIGN_OFFSET + TEST_ALIGN_LEN)
>> <= TEST_MIN_PKT_SIZE,
>> 
>> /** Test packet sizes */
>> const uint32_t test_packet_len[] = {WARM_UP, TEST_MIN_PKT_SIZE, 128, 256,
>> 512,
>> -   1024, 1518, TEST_MAX_PKT_SIZE};
>> +   1024, TEST_MAX_PKT_SIZE};
>> 
>> /**
>>  * Parsed command line arguments
>> --
>> 1.9.1
>> 
>> 



Re: [lng-odp] [API-NEXT PATCH 1/2] validation: packet: increase test pool size

2017-03-29 Thread Elo, Matias (Nokia - FI/Espoo)
Sorry, wrong version number. Should be ignored.

-Matias


> On 29 Mar 2017, at 16:10, Matias Elo  wrote:
> 
> Previously packet_test_concatsplit() could fail on some pool
> implementations as the pool ran out of buffers. Increase the default pool
> size and use the pool capability to make sure the value is valid.
> 
> Signed-off-by: Matias Elo 
> ---
> V3:
> - Increase packet pool size (Krishna)
> 
> test/common_plat/validation/api/packet/packet.c | 7 ++-
> 1 file changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/test/common_plat/validation/api/packet/packet.c 
> b/test/common_plat/validation/api/packet/packet.c
> index 900c426..48fb75e 100644
> --- a/test/common_plat/validation/api/packet/packet.c
> +++ b/test/common_plat/validation/api/packet/packet.c
> @@ -13,6 +13,8 @@
> #define PACKET_BUF_LEN ODP_CONFIG_PACKET_SEG_LEN_MIN
> /* Reserve some tailroom for tests */
> #define PACKET_TAILROOM_RESERVE  4
> +/* Number of packets in the test packet pool */
> +#define PACKET_POOL_NUM 300
> 
> static odp_pool_t packet_pool, packet_pool_no_uarea, packet_pool_double_uarea;
> static uint32_t packet_len;
> @@ -109,6 +111,7 @@ int packet_suite_init(void)
>   uint32_t udat_size;
>   uint8_t data = 0;
>   uint32_t i;
> + uint32_t num = PACKET_POOL_NUM;
> 
>   if (odp_pool_capability(&capa) < 0) {
>   printf("pool_capability failed\n");
> @@ -128,13 +131,15 @@ int packet_suite_init(void)
>   segmented_packet_len = capa.pkt.min_seg_len *
>  capa.pkt.max_segs_per_pkt;
>   }
> + if (capa.pkt.max_num != 0 && capa.pkt.max_num < num)
> + num = capa.pkt.max_num;
> 
>   odp_pool_param_init(&params);
> 
>   params.type   = ODP_POOL_PACKET;
>   params.pkt.seg_len = capa.pkt.min_seg_len;
>   params.pkt.len = capa.pkt.min_seg_len;
> - params.pkt.num = 100;
> + params.pkt.num = num;
>   params.pkt.uarea_size = sizeof(struct udata_struct);
> 
>   packet_pool = odp_pool_create("packet_pool", &params);
> -- 
> 2.7.4
> 



Re: [lng-odp] [API-NEXT PATCH v2 1/2] validation: packet: increase test pool size

2017-03-29 Thread Elo, Matias (Nokia - FI/Espoo)

> On 28 Mar 2017, at 17:21, Krishna Garapati  
> wrote:
> 
> This pool size is not enough with odp-dpdk. It runs out of buffers hence the 
> "packet_test_ref" fails. We should increase it to even higher size.
> /Krishna


OK. Once you have the reference patches merged, could you verify which value
would be large enough?

-Matias



Re: [lng-odp] [API-NEXT PATCH v2 1/2] validation: packet: increase test pool size

2017-03-28 Thread Elo, Matias (Nokia - FI/Espoo)

Ping.

> On 27 Feb 2017, at 14:18, Matias Elo  wrote:
> 
> Previously packet_test_concatsplit() could fail on some pool
> implementations as the pool ran out of buffers. Increase the default pool
> size and use the pool capability to make sure the value is valid.
> 
> Signed-off-by: Matias Elo 
> ---
> V2:
> - Add define PACKET_POOL_NUM for test packet pool size (Bala)
> 
> test/common_plat/validation/api/packet/packet.c | 7 ++-
> 1 file changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/test/common_plat/validation/api/packet/packet.c 
> b/test/common_plat/validation/api/packet/packet.c
> index 900c426..6ebc1fe 100644
> --- a/test/common_plat/validation/api/packet/packet.c
> +++ b/test/common_plat/validation/api/packet/packet.c
> @@ -13,6 +13,8 @@
> #define PACKET_BUF_LEN ODP_CONFIG_PACKET_SEG_LEN_MIN
> /* Reserve some tailroom for tests */
> #define PACKET_TAILROOM_RESERVE  4
> +/* Number of packets in the test packet pool */
> +#define PACKET_POOL_NUM 200
> 
> static odp_pool_t packet_pool, packet_pool_no_uarea, packet_pool_double_uarea;
> static uint32_t packet_len;
> @@ -109,6 +111,7 @@ int packet_suite_init(void)
>   uint32_t udat_size;
>   uint8_t data = 0;
>   uint32_t i;
> + uint32_t num = PACKET_POOL_NUM;
> 
>   if (odp_pool_capability(&capa) < 0) {
>   printf("pool_capability failed\n");
> @@ -128,13 +131,15 @@ int packet_suite_init(void)
>   segmented_packet_len = capa.pkt.min_seg_len *
>  capa.pkt.max_segs_per_pkt;
>   }
> + if (capa.pkt.max_num != 0 && capa.pkt.max_num < num)
> + num = capa.pkt.max_num;
> 
>   odp_pool_param_init(&params);
> 
>   params.type   = ODP_POOL_PACKET;
>   params.pkt.seg_len = capa.pkt.min_seg_len;
>   params.pkt.len = capa.pkt.min_seg_len;
> - params.pkt.num = 100;
> + params.pkt.num = num;
>   params.pkt.uarea_size = sizeof(struct udata_struct);
> 
>   packet_pool = odp_pool_create("packet_pool", &params);
> -- 
> 2.7.4
> 



Re: [lng-odp] [PATCH v2 1/3] linux-gen: add internal helper for reading system thread id

2017-03-28 Thread Elo, Matias (Nokia - FI/Espoo)
Ping.

> On 17 Mar 2017, at 14:16, Matias Elo  wrote:
> 
> Signed-off-by: Matias Elo 
> ---
> platform/linux-generic/Makefile.am   |  1 +
> platform/linux-generic/include/odp_thread_internal.h | 20 
> platform/linux-generic/odp_thread.c  | 10 ++
> 3 files changed, 31 insertions(+)
> create mode 100644 platform/linux-generic/include/odp_thread_internal.h
> 
> diff --git a/platform/linux-generic/Makefile.am 
> b/platform/linux-generic/Makefile.am
> index 056ba67..b2ae971 100644
> --- a/platform/linux-generic/Makefile.am
> +++ b/platform/linux-generic/Makefile.am
> @@ -144,6 +144,7 @@ noinst_HEADERS = \
> ${srcdir}/include/odp_schedule_if.h \
> ${srcdir}/include/odp_sorted_list_internal.h \
> ${srcdir}/include/odp_shm_internal.h \
> +   ${srcdir}/include/odp_thread_internal.h \
> ${srcdir}/include/odp_timer_internal.h \
> ${srcdir}/include/odp_timer_wheel_internal.h \
> ${srcdir}/include/odp_traffic_mngr_internal.h \
> diff --git a/platform/linux-generic/include/odp_thread_internal.h 
> b/platform/linux-generic/include/odp_thread_internal.h
> new file mode 100644
> index 000..9a8e482
> --- /dev/null
> +++ b/platform/linux-generic/include/odp_thread_internal.h
> @@ -0,0 +1,20 @@
> +/* Copyright (c) 2017, Linaro Limited
> + * All rights reserved.
> + *
> + * SPDX-License-Identifier: BSD-3-Clause
> + */
> +
> +#ifndef ODP_THREAD_INTERNAL_H_
> +#define ODP_THREAD_INTERNAL_H_
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +pid_t sys_thread_id(void);
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif
> diff --git a/platform/linux-generic/odp_thread.c 
> b/platform/linux-generic/odp_thread.c
> index 33a8a7f..e98fa7a 100644
> --- a/platform/linux-generic/odp_thread.c
> +++ b/platform/linux-generic/odp_thread.c
> @@ -17,15 +17,19 @@
> #include 
> #include 
> #include 
> +#include 
> 
> #include 
> #include 
> #include 
> +#include 
> +#include 
> 
> typedef struct {
>   int thr;
>   int cpu;
>   odp_thread_type_t type;
> + pid_t sys_thr_id;
> } thread_state_t;
> 
> 
> @@ -135,6 +139,11 @@ static int free_id(int thr)
>   return thread_globals->num;
> }
> 
> +pid_t sys_thread_id(void)
> +{
> + return this_thread->sys_thr_id;
> +}
> +
> int odp_thread_init_local(odp_thread_type_t type)
> {
>   int id;
> @@ -159,6 +168,7 @@ int odp_thread_init_local(odp_thread_type_t type)
>   thread_globals->thr[id].thr  = id;
>   thread_globals->thr[id].cpu  = cpu;
>   thread_globals->thr[id].type = type;
> + thread_globals->thr[id].sys_thr_id = (pid_t)syscall(SYS_gettid);
> 
>   this_thread = &thread_globals->thr[id];
> 
> -- 
> 2.7.4
> 



Re: [lng-odp] [API-NEXT PATCH 1/4] linux-gen: packet: recognize ICMPv6 packets

2017-03-28 Thread Elo, Matias (Nokia - FI/Espoo)
Ping.


> On 22 Mar 2017, at 17:29, Matias Elo  wrote:
> 
> Signed-off-by: Matias Elo 
> ---
> example/generator/odp_generator.c | 4 ++--
> example/ipsec/odp_ipsec_stream.c  | 6 +++---
> helper/include/odp/helper/ip.h| 3 ++-
> platform/linux-generic/include/protocols/ip.h | 3 ++-
> platform/linux-generic/odp_packet.c   | 5 -
> 5 files changed, 13 insertions(+), 8 deletions(-)
> 
> diff --git a/example/generator/odp_generator.c 
> b/example/generator/odp_generator.c
> index 8062d87..1fd4899 100644
> --- a/example/generator/odp_generator.c
> +++ b/example/generator/odp_generator.c
> @@ -267,7 +267,7 @@ static odp_packet_t pack_icmp_pkt(odp_pool_t pool)
>   ip->ver_ihl = ODPH_IPV4 << 4 | ODPH_IPV4HDR_IHL_MIN;
>   ip->tot_len = odp_cpu_to_be_16(args->appl.payload + ODPH_ICMPHDR_LEN +
>  ODPH_IPV4HDR_LEN);
> - ip->proto = ODPH_IPPROTO_ICMP;
> + ip->proto = ODPH_IPPROTO_ICMPv4;
>   seq = odp_atomic_fetch_add_u64(, 1) % 0x;
>   ip->id = odp_cpu_to_be_16(seq);
>   ip->chksum = 0;
> @@ -483,7 +483,7 @@ static void print_pkts(int thr, odp_packet_t pkt_tbl[], 
> unsigned len)
>   }
> 
>   /* icmp */
> - if (ip->proto == ODPH_IPPROTO_ICMP) {
> + if (ip->proto == ODPH_IPPROTO_ICMPv4) {
>   icmp = (odph_icmphdr_t *)(buf + offset);
>   /* echo reply */
>   if (icmp->type == ICMP_ECHOREPLY) {
> diff --git a/example/ipsec/odp_ipsec_stream.c 
> b/example/ipsec/odp_ipsec_stream.c
> index 428ec04..b9576ae 100644
> --- a/example/ipsec/odp_ipsec_stream.c
> +++ b/example/ipsec/odp_ipsec_stream.c
> @@ -219,7 +219,7 @@ odp_packet_t create_ipv4_packet(stream_db_entry_t *stream,
>   ip->src_addr = odp_cpu_to_be_32(entry->tun_src_ip);
>   ip->dst_addr = odp_cpu_to_be_32(entry->tun_dst_ip);
>   } else {
> - ip->proto = ODPH_IPPROTO_ICMP;
> + ip->proto = ODPH_IPPROTO_ICMPv4;
>   ip->src_addr = odp_cpu_to_be_32(stream->src_ip);
>   ip->dst_addr = odp_cpu_to_be_32(stream->dst_ip);
>   }
> @@ -262,7 +262,7 @@ odp_packet_t create_ipv4_packet(stream_db_entry_t *stream,
>   inner_ip = (odph_ipv4hdr_t *)data;
>   memset((char *)inner_ip, 0, sizeof(*inner_ip));
>   inner_ip->ver_ihl = 0x45;
> - inner_ip->proto = ODPH_IPPROTO_ICMP;
> + inner_ip->proto = ODPH_IPPROTO_ICMPv4;
>   inner_ip->id = odp_cpu_to_be_16(stream->id);
>   inner_ip->ttl = 64;
>   inner_ip->tos = 0;
> @@ -519,7 +519,7 @@ clear_packet:
>   icmp = (odph_icmphdr_t *)(inner_ip + 1);
>   data = (uint8_t *)icmp;
>   } else {
> - if (ODPH_IPPROTO_ICMP != ip->proto)
> + if (ODPH_IPPROTO_ICMPv4 != ip->proto)
>   return FALSE;
>   icmp = (odph_icmphdr_t *)data;
>   }
> diff --git a/helper/include/odp/helper/ip.h b/helper/include/odp/helper/ip.h
> index ba6e675..91776fa 100644
> --- a/helper/include/odp/helper/ip.h
> +++ b/helper/include/odp/helper/ip.h
> @@ -205,13 +205,14 @@ typedef struct ODP_PACKED {
>  * IP protocol values (IPv4:'proto' or IPv6:'next_hdr')
>  * @{*/
> #define ODPH_IPPROTO_HOPOPTS 0x00 /**< IPv6 hop-by-hop options */
> -#define ODPH_IPPROTO_ICMP 0x01 /**< Internet Control Message Protocol (1) 
> */
> +#define ODPH_IPPROTO_ICMPv4  0x01 /**< Internet Control Message Protocol (1) 
> */
> #define ODPH_IPPROTO_TCP 0x06 /**< Transmission Control Protocol (6) */
> #define ODPH_IPPROTO_UDP 0x11 /**< User Datagram Protocol (17) */
> #define ODPH_IPPROTO_ROUTE   0x2B /**< IPv6 Routing header (43) */
> #define ODPH_IPPROTO_FRAG 0x2C /**< IPv6 Fragment (44) */
> #define ODPH_IPPROTO_AH  0x33 /**< Authentication Header (51) */
> #define ODPH_IPPROTO_ESP 0x32 /**< Encapsulating Security Payload (50) */
> +#define ODPH_IPPROTO_ICMPv6  0x3A /**< Internet Control Message Protocol 
> (58) */
> #define ODPH_IPPROTO_INVALID 0xFF /**< Reserved invalid by IANA */
> 
> /**@}*/
> diff --git a/platform/linux-generic/include/protocols/ip.h 
> b/platform/linux-generic/include/protocols/ip.h
> index 20041f1..2b34a75 100644
> --- a/platform/linux-generic/include/protocols/ip.h
> +++ b/platform/linux-generic/include/protocols/ip.h
> @@ -157,13 +157,14 @@ typedef struct ODP_PACKED {
>  * IP protocol values (IPv4:'proto' or IPv6:'next_hdr')
>  * @{*/
> #define _ODP_IPPROTO_HOPOPTS 0x00 /**< IPv6 hop-by-hop options */
> -#define _ODP_IPPROTO_ICMP 0x01 /**< Internet Control Message Protocol (1) 
> */
> +#define _ODP_IPPROTO_ICMPv4  0x01 /**< Internet Control Message Protocol (1) 
> */
> #define _ODP_IPPROTO_TCP 0x06 /**< Transmission Control Protocol (6) */
> #define _ODP_IPPROTO_UDP 0x11 /**< User Datagram Protocol (17) */
> #define 

Re: [lng-odp] [PATCH 1/2] linux-gen: netmap: use pid to make vdev mac addresses unique

2017-03-17 Thread Elo, Matias (Nokia - FI/Espoo)

> On 16 Mar 2017, at 17:01, Bogdan Pricope  wrote:
> 
> Hi Matias,
> 
> Today, on "ODP Apps, Cloud, Demos, OFP" meeting I asked about the
> possibility/opportunity to add an odp_pktio_mac_addr_set() API.
> 
> This API may not make sense for some pktios but may be useful for
> others: OFP may (eventually) use tap pktio to replace the existing tap
> functionality associated with slow path support, if it is able to set
> the same MAC for the tap interface and the 'real' interface (e.g. tap
> interface and dpdk interface).
> 
> An odp_pktio_mac_addr_set() API will solve your problem? Or maybe you
> need an extra argument in odp_pktio_open()?
> 
> BR,
> Bogdan
> 

Hi Bogdan,

In this particular use case a MAC set API is not needed, since I want to hide
the whole issue from the application.

-Matias






Re: [lng-odp] [PATCH 1/2] linux-gen: netmap: use pid to make vdev mac addresses unique

2017-03-17 Thread Elo, Matias (Nokia - FI/Espoo)

> But to better support thread and process modes I think
> getpid()+gettid() is needed.
> 
> Maxim.


A valid point. Will be fixed in v2.
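
For illustration, one way to fold both ids into a locally administered MAC
(a sketch only, not the actual v2 patch):

    #include <string.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    static void vdev_mac(uint8_t mac[6])
    {
        uint32_t id = ((uint32_t)getpid() << 16) ^
                      (uint32_t)syscall(SYS_gettid);

        mac[0] = 0x02; /* locally administered, unicast */
        mac[1] = 0x00;
        memcpy(&mac[2], &id, sizeof(id)); /* embed pid/tid derived id */
    }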

-Matias




Re: [lng-odp] [PATCH] test: bench_packet: add tests for reference functions

2017-03-14 Thread Elo, Matias (Nokia - FI/Espoo)

> On 13 Mar 2017, at 19:58, Bill Fischofer  wrote:
> 
> This is a good start and I have no problem merging this as-is if we want to 
> do this in stages, but I think a bit more is needed. For one, this is using a 
> fixed offset for all references. I'd like to see references created with a 
> couple of different offsets (e.g., 0 and packet_len/2) to get a better feel 
> for any variability in performance due to different offsets, especially as we 
> expect most references to be created with relatively small offsets covering 
> packet headers rather than payload.
> 
> Aside from that, for completeness we really should measure all of the other 
> ODP packet APIs when the input argument is a reference. That would allow us 
> to verify that there are no meaningful performance differences between using 
> references vs. non-references in these other APIs. 
> 

I agree with this, but it will take a while until I actually have time to
implement it. So I would suggest merging this simple patch first, before
implementing a more comprehensive test suite for the packet references.

Testing with a zero offset would be a trivial change and could be added to
this patch already if required.
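
For example, a sketch of exercising references at the two offsets, using the
odp_packet_ref() call this series targets (pool and pkt_len as in the test
setup; error handling omitted):

    odp_packet_t pkt = odp_packet_alloc(pool, pkt_len);
    odp_packet_t ref_hdr = odp_packet_ref(pkt, 0);           /* offset 0 */
    odp_packet_t ref_mid = odp_packet_ref(pkt, pkt_len / 2); /* offset len/2 */

    /* ... run the timed benchmark loop on ref_hdr / ref_mid ... */

    odp_packet_free(ref_mid);
    odp_packet_free(ref_hdr);
    odp_packet_free(pkt);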

-Matias
 

Re: [lng-odp] [PATCH 1/2] linux-gen: packet: remove unnecessary packet reparsing

2017-02-27 Thread Elo, Matias (Nokia - FI/Espoo)
Ping.

> On 15 Feb 2017, at 18:01, Matias Elo  wrote:
> 
> Previously the highest already parsed layer was unnecessarily reparsed on
> the following packet_parse_common() calls.
> 
> Signed-off-by: Matias Elo 
> ---
> platform/linux-generic/odp_packet.c | 22 +++---
> 1 file changed, 15 insertions(+), 7 deletions(-)
> 
> diff --git a/platform/linux-generic/odp_packet.c 
> b/platform/linux-generic/odp_packet.c
> index 024f694..a6cf4cd 100644
> --- a/platform/linux-generic/odp_packet.c
> +++ b/platform/linux-generic/odp_packet.c
> @@ -2022,12 +2022,15 @@ int packet_parse_common(packet_parser_t *prs, const 
> uint8_t *ptr,
>   case LAYER_NONE:
>   /* Fall through */
> 
> - case LAYER_L2:
> + case LAYER_L1:
>   {
>   const _odp_ethhdr_t *eth;
>   uint16_t macaddr0, macaddr2, macaddr4;
>   const _odp_vlanhdr_t *vlan;
> 
> + if (layer <= LAYER_L1)
> + return prs->error_flags.all != 0;
> +
>   offset = sizeof(_odp_ethhdr_t);
>   if (packet_parse_l2_not_done(prs))
>   packet_parse_l2(prs, frame_len);
> @@ -2091,13 +2094,14 @@ int packet_parse_common(packet_parser_t *prs, const 
> uint8_t *ptr,
> 
>   prs->l3_offset = offset;
>   prs->parsed_layers = LAYER_L2;
> - if (layer == LAYER_L2)
> - return prs->error_flags.all != 0;
>   }
>   /* Fall through */
> 
> - case LAYER_L3:
> + case LAYER_L2:
>   {
> + if (layer <= LAYER_L2)
> + return prs->error_flags.all != 0;
> +
>   offset = prs->l3_offset;
>   parseptr = (const uint8_t *)(ptr + offset);
>   /* Set l3_offset+flag only for known ethtypes */
> @@ -2131,13 +2135,14 @@ int packet_parse_common(packet_parser_t *prs, const 
> uint8_t *ptr,
>   /* Set l4_offset+flag only for known ip_proto */
>   prs->l4_offset = offset;
>   prs->parsed_layers = LAYER_L3;
> - if (layer == LAYER_L3)
> - return prs->error_flags.all != 0;
>   }
>   /* Fall through */
> 
> - case LAYER_L4:
> + case LAYER_L3:
>   {
> + if (layer <= LAYER_L3)
> + return prs->error_flags.all != 0;
> +
>   offset = prs->l4_offset;
>   parseptr = (const uint8_t *)(ptr + offset);
>   prs->input_flags.l4 = 1;
> @@ -2186,6 +2191,9 @@ int packet_parse_common(packet_parser_t *prs, const 
> uint8_t *ptr,
>   break;
>   }
> 
> + case LAYER_L4:
> + break;
> +
>   case LAYER_ALL:
>   break;
> 
> -- 
> 2.7.4
> 



Re: [lng-odp] [API-NEXT PATCH 1/2] validation: packet: increase test pool size

2017-02-27 Thread Elo, Matias (Nokia - FI/Espoo)

> IMO, it is better for the above num value to be a #define rather than
> a local variable so that it's easy to modify for multiple platforms if
> required.
> 

True, will fix this.

-Matias



Re: [lng-odp] Data corruption during TCP download

2017-02-20 Thread Elo, Matias (Nokia - FI/Espoo)
Good to hear you are making progress. I haven’t seen this problem in my
test systems, but I’m mainly using netmap and dpdk without virtual interfaces,
accessing the NIC queues directly and skipping the kernel altogether.

-Matias


> On 20 Feb 2017, at 16:48, Oriol Arcas <or...@starflownetworks.com> wrote:
> 
> Small update: currently, we are having issues without ODP, just using the
> netmap example bridges. For kernel 4.1 it exhibits errors, for 4.9 it
> doesn't. We are trying to find a minimal working kernel.
> 
> So our current hypothesis is that there is some kind of bug that appears
> under concurrent transactions at kernel level, triggered by socket mmap or
> netmap... I don't know if this matches your experience.
> 
> --
> Oriol Arcas
> Software Engineer
> Starflow Networks
> 
> On Mon, Feb 20, 2017 at 11:39 AM, Maxim Uvarov <maxim.uva...@linaro.org>
> wrote:
> 
>> version from .travis file.
>> 
>> On 20 February 2017 at 13:05, Oriol Arcas <or...@starflownetworks.com>
>> wrote:
>> 
>>> Hi,
>>> 
>>> Thank you for your feedback Matias and Maxim, we really appreciate it.
>>> 
>>> We are trying netmap, but sometimes it doesn't solve the problem. Could
>>> you share what versions (ODP, netmap, Linux) are you using and are working
>>> fine? This would help us having a control group for our tests...
>>> 
>>> --
>>> Oriol Arcas
>>> Software Engineer
>>> Starflow Networks
>>> 
>>> On Fri, Feb 17, 2017 at 8:26 PM, Maxim Uvarov <maxim.uva...@linaro.org>
>>> wrote:
>>> 
>>>> On 02/17/17 18:45, Oriol Arcas wrote:
>>>>> I tried setting the MAC addresses. In my local test, the problem
>>>>> disappeared, but I doubt that it's been fixed.
>>>>> 
>>>>> On our larger testbed, with OpenVPN tunnels, the bug persists event
>>>> with
>>>>> the MAC addresses. But our setup may be problematic, for instance in
>>>> this
>>>>> interface chain:
>>>>> 
>>>>> veth0 -|- veth1 <---> l2fwd <---> veth2 -|- veth3
>>>>> 
>>>>> we set the addresses from the endpoints (veth0, veth3), while l2fwd is
>>>>> attached to middle interfaces (veth1, veth2).
>>>>> 
>>>>> Do you think this is interfering with the network stack? It looks like
>>>> a
>>>>> serious bug in the kernel, then...
>>>>> 
>>>>> It seems that we'll have to try netmap.
>>>>> 
>>>> 
>>>> 
>>>> that is environment which we use in 'make check' testing. Even for dpdk
>>>> or netmap. You can take exact steps from .travis.yml file. But it always
>>>> run in our CI. Maybe you have some issues related to promisc mode
>>>> and you get some additional files? Or might be packet mmap fanout
>>>> problems. But that is very strange because we would see this issue
>>>> before because that env bring up at each test run.
>>>> 
>>>> Maxim.
>>>> 
>>>>> --
>>>>> Oriol Arcas
>>>>> Software Engineer
>>>>> Starflow Networks
>>>>> 
>>>>> On Fri, Feb 17, 2017 at 1:23 PM, Elo, Matias (Nokia - FI/Espoo) <
>>>>> matias@nokia-bell-labs.com> wrote:
>>>>> 
>>>>>> 
>>>>>>> On 17 Feb 2017, at 14:03, Oriol Arcas <or...@starflownetworks.com>
>>>>>> wrote:
>>>>>>> 
>>>>>>> Hi,
>>>>>>> 
>>>>>>> Thanks for your reply Marias.
>>>>>>> 
>>>>>>> I tried a simpler setup and the bug persists. With a linux bridge it
>>>>>> works fine.
>>>>>>> 
>>>>>>> My setup is the following:
>>>>>>> 
>>>>>>> | nginx <---> veth0 -|- veth1 <---> l2fwd <---> veth2 -|- veth3 <--->
>>>>>> wget |
>>>>>>> 
>>>>>>> where the | delimiters mean a network namespace.
>>>>>>> 
>>>>>>> I have tried it with ODP_PKTIO_DISABLE_SOCKET_MMAP, all the different
>>>>>> scheduling modes and -c 1.
>>>>>>> 
>>>>>>> Our next test would be using DPDK or netmap, if they can be used with
>>>>>> veth interfaces.
>>>>>> 
>>>>>> At least netmap should work with virtual interfaces.
>>>>>> 
>>>>>> More things to try with odp_l2fwd arguments:
>>>>>> 
>>>>>> Set the MAC addresses correctly. Using the same MAC from multiple
>>>>>> interfaces
>>>>>> could potentially cause some issues with the host network stack.
>>>>>>For example: odp_l2fwd -i if1,if2 -d 1 -s 1 -r
>>>> <if1_mac,if2_mac2>
>>>>>> 
>>>>>> Use direct pktio mode: -m 0
>>>>>> 
>>>>>> 
>>>>>> -Matias
>>>> 
>>>> 
>>> 
>> 



Re: [lng-odp] Data corruption during TCP download

2017-02-17 Thread Elo, Matias (Nokia - FI/Espoo)

> On 17 Feb 2017, at 14:03, Oriol Arcas  wrote:
> 
> Hi,
> 
> Thanks for your reply Matias.
> 
> I tried a simpler setup and the bug persists. With a linux bridge it works 
> fine.
> 
> My setup is the following:
> 
> | nginx <---> veth0 -|- veth1 <---> l2fwd <---> veth2 -|- veth3 <---> wget |
> 
> where the | delimiters mean a network namespace.
> 
> I have tried it with ODP_PKTIO_DISABLE_SOCKET_MMAP, all the different 
> scheduling modes and -c 1.
> 
> Our next test would be using DPDK or netmap, if they can be used with veth 
> interfaces.

At least netmap should work with virtual interfaces.

More things to try with odp_l2fwd arguments:

Set the MAC addresses correctly. Using the same MAC from multiple interfaces
could potentially cause some issues with the host network stack.
For example: odp_l2fwd -i if1,if2 -d 1 -s 1 -r <if1_mac,if2_mac2>

Use direct pktio mode: -m 0


-Matias

Re: [lng-odp] Data corruption during TCP download

2017-02-17 Thread Elo, Matias (Nokia - FI/Espoo)
Hi Oriol,

This seems rather odd indeed (especially point e). Just to be clear, are you
using OFP in any part of the test setup, or is the simplified setup as follows?

standard nginx <> odp_l2fwd <> standard wget

You could try testing with different odp pktio types (preferably netmap or
dpdk) to see if the problem persists. You can disable the mmap pktio with the
ODP_PKTIO_DISABLE_SOCKET_MMAP environment variable.

Second thing to try would be to run odp_l2fwd with a single core (-c 1) to rule 
out possible synchronisation problems.
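
For example, combining both suggestions (assuming the l2fwd example binary
and your tap/vethe interfaces):

    ODP_PKTIO_DISABLE_SOCKET_MMAP=1 ./odp_l2fwd -i tap,vethe -c 1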

-Matias

 
> On 16 Feb 2017, at 17:37, Oriol Arcas  wrote:
> 
> Hi,
> 
> We have been using ODP for a while, and we found this weird bug which is
> difficult to explain, involving data corruption in TCP transfers, I hope
> somebody may reproduce and shed some light on this.
> 
> To reproduce this bug, we set up the following environment:
> 
> 1- Two Debian Jessie VMs, running on QEMU/libvirt
> 2- Each VM has Linux kernel 3.16.39 (any other version should experience
> the same issues)
> 3- The eth0 "physical" interfaces of the VMs are for management, the eth1
> are connected through a bridge in the host
> 4- We have OpenVPN taps through the eth1 interfaces (10.52.34.1/30 and
> 10.52.34.2/30)
> 5- In each VM, there is a pair of veth interfaces, vethi (10.52.34.5/30 and
> 10.52.34.6/30) and vethe (no IP)
> 6- We "bridge" the vethe and the tap interfaces with the odp_l2fwd example
> app
> 7- We have an nginx server and wget clients (curl produces the same result)
> 
> The setup looks like this:
> 
> Server VM
> nginx - vethi (10.52.34.5) - vethe - odp_l2fwd - tap (10.52.34.1) - ...
> 
> [host bridge]
> 
> Client VM
> ... - tap (10.52.34.2) - odp_l2fwd - vethe - vethi (10.52.34.6) - wget
> 
> The idea is that there should be tunnelled IP connections through the
> corresponding vethi endpoints.
> 
> The unmodified odp_l2fwd are run with the following command:
> 
> sudo /usr/lib/odp/linux/examples/odp_l2fwd -i tap,vethe -d 0 -s 0 -m 1
> 
> To do our tests, we have a 10 MB text file called "download" with the
> following contents:
> 
> 1 
> 2 
> 3 
> ...
> 147178 
> 147179 00
> 
> We download the data from the client VM with the following command:
> 
> $> wget http://10.52.34.5/download
> 
> The data arrives completely (and in this case, correctly), and both
> odp_l2fwd apps report the processed packets.
> 
> However, when we perform several parallel downloads:
> 
> $> for i in `seq 30`; do wget http://10.52.34.5/download -O download_${i}
> &; done
> 
> The downloads end, but the downloaded data is wrong:
> 
> $> for i in `seq 1 30`; do cmp download_${i} download; done
> download_7 download differ: byte 5140175, line 72554
> download_19 download differ: byte 4739, line 70
> download_25 download differ: byte 39677, line 577
> 
> To be clear, we add the following comments:
> a) We tried this with the ODP official packages 1.10.1, and also ODP LTS
> 1.11 and the current master head (~1.13)
> b) We have tried this with a bridge instead of the odp_l2fwd app, and it
> worked fine
> c) It seems that it happens when the client has ODP, regardless of the
> server having ODP or a bridge; if only the server has ODP, it works fine
> d) The data corruption presumably consists in packets of one TCP flow
> interleaved with another flow.
> 
> We tried this by downloading files with '0' and files with '1' simultaneously;
> the result was files with chunks of '0' and '1' interleaved:
> 
> $> diff download_1 download
> 71692,71700c72149
> < 72149
> 00111
> < 94804 
> < 94805 
> < 94806 
> < 94807 
> < 94808 
> < 94809 
> < 94810 
> < 94811 1100
> ---
>> 72149 
> 
> e) If there was data corruption during the transmission, and since we are
> using TCP, the protocol should not allow this to happen, right?
> f) We have PCAP traces from vethi and tap, and Wireshark shows that the TCP
> conversation is OK; we cannot explain this
> g) The TCP checksums and the TCP lengths seem to be OK; just in case, we
> disabled checksum 

Re: [lng-odp] [PATCH] linux-gen: fix dpdk pktio init

2017-02-06 Thread Elo, Matias (Nokia - FI/Espoo)
Good catch.

Reviewed-and-tested-by: Matias Elo 

> On 4 Feb 2017, at 22:33, Maxim Uvarov  wrote:
> 
> struct rte_eth_dev_info should be initialized before
> it is used in strcmp(dev_info.driver_name, "rte_ixgbe_pmd").
> The patch fixes a segfault in that compare.
> 
> CC: Elo Matias 
> Signed-off-by: Maxim Uvarov 
> ---
> platform/linux-generic/pktio/dpdk.c | 14 +++---
> 1 file changed, 7 insertions(+), 7 deletions(-)
> 
> diff --git a/platform/linux-generic/pktio/dpdk.c 
> b/platform/linux-generic/pktio/dpdk.c
> index 0eb025ae..9a9f7a4e 100644
> --- a/platform/linux-generic/pktio/dpdk.c
> +++ b/platform/linux-generic/pktio/dpdk.c
> @@ -560,19 +560,19 @@ static int dpdk_output_queues_config(pktio_entry_t 
> *pktio_entry,
>   return 0;
> }
> 
> -static void dpdk_init_capability(pktio_entry_t *pktio_entry)
> +static void dpdk_init_capability(pktio_entry_t *pktio_entry,
> +  struct rte_eth_dev_info *dev_info)
> {
>   pkt_dpdk_t *pkt_dpdk = &pktio_entry->s.pkt_dpdk;
>   odp_pktio_capability_t *capa = &pkt_dpdk->capa;
> - struct rte_eth_dev_info dev_info;
> 
> - memset(&dev_info, 0, sizeof(struct rte_eth_dev_info));
> + memset(dev_info, 0, sizeof(struct rte_eth_dev_info));
>   memset(capa, 0, sizeof(odp_pktio_capability_t));
> 
> - rte_eth_dev_info_get(pkt_dpdk->port_id, &dev_info);
> - capa->max_input_queues = RTE_MIN(dev_info.max_rx_queues,
> + rte_eth_dev_info_get(pkt_dpdk->port_id, dev_info);
> + capa->max_input_queues = RTE_MIN(dev_info->max_rx_queues,
>PKTIO_MAX_QUEUES);
> - capa->max_output_queues = RTE_MIN(dev_info.max_tx_queues,
> + capa->max_output_queues = RTE_MIN(dev_info->max_tx_queues,
> PKTIO_MAX_QUEUES);
>   capa->set_op.op.promisc_mode = 1;
> 
> @@ -631,7 +631,7 @@ static int dpdk_open(odp_pktio_t id ODP_UNUSED,
>   return -1;
>   }
> 
> - dpdk_init_capability(pktio_entry);
> + dpdk_init_capability(pktio_entry, &dev_info);
> 
>   mtu = dpdk_mtu_get(pktio_entry);
>   if (mtu == 0) {
> -- 
> 2.11.0.295.gd7dffce
> 



Re: [lng-odp] [PATCH 1/2] linux-gen: dpdk: improve pmd driver linking

2017-02-01 Thread Elo, Matias (Nokia - FI/Espoo)

> On 2 Feb 2017, at 9:25, Christophe Milard  
> wrote:
> 
> hmmm... that sound promising. thanks for the update. may I ask which
> libtool version you tried with (latest)?


Libtool: 2.4.6
Automake: 1.15
Autoconf: 2.69

They seem to be the standard packages in Ubuntu 16.04.

-Matias



Re: [lng-odp] [PATCH 1/2] linux-gen: dpdk: improve pmd driver linking

2017-02-01 Thread Elo, Matias (Nokia - FI/Espoo)

> On 1 Feb 2017, at 16:01, Christophe Milard  
> wrote:
> 
> No, sadly. I got stuck on this.
> I summed up the situation here:
> https://lists.linaro.org/pipermail/lng-odp/2016-October/026120.html
> ...
> But if you get it to go, it is a good new: Just make sure that works
> on the latest libtool/autotools: Going forward is OK. If it does not
> work on latest, then, it is problematic...
> (Cannot remember what version I got it to fail, to be honest...)
> 
> Just let me know what you get to. I'd be glad if I am wrong (or things
> have changed)
> I think Krishna may want to know as well.
> 
> Christophe.

Good description of the problem. I checked my libtool/automake versions and
I’m using the latest of both. Hopefully this also helps with your driver
issue.

-Matias



Re: [lng-odp] ODP install error with dpdk 16.07

2017-02-01 Thread Elo, Matias (Nokia - FI/Espoo)
> 
> Thanks, Matias. For your benchmarking it would be good to get a
> comparison run without those options to better quantify the overhead
> of ABI compatibility mode. Right now we're taking a strict approach to
> ABI so as to minimize the coordination requirements between
> implementations supporting that ABI, however it's possible to tighten
> things up by allowing certain inline function expansions if the
> members of the ABI agree on some common aspects of their respective
> internals. These sort of comparisons would be the sort of
> justification needed for doing that extra spec work.


Sure, I can run the comparison benchmarks. The performance difference
correlates with the number of inlined functions. Petri is currently working on
a patch set that inlines a large set of commonly used packet functions. It’s
probably best to wait until his patch set is ready (probably tomorrow?) before
running the tests.

-Matias



  1   2   3   >