After applying the patches cleanly and fixing the run scripts, I got it running.

But the results are really bad with --enable-dpdk-zero-copy.

TX rate is:
7673155 pps

RX rate is:
5989846 pps


Before the PR 313 patch, TX was 10M pps.

I re-ran the task and TX is 3.3M pps. All tests are single core, so
something strange is happening either in LAVA or in this PR.

Maxim.


On 12/04/17 17:03, Bogdan Pricope wrote:
> On TX (https://lng.validation.linaro.org/scheduler/job/23252.0) I see:
> 
> ODP_REPO='https://github.com/muvarov/odp'
> ODP_BRANCH='api-next'
> 
> 
> On RX (https://lng.validation.linaro.org/scheduler/job/23252.1) I see:
> 
> ODP_REPO='https://github.com/muvarov/odp'
> ODP_BRANCH='devel/api-next_shsum'
> 
> 
> or are you referring to another test?
> 
> 
> On 4 December 2017 at 15:53, Maxim Uvarov <maxim.uva...@linaro.org> wrote:
>>
>>
>> On 4 December 2017 at 15:11, Bogdan Pricope <bogdan.pric...@linaro.org>
>> wrote:
>>>
>>> You need to put 313 on TX side (not RX).
>>
>>
>>
>> Both RX and TX have the patches from 313. l2fwd works on the recv side;
>> the generator does not.
>>
>> Maxim.
>>
>>
>>>
>>>
>>> On 4 December 2017 at 13:19, Savolainen, Petri (Nokia - FI/Espoo)
>>> <petri.savolai...@nokia.com> wrote:
>>>> Is the DPDK version 17.08? Other versions might not work properly.
>>>>
>>>>
>>>>
>>>> -Petri
>>>>
>>>>
>>>>
>>>> From: Maxim Uvarov [mailto:maxim.uva...@linaro.org]
>>>> Sent: Monday, December 04, 2017 1:10 PM
>>>> To: Savolainen, Petri (Nokia - FI/Espoo) <petri.savolai...@nokia.com>
>>>> Cc: Bogdan Pricope <bogdan.pric...@linaro.org>; lng-odp-forward
>>>> <lng-odp@lists.linaro.org>
>>>>
>>>>
>>>> Subject: Re: [lng-odp] odp dpdk
>>>>
>>>>
>>>>
>>>> 313 does not work either:
>>>>
>>>> https://lng.validation.linaro.org/scheduler/job/23242.1
>>>>
>>>> I will replace the RX side with l2fwd and see what happens there.
>>>>
>>>> Maxim.
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> On 4 December 2017 at 13:46, Savolainen, Petri (Nokia - FI/Espoo)
>>>> <petri.savolai...@nokia.com> wrote:
>>>>
>>>> Maxim, try https://github.com/Linaro/odp/pull/313. It has been tested
>>>> to fix checksum insert for 10/40GE Intel NICs.
>>>>
>>>> -Petri
>>>>
>>>>
>>>>> -----Original Message-----
>>>>> From: lng-odp [mailto:lng-odp-boun...@lists.linaro.org] On Behalf Of
>>>>> Bogdan Pricope
>>>>> Sent: Monday, December 04, 2017 12:21 PM
>>>>> To: Maxim Uvarov <maxim.uva...@linaro.org>
>>>>> Cc: lng-odp-forward <lng-odp@lists.linaro.org>
>>>>> Subject: Re: [lng-odp] odp dpdk
>>>>>
>>>>> I suspect this is actually caused by a csum issue on the TX side: on
>>>>> RX, socket pktio does not validate the csum (and accepts the packets),
>>>>> but dpdk pktio validates the csum and drops the packets.
>>>>>
>>>>> I am not seeing this in my setup because the default txq_flags for the
>>>>> igb driver (1G interface) is
>>>>>
>>>>> .txq_flags = 0
>>>>>
>>>>> while for ixgbe (10G interface) it is:
>>>>>
>>>>> .txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS |
>>>>>                 ETH_TXQ_FLAGS_NOOFFLOADS,
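>>>>>
>>>>> For reference, a minimal sketch (not from this thread's patches; it
>>>>> assumes the DPDK 17.08 ethdev API and a hypothetical helper name) of
>>>>> how an application could clear ETH_TXQ_FLAGS_NOOFFLOADS so ixgbe keeps
>>>>> TX checksum offload available:
>>>>>
>>>>> #include <rte_ethdev.h>
>>>>>
>>>>> /* Set up TX queue 0 with HW offloads left enabled. */
>>>>> static int setup_txq_with_offloads(uint8_t port_id, uint16_t nb_txd)
>>>>> {
>>>>>         struct rte_eth_dev_info dev_info;
>>>>>         struct rte_eth_txconf txconf;
>>>>>
>>>>>         /* Start from the driver's default TX queue config; for ixgbe
>>>>>          * this has NOMULTSEGS | NOOFFLOADS set by default. */
>>>>>         rte_eth_dev_info_get(port_id, &dev_info);
>>>>>         txconf = dev_info.default_txconf;
>>>>>
>>>>>         /* Clear NOOFFLOADS so L3/L4 checksum insert can be requested
>>>>>          * per packet via mbuf ol_flags. */
>>>>>         txconf.txq_flags &= ~ETH_TXQ_FLAGS_NOOFFLOADS;
>>>>>
>>>>>         return rte_eth_tx_queue_setup(port_id, 0, nb_txd,
>>>>>                                       rte_eth_dev_socket_id(port_id),
>>>>>                                       &txconf);
>>>>> }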
>>>>>
>>>>>
>>>>> /B
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On 1 December 2017 at 23:47, Maxim Uvarov <maxim.uva...@linaro.org>
>>>>> wrote:
>>>>>>
>>>>>> Looking at dpdk pktio support and the generator, it looks like the
>>>>>> receive part is broken. If I use sockets for receive it works well,
>>>>>> but receive with dpdk does not get any packets, on both master and
>>>>>> api-next. Can somebody please confirm that this is so? LAVA is not
>>>>>> super friendly for debugging this issue.
>>>>>>
>>>>>>
>>>>>> 1. Recv
>>>>>> odp_generator -I 0 -m r -c 0x4
>>>>>>
>>>>>> https://lng.validation.linaro.org/scheduler/job/23206.1
>>>>>> Network devices using DPDK-compatible driver
>>>>>> ============================================
>>>>>> 0000:07:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'
>>>>>> drv=igb_uio unused=
>>>>>>
>>>>>>
>>>>>>
>>>>>> 2. Send
>>>>>> odp_generator -I 0 --srcmac 38:ea:a7:93:98:94 --dstmac
>>>>>> 38:ea:a7:93:83:a0
>>>>>> --srcip 192.168.100.2 --dstip 192.168.100.1 -m u -i 0 -c 0x8 -p 18 -e
>>>>>> 5000 -f 5001 -n 800000000
>>>>>>
>>>>>> https://lng.validation.linaro.org/scheduler/job/23206.0
>>>>>>
>>>>>> Thank you,
>>>>>> Maxim.
>>>>
>>>>
>>
>>
