On 21 July 2015 at 13:05, Nicolas Morey-Chaisemartin <[email protected]>
wrote:

>
>
> On 07/20/2015 07:24 PM, Bala Manoharan wrote:
>
> Hi,
>
>  Few comments inline
>
> On 20 July 2015 at 22:38, Nicolas Morey-Chaisemartin <[email protected]> wrote:
>
>> Replace the current segmentation limit with an explicit define.
>> This mainly means two things:
>>  - All code can now test and check the maximum segment count, which will
>>    prove useful for tests and opens the way for many code optimizations.
>>  - The minimum segment length and the maximum buffer length can now be
>>    decoupled. This means that pools with a very small footprint can be
>>    allocated for small packets, while pools for jumbo frames will still
>>    work as long as seg_len * ODP_CONFIG_PACKET_MAX_SEG >= packet_len.
>>
>> Signed-off-by: Nicolas Morey-Chaisemartin <[email protected]>
>> ---
>>  include/odp/api/config.h                             | 10 +++++++++-
>>  platform/linux-generic/include/odp_buffer_internal.h |  9 +++------
>>  platform/linux-generic/odp_pool.c                    |  4 ++--
>>  test/validation/packet/packet.c                      |  3 ++-
>>  4 files changed, 16 insertions(+), 10 deletions(-)
>>
>> diff --git a/include/odp/api/config.h b/include/odp/api/config.h
>> index b5c8fdd..1f44db6 100644
>> --- a/include/odp/api/config.h
>> +++ b/include/odp/api/config.h
>> @@ -108,6 +108,13 @@ extern "C" {
>>  #define ODP_CONFIG_PACKET_SEG_LEN_MAX (64*1024)
>>
>>  /**
>> + * Maximum number of segments in a packet
>> + *
>> + * This defines the maximum number of segment buffers in a packet
>> + */
>> +#define ODP_CONFIG_PACKET_MAX_SEG 6
>>
>
>  What is the use case of the above define? Does it mean that a packet
> should not be stored in a pool if the maximum number of segments is
> reached? If this is something used only in linux-generic, we can define
> it in the internal header file.
>
>  The reason is that the #defines in the config.h file have to be defined
> by all the platforms.
>
>  Regards,
> Bala
>
>    This may be a little too linux-generic oriented, I guess. What I'm
> looking for is a clean way to handle segment length vs. packet length in
> pools.
>

The optimisations specific to linux-generic should go in an internal header
and not in the config file, as any change in the config file will have to
be handled by all the platforms.

I was trying to kill two birds with one stone in this patch:
> - Be able to disable segmentation completely and add fast compile-time
> paths in the code to avoid segment computations
> - Fix the packet validation test (and maybe strengthen my proposal for
> pktio/segmentation), which relies heavily on the number of supported
> segments.
>
> For testing, the main issue I guess is that there is no way to know the
> actual segment length and packet length used by the pool. We could dig
> into the internals, but that would make the tests platform specific.
> Something like odp_pool_get_seg_len() and odp_pool_get_len() could be
> quite useful for building tests, but not very interesting for end users...
>

IMO the tests for segmentation should be written in such a way that the
validation suite does not fail if the implementation has handled the given
requirement without creating segments, since segmenting a packet is an
implementation optimisation, not an ODP requirement.

The validation suite should try to allocate a larger packet from a pool
with a small segment size, but it can only expect the implementation to
store it as segments if the packet actually comes back segmented. If it
does, the segment tests should be run; if not, the suite should not throw
an error, since by not segmenting the packet the implementation has not
violated any ODP requirement.

Regards,
Bala

>
> I'd still like to see some easy way to disable segmentation, so user code
> can check for it and drop the complex mapping, memcopying to/from packets,
> and iterating over segments.
>
>
_______________________________________________
lng-odp mailing list
[email protected]
https://lists.linaro.org/mailman/listinfo/lng-odp