On 3 February 2015 at 13:59, Petri Savolainen
<[email protected]> wrote:
> Completed odp_pool_param_t definition with packet pool parameters.
> Parameter definition is close to what we are using already.
>
> * seg_len: Defines minimum segment buffer length.
>            With this parameter user can:
>            * trade-off between pool memory usage and SW performance (linear 
> memory access)
>            * avoid segmentation in packet head (e.g. if legacy app cannot 
> handle
>              segmentation in the middle of the packet headers)
We have already defined a minimum segment size for conforming ODP
implementations. Isn't that enough?

I can see value in specifying the minimum size of the first segment of
a packet (which would contain all headers the application is going to
process). But this proposal goes much further than that.


>            * seg_len < ODP_CONFIG_PACKET_SEG_LEN_MIN is rounded up to 
> ODP_CONFIG_PACKET_SEG_LEN_MIN
>            * seg_len > ODP_CONFIG_PACKET_SEG_LEN_MAX is not valid
>
> * seg_align: Defines minimum segment buffer alignment. With this parameter,
>              user can force buffer alignment to match e.g. alignment requirements
>              of data structures stored in or algorithms accessing the packet
Can you give a practical example of when this configuration is useful?
To my knowledge, most data structures have quite small alignment
requirements, e.g. based on the alignment requirements of their
individual fields. But I assume that we would specify alignment in
multiples of cache lines here (because the minimum segment alignment
would be the cache line size).

>              headroom. When the user doesn't have a specific alignment
>              requirement, 0 should be used for the default.
>
> * seg_num: Number of segments. This is also the maximum number of packets.
I think these configurations could be hints but not strict
requirements. They do not change the *functionality*, so an application
should not fail if these configurations cannot be obeyed (except for
the legacy situation you describe above). The hints enable more
optimal utilization of e.g. packet memory and may decrease SW overhead
during packet processing, but they do not change the functionality.

To enable different hardware implementations, ODP apps should not
impose unnecessary (non-functional) requirements on ODP
implementations and thereby limit the number of targets ODP can be
implemented on. ODP is not DPDK.

Applications should also not have to first check the limits of the
specific ODP implementation (as you suggested yesterday), adapt their
configuration to those limits, and then send those requirements back
to the ODP implementation (which still has to check the parameters to
verify that they are valid). This is too complicated and will likely
lead to code that cheats and thus is not portable. It is better for
applications to simply specify their requested configuration to ODP
and then get back the results (i.e. the actual values that will be
used). The application can then, if necessary, check that the
configuration was honored. This follows the normal programming flow.

>
> Signed-off-by: Petri Savolainen <[email protected]>
> ---
>  include/odp/api/pool.h | 26 +++++++++++++++++++++-----
>  1 file changed, 21 insertions(+), 5 deletions(-)
>
> diff --git a/include/odp/api/pool.h b/include/odp/api/pool.h
> index d09d92e..a1d7494 100644
> --- a/include/odp/api/pool.h
> +++ b/include/odp/api/pool.h
> @@ -61,13 +61,29 @@ typedef struct odp_pool_param_t {
>                                              of 8. */
>                         uint32_t num;   /**< Number of buffers in the pool */
>                 } buf;
> -/* Reserved for packet and timeout specific params
>                 struct {
> -                       uint32_t seg_size;
> -                       uint32_t seg_align;
> -                       uint32_t num;
> +                       uint32_t seg_len;   /**< Minimum packet segment buffer
> +                                                length in bytes. It includes
> +                                                possible head-/tailroom bytes.
> +                                                The maximum value is defined by
> +                                                ODP_CONFIG_PACKET_SEG_LEN_MAX.
> +                                                Use 0 for default length. */
> +                       uint32_t seg_align; /**< Minimum packet segment buffer
> +                                                alignment in bytes. Valid
> +                                                values are powers of two. The
> +                                                maximum value is defined by
> +                                                ODP_CONFIG_PACKET_SEG_ALIGN_MAX.
> +                                                Use 0 for default alignment.
> +                                                Default will always be a
> +                                                multiple of 8.
> +                                            */
> +                       uint32_t seg_num;   /**< Number of packet segments in
> +                                                the pool. This is also the
> +                                                maximum number of packets,
> +                                                since each packet consist of
> +                                                at least one segment.
What if both seg_num and a shared memory region is specified in the
odp_pool_create call? Which takes precedence?

> +                                            */
>                 } pkt;
> -*/
>                 struct {
>                         uint32_t __res1; /* Keep struct identical to buf, */
>                         uint32_t __res2; /* until pool implementation is fixed */
> --
> 2.2.2
>
>
> _______________________________________________
> lng-odp mailing list
> [email protected]
> http://lists.linaro.org/mailman/listinfo/lng-odp
