In the original packet API design it was proposed that one of the pool options should be UNSEGMENTED, which says the application does not wish to see segments at all. The implementation would then either comply or else fail the pool create if it is unable to support unsegmented pools. However, that didn't make the cut for v1.0. If there is a use case, perhaps it should be revisited?
On Tue, Jul 21, 2015 at 2:35 AM, Nicolas Morey-Chaisemartin
<[email protected]> wrote:
>
> On 07/20/2015 07:24 PM, Bala Manoharan wrote:
> > Hi,
> >
> > Few comments inline
> >
> > On 20 July 2015 at 22:38, Nicolas Morey-Chaisemartin
> > <[email protected]> wrote:
> >> Replace current segmentation with an explicit define.
> >> This mainly means two things:
> >> - All code can now test and check the max segmentation, which will
> >> prove useful for tests and open the way for many code optimizations.
> >> - The minimum segment length and the maximum buffer length can now be
> >> decorrelated.
> >> This means that pools with a very small footprint can be allocated
> >> for small packets, while pools for jumbo frames will still work as
> >> long as seg_len * ODP_CONFIG_PACKET_MAX_SEG >= packet_len
> >>
> >> Signed-off-by: Nicolas Morey-Chaisemartin <[email protected]>
> >> ---
> >>  include/odp/api/config.h                             | 10 +++++++++-
> >>  platform/linux-generic/include/odp_buffer_internal.h |  9 +++------
> >>  platform/linux-generic/odp_pool.c                    |  4 ++--
> >>  test/validation/packet/packet.c                      |  3 ++-
> >>  4 files changed, 16 insertions(+), 10 deletions(-)
> >>
> >> diff --git a/include/odp/api/config.h b/include/odp/api/config.h
> >> index b5c8fdd..1f44db6 100644
> >> --- a/include/odp/api/config.h
> >> +++ b/include/odp/api/config.h
> >> @@ -108,6 +108,13 @@ extern "C" {
> >>  #define ODP_CONFIG_PACKET_SEG_LEN_MAX (64*1024)
> >>
> >>  /**
> >> + * Maximum number of segments in a packet
> >> + *
> >> + * This defines the maximum number of segment buffers in a packet
> >> + */
> >> +#define ODP_CONFIG_PACKET_MAX_SEG 6
> >
> > What is the use-case of the above define? Does it mean that the packet
> > should not be stored in a pool if the max number of segments is
> > reached? If this is something used in linux-generic, we can define it
> > in the internal header file.
> >
> > The reason is that the #defines in config.h have to be defined by all
> > the platforms.
> > Regards,
> > Bala
>
> This may be a little too linux-generic oriented, I guess. What I'm
> looking for is a clean way to handle segment length vs packet length in
> pools.
> I was trying to kill two birds with one stone in this patch:
> - Be able to disable segmentation completely and add fast compile-time
> checks in the code to avoid segment computations
> - Fix the packet validation test (and maybe enhance my proposal for
> pktio/segmentation), which relies heavily on the number of supported
> segments.
>
> For testing, the main issue I guess is that there is no way to know the
> actual segment length and length used by the pool. We could go to the
> internals, but that would make the tests platform specific.
> Something like odp_pool_get_seg_len() and odp_pool_get_len() could be
> quite useful for building tests but not very interesting for end
> users...
>
> I'd still like to see some easy way to disable segmentation so user
> code can check for that and remove complex mapping, memcopying to/from
> packets and iterating over segments.
>
> _______________________________________________
> lng-odp mailing list
> [email protected]
> https://lists.linaro.org/mailman/listinfo/lng-odp
