Hi,

An ODP application is SW running on cores (not HW or firmware). If there's no SW, there's no need for the ODP API. The application receives odp_packets, which consist of odp_packet_segs. From the application's point of view, each segment is a block of contiguous memory (the implementation can be anything, as long as the application can access it contiguously). The application has knowledge of the incoming packets and of the SW processing applied to them.

Segmentation is a trade-off between linear memory processing (good performance) and memory usage. These parameters give the application the chance to tune that trade-off for each use case.
For example, the application may
- receive 49% 64 byte packets, 49% 1500 byte packets and 2% 9 kB packets
- read up to 300 bytes into a packet
- add up to 54 bytes of tunnel headers in front of the packet

=> ask for a minimum seg len of 354 bytes, which would fit all accesses. The implementation rounds that up to 384 bytes (a full cache line)
=> run tests and notice that performance is good, but we need to lower the memory usage of this pool
=> drop the min seg len to 200 bytes; the implementation rounds that up to 256
=> run tests; performance is still good enough and the pool uses less memory (which can then be used for something else)

How could the implementation do this trade-off analysis on behalf of the user?

+			uint32_t seg_len;   /**< Minimum packet segment buffer
+			                         length in bytes. It includes
+			                         possible head-/tailroom bytes.
+			                         Use 0 for default length. */

In addition to this, we have lower and upper config limits (ODP_CONFIG_PACKET_BUF_LEN_XXX need to be updated as well). If an implementation can support only one segment length, that is documented by those min/max limits being equal.

-Petri

> -----Original Message-----
> From: ext Ola Liljedahl [mailto:[email protected]]
> Sent: Sunday, February 01, 2015 12:06 AM
> To: Bill Fischofer
> Cc: Petri Savolainen; LNG ODP Mailman List
> Subject: Re: [lng-odp] [PATCH 1/2] api: pool: Added packet pool parameters
>
> One important aspect of ODP is hardware abstraction: the ODP API is
> supposed to hide implementation details such as how buffers are
> managed (segmentation is one implementation detail that we allow to
> leak through, as this is common and very expensive to hide from the
> application). I recently heard of a 40G NIC which doesn't use (user-
> visible) buffer pools at all. You just pass a large (shared) memory
> region to the NIC and it carves up suitable buffers as needed.
> Needless to say, DPDK has problems with that. But ODP shouldn't.
>
> On 31 January 2015 at 01:22, Bill Fischofer <[email protected]> wrote:
> > I really can't concur with this proposed design. The fundamental issue
> > here is the proper relationship between applications and implementations
> > with respect to packet storage. Packets have an overall length, and this
> > is within the purview of the application, since it understands the type
> > of traffic it is looking to process. Length is a property that packets
> > have "on the wire" and is independent of how packets may be stored
> > within a processing node.
> >
> > Segments are always an implementation construct and exist for the
> > convenience of an implementation. Segments do not exist on the wire and
> > are not part of any inherent (i.e., platform-independent) packet
> > structure. To assert application control over packet segmentation is to
> > say that the application is controlling the implementation of packet
> > storage. This is a fundamental departure from the API/implementation
> > separation paradigm that ODP is promoting. If an application wishes to
> > do this it is leaving no room for HW offload or innovation in this
> > area--it's just using the HW as a raw block manager and doing everything
> > itself in SW.
> >
> > It is understood that the existence of segmentation imposes some
> > processing overhead on SW, to the extent that SW must deal with the
> > "seams" in packet addressability that result from segmentation. There
> > are two ways to address this.
> >
> > The first is to recognize that in the data plane the vast bulk of
> > processing is on packet headers rather than payload, and that HW is
> > aware of this fact, which is why HW designed for packet processing
> > invariably uses segment sizes large enough to contain all of the packet
> > headers within the first packet segment for the vast majority of packets
> > of interest.
> > In the spec we worked on last year we stated that ODP would require a
> > minimum segment size of 256 bytes so that applications would have
> > assurance regarding this, and no surveyed platforms had issues with
> > that.
> >
> > The second means of addressing this problem is to allow applications to
> > explicitly request unsegmented pools. While recognizing that not all
> > platforms can provide unsegmented pools efficiently, the idea behind
> > unsegmented pools was that this would aid applications that for whatever
> > reason could not deal with packet segments.
> >
> > So in this model the application has two choices. It can either work
> > with an implementation-chosen segment size, understanding that that size
> > will be large enough that it need not worry about segment boundaries in
> > packet headers for almost all packets of interest, or it can request
> > that the pool be unsegmented so that the entire packet is always a
> > single segment.
> >
> > If you believe that this model is insufficient, I would like to
> > understand, with use cases, why that is so. I would also like to hear
> > from SoC vendors looking to implement ODP whether they can efficiently
> > support arbitrary application-specified segment sizes for packet
> > processing.
> >
> >
> > On Fri, Jan 30, 2015 at 7:10 AM, Petri Savolainen
> > <[email protected]> wrote:
> >> Completed odp_pool_param_t definition with packet pool parameters.
> >> Parameter definition is close to what we are using already. Segment
> >> min length, segment min alignment and number of segments.
> >>
> >> Signed-off-by: Petri Savolainen <[email protected]>
> >> ---
> >>  include/odp/api/pool.h | 20 +++++++++++++++-----
> >>  1 file changed, 15 insertions(+), 5 deletions(-)
> >>
> >> diff --git a/include/odp/api/pool.h b/include/odp/api/pool.h
> >> index 1582102..e407704 100644
> >> --- a/include/odp/api/pool.h
> >> +++ b/include/odp/api/pool.h
> >> @@ -61,13 +61,23 @@ typedef struct odp_pool_param_t {
> >>                                              of 8. */
> >>                 uint32_t num;       /**< Number of buffers in the pool */
> >>         } buf;
> >> -/* Reserved for packet and timeout specific params
> >>         struct {
> >> -               uint32_t seg_size;
> >> -               uint32_t seg_align;
> >> -               uint32_t num;
> >> +               uint32_t seg_len;   /**< Minimum packet segment buffer
> >> +                                        length in bytes. It includes
> >> +                                        possible head-/tailroom bytes.
> >> +                                        Use 0 for default length. */
> >> +               uint32_t seg_align; /**< Minimum packet segment buffer
> >> +                                        alignment in bytes. Valid
> >> +                                        values are powers of two. Use 0
> >> +                                        for default alignment. Default
> >> +                                        will always be a multiple of 8.
> >> +                                    */
> >> +               uint32_t seg_num;   /**< Number of packet segments in
> >> +                                        the pool. It's also the maximum
> >> +                                        number of packets, since each
> >> +                                        packet consists of at least one
> >> +                                        segment. */
> >>         } pkt;
> >> -*/
> >>         struct {
> >>                 uint32_t __res1; /* Keep struct identical to buf, */
> >>                 uint32_t __res2; /* until pool implementation is fixed */
> >> --
> >> 2.2.2

_______________________________________________
lng-odp mailing list
[email protected]
http://lists.linaro.org/mailman/listinfo/lng-odp
