As discussed on the ODP architecture call, it could be useful to know how
much you need to over-provision a pool in order to be guaranteed that you
can allocate N elements from it (allowing for an implementation-dependent
amount of per-CPU caching etc.), regardless of how that pool has previously
been used by other agents. I can imagine use cases where an application has
to be able to handle N connections or other stateful entities. Consistent
failures to do so (because nominally free buffers cannot be allocated by an
arbitrary thread) would be a violation of the specification.
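
To make that concrete, here is a rough sketch of what I mean. The
MAX_CACHED_PER_USER bound is hypothetical (it is exactly the number an
implementation would have to publish), and the buffer sizes and counts are
made up; the pool calls follow the existing pool API.

#include <odp.h>
#include <string.h>

/* Buffers the application must always be able to allocate */
#define N_REQUIRED          4096

/*
 * Hypothetical, implementation-dependent bound: the worst-case number of
 * nominally free buffers that one thread or HW block may hold in its local
 * cache. Today there is no way to query this.
 */
#define MAX_CACHED_PER_USER  256

static odp_pool_t create_overprovisioned_pool(int num_other_users)
{
	odp_pool_param_t params;

	memset(&params, 0, sizeof(params));
	params.type      = ODP_POOL_BUFFER;
	params.buf.size  = 1024;
	params.buf.align = 0;
	/* Over-provision so that caching elsewhere cannot eat into our N */
	params.buf.num   = N_REQUIRED +
			   num_other_users * MAX_CACHED_PER_USER;

	return odp_pool_create("overprovisioned", &params);
}

Without a published bound, an application can only guess how much to
over-provision.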

On 26 January 2016 at 15:29, Savolainen, Petri (Nokia - FI/Espoo) <
[email protected]> wrote:

> This would also allow HW blocks (like packet input) to cache some free
> buffers (for incoming packets). It would be hard to guarantee that at any
> given time a single thread would be able to allocate (steal) all
> pre-allocated buffers from all pool users. For example, could you steal all
> packet buffers from a packet input ring without resetting the pktio device,
> etc.?
>
>
>
> -Petri
>
> *From:* EXT Ola Liljedahl [mailto:[email protected]]
> *Sent:* Tuesday, January 26, 2016 4:14 PM
> *To:* Savolainen, Petri (Nokia - FI/Espoo)
> *Cc:* EXT Zoltan Kiss; [email protected]
> *Subject:* Re: [lng-odp] [API-NEXT PATCH 1/2] api: pool: allow per-thread
> caching
>
> On 26 January 2016 at 14:59, Savolainen, Petri (Nokia - FI/Espoo) <
> [email protected]> wrote:
>
> Reviewed-by: Petri Savolainen <[email protected]>
>
>
>
> > -----Original Message-----
> > From: EXT Zoltan Kiss [mailto:[email protected]]
> > Sent: Friday, January 22, 2016 8:24 PM
> > To: [email protected]
> > Cc: [email protected]; Savolainen, Petri (Nokia - FI/Espoo)
> > Subject: [API-NEXT PATCH 1/2] api: pool: allow per-thread caching
> >
> > This addition explicitly drops the requirement that a single thread
> > should be able to allocate the whole pool even if nothing has used it
> > before. Therefore per-thread caches are allowed to retain some elements
> > locally.
>
> But is this a good change? How does it change the semantics of ODP memory
> management?
> Threads can have local caches of buffers, but if a local cache is
> exhausted, a thread could steal buffers from other threads. This would mean
> that the cache management has to be thread-safe (e.g. using atomic
> instructions). There would still be a benefit of local buffer caches, since
> they would provide more cache locality for a thread/CPU. The cache metadata
> would also be local to the cache (unless it is accessed by some other CPU).
>
>
>
> What's the overhead on e.g. x86 of using CAS to manage the local cache?
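
To illustrate what that thread-safety would cost, below is a minimal sketch
(plain C11, not ODP code; all names and sizes are invented). It keeps a
thread-local cache backed by a shared free list, and only the refill/spill
path takes a CAS-based lock. It does not implement true stealing from other
threads' caches; it only shows where the atomic instructions end up.

#include <stdatomic.h>
#include <stddef.h>

#define CACHE_SIZE 32

struct buf;                          /* opaque buffer element */

struct shared_pool {
	atomic_int   lock;           /* 0 = free, 1 = taken */
	struct buf **stack;          /* free list, sized for the whole pool */
	size_t       top;            /* number of entries on the free list */
};

static _Thread_local struct buf *cache[CACHE_SIZE];  /* per-thread cache */
static _Thread_local size_t      cache_n;

static void pool_lock(struct shared_pool *p)
{
	int expected = 0;

	/* CAS loop: the atomic cost is paid on refill/spill, not per alloc */
	while (!atomic_compare_exchange_weak(&p->lock, &expected, 1))
		expected = 0;
}

static void pool_unlock(struct shared_pool *p)
{
	atomic_store(&p->lock, 0);
}

static struct buf *buf_alloc(struct shared_pool *p)
{
	if (cache_n > 0)
		return cache[--cache_n];          /* fast path: no atomics */

	pool_lock(p);
	/* Refill half of the local cache from the shared free list */
	while (cache_n < CACHE_SIZE / 2 && p->top > 0)
		cache[cache_n++] = p->stack[--p->top];
	pool_unlock(p);

	return cache_n > 0 ? cache[--cache_n] : NULL;
}

static void buf_free(struct shared_pool *p, struct buf *b)
{
	if (cache_n < CACHE_SIZE) {
		cache[cache_n++] = b;             /* fast path: no atomics */
		return;
	}

	pool_lock(p);
	/* Spill half of the cache so other threads can take these buffers */
	while (cache_n > CACHE_SIZE / 2)
		p->stack[p->top++] = cache[--cache_n];
	p->stack[p->top++] = b;
	pool_unlock(p);
}

The fast paths stay free of atomic instructions; the refill/spill path pays
for the CAS and for the shared cache-line traffic on 'lock' and 'top'.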
>
>
>
> >
> > Signed-off-by: Zoltan Kiss <[email protected]>
> > ---
> >  include/odp/api/pool.h | 3 +++
> >  1 file changed, 3 insertions(+)
> >
> > diff --git a/include/odp/api/pool.h b/include/odp/api/pool.h
> > index 2e79a55..deab963 100644
> > --- a/include/odp/api/pool.h
> > +++ b/include/odp/api/pool.h
> > @@ -43,6 +43,9 @@ extern "C" {
> >  /**
> >   * Pool parameters
> >   * Used to communicate pool creation options.
> > + * @note A single thread may not be able to allocate all 'num' elements
> > + * from the pool at any particular time, as other threads or hardware
> > + * blocks are allowed to keep some for caching purposes.
> >   */
> >  typedef struct odp_pool_param_t {
> >       /** Pool type */
>
> > --
> > 1.9.1
>
>
>
>
>
_______________________________________________
lng-odp mailing list
[email protected]
https://lists.linaro.org/mailman/listinfo/lng-odp
