Insofar as possible, the mechanics of NUMA should be the responsibility of
the ODP implementation, rather than the application, since that way the
application retains maximum portability.

However, from an ODP API perspective, I think we need to be mindful of NUMA
considerations to give implementations the necessary "hooks" to properly
support the NUMA aspects of their platform.  This is why ODP APIs need to
be careful about what addressability assumptions they make.

If Gábor or Jerin can list a couple of specific relevant cases, I think
that will help focus the discussion and get us off to a good start.

On Fri, May 8, 2015 at 8:26 AM, Savolainen, Petri (Nokia - FI/Espoo) <
[email protected]> wrote:

> Hi,
>
> ODP is OS agnostic and thus thread management (e.g. thread creation and
> pinning to physical cores) and NUMA awareness should happen mostly outside
> of ODP APIs.
>
> For example, NUMA could be visible in ODP APIs this way:
> * Add odp_cpumask_xxx() calls that indicate NUMA dependency between CPUs
> (just for information)
> * Add a way to identify groups of threads which frequently share resources
> (memory and handles) within the group
> * Give the thread group as a hint (parameter) to various ODP calls that
> create shared resources. The implementation can use this information to
> allocate resources "near" the threads in the group. However, the user is
> responsible for grouping the threads and mapping/pinning them onto
> physical CPUs in a way that enables NUMA-aware optimizations.
>
>
> -Petri
>
>
>
> > -----Original Message-----
> > From: lng-odp [mailto:[email protected]] On Behalf Of ext
> > Gábor Sándor Enyedi
> > Sent: Friday, May 08, 2015 10:48 AM
> > To: Jerin Jacob; Zoltan Kiss
> > Cc: [email protected]
> > Subject: Re: [lng-odp] NUMA aware memory allocation?
> >
> > Hi,
> >
> > Thanks. So, is the workaround for now to start the threads and do all
> > the memory reservation in each thread, calling odp_shm_reserve()
> > instead of plain malloc()? Can I use multiple buffer pools, one for
> > each thread or interface?
> > BR,
> >
> > Gabor
> >
> > P.s.: Do you know when this issue in the API will be fixed (e.g. in
> > the next release)?
> >
> > On 05/08/2015 09:06 AM, Jerin Jacob wrote:
> > > On Thu, May 07, 2015 at 05:00:54PM +0100, Zoltan Kiss wrote:
> > >
> > >> Hi,
> > >>
> > >> I'm not aware of any such interface, but others with more knowledge
> > >> can comment on it. The ODP-DPDK implementation creates buffer pools
> > >> on the NUMA node where the pool create function was actually called.
> > > The current ODP spec is not NUMA aware. We need an API for node
> > > enumeration and an explicit node parameter to alloc/free resources
> > > from a specific node, e.g. odp_shm_reserve_onnode(node, ...), while
> > > the existing odp_shm_reserve() keeps allocating on the node where
> > > the calling code runs.
> > >
> > >
> > >> Regards,
> > >>
> > >> Zoli
> > >>
> > >> On 07/05/15 16:32, Gábor Sándor Enyedi wrote:
> > >>> Hi!
> > >>>
> > >>> I just started to test ODP, trying to write my first application, but
> > >>> found a problem: if I want to write NUMA aware code, how should I
> > >>> allocate memory close to a given thread? I mean, I know there is
> > >>> libnuma, but should I use it? I guess not, but I cannot find memory
> > >>> allocation functions in ODP. Is there a function similar to
> > >>> numa_alloc_onnode()?
> > >>> Thanks,
> > >>>
> > >>> Gabor
> > >>> _______________________________________________
> > >>> lng-odp mailing list
> > >>> [email protected]
> > >>> https://lists.linaro.org/mailman/listinfo/lng-odp
> >
> >
>