> -----Original Message-----
> From: ext Stuart Haslam [mailto:[email protected]]
> Sent: Thursday, June 04, 2015 4:10 PM
> To: Savolainen, Petri (Nokia - FI/Espoo)
> Cc: [email protected]
> Subject: Re: [lng-odp] [API-NEXT PATCH 2/3] api: cpumask: added default masks
>
> On Wed, Jun 03, 2015 at 05:24:44PM +0300, Petri Savolainen wrote:
> > Added default cpumask functions for worker and control threads.
> > These will replace odph_linux_cpumask_default() helper. CPU masks
> > and IDs are system specific, API is generic.
> >
> > Signed-off-by: Petri Savolainen <[email protected]>
> > ---
> > include/odp/api/cpumask.h | 22 +++++++++++++++++++++
> >  platform/linux-generic/odp_cpumask.c | 38 ++++++++++++++++++++++++++++++++++++
> > 2 files changed, 60 insertions(+)
> >
> > diff --git a/include/odp/api/cpumask.h b/include/odp/api/cpumask.h
> > index 85cdf6e..fad6835 100644
> > --- a/include/odp/api/cpumask.h
> > +++ b/include/odp/api/cpumask.h
> > @@ -194,6 +194,28 @@ int odp_cpumask_last(const odp_cpumask_t *mask);
> > int odp_cpumask_next(const odp_cpumask_t *mask, int cpu);
> >
> > /**
> > + * Default cpumask for worker threads
> > + *
> > + * Creates cpumask based on starting count, actual value returned.
>
> Not sure what "based on starting count" is intended to mean, is this
> sentence actually needed?
>
This is a copy-paste from the current linux helper. Basically, it
allocates up to 'num' CPUs and returns the number of CPUs actually
allocated. I'll reformulate.
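
For example, the intended usage would be roughly this (sketch only,
the actual count depends on the system):

    odp_cpumask_t workers;
    int avail;

    /* Request up to 8 worker CPUs; 'avail' is the number of CPUs
     * actually placed into the mask (e.g. 4 on a 4-CPU system). */
    avail = odp_cpumask_def_worker(&workers, 8);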
> > + *
> > + * @param[out] mask CPU mask to update
>
> s/update/populate. "update" implies that it's not going to zero the
> mask first.
Yes, copy-paste error; it should say "to initialize".
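
Something along these lines (suggestion only, will be in the next
version):

    * Initializes the cpumask with the default CPUs for worker threads.
    * Allocates up to 'num' CPUs and returns the number of CPUs actually
    * allocated.
    *
    * @param[out] mask   CPU mask to initialize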
>
> > + * @param num Number of worker threads, zero for all available CPUs
> > + * @return Actual number of CPUs used to create the mask
> > + */
> > +int odp_cpumask_def_worker(odp_cpumask_t *mask, int num);
> > +
> > +/**
> > + * Default cpumask for control threads
> > + *
> > + * Creates cpumask based on starting count, actual value returned.
> > + *
> > + * @param[out] mask CPU mask to update
> > + * @param num Number of control threads, zero for all available CPUs
> > + * @return Actual number of CPUs used to create the mask
> > + */
> > +int odp_cpumask_def_control(odp_cpumask_t *mask, int num);
> > +
> > +/**
> > * @}
> > */
> >
> > diff --git a/platform/linux-generic/odp_cpumask.c b/platform/linux-generic/odp_cpumask.c
> > index a27e80c..aaf5df3 100644
> > --- a/platform/linux-generic/odp_cpumask.c
> > +++ b/platform/linux-generic/odp_cpumask.c
> > @@ -8,6 +8,7 @@
> > #define _GNU_SOURCE
> > #endif
> > #include <sched.h>
> > +#include <pthread.h>
> >
> > #include <odp/cpumask.h>
> > #include <odp_debug_internal.h>
> > @@ -204,3 +205,40 @@ int odp_cpumask_next(const odp_cpumask_t *mask, int cpu)
> > return cpu;
> > return -1;
> > }
> > +
> > +int odp_cpumask_def_worker(odp_cpumask_t *mask, int num)
> > +{
> > + int ret, cpu, i;
> > + cpu_set_t cpuset;
> > +
> > + ret = pthread_getaffinity_np(pthread_self(),
> > + sizeof(cpu_set_t), &cpuset);
> > + if (ret != 0)
> > + ODP_ABORT("failed to read CPU affinity value\n");
> > +
> > + odp_cpumask_zero(mask);
> > +
> > + /*
> > + * If no user supplied number or it's too large, then attempt
> > + * to use all CPUs
> > + */
> > + if (0 == num || CPU_SETSIZE < num)
> > + num = CPU_COUNT(&cpuset);
> > +
> > + /* build the mask, allocating down from highest numbered CPU */
> > + for (cpu = 0, i = CPU_SETSIZE - 1; i >= 0 && cpu < num; --i) {
> > + if (CPU_ISSET(i, &cpuset)) {
> > + odp_cpumask_set(mask, i);
> > + cpu++;
> > + }
> > + }
>
> This allocates all CPUs, but it should exclude the control CPUs
> (except when there's only one CPU?).
This is a copy-paste from the linux helper. We could update the
implementation to do that once this is integrated and proven to work
with existing apps.
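
One possible follow-up (untested sketch) would be to leave CPU 0 out of
the worker mask so it stays free for control threads, falling back to
CPU 0 only on a single-CPU system:

    /* Untested sketch: skip CPU 0 when building the worker mask */
    for (cpu = 0, i = CPU_SETSIZE - 1; i > 0 && cpu < num; --i) {
        if (CPU_ISSET(i, &cpuset)) {
            odp_cpumask_set(mask, i);
            cpu++;
        }
    }

    if (cpu == 0 && CPU_ISSET(0, &cpuset)) {
        /* Single CPU available: workers must share CPU 0 */
        odp_cpumask_set(mask, 0);
        cpu = 1;
    }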
-Petri
>
> > +
> > + return cpu;
> > +}
> > +
> > +int odp_cpumask_def_control(odp_cpumask_t *mask, int num ODP_UNUSED)
> > +{
> > + /* By default all control threads on CPU 0 */
> > + odp_cpumask_set(mask, 0);
> > + return 1;
> > +}
> > --
> > 2.4.2
> >
>
> --
> Stuart.
_______________________________________________
lng-odp mailing list
[email protected]
https://lists.linaro.org/mailman/listinfo/lng-odp