On 09/09/15 09:51, Alexandru Badicioiu wrote:


On 9 September 2015 at 09:29, Maxim Uvarov <[email protected]> wrote:

    On 09/08/15 16:04, Alexandru Badicioiu wrote:

        I agree, at least in my case some CPUs are assigned at boot
        time for dataplane work and cannot be changed.

        Alex


    Alex, how do you run validation tests then? Do you have your own
    variant of odp helper to create threads?

    Maxim.
    [Alex] Some scheduling tests are constantly failing due to the way
    threads are created and assigned to cores. We are planning to
    propose changes to

That sounds like we need an API to start workers, which would fix both your case and DPDK. Something like odp_task_start(enum type {CONTROL, WORKER}, int num_tasks, cpumask); and then we could get rid of the pthread launch helpers. As I remember, Zoltan wanted to make an API proposal.


Maxim.

    scheduling tests so that they take the worker-thread coremasks and
    scheduling groups into account.


        On 8 September 2015 at 15:57, Savolainen, Petri (Nokia -
        FI/Espoo) <[email protected]> wrote:

            I'm wondering what the use case is for asking only the
            default number of CPUs suitable for worker threads
            (without asking the CPU IDs).

            I think the application should ask for the mask (and
            number of CPUs) once and store the information for later
            use.


            -Petri


            > -----Original Message-----
            > From: lng-odp [mailto:[email protected]] On Behalf Of
            > ext Maxim Uvarov
            > Sent: Tuesday, September 08, 2015 1:31 PM
            > To: [email protected]
        <mailto:[email protected]>
        <mailto:[email protected]
        <mailto:[email protected]>>
            > Subject: [lng-odp] [API-NEXT PATCH 3/4] api: odp_cpumask_default_ mask
            > argument can be null
            >
            > Functions odp_cpumask_default_worker and odp_cpumask_default_control
            > can be used to calculate the number of worker and control threads. In
            > that case the mask parameter is optional.
            >
            > Signed-off-by: Maxim Uvarov <[email protected]>

            > ---
            >  include/odp/api/cpumask.h  |  2 +-
            >  platform/linux-generic/odp_cpumask_task.c | 14 +++++++++-----
            >  2 files changed, 10 insertions(+), 6 deletions(-)
            >
            > diff --git a/include/odp/api/cpumask.h
        b/include/odp/api/cpumask.h
            > index 4835a6c..633e106 100644
            > --- a/include/odp/api/cpumask.h
            > +++ b/include/odp/api/cpumask.h
            > @@ -199,7 +199,7 @@ int odp_cpumask_next(const odp_cpumask_t *mask, int cpu);
            >   * Initializes cpumask with CPUs available for worker threads. Sets up to 'num'
            >   * CPUs and returns the count actually set. Use zero for all available CPUs.
            >   *
            > - * @param[out] mask      CPU mask to initialize
            > + * @param[out] mask      CPU mask to initialize or NULL.
            >   * @param      num       Number of worker threads, zero for all available CPUs
            >   * @return Actual number of CPUs used to create the mask
            >   */
            > diff --git a/platform/linux-generic/odp_cpumask_task.c b/platform/linux-generic/odp_cpumask_task.c
            > index 535891c..f8e4da4 100644
            > --- a/platform/linux-generic/odp_cpumask_task.c
            > +++ b/platform/linux-generic/odp_cpumask_task.c
            > @@ -23,7 +23,8 @@ int odp_cpumask_default_worker(odp_cpumask_t *mask, int num)
            >       if (ret != 0)
            >               ODP_ABORT("failed to read CPU affinity value\n");
            >
            > -     odp_cpumask_zero(mask);
            > +     if (mask)
            > +             odp_cpumask_zero(mask);
            >
            >       /*
            >        * If no user supplied number or it's too large, then attempt
            > @@ -35,7 +36,8 @@ int odp_cpumask_default_worker(odp_cpumask_t *mask, int num)
            >       /* build the mask, allocating down from highest numbered CPU */
            >       for (cpu = 0, i = CPU_SETSIZE - 1; i >= 0 && cpu < num; --i) {
            >               if (CPU_ISSET(i, &cpuset)) {
            > -                     odp_cpumask_set(mask, i);
            > +                     if (mask)
            > +                             odp_cpumask_set(mask, i);
            >                       cpu++;
            >               }
            >       }
            > @@ -45,8 +47,10 @@ int odp_cpumask_default_worker(odp_cpumask_t *mask, int num)
            >
            >  int odp_cpumask_default_control(odp_cpumask_t *mask, int num ODP_UNUSED)
            >  {
            > -     odp_cpumask_zero(mask);
            > -     /* By default all control threads on CPU 0 */
            > -     odp_cpumask_set(mask, 0);
            > +     if (mask) {
            > +             odp_cpumask_zero(mask);
            > +             /* By default all control threads on CPU 0 */
            > +             odp_cpumask_set(mask, 0);
            > +     }
            >       return 1;
            >  }
            > --
            > 1.9.1
            >
            > _______________________________________________
            > lng-odp mailing list
            > [email protected]
        <mailto:[email protected]>
        <mailto:[email protected]
        <mailto:[email protected]>>
            > https://lists.linaro.org/mailman/listinfo/lng-odp




