On 06-Aug 17:39, Patrick Bellasi wrote:

[...]

> +/**
> + * uclamp_cpu_get_id(): increase reference count for a clamp group on a CPU
> + * @p: the task being enqueued on a CPU
> + * @rq: the CPU's rq where the clamp group has to be reference counted
> + * @clamp_id: the utilization clamp (e.g. min or max utilization) to reference
> + *
> + * Once a task is enqueued on a CPU's RQ, the clamp group currently defined by
> + * the task's uclamp.group_id is reference counted on that CPU.
> + */
> +static inline void uclamp_cpu_get_id(struct task_struct *p,
> +                                  struct rq *rq, int clamp_id)
> +{
> +     struct uclamp_group *uc_grp;
> +     struct uclamp_cpu *uc_cpu;
> +     int clamp_value;
> +     int group_id;
> +
> +     /* No task specific clamp values: nothing to do */
> +     group_id = p->uclamp[clamp_id].group_id;
> +     if (group_id == UCLAMP_NOT_VALID)
> +             return;

This is broken for util_max aggregation.

By not refcounting tasks without a task-specific clamp value, we end
up enforcing a util_max on these tasks whenever they are co-scheduled
with a max-clamped task.

I need to fix this by removing this "optimization" (which works only
for util_min) and refcounting all tasks.
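To make the bug concrete, here is a hedged user-space sketch of the util_max
aggregation (the two-group layout, names and values are illustrative, not the
actual kernel code): if an unclamped task is not refcounted into a default
"no clamp" group, a co-scheduled max-clamped task's group is the only one
with runnable tasks, and its clamp wrongly caps the unclamped task.

```c
#include <stdio.h>

#define SCHED_CAPACITY_SCALE	1024
#define UCLAMP_GROUPS		2

/* Illustrative per-CPU clamp group, refcounted on enqueue/dequeue. */
struct uclamp_group {
	int tasks;	/* number of runnable tasks in this group */
	int value;	/* clamp value tracked by this group */
};

/*
 * CPU-wide util_max: the max clamp value among groups which currently
 * have runnable tasks. With no refcounted group, no clamp applies.
 */
static int uclamp_cpu_max(struct uclamp_group *grp, int ngroups)
{
	int max_value = -1;
	int i;

	for (i = 0; i < ngroups; i++) {
		if (!grp[i].tasks)
			continue;
		if (grp[i].value > max_value)
			max_value = grp[i].value;
	}

	return max_value < 0 ? SCHED_CAPACITY_SCALE : max_value;
}
```

With `grp[0]` as the default group (value `SCHED_CAPACITY_SCALE`) and `grp[1]`
a util_max=512 group: enqueuing only the clamped task (`grp[1].tasks = 1`)
yields a CPU-wide util_max of 512, capping any co-scheduled unclamped task;
also refcounting the unclamped task into `grp[0]` lifts it back to 1024,
i.e. no effective clamp.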

> +
> +     /* Reference count the task into its current group_id */
> +     uc_grp = &rq->uclamp.group[clamp_id][0];
> +     uc_grp[group_id].tasks += 1;
> +
> +     /*
> +      * If this is the new max utilization clamp value, then we can update
> +      * straight away the CPU clamp value. Otherwise, the current CPU clamp
> +      * value is still valid and we are done.
> +      */
> +     uc_cpu = &rq->uclamp;
> +     clamp_value = p->uclamp[clamp_id].value;
> +     if (uc_cpu->value[clamp_id] < clamp_value)
> +             uc_cpu->value[clamp_id] = clamp_value;
> +}
> +

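Relatedly, the "update straight away" shortcut in the quoted hunk only works
on the get/enqueue side, where the CPU clamp value can only grow. A hedged
sketch of the put side (illustrative names, not the actual patch) shows why
releasing a clamp group forces a scan of all groups to find the new max:

```c
#define SCHED_CAPACITY_SCALE	1024

/* Illustrative per-CPU clamp group, as in the quoted hunk. */
struct uclamp_group {
	int tasks;	/* number of runnable tasks in this group */
	int value;	/* clamp value tracked by this group */
};

/*
 * Sketch of the put path: drop one task from @group_id and recompute
 * the CPU clamp value as the max across all groups which still have
 * runnable tasks. Unlike the get path, the departing task may have
 * owned the current max, so a simple comparison is not enough.
 */
static int uclamp_cpu_put_id(struct uclamp_group *grp, int ngroups,
			     int group_id)
{
	int max_value = -1;
	int i;

	if (grp[group_id].tasks > 0)
		grp[group_id].tasks -= 1;

	for (i = 0; i < ngroups; i++) {
		if (!grp[i].tasks)
			continue;
		if (grp[i].value > max_value)
			max_value = grp[i].value;
	}

	/* No runnable task in any group: fall back to "no clamp". */
	return max_value < 0 ? SCHED_CAPACITY_SCALE : max_value;
}
```

For example, with two refcounted groups at values 1024 and 512, putting the
last task of the 1024 group drops the CPU-wide util_max to 512; putting the
last task of the 512 group then restores the no-clamp value.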
-- 
#include <best/regards.h>

Patrick Bellasi
