* Thomas Gleixner <[email protected]> wrote:

> On Thu, 6 Apr 2017, Ingo Molnar wrote:
> > CPU hotplug and changing the affinity mask are the more complex cases, 
> > because there migrating or not migrating is a correctness issue:
> > 
> >  - CPU hotplug has to be aware of this anyway, regardless of whether it's 
> >    solved via a counter or via the affinity mask.
> 
> You have to prevent CPU hotplug simply as long as there are 
> migration-disabled tasks in flight. Making that depend on whether they are 
> on a CPU which is about to be unplugged or not would be complete overkill, 
> as you still have to solve the case where a task calls migrate_disable() 
> AFTER the CPU down machinery has started.
>
> [...]
>
> The counter alone might be enough for the scheduler placement decisions, 
> but it cannot solve the hotplug issue. You still need something like what I 
> sketched out in my previous reply.

Yes, so what you outlined:

void migrate_disable(void)
{
        /* Atomic and irq-disabled contexts cannot migrate anyway: */
        if (in_atomic() || irqs_disabled())
                return;

        if (!current->migration_disabled) {
                /*
                 * Outermost level: take the hotplug rwsem for reading,
                 * which blocks CPU unplug for as long as any task runs
                 * in a migration-disabled section:
                 */
                percpu_down_read_preempt_disable(&hotplug_rwsem);
                current->migration_disabled++;
                preempt_enable();
        } else {
                /* Nested call: just bump the counter: */
                current->migration_disabled++;
        }
}

Would solve it?
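
Presumably the matching migrate_enable() would just drop the rwsem at the 
outermost level. A quick sketch under the same assumptions as above (the 
hotplug_rwsem and the ->migration_disabled counter), untested:

void migrate_enable(void)
{
        /* Mirror the early return in migrate_disable(): */
        if (in_atomic() || irqs_disabled())
                return;

        if (WARN_ON_ONCE(!current->migration_disabled))
                return;

        /* Outermost level: let CPU hotplug proceed again: */
        if (!--current->migration_disabled)
                percpu_up_read(&hotplug_rwsem);
}

Percpu rwsems sum the per-CPU reader counts on the write side, so the 
up_read() does not have to happen on the CPU that did the down_read().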

I.e. my point is: whether migrate_disable()/enable() is implemented via a 
counter or via a pointer to a cpumask does not materially change what the 
CPU-hotplug solution looks like, right?
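
And on the hotplug side the write lock serializes against all 
migration-disabled sections identically in both variants. A rough sketch, 
where the function name and do_cpu_down() are made up and merely stand in 
for the real down machinery:

/*
 * percpu_down_write() waits until every task has left its
 * migration-disabled section, and a task entering migrate_disable()
 * after this point sleeps in percpu_down_read() until the hotplug
 * operation has completed:
 */
static int cpu_down_with_migrate_barrier(unsigned int cpu)
{
        int ret;

        percpu_down_write(&hotplug_rwsem);
        ret = do_cpu_down(cpu);
        percpu_up_write(&hotplug_rwsem);

        return ret;
}

That also covers the "task calls migrate_disable() after the down machinery 
has started" case naturally.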

I.e. we could just use the counter and avoid the whole complexity of 
wrapping the cpumask.
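
The placement side would then reduce to a trivial counter check, something 
like (helper name made up):

static inline bool task_migration_pinned(struct task_struct *p)
{
        /* Non-zero counter: keep the task on its current CPU: */
        return p->migration_disabled != 0;
}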

Thanks,

        Ingo
