On 03/16, Tejun Heo wrote:
>
> > --- x/kernel/kthread.c
> > +++ x/kernel/kthread.c
> > @@ -226,6 +226,7 @@
> >     ret = -EINTR;
> >     if (!test_bit(KTHREAD_SHOULD_STOP, &self->flags)) {
> >             __kthread_parkme(self);
> > +           current->flags &= ~PF_IDONTLIKECGROUPS;
> >             ret = threadfn(data);
> >     }
> >     do_exit(ret);
> > @@ -537,7 +538,7 @@
> >     set_cpus_allowed_ptr(tsk, cpu_all_mask);
> >     set_mems_allowed(node_states[N_MEMORY]);
> >
> > -   current->flags |= PF_NOFREEZE;
> > +   current->flags |= (PF_NOFREEZE | PF_IDONTLIKECGROUPS);
> >
> >     for (;;) {
> >             set_current_state(TASK_INTERRUPTIBLE);
> > --- x/kernel/cgroup/cgroup.c
> > +++ x/kernel/cgroup/cgroup.c
> > @@ -2429,7 +2429,7 @@
> >      * trapped in a cpuset, or RT worker may be born in a cgroup
> >      * with no rt_runtime allocated.  Just say no.
> >      */
> > -   if (tsk == kthreadd_task || (tsk->flags & PF_NO_SETAFFINITY)) {
> > +   if (tsk->flags & (PF_NO_SETAFFINITY | PF_IDONTLIKECGROUPS)) {
> >             ret = -EINVAL;
> >             goto out_unlock_rcu;
> >     }
>
> Absolutely.  If we're willing to spend a PF flag on it, we can
> properly wait for it too instead of failing it.

Or we can add another "unsigned no_cgroups:1" bit to task_struct
instead of spending a PF_ flag, I am not sure which is better.
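
Just to illustrate what I mean, a completely untested sketch (the
field placement and the exact spots simply mirror the PF_ patch
quoted above, everything else is arbitrary):

	/* include/linux/sched.h */
	struct task_struct {
		...
		/* kernel thread which must never be moved into a cgroup */
		unsigned			no_cgroups:1;
		...
	};

	/* kernel/kthread.c:kthreadd(), next to PF_NOFREEZE */
	current->no_cgroups = 1;

	/* kernel/kthread.c:kthread(), before threadfn(); the bit is
	 * inherited from kthreadd on fork, so clear it for normal
	 * kthreads, as the PF_ variant does */
	current->no_cgroups = 0;

	/* kernel/cgroup/cgroup.c:__cgroup_procs_write() */
	if (tsk->no_cgroups) {
		ret = -EINVAL;
		goto out_unlock_rcu;
	}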

Anyway, I do not understand why __cgroup_procs_write() checks
PF_NO_SETAFFINITY at all. task_can_attach() checks it too, so a
cpuset migration can't change the affinity of such a task anyway.
IMO something explicit like no_cgroups makes more sense.
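
For context, this is the check I mean, in task_can_attach() in
kernel/sched/core.c (quoted from memory, the surrounding code may
differ in the current tree):

	if (p->flags & PF_NO_SETAFFINITY) {
		ret = -EINVAL;
		goto out;
	}

task_can_attach() is called from cpuset's ->can_attach(), so if I
read the code correctly a cpuset migration of such a task is
rejected there anyway.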

Oleg.
