> +     task_lock(tsk);
>       cs = tsk->cpuset;
>       tsk->cpuset = &top_cpuset;      /* the_top_cpuset_hack - see above */
> +     atomic_dec(&cs->count);
> +     task_unlock(tsk);
>  
>       if (notify_on_release(cs)) {
>               char *pathbuf = NULL;
>  
>               mutex_lock(&manage_mutex);
> -             if (atomic_dec_and_test(&cs->count))
> +             if (!atomic_read(&cs->count))
>                       check_for_release(cs, &pathbuf);

Is there perhaps another race here?  Could it happen that:
 1) the atomic_dec() lowers the count to, say, one (any value > zero)
 2) after we drop the task lock, some other task or tasks decrement
    the count to zero
 3) we then catch that zero when we atomic_read() the count, and issue
    a spurious check_for_release()?

I'm thinking that we should use the same oldcs_tobe_released logic
here as we used in attach_task: do the atomic_dec_and_test() inside
the task lock, and if that hits zero, then we know our pointer holds
the last remaining reference to this cpuset.  We can then release it
at our convenience, knowing no one else can reference or mess with
that cpuset any more.

-- 
                  I won't rest till it's the best ...
                  Programmer, Linux Scalability
                  Paul Jackson <[EMAIL PROTECTED]> 1.925.600.0401