> OK...  So here's what we really want:
>       * we know that nobody will set cpu_writer->mnt to mnt from now on
>       * all changes to that sucker are done under cpu_writer->lock
>       * we want the laziest equivalent of
>               spin_lock(&cpu_writer->lock);
>               if (likely(cpu_writer->mnt != mnt)) {
>                       spin_unlock(&cpu_writer->lock);
>                       continue;
>               }
>               /* do stuff */
> that would make sure we won't miss earlier setting of ->mnt done by another
> CPU.
> 

Once this is done, I'll be glad to test it.

> Anyway, for now (HEAD and all -stable starting with 2.6.26) we want this:
> 

And here is my tag:

Tested-by: Li Zefan <l...@cn.fujitsu.com>

> --- fs/namespace.c    2009-01-25 21:45:31.000000000 -0500
> +++ fs/namespace.c    2009-02-15 21:31:14.000000000 -0500
> @@ -614,9 +614,11 @@
>        */
>       for_each_possible_cpu(cpu) {
>               struct mnt_writer *cpu_writer = &per_cpu(mnt_writers, cpu);
> -             if (cpu_writer->mnt != mnt)
> -                     continue;
>               spin_lock(&cpu_writer->lock);
> +             if (cpu_writer->mnt != mnt) {
> +                     spin_unlock(&cpu_writer->lock);
> +                     continue;
> +             }
>               atomic_add(cpu_writer->count, &mnt->__mnt_writers);
>               cpu_writer->count = 0;
>               /*

_______________________________________________
Containers mailing list
contain...@lists.linux-foundation.org
https://lists.linux-foundation.org/mailman/listinfo/containers

