On Wed, 12 Aug 2015, Mel Gorman wrote:

> There is a seqcounter that protects against spurious allocation failures
> when a task is changing the allowed nodes in a cpuset. There is no need
> to check the seqcounter until a cpuset exists.
> 
> Signed-off-by: Mel Gorman <mgor...@techsingularity.net>
> Acked-by: David Rientjes <rient...@google.com>
> Acked-by: Vlastimil Babka <vba...@suse.cz>
> ---
>  include/linux/cpuset.h | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
> index 1b357997cac5..6eb27cb480b7 100644
> --- a/include/linux/cpuset.h
> +++ b/include/linux/cpuset.h
> @@ -104,6 +104,9 @@ extern void cpuset_print_task_mems_allowed(struct task_struct *p);
>   */
>  static inline unsigned int read_mems_allowed_begin(void)
>  {
> +     if (!cpusets_enabled())
> +             return 0;
> +
>       return read_seqcount_begin(&current->mems_allowed_seq);
>  }
>  
> @@ -115,6 +118,9 @@ static inline unsigned int read_mems_allowed_begin(void)
>   */
>  static inline bool read_mems_allowed_retry(unsigned int seq)
>  {
> +     if (!cpusets_enabled())
> +             return false;
> +
>       return read_seqcount_retry(&current->mems_allowed_seq, seq);
>  }
>  

This patch is an obvious improvement, but I think it's also possible to 
change this to be

        if (nr_cpusets() <= 1)
                return false;

and likewise for the existing cpusets_enabled() check in 
get_page_from_freelist().  A root cpuset cannot exclude mems on the 
system, so even if cpusets are mounted there's no need to check for, or 
worry about, a concurrent change while only the root cpuset exists.