Re: [PATCH 04/10] mm, page_alloc: Remove unnecessary taking of a seqlock when cpusets are disabled
On Wed, Aug 12, 2015 at 05:16:50PM -0700, David Rientjes wrote:
> On Wed, 12 Aug 2015, Mel Gorman wrote:
>
> > There is a seqcounter that protects against spurious allocation failures
> > when a task is changing the allowed nodes in a cpuset. There is no need
> > to check the seqcounter until a cpuset exists.
> >
> > Signed-off-by: Mel Gorman <mgor...@techsingularity.net>
> > Acked-by: David Rientjes <rient...@google.com>
> > Acked-by: Vlastimil Babka <vba...@suse.cz>
> > ---
> >  include/linux/cpuset.h | 6 ++++++
> >  1 file changed, 6 insertions(+)
> >
> > diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
> > index 1b357997cac5..6eb27cb480b7 100644
> > --- a/include/linux/cpuset.h
> > +++ b/include/linux/cpuset.h
> > @@ -104,6 +104,9 @@ extern void cpuset_print_task_mems_allowed(struct task_struct *p);
> >   */
> >  static inline unsigned int read_mems_allowed_begin(void)
> >  {
> > +	if (!cpusets_enabled())
> > +		return 0;
> > +
> >  	return read_seqcount_begin(&current->mems_allowed_seq);
> >  }
> >
> > @@ -115,6 +118,9 @@ static inline unsigned int read_mems_allowed_begin(void)
> >   */
> >  static inline bool read_mems_allowed_retry(unsigned int seq)
> >  {
> > +	if (!cpusets_enabled())
> > +		return false;
> > +
> >  	return read_seqcount_retry(&current->mems_allowed_seq, seq);
> >  }
> >
>
> This patch is an obvious improvement, but I think it's also possible to
> change this to be
>
> 	if (nr_cpusets() <= 1)
> 		return false;
>
> and likewise in the existing cpusets_enabled() check in
> get_page_from_freelist(). A root cpuset may not exclude mems on the
> system so, even if mounted, there's no need to check or be worried about
> concurrent change when there is only one cpuset.

Good idea. I'll make this a separate patch on top and rename
cpuset_enabled to cpuset_mems_enabled to be clear about what it's
checking. Thanks.
--
Mel Gorman
SUSE Labs
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Re: [PATCH 04/10] mm, page_alloc: Remove unnecessary taking of a seqlock when cpusets are disabled
On Wed, 12 Aug 2015, Mel Gorman wrote:

> There is a seqcounter that protects against spurious allocation failures
> when a task is changing the allowed nodes in a cpuset. There is no need
> to check the seqcounter until a cpuset exists.
>
> Signed-off-by: Mel Gorman <mgor...@techsingularity.net>
> Acked-by: David Rientjes <rient...@google.com>
> Acked-by: Vlastimil Babka <vba...@suse.cz>
> ---
>  include/linux/cpuset.h | 6 ++++++
>  1 file changed, 6 insertions(+)
>
> diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
> index 1b357997cac5..6eb27cb480b7 100644
> --- a/include/linux/cpuset.h
> +++ b/include/linux/cpuset.h
> @@ -104,6 +104,9 @@ extern void cpuset_print_task_mems_allowed(struct task_struct *p);
>   */
>  static inline unsigned int read_mems_allowed_begin(void)
>  {
> +	if (!cpusets_enabled())
> +		return 0;
> +
>  	return read_seqcount_begin(&current->mems_allowed_seq);
>  }
>
> @@ -115,6 +118,9 @@ static inline unsigned int read_mems_allowed_begin(void)
>   */
>  static inline bool read_mems_allowed_retry(unsigned int seq)
>  {
> +	if (!cpusets_enabled())
> +		return false;
> +
>  	return read_seqcount_retry(&current->mems_allowed_seq, seq);
>  }
>

This patch is an obvious improvement, but I think it's also possible to
change this to be

	if (nr_cpusets() <= 1)
		return false;

and likewise in the existing cpusets_enabled() check in
get_page_from_freelist(). A root cpuset may not exclude mems on the
system so, even if mounted, there's no need to check or be worried about
concurrent change when there is only one cpuset.
[PATCH 04/10] mm, page_alloc: Remove unnecessary taking of a seqlock when cpusets are disabled
There is a seqcounter that protects against spurious allocation failures
when a task is changing the allowed nodes in a cpuset. There is no need
to check the seqcounter until a cpuset exists.

Signed-off-by: Mel Gorman <mgor...@techsingularity.net>
Acked-by: David Rientjes <rient...@google.com>
Acked-by: Vlastimil Babka <vba...@suse.cz>
---
 include/linux/cpuset.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
index 1b357997cac5..6eb27cb480b7 100644
--- a/include/linux/cpuset.h
+++ b/include/linux/cpuset.h
@@ -104,6 +104,9 @@ extern void cpuset_print_task_mems_allowed(struct task_struct *p);
  */
 static inline unsigned int read_mems_allowed_begin(void)
 {
+	if (!cpusets_enabled())
+		return 0;
+
 	return read_seqcount_begin(&current->mems_allowed_seq);
 }

@@ -115,6 +118,9 @@ static inline unsigned int read_mems_allowed_begin(void)
  */
 static inline bool read_mems_allowed_retry(unsigned int seq)
 {
+	if (!cpusets_enabled())
+		return false;
+
 	return read_seqcount_retry(&current->mems_allowed_seq, seq);
 }
--
2.4.6
Re: [PATCH 04/10] mm, page_alloc: Remove unnecessary taking of a seqlock when cpusets are disabled
On 07/20/2015 10:00 AM, Mel Gorman wrote:
> From: Mel Gorman <mgor...@suse.de>
>
> There is a seqcounter that protects spurious allocation fails when a task
> is changing the allowed nodes in a cpuset. There is no need to check the
> seqcounter until a cpuset exists.

If cpusets become enabled between _begin and _retry, then it will retry
due to comparing with 0, but not crash, so it's safe.

> Signed-off-by: Mel Gorman <mgor...@sujse.de>

Acked-by: Vlastimil Babka <vba...@suse.cz>

> ---
>  include/linux/cpuset.h | 6 ++++++
>  1 file changed, 6 insertions(+)
>
> diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
> index 1b357997cac5..6eb27cb480b7 100644
> --- a/include/linux/cpuset.h
> +++ b/include/linux/cpuset.h
> @@ -104,6 +104,9 @@ extern void cpuset_print_task_mems_allowed(struct task_struct *p);
>   */
>  static inline unsigned int read_mems_allowed_begin(void)
>  {
> +	if (!cpusets_enabled())
> +		return 0;
> +
>  	return read_seqcount_begin(&current->mems_allowed_seq);
>  }
>
> @@ -115,6 +118,9 @@ static inline unsigned int read_mems_allowed_begin(void)
>   */
>  static inline bool read_mems_allowed_retry(unsigned int seq)
>  {
> +	if (!cpusets_enabled())
> +		return false;
> +
>  	return read_seqcount_retry(&current->mems_allowed_seq, seq);
>  }
Re: [PATCH 04/10] mm, page_alloc: Remove unnecessary taking of a seqlock when cpusets are disabled
On Mon, 20 Jul 2015, Mel Gorman wrote:

> From: Mel Gorman <mgor...@suse.de>
>
> There is a seqcounter that protects spurious allocation fails when a task
> is changing the allowed nodes in a cpuset. There is no need to check the
> seqcounter until a cpuset exists.
>
> Signed-off-by: Mel Gorman <mgor...@sujse.de>

Acked-by: David Rientjes <rient...@google.com>

but there's a typo in your email address in the signed-off-by line.
Nice to know you actually type them by hand though :)
[PATCH 04/10] mm, page_alloc: Remove unnecessary taking of a seqlock when cpusets are disabled
From: Mel Gorman <mgor...@suse.de>

There is a seqcounter that protects spurious allocation fails when a task
is changing the allowed nodes in a cpuset. There is no need to check the
seqcounter until a cpuset exists.

Signed-off-by: Mel Gorman <mgor...@sujse.de>
---
 include/linux/cpuset.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
index 1b357997cac5..6eb27cb480b7 100644
--- a/include/linux/cpuset.h
+++ b/include/linux/cpuset.h
@@ -104,6 +104,9 @@ extern void cpuset_print_task_mems_allowed(struct task_struct *p);
  */
 static inline unsigned int read_mems_allowed_begin(void)
 {
+	if (!cpusets_enabled())
+		return 0;
+
 	return read_seqcount_begin(&current->mems_allowed_seq);
 }

@@ -115,6 +118,9 @@ static inline unsigned int read_mems_allowed_begin(void)
  */
 static inline bool read_mems_allowed_retry(unsigned int seq)
 {
+	if (!cpusets_enabled())
+		return false;
+
 	return read_seqcount_retry(&current->mems_allowed_seq, seq);
 }
--
2.4.3