Re: [RESEND 5.1-stable PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

2019-06-20 Thread Greg KH
On Tue, Jun 18, 2019 at 04:57:17AM +0800, Yang Shi wrote: > commit 7a30df49f63ad92318ddf1f7498d1129a77dd4bd upstream Thanks for the backport, now queued up. greg k-h

[RESEND 5.1-stable PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

2019-06-17 Thread Yang Shi
commit 7a30df49f63ad92318ddf1f7498d1129a77dd4bd upstream A few new fields were added to mmu_gather to make TLB flush smarter for huge pages by telling what level of page table has changed. __tlb_reset_range() is used to reset all of this page table state to unchanged, and is called by TLB flush
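For readers scanning the index: the merged fix (backported here) keeps the force flush when parallel PTE batching is detected, but promotes it to a full-mm flush instead of rebuilding the range, since __tlb_reset_range() also discards the freed_tables and cleared_* hints. A rough sketch of the resulting tlb_finish_mmu() logic, abbreviated from the upstream commit; see 7a30df49f63a for the authoritative code and comment text:

	void tlb_finish_mmu(struct mmu_gather *tlb,
			unsigned long start, unsigned long end)
	{
		/*
		 * Parallel PTE changes on the same range under the
		 * read-side mmap_sem defer their TLB flushes by batching,
		 * so a thread can observe inconsistent PTEs and be left
		 * with stale TLB entries unless we flush forcefully.
		 */
		if (mm_tlb_flush_nested(tlb->mm)) {
			/*
			 * A full-mm flush also covers freed page tables,
			 * which a rebuilt range with cleared hints would
			 * miss on architectures such as arm64.
			 */
			tlb->fullmm = 1;
			__tlb_reset_range(tlb);
			tlb->freed_tables = 1;
		}

		tlb_flush_mmu(tlb);
		/* batch freeing and dec_tlb_flush_pending() elided */
	}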

Re: [5.1-stable PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

2019-06-17 Thread Yang Shi
This patch is wrong; please disregard it. The corrected one will be posted soon. Sorry for the inconvenience. Yang On 6/17/19 1:46 PM, Yang Shi wrote: A few new fields were added to mmu_gather to make TLB flush smarter for huge pages by telling what level of page table has changed.

[5.1-stable PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

2019-06-17 Thread Yang Shi
A few new fields were added to mmu_gather to make TLB flush smarter for huge pages by telling what level of page table has changed. __tlb_reset_range() is used to reset all of this page table state to unchanged, and is called by TLB flush for parallel mapping changes for the same range under

Re: [v3 PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

2019-05-21 Thread Yang Shi
On 5/22/19 7:18 AM, Andrew Morton wrote: On Mon, 20 May 2019 11:17:32 +0800 Yang Shi wrote: A few new fields were added to mmu_gather to make TLB flush smarter for huge pages by telling what level of page table has changed. __tlb_reset_range() is used to reset all of this page table state to

Re: [v3 PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

2019-05-21 Thread Andrew Morton
On Mon, 20 May 2019 11:17:32 +0800 Yang Shi wrote: > A few new fields were added to mmu_gather to make TLB flush smarter for > huge pages by telling what level of page table has changed. > > __tlb_reset_range() is used to reset all of this page table state to > unchanged, and is called by TLB

[v3 PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

2019-05-19 Thread Yang Shi
A few new fields were added to mmu_gather to make TLB flush smarter for huge pages by telling what level of page table has changed. __tlb_reset_range() is used to reset all of this page table state to unchanged, and is called by TLB flush for parallel mapping changes for the same range under

Re: [v2 PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

2019-05-19 Thread Yang Shi
On 5/16/19 11:29 PM, Jan Stancek wrote: - Original Message - On Mon, May 13, 2019 at 04:01:09PM -0700, Yang Shi wrote: On 5/13/19 9:38 AM, Will Deacon wrote: On Fri, May 10, 2019 at 07:26:54AM +0800, Yang Shi wrote: diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c index

Re: [v2 PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

2019-05-16 Thread Jan Stancek
- Original Message - > On Mon, May 13, 2019 at 04:01:09PM -0700, Yang Shi wrote: > > > > > > On 5/13/19 9:38 AM, Will Deacon wrote: > > > On Fri, May 10, 2019 at 07:26:54AM +0800, Yang Shi wrote: > > > > diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c > > > > index 99740e1..469492d

Re: [v2 PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

2019-05-14 Thread Yang Shi
On 5/14/19 7:54 AM, Will Deacon wrote: On Mon, May 13, 2019 at 04:01:09PM -0700, Yang Shi wrote: On 5/13/19 9:38 AM, Will Deacon wrote: On Fri, May 10, 2019 at 07:26:54AM +0800, Yang Shi wrote: diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c index 99740e1..469492d 100644 ---

Re: [v2 PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

2019-05-14 Thread Will Deacon
On Mon, May 13, 2019 at 04:01:09PM -0700, Yang Shi wrote: > > > On 5/13/19 9:38 AM, Will Deacon wrote: > > On Fri, May 10, 2019 at 07:26:54AM +0800, Yang Shi wrote: > > > diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c > > > index 99740e1..469492d 100644 > > > --- a/mm/mmu_gather.c > > > +++

Re: [v2 PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

2019-05-14 Thread Will Deacon
On Tue, May 14, 2019 at 01:52:23PM +0200, Peter Zijlstra wrote: > On Mon, May 13, 2019 at 05:38:04PM +0100, Will Deacon wrote: > > On Fri, May 10, 2019 at 07:26:54AM +0800, Yang Shi wrote: > > > diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c > > > index 99740e1..469492d 100644 > > > ---

Re: [v2 PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

2019-05-14 Thread Peter Zijlstra
On Mon, May 13, 2019 at 05:38:04PM +0100, Will Deacon wrote: > On Fri, May 10, 2019 at 07:26:54AM +0800, Yang Shi wrote: > > diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c > > index 99740e1..469492d 100644 > > --- a/mm/mmu_gather.c > > +++ b/mm/mmu_gather.c > > @@ -245,14 +245,39 @@ void

Re: [v2 PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

2019-05-14 Thread Peter Zijlstra
On Tue, May 14, 2019 at 07:21:33AM +0000, Nadav Amit wrote: > > On May 14, 2019, at 12:15 AM, Jan Stancek wrote: > > Replacing fullmm with need_flush_all brings the problem back / reproducer > > hangs. > > Maybe setting need_flush_all does not have the right effect, but setting > fullmm and

Re: [v2 PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

2019-05-14 Thread Peter Zijlstra
On Tue, May 14, 2019 at 02:01:34AM +0000, Nadav Amit wrote: > > diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c > > index 99740e1dd273..cc251422d307 100644 > > --- a/mm/mmu_gather.c > > +++ b/mm/mmu_gather.c > > @@ -251,8 +251,9 @@ void tlb_finish_mmu(struct mmu_gather *tlb, > > *

Re: [PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

2019-05-14 Thread Mel Gorman
On Mon, May 13, 2019 at 05:06:03PM +0000, Nadav Amit wrote: > > On May 13, 2019, at 9:37 AM, Will Deacon wrote: > > > > On Mon, May 13, 2019 at 09:11:38AM +0000, Nadav Amit wrote: > >>> On May 13, 2019, at 1:36 AM, Peter Zijlstra wrote: > >>> > >>> On Thu, May 09, 2019 at 09:21:35PM +0000,

Re: [v2 PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

2019-05-14 Thread Nadav Amit
> On May 14, 2019, at 12:15 AM, Jan Stancek wrote: > > > - Original Message - >> On May 13, 2019 4:01 PM, Yang Shi wrote: >> >> >> On 5/13/19 9:38 AM, Will Deacon wrote: >>> On Fri, May 10, 2019 at 07:26:54AM +0800, Yang Shi wrote: diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c

Re: [v2 PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

2019-05-14 Thread Jan Stancek
- Original Message - > > > On May 13, 2019 4:01 PM, Yang Shi wrote: > > > On 5/13/19 9:38 AM, Will Deacon wrote: > > On Fri, May 10, 2019 at 07:26:54AM +0800, Yang Shi wrote: > >> diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c > >> index 99740e1..469492d 100644 > >> ---

Re: [v2 PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

2019-05-13 Thread Yang Shi
On 5/13/19 9:38 AM, Will Deacon wrote: On Fri, May 10, 2019 at 07:26:54AM +0800, Yang Shi wrote: diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c index 99740e1..469492d 100644 --- a/mm/mmu_gather.c +++ b/mm/mmu_gather.c @@ -245,14 +245,39 @@ void tlb_finish_mmu(struct mmu_gather *tlb, {

Re: [PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

2019-05-13 Thread Nadav Amit
> On May 13, 2019, at 4:27 AM, Peter Zijlstra wrote: > > On Mon, May 13, 2019 at 09:21:01AM +0000, Nadav Amit wrote: >>> On May 13, 2019, at 2:12 AM, Peter Zijlstra wrote: > The other thing I was thinking of is trying to detect overlap through the page-tables themselves, but we have

Re: [PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

2019-05-13 Thread Nadav Amit
> On May 13, 2019, at 9:37 AM, Will Deacon wrote: > > On Mon, May 13, 2019 at 09:11:38AM +0000, Nadav Amit wrote: >>> On May 13, 2019, at 1:36 AM, Peter Zijlstra wrote: >>> >>> On Thu, May 09, 2019 at 09:21:35PM +0000, Nadav Amit wrote: >>> >>> And we can fix that by having

Re: [PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

2019-05-13 Thread Will Deacon
On Mon, May 13, 2019 at 09:11:38AM +0000, Nadav Amit wrote: > > On May 13, 2019, at 1:36 AM, Peter Zijlstra wrote: > > > > On Thu, May 09, 2019 at 09:21:35PM +0000, Nadav Amit wrote: > > > > And we can fix that by having tlb_finish_mmu() sync up. Never let a > > concurrent

Re: [v2 PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

2019-05-13 Thread Will Deacon
On Fri, May 10, 2019 at 07:26:54AM +0800, Yang Shi wrote: > diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c > index 99740e1..469492d 100644 > --- a/mm/mmu_gather.c > +++ b/mm/mmu_gather.c > @@ -245,14 +245,39 @@ void tlb_finish_mmu(struct mmu_gather *tlb, > { > /* >* If there are

Re: [PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

2019-05-13 Thread Peter Zijlstra
On Mon, May 13, 2019 at 09:11:38AM +0000, Nadav Amit wrote: > BTW: sometimes you don’t see the effect of these full TLB flushes as much in > VMs. I encountered a strange phenomenon at the time - INVLPG for an > arbitrary page caused my Haswell machine to flush the entire TLB, when the > INVLPG was

Re: [PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

2019-05-13 Thread Peter Zijlstra
On Mon, May 13, 2019 at 09:21:01AM +0000, Nadav Amit wrote: > > On May 13, 2019, at 2:12 AM, Peter Zijlstra wrote: > >> The other thing I was thinking of is trying to detect overlap through > >> the page-tables themselves, but we have a distinct lack of storage > >> there. > > > > We might just

Re: [PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

2019-05-13 Thread Nadav Amit
> On May 13, 2019, at 2:12 AM, Peter Zijlstra wrote: > > On Mon, May 13, 2019 at 10:36:06AM +0200, Peter Zijlstra wrote: >> On Thu, May 09, 2019 at 09:21:35PM +0000, Nadav Amit wrote: >>> It may be possible to avoid false-positive nesting indications (when the >>> flushes do not overlap) by

Re: [PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

2019-05-13 Thread Peter Zijlstra
On Mon, May 13, 2019 at 10:36:06AM +0200, Peter Zijlstra wrote: > On Thu, May 09, 2019 at 09:21:35PM +0000, Nadav Amit wrote: > > It may be possible to avoid false-positive nesting indications (when the > > flushes do not overlap) by creating a new struct mmu_gather_pending, with > > something

Re: [PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

2019-05-13 Thread Nadav Amit
> On May 13, 2019, at 1:36 AM, Peter Zijlstra wrote: > > On Thu, May 09, 2019 at 09:21:35PM +0000, Nadav Amit wrote: > > And we can fix that by having tlb_finish_mmu() sync up. Never let a > concurrent tlb_finish_mmu() complete until all concurrent mmu_gathers > have completed.

Re: [PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

2019-05-13 Thread Peter Zijlstra
On Thu, May 09, 2019 at 09:21:35PM +0000, Nadav Amit wrote: > >>> And we can fix that by having tlb_finish_mmu() sync up. Never let a > >>> concurrent tlb_finish_mmu() complete until all concurrent mmu_gathers > >>> have completed. > >>> > >>> This should not be too hard to make happen. > >> >
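The "sync up" idea quoted here was never turned into a posted patch; purely as an illustration, a hypothetical sketch built on the existing mm->tlb_flush_pending counter (the helper name and placement are invented, not from any message in this thread):

	/*
	 * Hypothetical illustration only, not a real kernel API:
	 * tlb_gather_mmu() already increments mm->tlb_flush_pending,
	 * so a finisher could drop its own count and then wait for
	 * every concurrent mmu_gather to drain before returning.
	 */
	static void tlb_finish_sync(struct mm_struct *mm)
	{
		/* drop our own count first so peers can make progress */
		atomic_dec(&mm->tlb_flush_pending);

		/* wait for all other in-flight gathers to finish */
		while (atomic_read(&mm->tlb_flush_pending))
			cpu_relax();
	}

A real implementation would need to avoid busy-waiting with mmap_sem held; the thread ultimately converged on the simpler force-flush fix instead.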

[v2 PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

2019-05-09 Thread Yang Shi
A few new fields were added to mmu_gather to make TLB flush smarter for huge pages by telling what level of page table has changed. __tlb_reset_range() is used to reset all of this page table state to unchanged, and is called by TLB flush for parallel mapping changes for the same range under

Re: [PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

2019-05-09 Thread Jan Stancek
- Original Message - > > > On 5/9/19 2:06 PM, Jan Stancek wrote: > > - Original Message - > >> > >> On 5/9/19 11:24 AM, Peter Zijlstra wrote: > >>> On Thu, May 09, 2019 at 05:36:29PM +0000, Nadav Amit wrote: > > On May 9, 2019, at 3:38 AM, Peter Zijlstra > > wrote: >

Re: [PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

2019-05-09 Thread Yang Shi
On 5/9/19 2:06 PM, Jan Stancek wrote: - Original Message - On 5/9/19 11:24 AM, Peter Zijlstra wrote: On Thu, May 09, 2019 at 05:36:29PM +0000, Nadav Amit wrote: On May 9, 2019, at 3:38 AM, Peter Zijlstra wrote: diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c index

Re: [PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

2019-05-09 Thread Nadav Amit
[ Restoring the recipients after mistakenly pressing reply instead of reply-all ] > On May 9, 2019, at 12:11 PM, Peter Zijlstra wrote: > > On Thu, May 09, 2019 at 06:50:00PM +0000, Nadav Amit wrote: >>> On May 9, 2019, at 11:24 AM, Peter Zijlstra wrote: >>> >>> On Thu, May 09, 2019 at

Re: [PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

2019-05-09 Thread Jan Stancek
- Original Message - > > > On 5/9/19 11:24 AM, Peter Zijlstra wrote: > > On Thu, May 09, 2019 at 05:36:29PM +0000, Nadav Amit wrote: > >>> On May 9, 2019, at 3:38 AM, Peter Zijlstra wrote: > >>> diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c > >>> index 99740e1dd273..fe768f8d612e

Re: [PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

2019-05-09 Thread Peter Zijlstra
On Thu, May 09, 2019 at 12:38:13PM +0200, Peter Zijlstra wrote: > diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c > index 99740e1dd273..fe768f8d612e 100644 > --- a/mm/mmu_gather.c > +++ b/mm/mmu_gather.c > @@ -244,15 +244,20 @@ void tlb_finish_mmu(struct mmu_gather *tlb, > unsigned

Re: [PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

2019-05-09 Thread Yang Shi
On 5/9/19 11:24 AM, Peter Zijlstra wrote: On Thu, May 09, 2019 at 05:36:29PM +0000, Nadav Amit wrote: On May 9, 2019, at 3:38 AM, Peter Zijlstra wrote: diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c index 99740e1dd273..fe768f8d612e 100644 --- a/mm/mmu_gather.c +++ b/mm/mmu_gather.c @@

Re: [PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

2019-05-09 Thread Peter Zijlstra
On Thu, May 09, 2019 at 11:35:55AM -0700, Yang Shi wrote: > > > On 5/9/19 3:54 AM, Peter Zijlstra wrote: > > On Thu, May 09, 2019 at 12:38:13PM +0200, Peter Zijlstra wrote: > > > > > That's tlb->cleared_p*, and yes agreed. That is, right until some > > > architecture has level dependent TLBI

Re: [PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

2019-05-09 Thread Yang Shi
On 5/9/19 3:54 AM, Peter Zijlstra wrote: On Thu, May 09, 2019 at 12:38:13PM +0200, Peter Zijlstra wrote: That's tlb->cleared_p*, and yes agreed. That is, right until some architecture has level dependent TLBI instructions, at which point we'll need to have them all set instead of cleared.

Re: [PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

2019-05-09 Thread Peter Zijlstra
On Thu, May 09, 2019 at 05:36:29PM +0000, Nadav Amit wrote: > > On May 9, 2019, at 3:38 AM, Peter Zijlstra wrote: > > diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c > > index 99740e1dd273..fe768f8d612e 100644 > > --- a/mm/mmu_gather.c > > +++ b/mm/mmu_gather.c > > @@ -244,15 +244,20 @@ void

Re: [PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

2019-05-09 Thread Yang Shi
On 5/9/19 3:38 AM, Peter Zijlstra wrote: On Thu, May 09, 2019 at 09:37:26AM +0100, Will Deacon wrote: Hi all, [+Peter] Right, mm/mmu_gather.c has a MAINTAINERS entry; use it. Sorry for that, I didn't realize we have mmu_gather maintainers. I should run maintainer.pl. Also added Nadav

Re: [PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

2019-05-09 Thread Nadav Amit
> On May 9, 2019, at 3:38 AM, Peter Zijlstra wrote: > > On Thu, May 09, 2019 at 09:37:26AM +0100, Will Deacon wrote: >> Hi all, [+Peter] > > Right, mm/mmu_gather.c has a MAINTAINERS entry; use it. > > Also added Nadav and Minchan who've poked at this issue before. And Mel, > because he loves

Re: [PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

2019-05-09 Thread Jan Stancek
> > I don't think we can elide the call to __tlb_reset_range() entirely, since I > > think we do want to clear the freed_pXX bits to ensure that we walk the > > range with the smallest mapping granule that we have. Otherwise couldn't we > > have a problem if we hit a PMD that had been cleared, but

Re: [PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

2019-05-09 Thread Peter Zijlstra
On Thu, May 09, 2019 at 12:38:13PM +0200, Peter Zijlstra wrote: > That's tlb->cleared_p*, and yes agreed. That is, right until some > architecture has level dependent TLBI instructions, at which point we'll > need to have them all set instead of cleared. > Anyway; am I correct in understanding
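For context on the cleared_p* bits discussed here: they feed the flush-granule computation in include/asm-generic/tlb.h, roughly as below (abbreviated from the v5.1-era header):

	static inline unsigned long tlb_get_unmap_shift(struct mmu_gather *tlb)
	{
		/*
		 * Report the smallest page-table level that was touched,
		 * so a range flush can use the largest stride that is
		 * still safe.
		 */
		if (tlb->cleared_ptes)
			return PAGE_SHIFT;
		if (tlb->cleared_pmds)
			return PMD_SHIFT;
		if (tlb->cleared_puds)
			return PUD_SHIFT;
		if (tlb->cleared_p4ds)
			return P4D_SHIFT;

		return PAGE_SHIFT;
	}

Clearing these bits (and freed_tables) in __tlb_reset_range() loses the information an architecture such as arm64 uses to decide between a last-level-only invalidation and one that also clears walk-cache entries for freed page tables, which is the heart of the bug in this thread.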

Re: [PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

2019-05-09 Thread Peter Zijlstra
On Thu, May 09, 2019 at 09:37:26AM +0100, Will Deacon wrote: > Hi all, [+Peter] Right, mm/mmu_gather.c has a MAINTAINERS entry; use it. Also added Nadav and Minchan who've poked at this issue before. And Mel, because he loves these things :-) > Apologies for the delay; I'm attending a

Re: [PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

2019-05-09 Thread Will Deacon
Hi all, [+Peter] Apologies for the delay; I'm attending a conference this week so it's tricky to keep up with email. On Wed, May 08, 2019 at 05:34:49AM +0800, Yang Shi wrote: > A few new fields were added to mmu_gather to make TLB flush smarter for > huge pages by telling what level of page table
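The parallel-batching condition the patch keys off is mm_tlb_flush_nested(), roughly as below (abbreviated from include/linux/mm_types.h):

	/*
	 * tlb_gather_mmu() increments mm->tlb_flush_pending and
	 * tlb_finish_mmu() decrements it, so a value above one means
	 * another thread is batching PTE changes on this mm at the
	 * same time.
	 */
	static inline bool mm_tlb_flush_nested(struct mm_struct *mm)
	{
		return atomic_read(&mm->tlb_flush_pending) > 1;
	}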

[PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

2019-05-07 Thread Yang Shi
A few new fields were added to mmu_gather to make TLB flush smarter for huge pages by telling what level of page table has changed. __tlb_reset_range() is used to reset all of this page table state to unchanged, and is called by TLB flush for parallel mapping changes for the same range under
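For reference, the helper named in the subject line, roughly as it appeared in mm/mmu_gather.c around v5.1 (abbreviated):

	static void __tlb_reset_range(struct mmu_gather *tlb)
	{
		if (tlb->fullmm) {
			tlb->start = tlb->end = ~0;
		} else {
			tlb->start = TASK_SIZE;
			tlb->end = 0;
		}
		/*
		 * Besides resetting the range, this also forgets that
		 * page tables were freed and which levels were touched,
		 * which is what made the nested force-flush path
		 * under-flush on arm64.
		 */
		tlb->freed_tables = 0;
		tlb->cleared_ptes = 0;
		tlb->cleared_pmds = 0;
		tlb->cleared_puds = 0;
		tlb->cleared_p4ds = 0;
	}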