On Tue, Jun 18, 2019 at 04:57:17AM +0800, Yang Shi wrote:
> commit 7a30df49f63ad92318ddf1f7498d1129a77dd4bd upstream
Thanks for the backport, now queued up.
greg k-h
commit 7a30df49f63ad92318ddf1f7498d1129a77dd4bd upstream
A few new fields were added to mmu_gather to make TLB flushing smarter for
huge pages by tracking which page-table levels have changed.
__tlb_reset_range() resets all of this page-table state to "unchanged"; it
is called from the TLB flush path when parallel mapping changes happen on
the same range under the non-exclusive lock (i.e. read mmap_sem).
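For context, the reset described above looks roughly like this (a sketch
modeled on mm/mmu_gather.c around v5.2, not a verbatim copy; check the tree
you are backporting to):

static void __tlb_reset_range(struct mmu_gather *tlb)
{
        if (tlb->fullmm) {
                tlb->start = tlb->end = ~0;
        } else {
                tlb->start = TASK_SIZE;
                tlb->end = 0;
        }
        /* The "what level changed" tracking the commit message refers to. */
        tlb->freed_tables = 0;
        tlb->cleared_ptes = 0;
        tlb->cleared_pmds = 0;
        tlb->cleared_puds = 0;
        tlb->cleared_p4ds = 0;
}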
This patch is wrong; please disregard it. The corrected one will
be posted soon. Sorry for the inconvenience.
Yang
On 6/17/19 1:46 PM, Yang Shi wrote:
A few new fields were added to mmu_gather to make TLB flushing smarter for
huge pages by tracking which page-table levels have changed.
__tlb_reset_range() resets all of this page-table state to "unchanged"; it
is called from the TLB flush path when parallel mapping changes happen on
the same range under the non-exclusive (read) mmap_sem.
On 5/22/19 7:18 AM, Andrew Morton wrote:
On Mon, 20 May 2019 11:17:32 +0800 Yang Shi wrote:
A few new fields were added to mmu_gather to make TLB flushing smarter for
huge pages by tracking which page-table levels have changed.
__tlb_reset_range() resets all of this page-table state to "unchanged".
On 5/16/19 11:29 PM, Jan Stancek wrote:
----- Original Message -----
> On Mon, May 13, 2019 at 04:01:09PM -0700, Yang Shi wrote:
> > On 5/13/19 9:38 AM, Will Deacon wrote:
> > > On Fri, May 10, 2019 at 07:26:54AM +0800, Yang Shi wrote:
> > > > diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
> > > > index 99740e1..469492d 100644
On 5/14/19 7:54 AM, Will Deacon wrote:
On Mon, May 13, 2019 at 04:01:09PM -0700, Yang Shi wrote:
> On 5/13/19 9:38 AM, Will Deacon wrote:
> > On Fri, May 10, 2019 at 07:26:54AM +0800, Yang Shi wrote:
> > > diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
> > > index 99740e1..469492d 100644
> > > --- a/mm/mmu_gather.c
> > > +++ b/mm/mmu_gather.c
On Tue, May 14, 2019 at 01:52:23PM +0200, Peter Zijlstra wrote:
> On Mon, May 13, 2019 at 05:38:04PM +0100, Will Deacon wrote:
> > On Fri, May 10, 2019 at 07:26:54AM +0800, Yang Shi wrote:
> > > diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
> > > index 99740e1..469492d 100644
> > > --- a/mm/mmu_gather.c
On Mon, May 13, 2019 at 05:38:04PM +0100, Will Deacon wrote:
> On Fri, May 10, 2019 at 07:26:54AM +0800, Yang Shi wrote:
> > diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
> > index 99740e1..469492d 100644
> > --- a/mm/mmu_gather.c
> > +++ b/mm/mmu_gather.c
> > @@ -245,14 +245,39 @@ void tlb_finish_mmu(struct mmu_gather *tlb,
On Tue, May 14, 2019 at 07:21:33AM +0000, Nadav Amit wrote:
> > On May 14, 2019, at 12:15 AM, Jan Stancek wrote:
> > Replacing fullmm with need_flush_all brings the problem back; the
> > reproducer hangs.
>
> Maybe setting need_flush_all does not have the right effect, but setting
> fullmm and
On Tue, May 14, 2019 at 02:01:34AM +0000, Nadav Amit wrote:
> > diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
> > index 99740e1dd273..cc251422d307 100644
> > --- a/mm/mmu_gather.c
> > +++ b/mm/mmu_gather.c
> > @@ -251,8 +251,9 @@ void tlb_finish_mmu(struct mmu_gather *tlb,
> > *
On Mon, May 13, 2019 at 05:06:03PM +0000, Nadav Amit wrote:
> > On May 13, 2019, at 9:37 AM, Will Deacon wrote:
> >
> > On Mon, May 13, 2019 at 09:11:38AM +0000, Nadav Amit wrote:
> >>> On May 13, 2019, at 1:36 AM, Peter Zijlstra wrote:
> >>>
> >>> On Thu, May 09, 2019 at 09:21:35PM +0000,
> On May 14, 2019, at 12:15 AM, Jan Stancek wrote:
>
> ----- Original Message -----
> > On May 13, 2019 4:01 PM, Yang Shi wrote:
> >
> > On 5/13/19 9:38 AM, Will Deacon wrote:
> > > On Fri, May 10, 2019 at 07:26:54AM +0800, Yang Shi wrote:
> > > > diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
> > > > index 99740e1..469492d 100644
> > > > --- a/mm/mmu_gather.c
On 5/13/19 9:38 AM, Will Deacon wrote:
On Fri, May 10, 2019 at 07:26:54AM +0800, Yang Shi wrote:
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index 99740e1..469492d 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -245,14 +245,39 @@ void tlb_finish_mmu(struct mmu_gather *tlb,
{
> On May 13, 2019, at 4:27 AM, Peter Zijlstra wrote:
>
> On Mon, May 13, 2019 at 09:21:01AM +0000, Nadav Amit wrote:
> >> On May 13, 2019, at 2:12 AM, Peter Zijlstra wrote:
>
The other thing I was thinking of is trying to detect overlap through
the page-tables themselves, but we have a distinct lack of storage there.
On Fri, May 10, 2019 at 07:26:54AM +0800, Yang Shi wrote:
> diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
> index 99740e1..469492d 100644
> --- a/mm/mmu_gather.c
> +++ b/mm/mmu_gather.c
> @@ -245,14 +245,39 @@ void tlb_finish_mmu(struct mmu_gather *tlb,
> {
> /*
>  * If there are
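The fix that eventually landed as commit 7a30df49f63a takes the opposite
approach in tlb_finish_mmu(): when a concurrent gather is detected, give up
on the fine-grained tracking and force a full flush. A sketch of that hunk
(from memory, not the verbatim diff):

        if (mm_tlb_flush_nested(tlb->mm)) {
                /*
                 * A concurrent thread may have batched PTE changes for
                 * this range, so the per-level cleared/freed bits can't
                 * be trusted. Flush everything, including page-table
                 * walk caches, as if page tables had been freed.
                 */
                tlb->fullmm = 1;
                __tlb_reset_range(tlb);
                tlb->freed_tables = 1;
        }

        tlb_flush_mmu(tlb);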
On Mon, May 13, 2019 at 09:11:38AM +0000, Nadav Amit wrote:
> BTW: sometimes you don't see the effect of these full TLB flushes as much in
> VMs. I encountered a strange phenomenon at the time: an INVLPG for an
> arbitrary page caused my Haswell machine to flush the entire TLB, when the
> INVLPG was
On Mon, May 13, 2019 at 09:21:01AM +0000, Nadav Amit wrote:
> > On May 13, 2019, at 2:12 AM, Peter Zijlstra wrote:
> >> The other thing I was thinking of is trying to detect overlap through
> >> the page-tables themselves, but we have a distinct lack of storage
> >> there.
> >
> > We might just
> On May 13, 2019, at 2:12 AM, Peter Zijlstra wrote:
>
> On Mon, May 13, 2019 at 10:36:06AM +0200, Peter Zijlstra wrote:
> >> On Thu, May 09, 2019 at 09:21:35PM +0000, Nadav Amit wrote:
>>> It may be possible to avoid false-positive nesting indications (when the
>>> flushes do not overlap) by
On Mon, May 13, 2019 at 10:36:06AM +0200, Peter Zijlstra wrote:
> On Thu, May 09, 2019 at 09:21:35PM +0000, Nadav Amit wrote:
> > It may be possible to avoid false-positive nesting indications (when the
> > flushes do not overlap) by creating a new struct mmu_gather_pending, with
> > something
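Reading between the truncation, the idea is per-mm tracking of in-flight
gathers so that only genuinely overlapping ranges force a flush. A purely
illustrative sketch (the struct name comes from the quoted text above; the
fields and usage are guesses, not code from the thread):

struct mmu_gather_pending {
        u64 start;
        u64 end;
        struct mmu_gather_pending *next;
};

/*
 * tlb_finish_mmu() would walk a per-mm list of these (under a lock)
 * and treat the flush as nested only if some entry overlaps
 * [tlb->start, tlb->end), rather than assuming that any concurrent
 * gather conflicts.
 */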
> On May 13, 2019, at 1:36 AM, Peter Zijlstra wrote:
>
> On Thu, May 09, 2019 at 09:21:35PM +0000, Nadav Amit wrote:
>
> And we can fix that by having tlb_finish_mmu() sync up. Never let a
> concurrent tlb_finish_mmu() complete until all concurrent mmu_gathers
> have completed.
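One way such synchronization might look, purely as a sketch on top of the
existing mm->tlb_flush_pending counter (this is not a patch from the
thread; arch teardown details are elided):

void tlb_finish_mmu(struct mmu_gather *tlb,
                unsigned long start, unsigned long end)
{
        tlb_flush_mmu(tlb);

        /* Paired with inc_tlb_flush_pending() in tlb_gather_mmu(). */
        dec_tlb_flush_pending(tlb->mm);

        /*
         * Don't let this flush "complete" while any concurrent
         * mmu_gather on the same mm is still in flight, closing the
         * window where another thread skipped its flush based on
         * range state we just reset.
         */
        while (mm_tlb_flush_pending(tlb->mm))
                cpu_relax();
}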
On Thu, May 09, 2019 at 09:21:35PM +0000, Nadav Amit wrote:
> >>> And we can fix that by having tlb_finish_mmu() sync up. Never let a
> >>> concurrent tlb_finish_mmu() complete until all concurrent mmu_gathers
> >>> have completed.
> >>>
> >>> This should not be too hard to make happen.
----- Original Message -----
> On 5/9/19 2:06 PM, Jan Stancek wrote:
> > ----- Original Message -----
> > > On 5/9/19 11:24 AM, Peter Zijlstra wrote:
> > > > On Thu, May 09, 2019 at 05:36:29PM +0000, Nadav Amit wrote:
> > > > > On May 9, 2019, at 3:38 AM, Peter Zijlstra wrote:
> > > > > > diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
> > > > > > index 99740e1dd273..fe768f8d612e 100644
[ Restoring the recipients after mistakenly pressing reply instead of
reply-all ]
> On May 9, 2019, at 12:11 PM, Peter Zijlstra wrote:
>
> On Thu, May 09, 2019 at 06:50:00PM +0000, Nadav Amit wrote:
>>> On May 9, 2019, at 11:24 AM, Peter Zijlstra wrote:
>>>
>>> On Thu, May 09, 2019 at
On Thu, May 09, 2019 at 12:38:13PM +0200, Peter Zijlstra wrote:
> diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
> index 99740e1dd273..fe768f8d612e 100644
> --- a/mm/mmu_gather.c
> +++ b/mm/mmu_gather.c
> @@ -244,15 +244,20 @@ void tlb_finish_mmu(struct mmu_gather *tlb,
> 	unsigned long start, unsigned long end)
On Thu, May 09, 2019 at 11:35:55AM -0700, Yang Shi wrote:
>
>
> On 5/9/19 3:54 AM, Peter Zijlstra wrote:
> > On Thu, May 09, 2019 at 12:38:13PM +0200, Peter Zijlstra wrote:
> >
> > > That's tlb->cleared_p*, and yes agreed. That is, right until some
> > > architecture has level dependent TLBI
On 5/9/19 3:54 AM, Peter Zijlstra wrote:
On Thu, May 09, 2019 at 12:38:13PM +0200, Peter Zijlstra wrote:
That's tlb->cleared_p*, and yes agreed. That is, right until some
architecture has level dependent TLBI instructions, at which point we'll
need to have them all set instead of cleared.
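For reference, the cleared_p* bits drive the flush granularity through a
generic helper that looks roughly like this (from memory of
include/asm-generic/tlb.h of that era):

static inline unsigned long tlb_get_unmap_shift(struct mmu_gather *tlb)
{
        if (tlb->cleared_ptes)
                return PAGE_SHIFT;
        if (tlb->cleared_pmds)
                return PMD_SHIFT;
        if (tlb->cleared_puds)
                return PUD_SHIFT;
        if (tlb->cleared_p4ds)
                return P4D_SHIFT;
        return PAGE_SHIFT;
}

With the bits clear, the helper falls back to the smallest stride, which is
why clearing them is the safe direction today; level-dependent invalidate
instructions would invert that assumption.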
On 5/9/19 3:38 AM, Peter Zijlstra wrote:
On Thu, May 09, 2019 at 09:37:26AM +0100, Will Deacon wrote:
Hi all, [+Peter]
Right, mm/mmu_gather.c has a MAINTAINERS entry; use it.
Sorry for that, I didn't realize we have mmu_gather maintainers. I
should run get_maintainer.pl.
Also added Nadav
> > I don't think we can elide the call to __tlb_reset_range() entirely, since I
> > think we do want to clear the freed_pXX bits to ensure that we walk the
> > range with the smallest mapping granule that we have. Otherwise couldn't we
> > have a problem if we hit a PMD that had been cleared, but
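That concern is visible in how arm64 picks its invalidation strategy;
roughly (from memory of arch/arm64/include/asm/tlb.h around v5.1,
simplified):

static inline void tlb_flush(struct mmu_gather *tlb)
{
        struct vm_area_struct vma = TLB_FLUSH_VMA(tlb->mm, 0);

        /*
         * If page tables were freed, the walk-cache entries for the
         * intermediate levels must be invalidated too, so a cheaper
         * "last level only" invalidate is not sufficient.
         */
        bool last_level = !tlb->freed_tables;
        unsigned long stride = tlb_get_unmap_size(tlb);

        if (tlb->fullmm) {
                if (!last_level)
                        flush_tlb_mm(tlb->mm);
                return;
        }

        __flush_tlb_range(&vma, tlb->start, tlb->end, stride, last_level);
}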
On Thu, May 09, 2019 at 12:38:13PM +0200, Peter Zijlstra wrote:
> That's tlb->cleared_p*, and yes agreed. That is, right until some
> architecture has level dependent TLBI instructions, at which point we'll
> need to have them all set instead of cleared.
> Anyway; am I correct in understanding
On Thu, May 09, 2019 at 09:37:26AM +0100, Will Deacon wrote:
> Hi all, [+Peter]
Right, mm/mmu_gather.c has a MAINTAINERS entry; use it.
Also added Nadav and Minchan who've poked at this issue before. And Mel,
because he loves these things :-)
> Apologies for the delay; I'm attending a
Hi all, [+Peter]
Apologies for the delay; I'm attending a conference this week so it's tricky
to keep up with email.
On Wed, May 08, 2019 at 05:34:49AM +0800, Yang Shi wrote:
> A few new fields were added to mmu_gather to make TLB flushing smarter for
> huge pages by tracking which page-table levels have changed.