On Thu, Dec 10, 2020 at 05:28:08PM +0800, Alex Shi wrote:
> Hi Yu,
>
> btw, after this patchset, cacheline alignment on each of the lru lists
> becomes possible. Did you try that to see whether performance changes?
I ran a Chrome-based performance benchmark without memcg and with one
memcg many times. T
On Mon, Dec 07, 2020 at 10:24:29PM +, Matthew Wilcox wrote:
> On Mon, Dec 07, 2020 at 03:09:45PM -0700, Yu Zhao wrote:
> > Move scattered VM_BUG_ONs to two essential places that cover all
> > lru list additions and deletions.
>
> I'd like to see these converted int
eboot my devices every day,
and therefore:
Acked-by: Yu Zhao
> Cc:
> Fixes: 76c714be0e5e ("arm64: pgtable: implement pte_accessible()")
> Reported-by: Yu Zhao
> Signed-off-by: Will Deacon
> ---
> arch/arm64/include/asm/pgtable.h | 4 +---
> 1 file changed,
On Fri, Nov 20, 2020 at 02:35:55PM +, Will Deacon wrote:
> Since commit 0758cd830494 ("asm-generic/tlb: avoid potential double flush"),
> TLB invalidation is elided in tlb_finish_mmu() if no entries were batched
> via the tlb_remove_*() functions. Consequently, the page-table modifications
> pe
On Fri, Nov 20, 2020 at 02:35:57PM +, Will Deacon wrote:
> clear_refs_write() uses the 'fullmm' API for invalidating TLBs after
> updating the page-tables for the current mm. However, since the mm is not
> being freed, this can result in stale TLB entries on architectures which
> elide 'fullmm'
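For reference, a minimal sketch of the safe pattern being argued for here,
assuming the clear_refs soft-dirty walk (identifiers abbreviated; this is
not the actual patch hunk):

	/*
	 * Use a ranged mmu_gather so tlb_finish_mmu() knows what to flush.
	 * The 'fullmm' variant may elide the flush entirely because it
	 * assumes the mm is being torn down and nothing can use the stale
	 * entries -- which does not hold for an in-place
	 * /proc/pid/clear_refs write.
	 */
	struct mmu_gather tlb;

	tlb_gather_mmu(&tlb, mm, 0, TASK_SIZE);	/* ranged, not fullmm */
	/* ... clear soft-dirty bits in the page tables ... */
	tlb_finish_mmu(&tlb, 0, TASK_SIZE);	/* flushes what was batched */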
On Fri, Nov 20, 2020 at 01:22:53PM -0700, Yu Zhao wrote:
> On Fri, Nov 20, 2020 at 02:35:55PM +, Will Deacon wrote:
> > Since commit 0758cd830494 ("asm-generic/tlb: avoid potential double flush"),
> > TLB invalidation is elided in tlb_finish_mmu() if no entries
offsets in the PTE, it does not actually impose any new restrictions on
the maximum size of swap files, as that is currently limited by the use
of 32bit values in other parts of the swap code.
Signed-off-by: Suleiman Souhlal
Signed-off-by: Yu Zhao
---
arch/x86/include/asm/pgtable_64.h | 62
nsert
private swap files onto swap_list; this improves the performance of
get_swap_page() in such cases, at the cost of making
swap_store_swap_device() and swapoff() minutely slower (both of which
are non-critical).
Signed-off-by: Jamie Liu
Signed-off-by: Yu Zhao
---
mm/swapfile.c
This series of patches adds support to configure a cgroup to swap to a
particular file by using control file memory.swapfile.
Originally, cgroups share the system-wide swap space, and limiting cgroup
swapping is not possible. This patchset solves the problem by adding a
mechanism that isolates cgroup swa
they go up the
hierarchy until someone who has a swap file set up is found).
The path of the swap file is set by writing to memory.swapfile. Details
of the API can be found in Documentation/cgroups/memory.txt.
Signed-off-by: Suleiman Souhlal
Signed-off-by: Yu Zhao
---
Documentation/cgroups/memor
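A hedged usage sketch of the control file described above (the cgroup
mount point and the swap file path are assumptions, not from the patch):

	/* Point a cgroup at its own swap file via memory.swapfile. */
	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>

	int main(void)
	{
		/* Assumed v1 memcg mount point and a hypothetical swap file. */
		const char *ctl = "/sys/fs/cgroup/memory/mygroup/memory.swapfile";
		const char *path = "/swap/mygroup.swap";
		int fd = open(ctl, O_WRONLY);

		if (fd < 0) {
			perror("open");
			return 1;
		}
		if (write(fd, path, strlen(path)) < 0)
			perror("write");
		close(fd);
		return 0;
	}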
On Wed, Apr 02, 2014 at 04:54:33PM -0400, Johannes Weiner wrote:
> On Wed, Apr 02, 2014 at 01:34:06PM -0700, Yu Zhao wrote:
> > This series of patches adds support to configure a cgroup to swap to a
> > particular file by using control file memory.swapfile.
> >
> >
This series of patches adds support to configure a cgroup to swap to a
particular file by using control file memory.swapfile.
A value of "default" in memory.swapfile indicates that this cgroup should
use the default, system-wide, swap files. A value of "none" indicates that
this cgroup should neve
nsert
private swap files onto swap_list; this improves the performance of
get_swap_page() in such cases, at the cost of making
swap_store_swap_device() and swapoff() minutely slower (both of which
are non-critical).
Signed-off-by: Jamie Liu
Signed-off-by: Yu Zhao
---
mm/swapfile.c
offsets in the PTE, it does not actually impose any new restrictions on
the maximum size of swap files, as that is currently limited by the use
of 32bit values in other parts of the swap code.
Signed-off-by: Suleiman Souhlal
Signed-off-by: Yu Zhao
---
arch/x86/include/asm/pgtable_64.h | 63
they go up the
hierarchy until someone who has a swap file set up is found).
The path of the swap file is set by writing to memory.swapfile. Details
of the API can be found in Documentation/cgroups/memory.txt.
Signed-off-by: Suleiman Souhlal
Signed-off-by: Yu Zhao
---
Documentation/cgroups/memor
more general.
Signed-off-by: Yu Zhao
---
mm/huge_memory.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 74c78aa..780d12c 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -200,7 +200,7 @@ retry:
preempt_disable
This allows us to easily catch the bug fixed in the previous patch.
Here we also verify whether a page is a tail page or not -- tail
pages are supposed to be freed along with their head, not by
themselves.
Signed-off-by: Yu Zhao
---
mm/page_alloc.c | 3 +++
1 file changed, 3 insertions(+)
diff
This allows us to catch the bug fixed in the previous patch
(mm: free compound page with correct order).
Here we also verify whether a page is a tail page or not -- tail
pages are supposed to be freed along with their head, not by
themselves.
Reviewed-by: Kirill A. Shutemov
Signed-off-by: Yu Zhao
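To illustrate the rule these checks enforce, a small sketch (not taken
from the patch): a compound allocation must be freed with the order it
was allocated with, otherwise the tail pages leak.

	/* Order 2: a compound allocation of 4 physically contiguous pages. */
	struct page *page = alloc_pages(GFP_KERNEL | __GFP_COMP, 2);

	/* Correct: the order matches the allocation. */
	__free_pages(page, 2);

	/* Buggy: order 0 would free only the head page and leak the three
	 * tail pages -- exactly what the new verification catches. */
	/* __free_pages(page, 0); */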
general.
Acked-by: Kirill A. Shutemov
Fixes: 97ae17497e99 ("thp: implement refcounting for huge zero page")
Cc: stable@vger.kernel.org (v3.8+)
Signed-off-by: Yu Zhao
---
mm/huge_memory.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/huge_memory.c b/mm/hug
On Wed, Oct 15, 2014 at 12:30:44PM -0700, Andrew Morton wrote:
> On Wed, 15 Oct 2014 12:20:04 -0700 Yu Zhao wrote:
>
> > Compound page should be freed by put_page() or free_pages() with
> > correct order. Not doing so will cause tail pages leaked.
> >
> > The com
integer overflow in expression
[-Werror=overflow]
Signed-off-by: Yu Zhao
---
include/linux/page-flags.h | 18 +-
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index f4ed4f1b..a5c 100644
--- a/include/linux/page-
On Mon, Apr 25, 2016 at 05:20:10PM -0400, Dan Streetman wrote:
> Add a work_struct to struct zpool, and change zpool_destroy_pool to
> defer calling the pool implementation destroy.
>
> The zsmalloc pool destroy function, which is one of the zpool
> implementations, may sleep during destruction of
On Thu, Mar 31, 2016 at 05:46:39PM +0900, Sergey Senozhatsky wrote:
> On (03/30/16 08:59), Minchan Kim wrote:
> > On Tue, Mar 29, 2016 at 03:02:57PM -0700, Yu Zhao wrote:
> > > zs_destroy_pool() might sleep so it shouldn't be used in zpool
> > > destroy callback
->list_lock)->rlock){-.-.}, at: __kmem_cache_shutdown+0x81/0x3cb

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0
       ----
  lock(&(&n->list_lock)->rlock);
  lock(&(&n->list_lock)->rlock);

 *** DEADLOCK ***
On Tue, Sep 10, 2019 at 05:57:22AM +0900, Tetsuo Handa wrote:
> On 2019/09/10 1:00, Kirill A. Shutemov wrote:
> > On Mon, Sep 09, 2019 at 12:10:16AM -0600, Yu Zhao wrote:
> >> If we are already under list_lock, don't call kmalloc(). Otherwise we
> >> will run into
On Tue, Sep 10, 2019 at 10:41:31AM +0900, Tetsuo Handa wrote:
> Yu Zhao wrote:
> > I think we can safely assume PAGE_SIZE is unsigned long aligned and
> > page->objects is non-zero. But if you don't feel comfortable with these
> > assumptions, I'd be happy to e
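A minimal sketch of the rule under discussion, assuming the slub counting
path (identifiers such as 'n' for the kmem_cache_node are abbreviations):
do the allocation before taking list_lock, since kmalloc() can recurse
into the same slab locks.

	/* Allocate outside the lock, use inside, free outside. */
	unsigned long *map = bitmap_alloc(page->objects, GFP_KERNEL);

	if (!map)
		return -ENOMEM;

	spin_lock_irq(&n->list_lock);
	/* ... walk the partial list, marking free objects in 'map' ... */
	spin_unlock_irq(&n->list_lock);

	bitmap_free(map);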
, and AFAIK nobody
complains about it.
Signed-off-by: Yu Zhao
---
mm/memory.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/memory.c b/mm/memory.c
index e2bb51b6242e..ea3c74855b23 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -654,7 +654,7 @@ struct page *vm_normal_page_pmd(s
s not what we have in vm_normal_page_pmd().
> > > pmd_trans_huge_lock() makes sure of it.
> > >
> > > This is a trivial fix for /proc/pid/numa_maps, and AFAIK nobody
> > > complains about it.
> > >
> > > Signed-off-by: Yu Zhao
> > > ---
>
Though I have no idea what the side effect of a race would be,
apparently we want to prevent the free list from being changed
while debugging objects in general.
Signed-off-by: Yu Zhao
---
mm/slub.c | 4
1 file changed, 4 insertions(+)
diff --git a/mm/slub.c b/mm/slub.c
index f28072c9f2ce
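As a sketch of the stabilization being described (using slub-internal
helpers as they existed around this series; an assumption, not the exact
hunk):

	void *p;

	/* Hold slab_lock so a concurrent free cannot rewire page->freelist
	 * while the free objects are being marked. */
	slab_lock(page);
	for (p = page->freelist; p; p = get_freepointer(s, p))
		set_bit(slab_index(p, s, addr), map);
	slab_unlock(page);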
"slub: Do not use frozen page flag but a bit in the page
counters")
Signed-off-by: Yu Zhao
---
mm/slub.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/mm/slub.c b/mm/slub.c
index 8834563cdb4b..62053ceb4464 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1
is already holding lock:
 (&(&n->list_lock)->rlock){-.-.}, at: __kmem_cache_shutdown+0x81/0x3cb

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0
       ----
  lock(&(&n->list_lock)->rlock);
  lock(&(&n->lis
On Thu, Sep 12, 2019 at 03:44:01AM +0300, Kirill A. Shutemov wrote:
> On Wed, Sep 11, 2019 at 06:29:28PM -0600, Yu Zhao wrote:
> > If we are already under list_lock, don't call kmalloc(). Otherwise we
> > will run into deadlock because kmalloc() also tries to gr
l
behavior isn't intended anyway.
Signed-off-by: Yu Zhao
---
mm/slub.c | 21 -
1 file changed, 8 insertions(+), 13 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index 62053ceb4464..7b7e1ee264ef 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4386,31 +4386,26 @@ static int cou
Though I have no idea what the side effect of such a race would be,
apparently we want to prevent the free list from being changed
while debugging the objects.
Signed-off-by: Yu Zhao
---
mm/slub.c | 4
1 file changed, 4 insertions(+)
diff --git a/mm/slub.c b/mm/slub.c
index baa60dd73942
"slub: Do not use frozen page flag but a bit in the page
counters")
Signed-off-by: Yu Zhao
---
mm/slub.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/mm/slub.c b/mm/slub.c
index 8834563cdb4b..62053ceb4464 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1
 Possible unsafe locking scenario:

       CPU0
       ----
  lock(&(&n->list_lock)->rlock);
  lock(&(&n->list_lock)->rlock);

 *** DEADLOCK ***
Signed-off-by: Yu Zhao
---
mm/slub.c | 88 +--
1 file changed
On Thu, Sep 12, 2019 at 12:40:35PM +0300, Kirill A. Shutemov wrote:
> On Wed, Sep 11, 2019 at 08:31:08PM -0600, Yu Zhao wrote:
> > Mask of slub objects per page shouldn't be larger than what
> > page->objects can hold.
> >
> > It requires more than 2^15 obje
On Thu, Sep 12, 2019 at 01:06:42PM +0300, Kirill A. Shutemov wrote:
> On Wed, Sep 11, 2019 at 08:31:11PM -0600, Yu Zhao wrote:
> > Though I have no idea what the side effect of such race would be,
> > apparently we want to prevent the free list from being changed
> > while
 Possible unsafe locking scenario:

       CPU0
       ----
  lock(&(&n->list_lock)->rlock);
  lock(&(&n->list_lock)->rlock);

 *** DEADLOCK ***
Acked-by: Kirill A. Shutemov
Signed-off-by: Yu Zhao
---
mm/slub.c | 88 +-
l
behavior isn't intended anyway.
Signed-off-by: Yu Zhao
---
mm/slub.c | 21 -
1 file changed, 8 insertions(+), 13 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index 8834563cdb4b..445ef8b2aec0 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4384,31 +4384,26 @@ static int cou
wapBacked() to change.
Signed-off-by: Yu Zhao
---
kernel/events/uprobes.c | 2 ++
mm/huge_memory.c| 4
mm/khugepaged.c | 2 ++
mm/memory.c | 10 +-
mm/migrate.c| 2 ++
mm/swapfile.c | 6 --
mm/userfaultfd.c| 2 ++
On Tue, Oct 01, 2019 at 03:31:51PM -0700, John Hubbard wrote:
> On 9/26/19 10:06 PM, Yu Zhao wrote:
> > On Thu, Sep 26, 2019 at 08:26:46PM -0700, John Hubbard wrote:
> >> On 9/26/19 3:20 AM, Kirill A. Shutemov wrote:
> >>> On Wed, Sep 25, 2019 at 04:26:54PM -0600, Yu
Slub doesn't use PG_active and PG_error anymore.
Signed-off-by: Yu Zhao
---
mm/slub.c | 6 ++
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index 320a1c375e1b..cfbc839dc2ea 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -93,9 +93,7 @@
* minimal so we
On Tue, Sep 24, 2019 at 02:23:16PM +0300, Kirill A. Shutemov wrote:
> On Sat, Sep 14, 2019 at 01:05:18AM -0600, Yu Zhao wrote:
> > We don't want to expose a page to fast gup running on a remote CPU
> > before all local non-atomic ops on page flags are visible first.
> >
>
cache() serves
as a valid write barrier.
Signed-off-by: Yu Zhao
---
mm/hugetlb.c | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 6d7296dd11b8..0be5b7937085 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3693,7 +3693,7 @@ static vm_fa
previous patch.
The one in shmem_mfill_atomic_pte() doesn't need an explicit write
barrier because of the following shmem_add_to_page_cache().
Signed-off-by: Yu Zhao
---
include/linux/page-flags.h | 6 +-
kernel/events/uprobes.c| 2 +-
mm/huge_memory.c | 11 +++---
an existing smp_wmb().
Signed-off-by: Yu Zhao
---
kernel/events/uprobes.c | 2 ++
mm/huge_memory.c| 6 ++
mm/khugepaged.c | 2 ++
mm/memory.c | 10 +-
mm/migrate.c| 2 ++
mm/swapfile.c | 6 --
mm/userfaultfd.c| 2 ++
__SetPageUptodate() always has a built-in smp_wmb() to make sure
user data copied to a new page appears before set_pmd_at().
Signed-off-by: Yu Zhao
---
mm/khugepaged.c | 7 ---
1 file changed, 7 deletions(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index ccede2425c3f..70ff98e1414d
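The ordering pattern this series audits, sketched as a generic fault-path
shape (not a hunk from the patch): the data copy must be published before
the entry that maps it, and __SetPageUptodate()'s built-in smp_wmb() is
what orders the two.

	/* 1. Fill the new page. */
	copy_user_highpage(new_page, old_page, addr, vma);
	/* 2. The smp_wmb() inside orders the copy (and any non-atomic flag
	 *    updates) before the mapping is installed. */
	__SetPageUptodate(new_page);
	/* 3. Publish: a remote fast gup can only find the page after this. */
	set_pte_at(mm, addr, ptep, mk_pte(new_page, vma->vm_page_prot));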
On Tue, Sep 24, 2019 at 04:50:36PM -0700, Matthew Wilcox wrote:
> On Tue, Sep 24, 2019 at 05:24:59PM -0600, Yu Zhao wrote:
> > +/*
> > + * Only use this function when there is a following write barrier, e.g.,
> > + * an explicit smp_wmb() and/or the page will be added to page
On Wed, Sep 25, 2019 at 10:25:30AM +0200, Peter Zijlstra wrote:
> On Tue, Sep 24, 2019 at 05:24:58PM -0600, Yu Zhao wrote:
> > We don't want to expose a non-hugetlb page to the fast gup running
> > on a remote CPU before all local non-atomic ops on the page flags
>
On Wed, Sep 25, 2019 at 03:17:50PM +0300, Kirill A. Shutemov wrote:
> On Tue, Sep 24, 2019 at 04:05:50PM -0600, Yu Zhao wrote:
> > On Tue, Sep 24, 2019 at 02:23:16PM +0300, Kirill A. Shutemov wrote:
> > > On Sat, Sep 14, 2019 at 01:05:18AM -0600, Yu Zhao wrote:
> > > >
This is a cleanup patch that replaces two historical uses of
list_move_tail() with the relatively recent add_page_to_lru_list_tail().
Signed-off-by: Yu Zhao
---
mm/swap.c | 14 ++
1 file changed, 6 insertions(+), 8 deletions(-)
diff --git a/mm/swap.c b/mm/swap.c
index ae300397dfda
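The shape of the substitution, reconstructed from the description above
rather than quoted from the hunk (the exact call sites in mm/swap.c may
differ):

	/* Before: open-coded tail move on the lru list. */
	list_move_tail(&page->lru, &lruvec->lists[lru]);

	/* After: delete + tail add through the lru helpers, which keep the
	 * lru bookkeeping in one place. */
	del_page_from_lru_list(page, lruvec, page_lru(page));
	add_page_to_lru_list_tail(page, lruvec, lru);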
t enable it now.
Signed-off-by: Yu Zhao
---
arch/arm64/Kconfig | 3 +++
arch/arm64/include/asm/pgalloc.h | 12 +++-
arch/arm64/include/asm/tlb.h | 5 -
3 files changed, 18 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconf
On Tue, Feb 19, 2019 at 11:47:12AM +0530, Anshuman Khandual wrote:
> + Matthew Wilcox
>
> On 02/19/2019 11:02 AM, Yu Zhao wrote:
> > On Tue, Feb 19, 2019 at 09:51:01AM +0530, Anshuman Khandual wrote:
> >>
> >>
> >> On 02/19/2019 04:43 AM, Yu Zhao wrote:
On Wed, Feb 20, 2019 at 03:57:59PM +0530, Anshuman Khandual wrote:
>
>
> On 02/20/2019 03:58 AM, Yu Zhao wrote:
> > On Tue, Feb 19, 2019 at 11:47:12AM +0530, Anshuman Khandual wrote:
> >> + Matthew Wilcox
> >>
> >> On 02/19/2019 11:02 AM, Yu Zhao wrote
A declaration of struct node is required regardless. On a UMA system,
including compaction.h without a preceding node.h shouldn't cause a
build error.
Signed-off-by: Yu Zhao
---
include/linux/compaction.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/compactio
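A sketch of what makes the header self-contained (the declarations match
include/linux/compaction.h; the exact placement is an assumption):

	/* Forward declaration: only pointers to struct node are used here,
	 * so the full definition from <linux/node.h> is not required. */
	struct node;

	extern int compaction_register_node(struct node *node);
	extern void compaction_unregister_node(struct node *node);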
On Tue, May 14, 2019 at 02:25:27PM -0700, Andrew Morton wrote:
> On Tue, 9 Jan 2018 02:10:50 -0800 Yu Zhao wrote:
>
> > > Also what prevents reordering here? There do not seem to be any barriers
> > > to prevent __SetPageSwapBacked leak after set_pte_at with your p
em because, AFAIK, nobody calls
these functions on (huge) shmem. Fix them anyway just in case.
Signed-off-by: Yu Zhao
---
mm/filemap.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index 81adec8ee02c..cf5fd773314a 100644
--- a/mm/filemap.
On Tue, Feb 26, 2019 at 03:13:07PM +, Mark Rutland wrote:
> Hi,
>
> On Mon, Feb 18, 2019 at 04:13:18PM -0700, Yu Zhao wrote:
> > init_mm doesn't require page table lock to be initialized at
> > any level. Add a separate page table allocator for it, and the
> >
On Tue, Feb 26, 2019 at 03:12:31PM +, Mark Rutland wrote:
> Hi,
>
> On Mon, Feb 18, 2019 at 04:13:17PM -0700, Yu Zhao wrote:
> > For pte page, use pgtable_page_ctor(); for pmd page, use
> > pgtable_pmd_page_ctor() if not folded; and for the rest (pud,
> >
t enable it now.
We only do so when pmd is not folded, so we don't mistakenly call
pgtable_pmd_page_ctor() on pud or p4d in pgd_pgtable_alloc(). (We
check shift against PMD_SHIFT, which is the same as PUD_SHIFT when pmd
is folded).
Signed-off-by: Yu Zhao
---
arch/arm64/Kconfig |
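A sketch of the dispatch being described, assuming the arm64
pgd_pgtable_alloc() shape from this series (abbreviated; not the exact
hunk):

	static phys_addr_t pgd_pgtable_alloc(int shift)
	{
		void *ptr = (void *)__get_free_page(GFP_KERNEL | __GFP_ZERO);

		BUG_ON(!ptr);

		if (shift == PAGE_SHIFT)
			BUG_ON(!pgtable_page_ctor(virt_to_page(ptr)));
		else if (shift == PMD_SHIFT)
			/* Reached only when pmd is not folded: with a folded
			 * pmd, PMD_SHIFT == PUD_SHIFT and this branch would
			 * wrongly run for pud/p4d pages too. */
			BUG_ON(!pgtable_pmd_page_ctor(virt_to_page(ptr)));

		return __pa(ptr);
	}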
o we won't mistakenly call
pgtable_pmd_page_ctor() on pud or p4d.
Acked-by: Mark Rutland
Signed-off-by: Yu Zhao
---
arch/arm64/mm/mmu.c | 36
1 file changed, 24 insertions(+), 12 deletions(-)
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
ind
it_mm.
Acked-by: Mark Rutland
Signed-off-by: Yu Zhao
---
arch/arm64/mm/mmu.c | 15 +--
1 file changed, 13 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index f704b291f2c5..d1dc2a2777aa 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/
On Mon, Mar 11, 2019 at 01:58:27PM +0530, Anshuman Khandual wrote:
> On 03/10/2019 06:49 AM, Yu Zhao wrote:
> > Switch from per mm_struct to per pmd page table lock by enabling
> > ARCH_ENABLE_SPLIT_PMD_PTLOCK. This provides better granularity for
> > large system.
> >
On Mon, Mar 11, 2019 at 12:12:28PM +, Mark Rutland wrote:
> Hi,
>
> On Sat, Mar 09, 2019 at 06:19:06PM -0700, Yu Zhao wrote:
> > Switch from per mm_struct to per pmd page table lock by enabling
> > ARCH_ENABLE_SPLIT_PMD_PTLOCK. This provides better granularity
this series attempts to enable ARCH_ENABLE_SPLIT_PMD_PTLOCK
> with
> some minimal changes to existing kernel pgtable page allocation code. Hence
> just
> trying to re-evaluate the series in that isolation.
>
> On 03/10/2019 06:49 AM, Yu Zhao wrote:
>
> > For pte page, u
t enable it now.
We only do so when pmd is not folded, so we don't mistakenly call
pgtable_pmd_page_ctor() on pud or p4d in pgd_pgtable_alloc().
Signed-off-by: Yu Zhao
---
arch/arm64/Kconfig | 3 +++
arch/arm64/include/asm/pgalloc.h | 12 +++-
arch/arm64/include/asm/t
Call pgtable_pmd_page_dtor() for pmd page allocated by
mmu_memory_cache_alloc() so the kernel won't crash when it's freed
through stage2_pmd_free()->pmd_free()->pgtable_pmd_page_dtor().
This is needed if we are going to enable split pmd pt lock.
Signed-off-by: Yu Zhao
---
arch/a
o we won't mistakenly call
pgtable_pmd_page_ctor() on pud or p4d.
Acked-by: Mark Rutland
Signed-off-by: Yu Zhao
---
arch/arm64/mm/mmu.c | 36
1 file changed, 24 insertions(+), 12 deletions(-)
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
ind
it_mm.
Acked-by: Mark Rutland
Signed-off-by: Yu Zhao
---
arch/arm64/mm/mmu.c | 15 +--
1 file changed, 13 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index f704b291f2c5..d1dc2a2777aa 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/
On Tue, Feb 12, 2019 at 01:03:58PM +0100, Michal Hocko wrote:
> On Fri 08-02-19 01:04:37, Yu Zhao wrote:
> > Declaration of struct node is required regardless. On UMA system,
> > including compaction.h without a preceding node.h shouldn't cause
> > build error.
>
>
On Thu, Jan 10, 2019 at 04:43:57AM -0700, William Kucharski wrote:
>
>
> > On Jan 9, 2019, at 8:08 PM, Yu Zhao wrote:
> >
> > find_get_pages_range() and find_get_pages_range_tag() already
> > correctly increment reference count on head when seeing compound
> >
On Mon, Feb 18, 2019 at 03:12:23PM +, Will Deacon wrote:
> [+Mark]
>
> On Thu, Feb 14, 2019 at 02:16:42PM -0700, Yu Zhao wrote:
> > Switch from per mm_struct to per pmd page table lock by enabling
> > ARCH_ENABLE_SPLIT_PMD_PTLOCK. This provides better granularity
On Mon, Feb 18, 2019 at 12:49:38PM -0700, Yu Zhao wrote:
> On Mon, Feb 18, 2019 at 03:12:23PM +, Will Deacon wrote:
> > [+Mark]
> >
> > On Thu, Feb 14, 2019 at 02:16:42PM -0700, Yu Zhao wrote:
> > > Switch from per mm_struct to per p
it_mm.
Signed-off-by: Yu Zhao
---
arch/arm64/mm/mmu.c | 15 +--
1 file changed, 13 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index fa7351877af3..e8bf8a6300e8 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -370,6 +370,16 @@ s
For pte page, use pgtable_page_ctor(); for pmd page, use
pgtable_pmd_page_ctor() if not folded; and for the rest (pud,
p4d and pgd), don't use any.
Signed-off-by: Yu Zhao
---
arch/arm64/mm/mmu.c | 33 +
1 file changed, 21 insertions(+), 12 deletions(-)
t enable it now.
Signed-off-by: Yu Zhao
---
arch/arm64/Kconfig | 3 +++
arch/arm64/include/asm/pgalloc.h | 12 +++-
arch/arm64/include/asm/tlb.h | 5 -
3 files changed, 18 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconf
On Tue, Feb 19, 2019 at 09:51:01AM +0530, Anshuman Khandual wrote:
>
>
> On 02/19/2019 04:43 AM, Yu Zhao wrote:
> > For pte page, use pgtable_page_ctor(); for pmd page, use
> > pgtable_pmd_page_ctor() if not folded; and for the rest (pud,
> > p4d and pgd), don
For dax pmd, pmd_trans_huge() returns false but pmd_huge() returns
true on x86. So the function works as long as hugetlb is configured.
However, dax doesn't depend on hugetlb.
Signed-off-by: Yu Zhao
---
mm/gup.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/gup.c
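A sketch of the distinction (conceptual; the actual two-line hunk is not
shown above): pmd_devmap() is the test that covers dax independently of
hugetlb, whereas pmd_huge() is a stub returning false when
CONFIG_HUGETLB_PAGE is off.

	pmd_t pmd = READ_ONCE(*pmdp);

	if (pmd_trans_huge(pmd) || pmd_devmap(pmd)) {
		/* Huge entry from THP or dax: handle as one large mapping. */
	} else if (unlikely(pmd_huge(pmd))) {
		/* hugetlb only; compiles away when hugetlb is off, which is
		 * why relying on it alone silently drops the dax case. */
	}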
On Wed, Sep 12, 2018 at 11:20:20AM +0100, Mark Brown wrote:
> On Tue, Sep 11, 2018 at 03:12:46PM -0600, Yu Zhao wrote:
> > This reverts commit 12eeeb4f4733bbc4481d01df35933fc15beb8b19.
> >
> > The patch doesn't fix accessing memory with null pointer in
> > skl_int
once on null dma buffer pointer during the
initialization.
Reviewed-by: Takashi Iwai
Signed-off-by: Yu Zhao
---
sound/hda/hdac_controller.c | 8 ++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/sound/hda/hdac_controller.c b/sound/hda/hdac_controller.c
index 560ec0986e1a
shi Iwai
Signed-off-by: Yu Zhao
---
include/sound/hdaudio.h | 1 +
sound/hda/hdac_controller.c | 7 ---
sound/soc/intel/skylake/skl.c | 2 +-
3 files changed, 6 insertions(+), 4 deletions(-)
diff --git a/include/sound/hdaudio.h b/include/sound/hdaudio.h
index 6f1e1f3b3063..cd1773d0e0
eb4f4733b ("ASoC: Intel: Skylake: Acquire irq after RIRB allocation")
Signed-off-by: Yu Zhao
---
sound/soc/intel/skylake/skl.c | 10 --
1 file changed, 4 insertions(+), 6 deletions(-)
diff --git a/sound/soc/intel/skylake/skl.c b/sound/soc/intel/skylake/skl.c
index e7fd14daeb4f..
+0x184/0x1bb
[ 25.824804] entry_SYSCALL_64_after_hwframe+0x3d/0xa2
[ 25.895502] RIP: rdev_get_name+0x29/0xa5 RSP: 8801d45779f0
[ 26.550863] ---[ end trace fb2a7bb4f63aeba5 ]---
Signed-off-by: Yu Zhao
---
drivers/regulator/core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff
Host ctl2: 0x0008
mmc1: sdhci: ADMA Err: 0x | ADMA Ptr: 0x
mmc1: sdhci:
The problem happens during wakeup from S3. Adding a delay quirk
after power up reliably fixes the problem.
Signed-off-by: Yu Zhao
---
drivers/mmc/host/sdhc
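A hedged sketch of what such a quirk can look like (the wrapper name and
the 5 ms value are illustrative, not from the patch):

	/* Give the controller time to settle after powering back up from S3. */
	static void sdhci_set_power_with_delay(struct sdhci_host *host,
					       unsigned char mode,
					       unsigned short vdd)
	{
		sdhci_set_power(host, mode, vdd);
		if (mode != MMC_POWER_OFF)
			usleep_range(5000, 5500);	/* settle time */
	}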
fset: 0xc80 from 0x8100 (relocation
range: 0x8000-0xbfff)
Signed-off-by: Yu Zhao
---
sound/soc/intel/skylake/skl.c | 10 --
1 file changed, 4 insertions(+), 6 deletions(-)
diff --git a/sound/soc/intel/skylake/skl.c b/sound/soc/intel/skylake/skl.c
index cf0972
once on null dma buffer pointer during the
initialization.
Signed-off-by: Yu Zhao
---
sound/hda/hdac_controller.c | 8 ++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/sound/hda/hdac_controller.c b/sound/hda/hdac_controller.c
index 560ec0986e1a..11057d9f84ec 100644
--- a
't be able to do
so; 2) we aren't ready to handle interrupts yet, and the kernel crashes
when an interrupt comes in.
Rename azx_reset() to snd_hdac_bus_reset_link(), and use it to reset
the device properly.
Fixes: 60767abcea3d ("ASoC: Intel: Skylake: Reset the controller in probe")
Signed-off-by
On Tue, Sep 11, 2018 at 05:36:36PM +0100, Mark Brown wrote:
> On Tue, Sep 11, 2018 at 08:03:21AM +0200, Takashi Iwai wrote:
> > Yu Zhao wrote:
>
> > > Will fix the problems in the following patches. Also attaching the
> > > crash for future reference.
>
On Tue, Sep 11, 2018 at 08:06:49AM +0200, Takashi Iwai wrote:
> On Mon, 10 Sep 2018 23:21:50 +0200,
> Yu Zhao wrote:
> >
> > In snd_hdac_bus_init_chip(), we enable interrupts before
> > snd_hdac_bus_init_cmd_io() initializes the dma buffers. If the irq has
> > been acquire
cation")
Signed-off-by: Yu Zhao
---
sound/soc/intel/skylake/skl.c | 10 --
1 file changed, 4 insertions(+), 6 deletions(-)
diff --git a/sound/soc/intel/skylake/skl.c b/sound/soc/intel/skylake/skl.c
index e7fd14daeb4f..d174cbe35f7a 100644
--- a/sound/soc/intel/skylake/skl.c
+++ b
once on null dma buffer pointer during the
initialization.
Reviewed-by: Takashi Iwai
Signed-off-by: Yu Zhao
---
sound/hda/hdac_controller.c | 8 ++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/sound/hda/hdac_controller.c b/sound/hda/hdac_controller.c
index 560ec0986e1a
shi Iwai
Signed-off-by: Yu Zhao
---
include/sound/hdaudio.h | 1 +
sound/hda/hdac_controller.c | 7 ---
sound/soc/intel/skylake/skl.c | 2 +-
3 files changed, 6 insertions(+), 4 deletions(-)
diff --git a/include/sound/hdaudio.h b/include/sound/hdaudio.h
index 6f1e1f3b3063..cd1773d0e0
AM, Michal Hocko wrote:
> > > On Fri 02-12-16 15:38:48, Michal Hocko wrote:
> > >> On Fri 02-12-16 09:24:35, Dan Streetman wrote:
> > >> > On Fri, Dec 2, 2016 at 8:46 AM, Michal Hocko wrote:
> > >> > > On Wed 30-11-16 13:15:16, Yu Zhao wrote:
On Fri, Dec 02, 2016 at 02:46:06PM +0100, Michal Hocko wrote:
> On Wed 30-11-16 13:15:16, Yu Zhao wrote:
> > __unregister_cpu_notifier() only removes registered notifier from its
> > linked list when CPU hotplug is configured. If we free registered CPU
> > notifier when HOTP
simply disable the CPU notifier when CPU hotplug
is not configured (which is perfectly safe because the code in question
is called after all possible CPUs are online and will remain online
until power off).
Signed-off-by: Yu Zhao
---
mm/zswap.c | 12
1 file changed, 12 insertions(+)
diff
simply disable the CPU notifier when CPU hotplug
is not configured (which is perfectly safe because the code in question
is called after all possible CPUs are online and will remain online
until power off).
v2: #ifdef for cpu_notifier_register_done during cleanup.
Signed-off-by: Yu Zhao
---
mm/zswap.c
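A sketch of the guard described above, using the notifier API of that era
(the notifier block name is an assumption):

	#ifdef CONFIG_HOTPLUG_CPU
		cpu_notifier_register_begin();
		__register_cpu_notifier(&zswap_cpu_notifier_block);
		cpu_notifier_register_done();
	#endif
	/* Without CONFIG_HOTPLUG_CPU, every possible CPU is already online
	 * here and stays online until power off, so the per-CPU setup can
	 * run once with no notifier at all. */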
Signed-off-by: Yu Zhao
---
mm/shmem.c | 16
1 file changed, 16 insertions(+)
diff --git a/mm/shmem.c b/mm/shmem.c
index 4caf8ed..37e7933 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -542,6 +542,21 @@ void shmem_truncate_range(struct inode *inode, loff_t lstart, loff_t l
This is a trivial but worthwhile clean-up patch. There should be
no side effects except page->lru is temporarily poisoned after it's
deleted but before it's added to the new list in move_pages_to_lru()
(which is not a problem).
Signed-off-by: Yu Zhao
---
mm/swap.c | 4 +---
This is a trivial clean-up patch. Take it or leave it.
Signed-off-by: Yu Zhao
---
include/linux/mmzone.h | 12
include/linux/vmstat.h | 2 +-
mm/vmscan.c| 2 +-
3 files changed, 10 insertions(+), 6 deletions(-)
diff --git a/include/linux/mmzone.h b/include/linux