Re: [PATCH 00/11] mm: lru related cleanups

2020-12-15 Thread Yu Zhao
On Thu, Dec 10, 2020 at 05:28:08PM +0800, Alex Shi wrote: > Hi Yu, > > btw, after this patchset, to do cacheline alignment on each of lru lists > are possible, so did you try that to see performance changes? I ran a Chrome-based performance benchmark without memcg and with one memcg many times. T

Re: [PATCH 07/11] mm: VM_BUG_ON lru page flags

2020-12-15 Thread Yu Zhao
On Mon, Dec 07, 2020 at 10:24:29PM +, Matthew Wilcox wrote: > On Mon, Dec 07, 2020 at 03:09:45PM -0700, Yu Zhao wrote: > > Move scattered VM_BUG_ONs to two essential places that cover all > > lru list additions and deletions. > > I'd like to see these converted int

Re: [PATCH 1/6] arm64: pgtable: Fix pte_accessible()

2020-11-20 Thread Yu Zhao
eboot my devices everyday, and therefore: Acked-by: Yu Zhao > Cc: > Fixes: 76c714be0e5e ("arm64: pgtable: implement pte_accessible()") > Reported-by: Yu Zhao > Signed-off-by: Will Deacon > --- > arch/arm64/include/asm/pgtable.h | 4 +--- > 1 file changed,

Re: [PATCH 4/6] mm: proc: Invalidate TLB after clearing soft-dirty page state

2020-11-20 Thread Yu Zhao
On Fri, Nov 20, 2020 at 02:35:55PM +, Will Deacon wrote: > Since commit 0758cd830494 ("asm-generic/tlb: avoid potential double flush"), > TLB invalidation is elided in tlb_finish_mmu() if no entries were batched > via the tlb_remove_*() functions. Consequently, the page-table modifications > pe

Re: [PATCH 6/6] mm: proc: Avoid fullmm flush for young/dirty bit toggling

2020-11-20 Thread Yu Zhao
On Fri, Nov 20, 2020 at 02:35:57PM +, Will Deacon wrote: > clear_refs_write() uses the 'fullmm' API for invalidating TLBs after > updating the page-tables for the current mm. However, since the mm is not > being freed, this can result in stale TLB entries on architectures which > elide 'fullmm'

Re: [PATCH 4/6] mm: proc: Invalidate TLB after clearing soft-dirty page state

2020-11-20 Thread Yu Zhao
On Fri, Nov 20, 2020 at 01:22:53PM -0700, Yu Zhao wrote: > On Fri, Nov 20, 2020 at 02:35:55PM +, Will Deacon wrote: > > Since commit 0758cd830494 ("asm-generic/tlb: avoid potential double flush"), > > TLB invalidation is elided in tlb_finish_mmu() if no entries

[PATCH v2 3/3] swap: Increase the max swap files to 8192 on x86_64

2014-04-02 Thread Yu Zhao
offsets in the PTE, it does not actually impose any new restrictions on the maximum size of swap files, as that is currently limited by the use of 32bit values in other parts of the swap code. Signed-off-by: Suleiman Souhlal Signed-off-by: Yu Zhao --- arch/x86/include/asm/pgtable_64.h | 62

[PATCH v2 2/3] swap: do not store private swap files on swap_list

2014-04-02 Thread Yu Zhao
nsert private swap files onto swap_list; this improves the performance of get_swap_page() in such cases, at the cost of making swap_store_swap_device() and swapoff() minutely slower (both of which are non-critical). Signed-off-by: Jamie Liu Signed-off-by: Yu Zhao --- mm/swapfile.c

[PATCH v2 0/3] Per-cgroup swap file support

2014-04-02 Thread Yu Zhao
This series of patches adds support to configure a cgroup to swap to a particular file by using control file memory.swapfile. Originally, cgroups share system-wide swap space and limiting cgroup swapping is not possible. This patchset solves the problem by adding mechanism that isolates cgroup swa

[PATCH v2 1/3] mm/swap: support per memory cgroup swapfiles

2014-04-02 Thread Yu Zhao
they go up the hierarchy until someone who has swap file set up is found). The path of the swap file is set by writing to memory.swapfile. Details of the API can be found in Documentation/cgroups/memory.txt. Signed-off-by: Suleiman Souhlal Signed-off-by: Yu Zhao --- Documentation/cgroups/memor

Re: [PATCH v2 0/3] Per-cgroup swap file support

2014-04-02 Thread Yu Zhao
On Wed, Apr 02, 2014 at 04:54:33PM -0400, Johannes Weiner wrote: > On Wed, Apr 02, 2014 at 01:34:06PM -0700, Yu Zhao wrote: > > This series of patches adds support to configure a cgroup to swap to a > > particular file by using control file memory.swapfile. > > > >

[PATCH 0/3] Per cgroup swap file support

2014-03-21 Thread Yu Zhao
This series of patches adds support to configure a cgroup to swap to a particular file by using control file memory.swapfile. A value of "default" in memory.swapfile indicates that this cgroup should use the default, system-wide, swap files. A value of "none" indicates that this cgroup should neve
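
A minimal usage sketch of the control file described above (hedged: the cgroup v1 mount point and group names are assumptions; only the memory.swapfile name and its "default"/"none"/path values come from this posting):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Write a value into a cgroup's memory.swapfile control file. */
static int set_swapfile(const char *cgroup, const char *value)
{
	char path[256];
	int fd, ret;

	snprintf(path, sizeof(path),
		 "/sys/fs/cgroup/memory/%s/memory.swapfile", cgroup);
	fd = open(path, O_WRONLY);
	if (fd < 0)
		return -1;
	ret = write(fd, value, strlen(value)) < 0 ? -1 : 0;
	close(fd);
	return ret;
}

int main(void)
{
	set_swapfile("batch", "/swap/batch.img"); /* private swap file */
	set_swapfile("critical", "none");         /* never swap */
	set_swapfile("misc", "default");          /* system-wide swap */
	return 0;
}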

[PATCH 2/3] swap: do not store private swap files on swap_list

2014-03-21 Thread Yu Zhao
nsert private swap files onto swap_list; this improves the performance of get_swap_page() in such cases, at the cost of making swap_store_swap_device() and swapoff() minutely slower (both of which are non-critical). Signed-off-by: Jamie Liu Signed-off-by: Yu Zhao --- mm/swapfile.c

[PATCH 3/3] swap: Increase the maximum number of swap files to 8192.

2014-03-21 Thread Yu Zhao
offsets in the PTE, it does not actually impose any new restrictions on the maximum size of swap files, as that is currently limited by the use of 32bit values in other parts of the swap code. Signed-off-by: Suleiman Souhlal Signed-off-by: Yu Zhao --- arch/x86/include/asm/pgtable_64.h | 63

[PATCH 1/3] mm/swap: support per memory cgroup swapfiles

2014-03-21 Thread Yu Zhao
they go up the hierarchy until someone who has swap file set up is found). The path of the swap file is set by writing to memory.swapfile. Details of the API can be found in Documentation/cgroups/memory.txt. Signed-off-by: Suleiman Souhlal Signed-off-by: Yu Zhao --- Documentation/cgroups/memor

[PATCH 1/2] mm: free compound page with correct order

2014-10-14 Thread Yu Zhao
more general. Signed-off-by: Yu Zhao --- mm/huge_memory.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 74c78aa..780d12c 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -200,7 +200,7 @@ retry: preempt_disable

[PATCH 2/2] mm: verify compound order when freeing a page

2014-10-14 Thread Yu Zhao
This allows us to easily catch the bug fixed in the previous patch. Here we also verify whether a page is a tail page or not -- tail pages are supposed to be freed along with their head, not by themselves. Signed-off-by: Yu Zhao --- mm/page_alloc.c | 3 +++ 1 file changed, 3 insertions(+) diff

[PATCH v2 2/2] mm: verify compound order when freeing a page

2014-10-15 Thread Yu Zhao
This allows us to catch the bug fixed in the previous patch (mm: free compound page with correct order). Here we also verify whether a page is a tail page or not -- tail pages are supposed to be freed along with their head, not by themselves. Reviewed-by: Kirill A. Shutemov Signed-off-by: Yu Zhao

[PATCH v2 1/2] mm: free compound page with correct order

2014-10-15 Thread Yu Zhao
general. Acked-by: Kirill A. Shutemov Fixes: 97ae17497e99 ("thp: implement refcounting for huge zero page") Cc: sta...@vger.kernel.org (v3.8+) Signed-off-by: Yu Zhao --- mm/huge_memory.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/mm/huge_memory.c b/mm/hug

Re: [PATCH v2 1/2] mm: free compound page with correct order

2014-10-24 Thread Yu Zhao
On Wed, Oct 15, 2014 at 12:30:44PM -0700, Andrew Morton wrote: > On Wed, 15 Oct 2014 12:20:04 -0700 Yu Zhao wrote: > > > Compound page should be freed by put_page() or free_pages() with > > correct order. Not doing so will cause tail pages leaked. > > > > The com

[PATCH] mm: use unsigned long constant for page flags

2016-04-29 Thread Yu Zhao
r overflow in expression [-Werror=overflow] Signed-off-by: Yu Zhao --- include/linux/page-flags.h | 18 +- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h index f4ed4f1b..a5c 100644 --- a/include/linux/page-

Re: [PATCH] mm/zpool: use workqueue for zpool_destroy

2016-04-25 Thread Yu Zhao
On Mon, Apr 25, 2016 at 05:20:10PM -0400, Dan Streetman wrote: > Add a work_struct to struct zpool, and change zpool_destroy_pool to > defer calling the pool implementation destroy. > > The zsmalloc pool destroy function, which is one of the zpool > implementations, may sleep during destruction of

Re: [PATCH] zsmalloc: use workqueue to destroy pool in zpool callback

2016-03-31 Thread Yu Zhao
On Thu, Mar 31, 2016 at 05:46:39PM +0900, Sergey Senozhatsky wrote: > On (03/30/16 08:59), Minchan Kim wrote: > > On Tue, Mar 29, 2016 at 03:02:57PM -0700, Yu Zhao wrote: > > > zs_destroy_pool() might sleep so it shouldn't be used in zpool > > > destroy callback

[PATCH] mm: avoid slub allocation while holding list_lock

2019-09-08 Thread Yu Zhao
->list_lock)->rlock){-.-.}, at: __kmem_cache_shutdown+0x81/0x3cb other info that might help us debug this: Possible unsafe locking scenario: CPU0: lock(&(&n->list_lock)->rlock); lock(&(&n->list_lock)->rlock); *** DEADLOCK
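
A minimal sketch of the fix pattern this splat calls for (assumptions: slub's internal struct kmem_cache_node from mm/slab.h and its list_lock; this is illustrative, not the actual diff): allocate any scratch bitmap before taking list_lock, because kmalloc() can reenter the slab allocator and try to take the same lock.

#include <linux/bitmap.h>
#include <linux/gfp.h>
#include <linux/spinlock.h>

#include "slab.h"	/* struct kmem_cache_node (internal mm header) */

static void count_objects(struct kmem_cache_node *n, unsigned int nr_objs)
{
	unsigned long flags;
	/* Allocate outside the lock; calling kmalloc()/bitmap_zalloc()
	 * under n->list_lock is exactly the recursion lockdep reports. */
	unsigned long *map = bitmap_zalloc(nr_objs, GFP_KERNEL);

	if (!map)
		return;

	spin_lock_irqsave(&n->list_lock, flags);
	/* ... walk the partial/full lists and mark objects in map ... */
	spin_unlock_irqrestore(&n->list_lock, flags);

	bitmap_free(map);
}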

Re: [PATCH] mm: avoid slub allocation while holding list_lock

2019-09-09 Thread Yu Zhao
On Tue, Sep 10, 2019 at 05:57:22AM +0900, Tetsuo Handa wrote: > On 2019/09/10 1:00, Kirill A. Shutemov wrote: > > On Mon, Sep 09, 2019 at 12:10:16AM -0600, Yu Zhao wrote: > >> If we are already under list_lock, don't call kmalloc(). Otherwise we > >> will run into

Re: [PATCH] mm: avoid slub allocation while holding list_lock

2019-09-09 Thread Yu Zhao
On Tue, Sep 10, 2019 at 10:41:31AM +0900, Tetsuo Handa wrote: > Yu Zhao wrote: > > I think we can safely assume PAGE_SIZE is unsigned long aligned and > > page->objects is non-zero. But if you don't feel comfortable with these > > assumptions, I'd be happy to e

[PATCH] mm: replace is_zero_pfn with is_huge_zero_pmd for thp

2019-08-25 Thread Yu Zhao
, and AFAIK nobody complains about it. Signed-off-by: Yu Zhao --- mm/memory.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/mm/memory.c b/mm/memory.c index e2bb51b6242e..ea3c74855b23 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -654,7 +654,7 @@ struct page *vm_normal_page_pmd(s

Re: [PATCH] mm: replace is_zero_pfn with is_huge_zero_pmd for thp

2019-09-04 Thread Yu Zhao
s not what we have in vm_normal_page_pmd(). > > > pmd_trans_huge_lock() makes sure of it. > > > > > > This is a trivial fix for /proc/pid/numa_maps, and AFAIK nobody > > > complains about it. > > > > > > Signed-off-by: Yu Zhao > > > --- >

[PATCH 3/3] mm: lock slub page when listing objects

2019-09-11 Thread Yu Zhao
Though I have no idea what the side effect of a race would be, apparently we want to prevent the free list from being changed while debugging objects in general. Signed-off-by: Yu Zhao --- mm/slub.c | 4 1 file changed, 4 insertions(+) diff --git a/mm/slub.c b/mm/slub.c index f28072c9f2ce

[PATCH 1/3] mm: correct mask size for slub page->objects

2019-09-11 Thread Yu Zhao
"slub: Do not use frozen page flag but a bit in the page counters") Signed-off-by: Yu Zhao --- mm/slub.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/mm/slub.c b/mm/slub.c index 8834563cdb4b..62053ceb4464 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -1

[PATCH 2/3] mm: avoid slub allocation while holding list_lock

2019-09-11 Thread Yu Zhao
s already holding lock: (&(&n->list_lock)->rlock){-.-.}, at: __kmem_cache_shutdown+0x81/0x3cb other info that might help us debug this: Possible unsafe locking scenario: CPU0: lock(&(&n->list_lock)->rlock); lock(&(&n->lis

Re: [PATCH 2/3] mm: avoid slub allocation while holding list_lock

2019-09-11 Thread Yu Zhao
On Thu, Sep 12, 2019 at 03:44:01AM +0300, Kirill A. Shutemov wrote: > On Wed, Sep 11, 2019 at 06:29:28PM -0600, Yu Zhao wrote: > > If we are already under list_lock, don't call kmalloc(). Otherwise we > > will run into deadlock because kmalloc() also tries to gr

[PATCH v2 2/4] mm: clean up validate_slab()

2019-09-11 Thread Yu Zhao
l behavior isn't intended anyway. Signed-off-by: Yu Zhao --- mm/slub.c | 21 - 1 file changed, 8 insertions(+), 13 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index 62053ceb4464..7b7e1ee264ef 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -4386,31 +4386,26 @@ static int cou

[PATCH v2 4/4] mm: lock slub page when listing objects

2019-09-11 Thread Yu Zhao
Though I have no idea what the side effect of such race would be, apparently we want to prevent the free list from being changed while debugging the objects. Signed-off-by: Yu Zhao --- mm/slub.c | 4 1 file changed, 4 insertions(+) diff --git a/mm/slub.c b/mm/slub.c index baa60dd73942

[PATCH v2 1/4] mm: correct mask size for slub page->objects

2019-09-11 Thread Yu Zhao
"slub: Do not use frozen page flag but a bit in the page counters") Signed-off-by: Yu Zhao --- mm/slub.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/mm/slub.c b/mm/slub.c index 8834563cdb4b..62053ceb4464 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -1

[PATCH v2 3/4] mm: avoid slub allocation while holding list_lock

2019-09-11 Thread Yu Zhao
Possible unsafe locking scenario: CPU0: lock(&(&n->list_lock)->rlock); lock(&(&n->list_lock)->rlock); *** DEADLOCK *** Signed-off-by: Yu Zhao --- mm/slub.c | 88 +-- 1 file changed

Re: [PATCH v2 1/4] mm: correct mask size for slub page->objects

2019-09-12 Thread Yu Zhao
On Thu, Sep 12, 2019 at 12:40:35PM +0300, Kirill A. Shutemov wrote: > On Wed, Sep 11, 2019 at 08:31:08PM -0600, Yu Zhao wrote: > > Mask of slub objects per page shouldn't be larger than what > > page->objects can hold. > > > > It requires more than 2^15 obje

Re: [PATCH v2 4/4] mm: lock slub page when listing objects

2019-09-12 Thread Yu Zhao
On Thu, Sep 12, 2019 at 01:06:42PM +0300, Kirill A. Shutemov wrote: > On Wed, Sep 11, 2019 at 08:31:11PM -0600, Yu Zhao wrote: > > Though I have no idea what the side effect of such race would be, > > apparently we want to prevent the free list from being changed > > while

[PATCH v3 2/2] mm: avoid slub allocation while holding list_lock

2019-09-13 Thread Yu Zhao
Possible unsafe locking scenario: CPU0: lock(&(&n->list_lock)->rlock); lock(&(&n->list_lock)->rlock); *** DEADLOCK *** Acked-by: Kirill A. Shutemov Signed-off-by: Yu Zhao --- mm/slub.c | 88 +-

[PATCH v3 1/2] mm: clean up validate_slab()

2019-09-13 Thread Yu Zhao
l behavior isn't intended anyway. Signed-off-by: Yu Zhao --- mm/slub.c | 21 - 1 file changed, 8 insertions(+), 13 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index 8834563cdb4b..445ef8b2aec0 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -4384,31 +4384,26 @@ static int cou

[PATCH v2] mm: don't expose page to fast gup prematurely

2019-09-14 Thread Yu Zhao
wapBacked() to change. Signed-off-by: Yu Zhao --- kernel/events/uprobes.c | 2 ++ mm/huge_memory.c | 4 mm/khugepaged.c | 2 ++ mm/memory.c | 10 +- mm/migrate.c | 2 ++ mm/swapfile.c | 6 -- mm/userfaultfd.c | 2 ++

Re: [PATCH v3 3/4] mm: don't expose non-hugetlb page to fast gup prematurely

2019-10-01 Thread Yu Zhao
On Tue, Oct 01, 2019 at 03:31:51PM -0700, John Hubbard wrote: > On 9/26/19 10:06 PM, Yu Zhao wrote: > > On Thu, Sep 26, 2019 at 08:26:46PM -0700, John Hubbard wrote: > >> On 9/26/19 3:20 AM, Kirill A. Shutemov wrote: > >>> On Wed, Sep 25, 2019 at 04:26:54PM -0600, Yu

[PATCH] mm: update comments in slub.c

2019-10-07 Thread Yu Zhao
Slub doesn't use PG_active and PG_error anymore. Signed-off-by: Yu Zhao --- mm/slub.c | 6 ++ 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index 320a1c375e1b..cfbc839dc2ea 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -93,9 +93,7 @@ * minimal so we

Re: [PATCH v2] mm: don't expose page to fast gup prematurely

2019-09-24 Thread Yu Zhao
On Tue, Sep 24, 2019 at 02:23:16PM +0300, Kirill A. Shutemov wrote: > On Sat, Sep 14, 2019 at 01:05:18AM -0600, Yu Zhao wrote: > > We don't want to expose page to fast gup running on a remote CPU > > before all local non-atomic ops on page flags are visible first. > > &g

[PATCH v3 2/4] mm: don't expose hugetlb page to fast gup prematurely

2019-09-24 Thread Yu Zhao
cache() serves as a valid write barrier. Signed-off-by: Yu Zhao --- mm/hugetlb.c | 8 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 6d7296dd11b8..0be5b7937085 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -3693,7 +3693,7 @@ static vm_fa

[PATCH v3 4/4] mm: remove unnecessary smp_wmb() in __SetPageUptodate()

2019-09-24 Thread Yu Zhao
previous patch. The one in shmem_mfill_atomic_pte() doesn't need an explicit write barrier because of the following shmem_add_to_page_cache(). Signed-off-by: Yu Zhao --- include/linux/page-flags.h | 6 +- kernel/events/uprobes.c | 2 +- mm/huge_memory.c | 11 +++---

[PATCH v3 3/4] mm: don't expose non-hugetlb page to fast gup prematurely

2019-09-24 Thread Yu Zhao
an existing smp_wmb(). Signed-off-by: Yu Zhao --- kernel/events/uprobes.c | 2 ++ mm/huge_memory.c | 6 ++ mm/khugepaged.c | 2 ++ mm/memory.c | 10 +- mm/migrate.c | 2 ++ mm/swapfile.c | 6 -- mm/userfaultfd.c | 2 ++

[PATCH v3 1/4] mm: remove unnecessary smp_wmb() in collapse_huge_page()

2019-09-24 Thread Yu Zhao
__SetPageUptodate() always has a built-in smp_wmb() to make sure user data copied to a new page appears before set_pmd_at(). Signed-off-by: Yu Zhao --- mm/khugepaged.c | 7 --- 1 file changed, 7 deletions(-) diff --git a/mm/khugepaged.c b/mm/khugepaged.c index ccede2425c3f..70ff98e1414d
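
For reference, the helper's definition in include/linux/page-flags.h at the time (comment added here) shows the built-in barrier this patch relies on:

static __always_inline void __SetPageUptodate(struct page *page)
{
	VM_BUG_ON_PAGE(PageTail(page), page);
	/* orders the user-data copy before the uptodate bit, and before
	 * any later set_pmd_at() that publishes the page */
	smp_wmb();
	__set_bit(PG_uptodate, &page->flags);
}

So a collapse-style sequence of copy-to-page, __SetPageUptodate(), set_pmd_at() needs no extra smp_wmb() in between.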

Re: [PATCH v3 4/4] mm: remove unnecessary smp_wmb() in __SetPageUptodate()

2019-09-25 Thread Yu Zhao
On Tue, Sep 24, 2019 at 04:50:36PM -0700, Matthew Wilcox wrote: > On Tue, Sep 24, 2019 at 05:24:59PM -0600, Yu Zhao wrote: > > +/* > > + * Only use this function when there is a following write barrier, e.g., > > + * an explicit smp_wmb() and/or the page will be added to page

Re: [PATCH v3 3/4] mm: don't expose non-hugetlb page to fast gup prematurely

2019-09-25 Thread Yu Zhao
On Wed, Sep 25, 2019 at 10:25:30AM +0200, Peter Zijlstra wrote: > On Tue, Sep 24, 2019 at 05:24:58PM -0600, Yu Zhao wrote: > > We don't want to expose a non-hugetlb page to the fast gup running > > on a remote CPU before all local non-atomic ops on the page flags &

Re: [PATCH v2] mm: don't expose page to fast gup prematurely

2019-09-25 Thread Yu Zhao
On Wed, Sep 25, 2019 at 03:17:50PM +0300, Kirill A. Shutemov wrote: > On Tue, Sep 24, 2019 at 04:05:50PM -0600, Yu Zhao wrote: > > On Tue, Sep 24, 2019 at 02:23:16PM +0300, Kirill A. Shutemov wrote: > > > On Sat, Sep 14, 2019 at 01:05:18AM -0600, Yu Zhao wrote: > > > >

[PATCH] mm: replace list_move_tail() with add_page_to_lru_list_tail()

2019-07-16 Thread Yu Zhao
This is a cleanup patch that replaces two historical uses of list_move_tail() with relatively recent add_page_to_lru_list_tail(). Signed-off-by: Yu Zhao --- mm/swap.c | 14 ++ 1 file changed, 6 insertions(+), 8 deletions(-) diff --git a/mm/swap.c b/mm/swap.c index ae300397dfda

[PATCH] arm64: mm: enable per pmd page table lock

2019-02-14 Thread Yu Zhao
t enable it now. Signed-off-by: Yu Zhao --- arch/arm64/Kconfig | 3 +++ arch/arm64/include/asm/pgalloc.h | 12 +++- arch/arm64/include/asm/tlb.h | 5 - 3 files changed, 18 insertions(+), 2 deletions(-) diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconf

Re: [PATCH v2 1/3] arm64: mm: use appropriate ctors for page tables

2019-02-19 Thread Yu Zhao
On Tue, Feb 19, 2019 at 11:47:12AM +0530, Anshuman Khandual wrote: > + Matthew Wilcox > > On 02/19/2019 11:02 AM, Yu Zhao wrote: > > On Tue, Feb 19, 2019 at 09:51:01AM +0530, Anshuman Khandual wrote: > >> > >> > >> On 02/19/2019 04:43 AM, Yu Zhao wrote:

Re: [PATCH v2 1/3] arm64: mm: use appropriate ctors for page tables

2019-02-20 Thread Yu Zhao
On Wed, Feb 20, 2019 at 03:57:59PM +0530, Anshuman Khandual wrote: > > > On 02/20/2019 03:58 AM, Yu Zhao wrote: > > On Tue, Feb 19, 2019 at 11:47:12AM +0530, Anshuman Khandual wrote: > >> + Matthew Wilcox > >> > >> On 02/19/2019 11:02 AM, Yu Zhao wrote

[PATCH] mm: fix potential build error in compaction.h

2019-02-08 Thread Yu Zhao
Declaration of struct node is required regardless. On UMA systems, including compaction.h without a preceding node.h shouldn't cause a build error. Signed-off-by: Yu Zhao --- include/linux/compaction.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/include/linux/compactio
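
A hedged sketch of the header fix being described, assuming compaction.h's existing compaction_register_node()/compaction_unregister_node() prototypes: forward-declare struct node unconditionally so a UMA build that includes compaction.h without node.h still compiles.

/* include/linux/compaction.h (sketch, not the literal diff) */
struct node;	/* needed by both branches below */

#if defined(CONFIG_COMPACTION) && defined(CONFIG_SYSFS) && defined(CONFIG_NUMA)
extern int compaction_register_node(struct node *node);
extern void compaction_unregister_node(struct node *node);
#else
static inline int compaction_register_node(struct node *node)
{
	return 0;
}
static inline void compaction_unregister_node(struct node *node)
{
}
#endif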

Re: [PATCH] mm: don't expose page to fast gup before it's ready

2019-05-14 Thread Yu Zhao
On Tue, May 14, 2019 at 02:25:27PM -0700, Andrew Morton wrote: > On Tue, 9 Jan 2018 02:10:50 -0800 Yu Zhao wrote: > > > > Also what prevents reordering here? There do not seem to be any barriers > > > to prevent __SetPageSwapBacked leak after set_pte_at with your p

[PATCH] mm/shmem: make find_get_pages_range() work for huge page

2019-01-09 Thread Yu Zhao
em because, AFAIK, nobody calls these functions on (huge) shmem. Fix them anyway just in case. Signed-off-by: Yu Zhao --- mm/filemap.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/mm/filemap.c b/mm/filemap.c index 81adec8ee02c..cf5fd773314a 100644 --- a/mm/filemap.

Re: [PATCH v2 2/3] arm64: mm: don't call page table ctors for init_mm

2019-03-08 Thread Yu Zhao
On Tue, Feb 26, 2019 at 03:13:07PM +, Mark Rutland wrote: > Hi, > > On Mon, Feb 18, 2019 at 04:13:18PM -0700, Yu Zhao wrote: > > init_mm doesn't require page table lock to be initialized at > > any level. Add a separate page table allocator for it, and the > >

Re: [PATCH v2 1/3] arm64: mm: use appropriate ctors for page tables

2019-03-08 Thread Yu Zhao
On Tue, Feb 26, 2019 at 03:12:31PM +, Mark Rutland wrote: > Hi, > > On Mon, Feb 18, 2019 at 04:13:17PM -0700, Yu Zhao wrote: > > For pte page, use pgtable_page_ctor(); for pmd page, use > > pgtable_pmd_page_ctor() if not folded; and for the rest (pud, > >

[PATCH v3 3/3] arm64: mm: enable per pmd page table lock

2019-03-09 Thread Yu Zhao
t enable it now. We only do so when pmd is not folded, so we don't mistakenly call pgtable_pmd_page_ctor() on pud or p4d in pgd_pgtable_alloc(). (We check shift against PMD_SHIFT, which is the same as PUD_SHIFT when pmd is folded). Signed-off-by: Yu Zhao --- arch/arm64/Kconfig |

[PATCH v3 1/3] arm64: mm: use appropriate ctors for page tables

2019-03-09 Thread Yu Zhao
o we won't mistakenly call pgtable_pmd_page_ctor() on pud or p4d. Acked-by: Mark Rutland Signed-off-by: Yu Zhao --- arch/arm64/mm/mmu.c | 36 1 file changed, 24 insertions(+), 12 deletions(-) diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c ind

[PATCH v3 2/3] arm64: mm: don't call page table ctors for init_mm

2019-03-09 Thread Yu Zhao
it_mm. Acked-by: Mark Rutland Signed-off-by: Yu Zhao --- arch/arm64/mm/mmu.c | 15 +-- 1 file changed, 13 insertions(+), 2 deletions(-) diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index f704b291f2c5..d1dc2a2777aa 100644 --- a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/

Re: [PATCH v3 3/3] arm64: mm: enable per pmd page table lock

2019-03-11 Thread Yu Zhao
On Mon, Mar 11, 2019 at 01:58:27PM +0530, Anshuman Khandual wrote: > On 03/10/2019 06:49 AM, Yu Zhao wrote: > > Switch from per mm_struct to per pmd page table lock by enabling > > ARCH_ENABLE_SPLIT_PMD_PTLOCK. This provides better granularity for > > large system. > >

Re: [PATCH v3 3/3] arm64: mm: enable per pmd page table lock

2019-03-11 Thread Yu Zhao
On Mon, Mar 11, 2019 at 12:12:28PM +, Mark Rutland wrote: > Hi, > > On Sat, Mar 09, 2019 at 06:19:06PM -0700, Yu Zhao wrote: > > Switch from per mm_struct to per pmd page table lock by enabling > > ARCH_ENABLE_SPLIT_PMD_PTLOCK. This provides better granularity

Re: [PATCH v3 1/3] arm64: mm: use appropriate ctors for page tables

2019-03-11 Thread Yu Zhao
this series attempts to enable ARCH_ENABLE_SPLIT_PMD_PTLOCK > with > some minimal changes to existing kernel pgtable page allocation code. Hence > just > trying to re-evaluate the series in that isolation. > > On 03/10/2019 06:49 AM, Yu Zhao wrote: > > > For pte page, u

[PATCH v4 4/4] arm64: mm: enable per pmd page table lock

2019-03-11 Thread Yu Zhao
t enable it now. We only do so when pmd is not folded, so we don't mistakenly call pgtable_pmd_page_ctor() on pud or p4d in pgd_pgtable_alloc(). Signed-off-by: Yu Zhao --- arch/arm64/Kconfig | 3 +++ arch/arm64/include/asm/pgalloc.h | 12 +++- arch/arm64/include/asm/t

[PATCH v4 3/4] arm64: mm: call ctor for stage2 pmd page

2019-03-11 Thread Yu Zhao
Call pgtable_pmd_page_ctor() for the pmd page allocated by mmu_memory_cache_alloc() so the kernel won't crash when it's freed through stage2_pmd_free()->pmd_free()->pgtable_pmd_page_dtor(). This is needed if we are going to enable split pmd pt lock. Signed-off-by: Yu Zhao --- arch/a
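
A sketch of the ctor/dtor pairing at issue (illustrative, not the literal KVM diff; the allocation-helper name here is hypothetical): a pmd page whose free path runs pgtable_pmd_page_dtor() must have had the matching ctor run at allocation, otherwise freeing crashes once split pmd ptlocks are enabled.

#include <linux/gfp.h>
#include <linux/mm.h>

static struct page *alloc_stage2_pmd_page(void)
{
	struct page *page = alloc_page(GFP_KERNEL | __GFP_ZERO);

	if (page && !pgtable_pmd_page_ctor(page)) {
		__free_page(page);
		return NULL;
	}
	/* Freed later via pmd_free() -> pgtable_pmd_page_dtor(); without
	 * the ctor above, the dtor operates on an uninitialized split
	 * ptlock and crashes. */
	return page;
}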

[PATCH v4 1/4] arm64: mm: use appropriate ctors for page tables

2019-03-11 Thread Yu Zhao
o we won't mistakenly call pgtable_pmd_page_ctor() on pud or p4d. Acked-by: Mark Rutland Signed-off-by: Yu Zhao --- arch/arm64/mm/mmu.c | 36 1 file changed, 24 insertions(+), 12 deletions(-) diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c ind

[PATCH v4 2/4] arm64: mm: don't call page table ctors for init_mm

2019-03-11 Thread Yu Zhao
it_mm. Acked-by: Mark Rutland Signed-off-by: Yu Zhao --- arch/arm64/mm/mmu.c | 15 +-- 1 file changed, 13 insertions(+), 2 deletions(-) diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index f704b291f2c5..d1dc2a2777aa 100644 --- a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/

Re: [PATCH] mm: fix potential build error in compaction.h

2019-02-12 Thread Yu Zhao
On Tue, Feb 12, 2019 at 01:03:58PM +0100, Michal Hocko wrote: > On Fri 08-02-19 01:04:37, Yu Zhao wrote: > > Declaration of struct node is required regardless. On UMA system, > > including compaction.h without proceeding node.h shouldn't cause > > build error. > &g

Re: [PATCH] mm/shmem: make find_get_pages_range() work for huge page

2019-02-12 Thread Yu Zhao
On Thu, Jan 10, 2019 at 04:43:57AM -0700, William Kucharski wrote: > > > > On Jan 9, 2019, at 8:08 PM, Yu Zhao wrote: > > > > find_get_pages_range() and find_get_pages_range_tag() already > > correctly increment reference count on head when seeing compound > &g

Re: [PATCH] arm64: mm: enable per pmd page table lock

2019-02-18 Thread Yu Zhao
On Mon, Feb 18, 2019 at 03:12:23PM +, Will Deacon wrote: > [+Mark] > > On Thu, Feb 14, 2019 at 02:16:42PM -0700, Yu Zhao wrote: > > Switch from per mm_struct to per pmd page table lock by enabling > > ARCH_ENABLE_SPLIT_PMD_PTLOCK. This provides better granularity

Re: [PATCH] arm64: mm: enable per pmd page table lock

2019-02-18 Thread Yu Zhao
On Mon, Feb 18, 2019 at 12:49:38PM -0700, Yu Zhao wrote: > On Mon, Feb 18, 2019 at 03:12:23PM +, Will Deacon wrote: > > [+Mark] > > > > On Thu, Feb 14, 2019 at 02:16:42PM -0700, Yu Zhao wrote: > > > Switch from per mm_struct to per p

[PATCH v2 2/3] arm64: mm: don't call page table ctors for init_mm

2019-02-18 Thread Yu Zhao
it_mm. Signed-off-by: Yu Zhao --- arch/arm64/mm/mmu.c | 15 +-- 1 file changed, 13 insertions(+), 2 deletions(-) diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index fa7351877af3..e8bf8a6300e8 100644 --- a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/mmu.c @@ -370,6 +370,16 @@ s

[PATCH v2 1/3] arm64: mm: use appropriate ctors for page tables

2019-02-18 Thread Yu Zhao
For pte page, use pgtable_page_ctor(); for pmd page, use pgtable_pmd_page_ctor() if not folded; and for the rest (pud, p4d and pgd), don't use any. Signed-off-by: Yu Zhao --- arch/arm64/mm/mmu.c | 33 + 1 file changed, 21 insertions(+), 12 deletions(-)

[PATCH v2 3/3] arm64: mm: enable per pmd page table lock

2019-02-18 Thread Yu Zhao
t enable it now. Signed-off-by: Yu Zhao --- arch/arm64/Kconfig | 3 +++ arch/arm64/include/asm/pgalloc.h | 12 +++- arch/arm64/include/asm/tlb.h | 5 - 3 files changed, 18 insertions(+), 2 deletions(-) diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconf

Re: [PATCH v2 1/3] arm64: mm: use appropriate ctors for page tables

2019-02-18 Thread Yu Zhao
On Tue, Feb 19, 2019 at 09:51:01AM +0530, Anshuman Khandual wrote: > > > On 02/19/2019 04:43 AM, Yu Zhao wrote: > > For pte page, use pgtable_page_ctor(); for pmd page, use > > pgtable_pmd_page_ctor() if not folded; and for the rest (pud, > > p4d and pgd), don

[PATCH] mm/gup: fix gup_pmd_range() for dax

2019-01-10 Thread Yu Zhao
For dax pmd, pmd_trans_huge() returns false but pmd_huge() returns true on x86. So the function works as long as hugetlb is configured. However, dax doesn't depend on hugetlb. Signed-off-by: Yu Zhao --- mm/gup.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/mm/gup.c
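
The distinction the fix relies on, as a hedged sketch (the predicates are real kernel helpers, but the wrapper function is hypothetical and this is not the literal diff):

/* In a fast-gup style walk: a dax pmd is not a THP, so pmd_trans_huge()
 * is false for it; it should be recognized by pmd_devmap() rather than
 * by pmd_huge(), which is a hugetlb helper and only behaves as expected
 * with CONFIG_HUGETLB_PAGE. */
static bool pmd_is_huge_mapping(pmd_t pmd)
{
	return pmd_trans_huge(pmd) || pmd_devmap(pmd);
}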

Re: [PATCH v2 1/3] Revert "ASoC: Intel: Skylake: Acquire irq after RIRB allocation"

2018-09-12 Thread Yu Zhao
On Wed, Sep 12, 2018 at 11:20:20AM +0100, Mark Brown wrote: > On Tue, Sep 11, 2018 at 03:12:46PM -0600, Yu Zhao wrote: > > This reverts commit 12eeeb4f4733bbc4481d01df35933fc15beb8b19. > > > > The patch doesn't fix accessing memory with null pointer in > > skl_int

[PATCH v3 2/3] ASoC: enable interrupt after dma buffer initialization

2018-09-12 Thread Yu Zhao
once on null dma buffer pointer during the initialization. Reviewed-by: Takashi Iwai Signed-off-by: Yu Zhao --- sound/hda/hdac_controller.c | 8 ++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/sound/hda/hdac_controller.c b/sound/hda/hdac_controller.c index 560ec0986e1a

[PATCH v3 3/3] ASoC: don't call skl_init_chip() to reset intel skl soc

2018-09-12 Thread Yu Zhao
shi Iwai Signed-off-by: Yu Zhao --- include/sound/hdaudio.h | 1 + sound/hda/hdac_controller.c | 7 --- sound/soc/intel/skylake/skl.c | 2 +- 3 files changed, 6 insertions(+), 4 deletions(-) diff --git a/include/sound/hdaudio.h b/include/sound/hdaudio.h index 6f1e1f3b3063..cd1773d0e0

[PATCH v3 1/3] ASoC: Revert "ASoC: Intel: Skylake: Acquire irq after RIRB allocation"

2018-09-12 Thread Yu Zhao
eb4f4733b ("ASoC: Intel: Skylake: Acquire irq after RIRB allocation") Signed-off-by: Yu Zhao --- sound/soc/intel/skylake/skl.c | 10 -- 1 file changed, 4 insertions(+), 6 deletions(-) diff --git a/sound/soc/intel/skylake/skl.c b/sound/soc/intel/skylake/skl.c index e7fd14daeb4f..

[PATCH] regulator: fix crash caused by null driver data

2018-09-19 Thread Yu Zhao
+0x184/0x1bb [ 25.824804] entry_SYSCALL_64_after_hwframe+0x3d/0xa2 [ 25.895502] RIP: rdev_get_name+0x29/0xa5 RSP: 8801d45779f0 [ 26.550863] ---[ end trace fb2a7bb4f63aeba5 ]--- Signed-off-by: Yu Zhao --- drivers/regulator/core.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff

[PATCH] mmc: add quirk for O2 Micro dev 0x8620 rev 0x01

2018-09-23 Thread Yu Zhao
Host ctl2: 0x0008 mmc1: sdhci: ADMA Err: 0x | ADMA Ptr: 0x mmc1: sdhci: The problem happens during wakeup from S3. Adding a delay quirk after power up reliably fixes the problem. Signed-off-by: Yu Zhao --- drivers/mmc/host/sdhc

[PATCH 1/3] Revert "ASoC: Intel: Skylake: Acquire irq after RIRB allocation"

2018-09-10 Thread Yu Zhao
fset: 0xc80 from 0x8100 (relocation range: 0x8000-0xbfff) Signed-off-by: Yu Zhao --- sound/soc/intel/skylake/skl.c | 10 -- 1 file changed, 4 insertions(+), 6 deletions(-) diff --git a/sound/soc/intel/skylake/skl.c b/sound/soc/intel/skylake/skl.c index cf0972

[PATCH 2/3] sound: enable interrupt after dma buffer initialization

2018-09-10 Thread Yu Zhao
once on null dma buffer pointer during the initialization. Signed-off-by: Yu Zhao --- sound/hda/hdac_controller.c | 8 ++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/sound/hda/hdac_controller.c b/sound/hda/hdac_controller.c index 560ec0986e1a..11057d9f84ec 100644 --- a

[PATCH 3/3] sound: don't call skl_init_chip() to reset intel skl soc

2018-09-10 Thread Yu Zhao
't be able to do so; 2) we aren't ready to handle interrupts yet, and the kernel crashes when an interrupt comes in. Rename azx_reset() to snd_hdac_bus_reset_link(), and use it to reset the device properly. Fixes: 60767abcea3d ("ASoC: Intel: Skylake: Reset the controller in probe") Signed-off-by

Re: [PATCH 1/3] Revert "ASoC: Intel: Skylake: Acquire irq after RIRB allocation"

2018-09-11 Thread Yu Zhao
On Tue, Sep 11, 2018 at 05:36:36PM +0100, Mark Brown wrote: > On Tue, Sep 11, 2018 at 08:03:21AM +0200, Takashi Iwai wrote: > > Yu Zhao wrote: > > > > Will fix the problems in the following patches. Also attaching the > > > crash for future reference. >

Re: [PATCH 2/3] sound: enable interrupt after dma buffer initialization

2018-09-11 Thread Yu Zhao
On Tue, Sep 11, 2018 at 08:06:49AM +0200, Takashi Iwai wrote: > On Mon, 10 Sep 2018 23:21:50 +0200, > Yu Zhao wrote: > > > > In snd_hdac_bus_init_chip(), we enable interrupt before > > snd_hdac_bus_init_cmd_io() initializing dma buffers. If irq has > > been acquire

[PATCH v2 1/3] Revert "ASoC: Intel: Skylake: Acquire irq after RIRB allocation"

2018-09-11 Thread Yu Zhao
cation") Signed-off-by: Yu Zhao --- sound/soc/intel/skylake/skl.c | 10 -- 1 file changed, 4 insertions(+), 6 deletions(-) diff --git a/sound/soc/intel/skylake/skl.c b/sound/soc/intel/skylake/skl.c index e7fd14daeb4f..d174cbe35f7a 100644 --- a/sound/soc/intel/skylake/skl.c +++ b

[PATCH v2 2/3] sound: enable interrupt after dma buffer initialization

2018-09-11 Thread Yu Zhao
once on null dma buffer pointer during the initialization. Reviewed-by: Takashi Iwai Signed-off-by: Yu Zhao --- sound/hda/hdac_controller.c | 8 ++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/sound/hda/hdac_controller.c b/sound/hda/hdac_controller.c index 560ec0986e1a

[PATCH v2 3/3] sound: don't call skl_init_chip() to reset intel skl soc

2018-09-11 Thread Yu Zhao
shi Iwai Signed-off-by: Yu Zhao --- include/sound/hdaudio.h | 1 + sound/hda/hdac_controller.c | 7 --- sound/soc/intel/skylake/skl.c | 2 +- 3 files changed, 6 insertions(+), 4 deletions(-) diff --git a/include/sound/hdaudio.h b/include/sound/hdaudio.h index 6f1e1f3b3063..cd1773d0e0

Re: [PATCH] hotplug: make register and unregister notifier API symmetric

2016-12-05 Thread Yu Zhao
AM, Michal Hocko wrote: > > > On Fri 02-12-16 15:38:48, Michal Hocko wrote: > > >> On Fri 02-12-16 09:24:35, Dan Streetman wrote: > > >> > On Fri, Dec 2, 2016 at 8:46 AM, Michal Hocko wrote: > > >> > > On Wed 30-11-16 13:15:16, Yu Zhao wrote:

Re: [PATCH v2] zswap: only use CPU notifier when HOTPLUG_CPU=y

2016-12-05 Thread Yu Zhao
On Fri, Dec 02, 2016 at 02:46:06PM +0100, Michal Hocko wrote: > On Wed 30-11-16 13:15:16, Yu Zhao wrote: > > __unregister_cpu_notifier() only removes registered notifier from its > > linked list when CPU hotplug is configured. If we free registered CPU > > notifier when HOTP

[PATCH] zswap: only use CPU notifier when HOTPLUG_CPU=y

2016-11-11 Thread Yu Zhao
simply disable CPU notifier when CPU hotplug is not configured (which is perfectly safe because the code in question is called after all possible CPUs are online and will remain online until power off). Signed-off-by: Yu Zhao --- mm/zswap.c | 12 1 file changed, 12 insertions(+) diff

[PATCH v2] zswap: only use CPU notifier when HOTPLUG_CPU=y

2016-11-30 Thread Yu Zhao
simply disable CPU notifier when CPU hotplug is not configured (which is perfectly safe because the code in question is called after all possible CPUs are online and will remain online until power off). v2: #ifdef for cpu_notifier_register_done during cleanup. Signed-off-by: Yu Zhao --- mm/zswap.c
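
A minimal sketch of the approach described above (the wrapper names are illustrative; __register_cpu_notifier()/__unregister_cpu_notifier() are the pre-4.10 hotplug notifier API this era of mm/zswap.c used):

#include <linux/cpu.h>

#ifdef CONFIG_HOTPLUG_CPU
static int zswap_cpu_notifier_register(struct notifier_block *nb)
{
	return __register_cpu_notifier(nb);
}

static void zswap_cpu_notifier_unregister(struct notifier_block *nb)
{
	__unregister_cpu_notifier(nb);
}
#else
/* Safe: these run after all possible CPUs are online, and without CPU
 * hotplug those CPUs stay online until power off. */
static inline int zswap_cpu_notifier_register(struct notifier_block *nb)
{
	return 0;
}
static inline void zswap_cpu_notifier_unregister(struct notifier_block *nb)
{
}
#endif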

[PATCH] shmem: recalculate file inode when fstat

2015-07-10 Thread Yu Zhao
off-by: Yu Zhao --- mm/shmem.c | 16 1 file changed, 16 insertions(+) diff --git a/mm/shmem.c b/mm/shmem.c index 4caf8ed..37e7933 100644 --- a/mm/shmem.c +++ b/mm/shmem.c @@ -542,6 +542,21 @@ void shmem_truncate_range(struct inode *inode, loff_t lstart, loff_t l

[PATCH 1/2] mm: use add_page_to_lru_list()/page_lru()/page_off_lru()

2020-08-27 Thread Yu Zhao
This is a trivial but worth having clean-up patch. There should be no side effects except page->lru is temporarily poisoned after it's deleted but before it's added to the new list in move_pages_to_lru() (which is not a problem). Signed-off-by: Yu Zhao --- mm/swap.c | 4 +---

[PATCH 2/2] mm: use self-explanatory macros rather than "2"

2020-08-27 Thread Yu Zhao
This is a trivial clean-up patch. Take it or leave it. Signed-off-by: Yu Zhao --- include/linux/mmzone.h | 12 include/linux/vmstat.h | 2 +- mm/vmscan.c | 2 +- 3 files changed, 10 insertions(+), 6 deletions(-) diff --git a/include/linux/mmzone.h b/include/linux
