Re: [PATCH] mm/hugetlbfs: Unmap pages if page fault raced with hole punch

2016-01-07 Thread Hillf Danton
> > Page faults can race with fallocate hole punch. If a page fault happens > between the unmap and remove operations, the page is not removed and > remains within the hole. This is not the desired behavior. The race > is difficult to detect in user level code as even in the non-race > case, a

Re: [patch] mm, oom: avoid attempting to kill init sharing same memory

2015-12-03 Thread Hillf Danton
> > [rient...@google.com: rewrote changelog] > Acked-by: Michal Hocko > Signed-off-by: Chen Jie > Signed-off-by: David Rientjes > --- Acked-by: Hillf Danton > I removed stable from this patch since the alternative would most likely > be to panic the system

Re: [patch] mm, oom: avoid attempting to kill init sharing same memory

2015-12-03 Thread Hillf Danton
unkillable processes. > > [rient...@google.com: rewrote changelog] > Acked-by: Michal Hocko <mho...@suse.com> > Signed-off-by: Chen Jie <chenj...@huawei.com> > Signed-off-by: David Rientjes <rient...@google.com> > --- Acked-by: Hillf Danton <hillf...@aliba

Re: [PATCH V2] mm/hugetlb resv map memory leak for placeholder entries

2015-12-02 Thread Hillf Danton
and only matches placeholders at the start of range. > > Fixes: feba16e25a57 ("add region_del() to delete a specific range of entries") > Cc: sta...@vger.kernel.org [4.3] > Signed-off-by: Mike Kravetz > Reported-by: Dmitry Vyukov > --- Acked-by: Hillf Danton > m

Re: [PATCH] mm/hugetlb resv map memory leak for placeholder entries

2015-12-01 Thread Hillf Danton
se special placeholder > entries into account in region_del. > > The region_chg error path leak is also fixed. > > Fixes: feba16e25a57 ("add region_del() to delete a specific range of entries") > Cc: sta...@vger.kernel.org [4.3] > Signed-off-by: Mike Kravetz >

Re: [PATCH v1] mm: hugetlb: call huge_pte_alloc() only if ptep is null

2015-11-26 Thread Hillf Danton
a migration/hwpoison entry after > this block, but that's not a problem because we have another !pte_present > check later (we never go into hugetlb_no_page() in that case.) > > Fixes: 290408d4a250 ("hugetlb: hugepage migration core") > Signed-off-by: Naoya Horiguchi > C

Re: [PATCH v1] mm: hugetlb: call huge_pte_alloc() only if ptep is null

2015-11-26 Thread Hillf Danton
rigu...@ah.jp.nec.com> > Cc: <sta...@vger.kernel.org> [2.6.36+] > --- Acked-by: Hillf Danton <hillf...@alibaba-inc.com> > mm/hugetlb.c |8 > 1 files changed, 4 insertions(+), 4 deletions(-) > > diff --git next-20151123/mm/hugetlb.c next-20151123_pa

Re: [PATCH v2] mm: fix swapped Movable and Reclaimable in /proc/pagetypeinfo

2015-11-24 Thread Hillf Danton
> Fixes: 016c13daa5c9e4827eca703e2f0621c131f2cca3 > Fixes: 0aaa29a56e4fb0fc9e24edb649e2733a672ca099 The correct format of the tag is Fixes: commit id ("commit subject")
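
For reference, a well-formed tag looks like the line below (the commit id and subject here are placeholders, not the actual commits referenced above):

    Fixes: 0123456789ab ("subsystem: one-line summary of the commit being fixed")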

Re: [PATCH v1] mm: hugetlb: fix hugepage memory leak caused by wrong reserve count

2015-11-20 Thread Hillf Danton
> > When dequeue_huge_page_vma() in alloc_huge_page() fails, we fall back to > alloc_buddy_huge_page() to directly create a hugepage from the buddy > allocator. > In that case, however, if alloc_buddy_huge_page() succeeds we don't decrement > h->resv_huge_pages, which means that successful
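
The fix implied by this description is to account for the consumed reservation once the fallback allocation succeeds. A minimal sketch of that idea (illustrative only, not the actual patch; the call signature is simplified and page_was_reserved stands for whatever condition the real code uses to detect that the request was backed by a reservation):

    page = alloc_buddy_huge_page(h, nid);
    if (page && page_was_reserved) {
            /* the fallback page satisfied a reserved request, so drop the count */
            spin_lock(&hugetlb_lock);
            h->resv_huge_pages--;
            spin_unlock(&hugetlb_lock);
    }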

Re: [PATCH 5/7] mm/lru: remove unused is_unevictable_lru function

2015-11-16 Thread Hillf Danton
> > Since commit a0b8cab3 ("mm: remove lru parameter from __pagevec_lru_add > and remove parts of pagevec API") there's no user of this function anymore, > so remove it. > > Signed-off-by: Yaowei Bai > --- Acked-by: Hillf Danton > include/linux/mmzon

Re: [PATCH V4] mm: fix kernel crash in khugepaged thread

2015-11-13 Thread Hillf Danton
> > Instead of the condition, we could have: > > __entry->pfn = page ? page_to_pfn(page) : -1; > > > But if there's no reason to do the tracepoint if page is NULL, then > this patch is fine. I'm just throwing out this idea. > we trace only if page is valid ---
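
Spelling out the two options under discussion (a sketch, not the patch's actual tracepoint; trace_mm_khugepaged_event is a placeholder name):

    /* option 1: always emit the event and record -1 for a NULL page */
    __entry->pfn = page ? page_to_pfn(page) : -1;

    /* option 2, the approach taken here: only emit the event for a valid page */
    if (page)
            trace_mm_khugepaged_event(page);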

Re: [PATCH] arch:arm:mm:Correction in the boundary check for module end address.

2015-11-09 Thread Hillf Danton
work with > non-aligned addresses, but if we're going to round the start down, > then rounding the end down as well like that is also buggy. > > unsigned long start = addr; > unsigned long size = PAGE_SIZE * numpages; > unsigned long end = start + size; > >

Re: [PATCH] arch:arm:mm:Correction in the boundary check for module end address.

2015-11-09 Thread Hillf Danton
tart = addr; > unsigned long size = PAGE_SIZE * numpages; > unsigned long end = start + size; > > if (WARN_ON_ONCE(!IS_ALIGNED(addr, PAGE_SIZE)) { > start &= PAGE_MASK; > end = PAGE_ALIGN(end); > } > > would be m
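
Putting the quoted suggestion together in one piece (note the snippet above is missing a closing parenthesis on the if condition; this is a cleaned-up sketch of the reviewer's idea, not the final patch):

    unsigned long start = addr;
    unsigned long size = PAGE_SIZE * numpages;
    unsigned long end = start + size;

    /* warn on a misaligned request, then round start down and end up */
    if (WARN_ON_ONCE(!IS_ALIGNED(addr, PAGE_SIZE))) {
            start &= PAGE_MASK;
            end = PAGE_ALIGN(end);
    }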

Re: [patch] mm, oom: add comment for why oom_adj exists

2015-11-05 Thread Hillf Danton
d Rientjes > --- Acked-by: Hillf Danton > fs/proc/base.c | 10 ++ > 1 file changed, 10 insertions(+) > > diff --git a/fs/proc/base.c b/fs/proc/base.c > --- a/fs/proc/base.c > +++ b/fs/proc/base.c > @@ -1032,6 +1032,16 @@ static ssize_t oom_adj_read(stru

Re: [PATCH 3/4] thp: fix split vs. unmap race

2015-11-04 Thread Hillf Danton
> @@ -1135,20 +1135,12 @@ void do_page_add_anon_rmap(struct page *page, > bool compound = flags & RMAP_COMPOUND; > bool first; > > - if (PageTransCompound(page)) { > + if (compound) { > + atomic_t *mapcount; > VM_BUG_ON_PAGE(!PageLocked(page), page);

Re: [PATCH] mm/hugetlbfs Fix bugs in fallocate hole punch of areas with holes

2015-11-01 Thread Hillf Danton
Andrew, please correct me if I miss/mess anything. > > This hunk is already in the next tree, see below please. > > > > Ah, the whole series to add shmem like code to handle hole punch/fault > races is in the next tree. It has been determined that most of this > series is not necessary. For

Re: [PATCH] mm/hugetlbfs Fix bugs in fallocate hole punch of areas with holes

2015-10-30 Thread Hillf Danton
> > Hugh Dickins pointed out problems with the new hugetlbfs fallocate > hole punch code. These problems are in the routine remove_inode_hugepages > and mostly occur in the case where there are holes in the range of > pages to be removed. These holes could be the result of a previous hole >

Re: [RFC 1/3] mm, oom: refactor oom detection

2015-10-30 Thread Hillf Danton
> On Fri 30-10-15 09:36:26, Michal Hocko wrote: > > On Fri 30-10-15 12:10:15, Hillf Danton wrote: > > [...] > > > > + for_each_zone_zonelist_nodemask(zone, z, ac->zonelist, > > > > ac->high_zoneidx, ac->nodemask) { > > >

Re: [RFC 2/3] mm: throttle on IO only when there are too many dirty and writeback pages

2015-10-29 Thread Hillf Danton
> --- a/mm/page_alloc.c > +++ b/mm/page_alloc.c > @@ -3191,8 +3191,23 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int > order, >*/ > if (__zone_watermark_ok(zone, order, min_wmark_pages(zone), > ac->high_zoneidx, alloc_flags,

Re: [RFC 1/3] mm, oom: refactor oom detection

2015-10-29 Thread Hillf Danton
> +/* > + * Number of backoff steps for potentially reclaimable pages if the direct > reclaim > + * cannot make any progress. Each step will reduce 1/MAX_STALL_BACKOFF of the > + * reclaimable memory. > + */ > +#define MAX_STALL_BACKOFF 16 > + > static inline struct page * >
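
A rough illustration of the arithmetic the quoted comment describes -- each pass without progress backs the reclaim target off by another 1/MAX_STALL_BACKOFF of the reclaimable pages (illustrative only, not the RFC's actual code; stall_backoff and reclaimable are assumed local variables):

    /* stall_backoff is bumped, up to MAX_STALL_BACKOFF, after each pass with no progress */
    unsigned long target = reclaimable -
                           stall_backoff * (reclaimable / MAX_STALL_BACKOFF);
    /* once stall_backoff reaches MAX_STALL_BACKOFF the target drops to ~0 and we give up */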

Re: [PATCH v11 07/14] HMM: mm add helper to update page table when migrating memory v2.

2015-10-22 Thread Hillf Danton
> > > This is a multi-stage process, first we save and replace page table > > > entry with special HMM entry, also flushing tlb in the process. If > > > we run into a non-allocated entry we either use the zero page or we > > > allocate a new page. For swapped entries we try to swap them in. > > > > >

Re: [PATCH v11 02/14] HMM: add special swap filetype for memory migrated to device v2.

2015-10-22 Thread Hillf Danton
> > > + if (cnt_hmm_entry) { > > > + int ret; > > > + > > > + ret = hmm_mm_fork(src_mm, dst_mm, dst_vma, > > > + dst_pmd, start, end); > > > > Given start, s/end/addr/, no? > > No, end is the right upper limit here. > Then in the first loop, hmm_mm_fork

Re: [PATCH v11 07/14] HMM: mm add helper to update page table when migrating memory v2.

2015-10-22 Thread Hillf Danton
> > This is a multi-stage process, first we save and replace page table > entry with special HMM entry, also flushing tlb in the process. If > we run into a non-allocated entry we either use the zero page or we > allocate a new page. For swapped entries we try to swap them in. > Please elaborate why

Re: [PATCH v11 02/14] HMM: add special swap filetype for memory migrated to device v2.

2015-10-22 Thread Hillf Danton
> > When migrating anonymous memory from system memory to device memory > CPU ptes are replaced with special HMM swap entries so that page fault, > get user page (gup), fork, ... are properly redirected to HMM helpers. > > This patch only adds the new swap type entry and hooks HMM helpers >

Re: [PATCH v11 01/14] fork: pass the dst vma to copy_page_range() and its sub-functions.

2015-10-22 Thread Hillf Danton
> > -int copy_page_range(struct mm_struct *dst, struct mm_struct *src, > - struct vm_area_struct *vma); > +int copy_page_range(struct mm_struct *dst_mm, struct mm_struct *src_mm, > + struct vm_area_struct *dst_vma, > + struct vm_area_struct

Re: [PATCH v4 09/11] smack: namespace groundwork

2015-10-15 Thread Hillf Danton
> On czw, 2015-10-15 at 14:41 +0200, Lukasz Pawelczyk wrote: > > > No, not a typo. A regular bug. Thanks for spotting it. Also sync > > mechanism before freeing was missing: > > > Hitfix, will be integrated with the next respin: > > diff --git a/security/smack/smack.h b/security/smack/smack.h

Re: [PATCH v4 09/11] smack: namespace groundwork

2015-10-15 Thread Hillf Danton
> > +static inline void smack_userns_free(struct user_namespace *ns) > +{ > + struct smack_ns *snsp = ns->security; > + struct smack_known *skp; > + struct smack_known_ns *sknp, *n; > + > + list_for_each_entry_safe(sknp, n, &snsp->smk_mapped, smk_list_ns) { > + skp =

Re: Silent hang up caused by pages being not scanned?

2015-10-14 Thread Hillf Danton
> > > > In particular, I think that you'll find that you will have to change > > the heuristics in __alloc_pages_slowpath() where we currently do > > > > if ((did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER) || .. > > > > when the "did_some_progress" logic changes that radically.

Re: [PATCH] hugetlb: clear PG_reserved before setting PG_head on gigantic pages

2015-10-13 Thread Hillf Danton
set PG_head after PG_reserved. > > Signed-off-by: Kirill A. Shutemov > Reported-by: Sasha Levin > --- Acked-by: Hillf Danton > > Andrew, this patch can be folded into "page-flags: define PG_reserved > behavior on compound pages". > > --- > mm/huge

Re: [PATCH v2 08/20] hugetlb: fix compile error on tile

2015-10-10 Thread Hillf Danton
> Include asm/pgtable.h to get the definition for pud_t to fix: > > include/linux/hugetlb.h:203:29: error: unknown type name 'pud_t' > But that type is already used in 4.3-rc4 117 struct page *follow_huge_pud(struct mm_struct *mm, unsigned long address, 118 pud_t

Re: [PATCH V6] mm: memory hot-add: memory can not be added to movable zone defaultly

2015-10-10 Thread Hillf Danton
> > From: Changsheng Liu > > After the user config CONFIG_MOVABLE_NODE, > when memory is hot added, should_add_memory_movable() returns 0 > because all zones including the movable zone are empty, > so the memory that was hot added will be added to the normal zone > and the normal zone will be

Re: [PATCH] workqueue: Allocate the unbound pool using local node memory

2015-10-09 Thread Hillf Danton
> From: Xunlei Pang > > Currently, get_unbound_pool() uses kzalloc() to allocate the > worker pool. Actually, we can use the right node to do the > allocation, achieving local memory access. > > This patch selects target node first, and uses kzalloc_node() > instead. > > Signed-off-by: Xunlei
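
The change being acked boils down to this substitution in get_unbound_pool() (a sketch; node stands for whatever target node the patch derives from the pool's attributes):

    /* before: the pool structure may land on a remote node */
    pool = kzalloc(sizeof(*pool), GFP_KERNEL);

    /* after: allocate the pool on the node its workers will run on */
    pool = kzalloc_node(sizeof(*pool), GFP_KERNEL, node);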

Re: [RFC PATCH 1/2] ext4: Fix possible deadlock with local interrupts disabled and page-draining IPI

2015-10-09 Thread Hillf Danton
> >> @@ -109,8 +109,8 @@ static void ext4_finish_bio(struct bio *bio) > >>if (bio->bi_error) > >>buffer_io_error(bh); > >>} while ((bh = bh->b_this_page) != head); > >> - bit_spin_unlock(BH_Uptodate_Lock, &head->b_state); > >>

Re: [RFC PATCH 1/2] ext4: Fix possible deadlock with local interrupts disabled and page-draining IPI

2015-10-09 Thread Hillf Danton
> @@ -109,8 +109,8 @@ static void ext4_finish_bio(struct bio *bio) > if (bio->bi_error) > buffer_io_error(bh); > } while ((bh = bh->b_this_page) != head); > - bit_spin_unlock(BH_Uptodate_Lock, &head->b_state); >

Re: [PATCH 10/44] kdbus: Use conditional operator

2015-10-08 Thread Hillf Danton
> > Signed-off-by: Sergei Zviagintsev > --- > ipc/kdbus/names.c | 5 + > 1 file changed, 1 insertion(+), 4 deletions(-) > > diff --git a/ipc/kdbus/names.c b/ipc/kdbus/names.c > index bf44ca3f12b6..6b31b38ac2ad 100644 > --- a/ipc/kdbus/names.c > +++ b/ipc/kdbus/names.c > @@ -438,10 +438,7

Re: [PATCH 07/44] kdbus: Fix comment on translation of caps between namespaces

2015-10-08 Thread Hillf Danton
> @@ -730,15 +730,21 @@ static void kdbus_meta_export_caps(struct > kdbus_meta_caps *out, > > /* >* This translates the effective capabilities of 'cred' into the given > - * user-namespace. If the given user-namespace is a child-namespace of > - * the user-namespace of

Re: [PATCH -mm v2 2/3] mm/oom_kill: cleanup the "kill sharing same memory" loop

2015-10-08 Thread Hillf Danton
if (unlikely(p->flags & PF_KTHREAD)) > + continue; Given the result of "grep -nr PF_KTHREAD linux-next/mm", it looks like a helper function, similar to current_is_kswapd(), is needed: int task_is_kthread(struct task_struct *task) Other than that, Acked-by: Hillf Danton
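
The helper being suggested would be something along these lines (a sketch of the proposal, not code from the patch):

    /* analogous to current_is_kswapd(): avoids open-coding the flag test at call sites */
    static inline bool task_is_kthread(struct task_struct *task)
    {
            return task->flags & PF_KTHREAD;
    }

    /* the quoted hunk would then read */
    if (unlikely(task_is_kthread(p)))
            continue;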

Re: [PATCH v3] arm: Fix backtrace generation when IPI is masked

2015-09-15 Thread Hillf Danton
nt is NULL. > > Signed-off-by: Daniel Thompson > --- Acked-by: Hillf Danton > > Notes: > Changes in v3: > > * Added comments to describe how raise_nmi() and nmi_cpu_backtrace() > interact with backtrace_mask (Russell King). > > Changes in v2:

Re: [PATCH 3/3] remoteproc: add CSRatlas7 remoteproc driver

2015-09-15 Thread Hillf Danton
> > In CSRatlas7, Cortex-A7 uses this proc to communicate with Cortex-M3. > But M3 doesn't have to be a slave, it can boot independently or depend > on Linux to load firmware for it. > > We reserve a memory region for data and resource descriptors in DRAM. > > Signed-off-by: Wei Chen > Signed-off-by:

RE: [PATCH] arm: Fix backtrace generation when IPI is masked

2015-09-15 Thread Hillf Danton
> > Better if dump_stack() is added in a separate patch, given that > > it is not mentioned in commit message. > > Adding dump_stack() is mentioned in passing ("Some small changes to the > generic code are required to support this.") but you're right that the > reason for the change is not

RE: [PATCH] arm: Fix backtrace generation when IPI is masked

2015-09-15 Thread Hillf Danton
> > Currently on ARM when <SysRq-L> is triggered from an interrupt handler > (e.g. a SysRq issued using UART or kbd) the main CPU will wedge for ten > seconds with interrupts masked before issuing a backtrace for every CPU > except itself. > > The new backtrace code introduced by commit 96f0e00378d4

RE: [PATCH v3 00/10] hugetlbfs: add fallocate support

2015-07-20 Thread Hillf Danton
hugetlbfs. > > v3: > Fixed issue with region_chg to recheck if there are sufficient > entries in the cache after acquiring lock. > v2: > Fixed leak in resv_map_release discovered by Hillf Danton. > Used LONG_MAX as indicator of truncate function for region_del. >

RE: [PATCH v3 00/10] hugetlbfs: add fallocate support

2015-07-20 Thread Hillf Danton
entries in the cache after acquiring lock. v2: Fixed leak in resv_map_release discovered by Hillf Danton. Used LONG_MAX as indicator of truncate function for region_del. v1: Add a cache of region descriptors to the resv_map for use by region_add in case hole punch deletes entries

RE: [patch v3 3/3] mm, oom: do not panic for oom kills triggered from sysrq

2015-07-10 Thread Hillf Danton
> > > diff --git a/Documentation/sysrq.txt b/Documentation/sysrq.txt > > > --- a/Documentation/sysrq.txt > > > +++ b/Documentation/sysrq.txt > > > @@ -75,7 +75,8 @@ On all - write a character to /proc/sysrq-trigger. > > > e.g.: > > > > > > 'e' - Send a SIGTERM to all processes, except for

Re: [patch v3 3/3] mm, oom: do not panic for oom kills triggered from sysrq

2015-07-08 Thread Hillf Danton
> Sysrq+f is used to kill a process either for debug or when the VM is > otherwise unresponsive. > > It is not intended to trigger a panic when no process may be killed. > > Avoid panicking the system for sysrq+f when no processes are killed. > > Suggested-by: Michal Hocko > Signed-off-by:

Re: [PATCH 02/10] mm/hugetlb: add region_del() to delete a specific range of entries

2015-07-03 Thread Hillf Danton
> fallocate hole punch will want to remove a specific range of pages. > The existing region_truncate() routine deletes all region/reserve > map entries after a specified offset. region_del() will provide > this same functionality if the end of region is specified as -1. > Hence, region_del() can

Re: [PATCH 01/10] mm/hugetlb: add cache of descriptors to resv_map for region_add

2015-07-02 Thread Hillf Danton
> > fallocate hole punch will want to remove a specific range of > pages. When pages are removed, their associated entries in > the region/reserve map will also be removed. This will break > an assumption in the region_chg/region_add calling sequence. > If a new region descriptor must be

Re: [patch v2 1/3] mm, oom: organize oom context into struct

2015-07-01 Thread Hillf Danton
> Subject: [patch v2 1/3] mm, oom: organize oom context into struct [patch v2 2/3] mm, oom: organize oom context into struct [patch v2 3/3] mm, oom: organize oom context into struct I am wondering if a redelivery is needed for the same 3 subject lines. Hillf > > There are essential elements to

Re: [PATCH 19/25] mm, vmscan: Account in vmstat for pages skipped during reclaim

2015-06-12 Thread Hillf Danton
> --- a/mm/vmscan.c > +++ b/mm/vmscan.c > @@ -1326,6 +1326,7 @@ static unsigned long isolate_lru_pages(unsigned long > nr_to_scan, > > for (scan = 0; scan < nr_to_scan && !list_empty(src); scan++) { > struct page *page; > + struct zone *zone; > int

Re: [PATCH 07/25] mm, vmscan: Make kswapd think of reclaim in terms of nodes

2015-06-12 Thread Hillf Danton
> - /* Reclaim above the high watermark. */ > - sc->nr_to_reclaim = max(SWAP_CLUSTER_MAX, high_wmark_pages(zone)); > + /* Aim to reclaim above all the zone high watermarks */ > + for (z = 0; z <= end_zone; z++) { > + zone = pgdat->node_zones + end_zone; s/end_zone/z/ ?
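
With the reviewer's s/end_zone/z/ applied, the loop indexes each zone instead of adding up the same one repeatedly (a sketch of the corrected hunk; the loop body is elided as in the quoted snippet):

    /* Aim to reclaim above all the zone high watermarks */
    for (z = 0; z <= end_zone; z++) {
            zone = pgdat->node_zones + z;   /* was "+ end_zone" */
            /* ... accumulate the per-zone reclaim target as the patch does ... */
    }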

Re: [PATCH 04/25] mm, vmscan: Begin reclaiming pages on a per-node basis

2015-06-11 Thread Hillf Danton
> @@ -1319,6 +1322,7 @@ static unsigned long isolate_lru_pages(unsigned long > nr_to_scan, > struct list_head *src = &lruvec->lists[lru]; > unsigned long nr_taken = 0; > unsigned long scan; > + LIST_HEAD(pages_skipped); > > for (scan = 0; scan < nr_to_scan &&

Re: [PATCH 03/25] mm, vmscan: Move LRU lists to node

2015-06-11 Thread Hillf Danton
> @@ -774,6 +764,21 @@ typedef struct pglist_data { > ZONE_PADDING(_pad1_) > spinlock_t lru_lock; > > + /* Fields commonly accessed by the page reclaim scanner */ > + struct lruvec lruvec; > + > + /* Evictions & activations on the inactive file list

Re: [RFC] kernel random segmentation fault?

2015-05-06 Thread Hillf Danton
> > Hi all: > > I hit a kernel problem with random segmentation faults (x86_64). In my > testcase, the size of local variables exceeds 20MB. > When running the testcase, it causes a segmentation fault (because the default > stack size limit is 8192KB). > When I increase the stack size limit

Re: [PATCH 03/79] ovl: rearrange ovl_follow_link to it doesn't need to call ->put_link

2015-05-05 Thread Hillf Danton
> > From: NeilBrown > > ovl_follow_link current calls ->put_link on an error path. > However ->put_link is about to change in a way that it will be > impossible to call it from ovl_follow_link. > > So rearrange the code to avoid the need for that error path. > Specifically: move the kmalloc()

[patch] ARM: fix module-bound check in setting page attributes

2015-05-03 Thread Hillf Danton
The check was introduced in commit f2ca09f381a59 ("ARM: 8311/1: Don't use is_module_addr in setting page attributes"). There is no need to check start twice; check instead that end is also in range. Signed-off-by: Hillf Danton Acked-by: Laura Abbott --- --- a/arch/arm/mm/pageattr.c Mon May 4 10:33:49 2015
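
The kind of boundary check the changelog describes -- keep the existing test on start, and make the second test bound end rather than re-testing start (a sketch under that reading of the changelog, not the literal hunk from the patch):

    if (start < MODULES_VADDR || start >= MODULES_END)
            return -EINVAL;

    /* the second check must constrain end, not test start again */
    if (end < MODULES_VADDR || end >= MODULES_END)
            return -EINVAL;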
