>
> Page faults can race with fallocate hole punch. If a page fault happens
> between the unmap and remove operations, the page is not removed and
> remains within the hole. This is not the desired behavior. The race
> is difficult to detect in user level code as even in the non-race
> case, a
>
> [rient...@google.com: rewrote changelog]
> Acked-by: Michal Hocko <mho...@suse.com>
> Signed-off-by: Chen Jie <chenj...@huawei.com>
> Signed-off-by: David Rientjes <rient...@google.com>
> ---
Acked-by: Hillf Danton <hillf...@alibaba-inc.com>
> I removed stable from this patch since the alternative would most likely
> be to panic the system
unkillable processes.
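For illustration, a hedged userspace sketch of the racy pattern described above: one thread punches a hole while another faults the same page. The mount point, file name and 2MB size are hypothetical; this only shows the window, not the fix.

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <linux/falloc.h>
    #include <pthread.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define SZ (2UL * 1024 * 1024)	/* one huge page, hypothetical size */

    static char *map;

    static void *faulter(void *arg)
    {
    	map[0] = 1;	/* may fault between the punch's unmap and remove */
    	return NULL;
    }

    int main(void)
    {
    	pthread_t t;
    	int fd = open("/mnt/huge/f", O_CREAT | O_RDWR, 0600); /* hypothetical mount */

    	ftruncate(fd, SZ);
    	map = mmap(NULL, SZ, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    	map[0] = 1;	/* instantiate the page */
    	pthread_create(&t, NULL, faulter, NULL);
    	fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, 0, SZ);
    	pthread_join(t, NULL);
    	return 0;
    }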
and only matches placeholders at the start of range.
>
> Fixes: feba16e25a57 ("add region_del() to delete a specific range of entries")
> Cc: sta...@vger.kernel.org [4.3]
> Signed-off-by: Mike Kravetz <mike.krav...@oracle.com>
> Reported-by: Dmitry Vyukov <dvyu...@google.com>
> ---
Acked-by: Hillf Danton <hillf...@alibaba-inc.com>
se special placeholder
> entries into account in region_del.
>
> The region_chg error path leak is also fixed.
>
> Fixes: feba16e25a57 ("add region_del() to delete a specific range of entries")
> Cc: sta...@vger.kernel.org [4.3]
> Signed-off-by: Mike Kravetz <mike.krav...@oracle.com>
>
a migration/hwpoison entry after
> this block, but that's not a problem because we have another !pte_present
> check later (we never go into hugetlb_no_page() in that case.)
>
> Fixes: 290408d4a250 ("hugetlb: hugepage migration core")
> Signed-off-by: Naoya Horiguchi <n-horigu...@ah.jp.nec.com>
> Cc: <sta...@vger.kernel.org> [2.6.36+]
> ---
Acked-by: Hillf Danton <hillf...@alibaba-inc.com>
> mm/hugetlb.c | 8 ++++----
> 1 files changed, 4 insertions(+), 4 deletions(-)
>
> diff --git next-20151123/mm/hugetlb.c next-20151123_pa
> Fixes: 016c13daa5c9e4827eca703e2f0621c131f2cca3
> Fixes: 0aaa29a56e4fb0fc9e24edb649e2733a672ca099
The correct format of the tag is
Fixes: commit id ("commit subject")
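For example, with the abbreviated SHA and the subject filled in (subjects recalled from the tree at the time, so double-check before use):

    Fixes: 016c13daa5c9 ("mm, page_alloc: distinguish between being unable to sleep, unwilling to sleep and avoiding waking kswapd")
    Fixes: 0aaa29a56e4f ("mm, page_alloc: reserve pageblocks for high-order atomic allocations on demand")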
>
> When dequeue_huge_page_vma() in alloc_huge_page() fails, we fall back to
> alloc_buddy_huge_page() to directly create a hugepage from the buddy
> allocator.
> In that case, however, if alloc_buddy_huge_page() succeeds we don't decrement
> h->resv_huge_pages, which means that successful
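The hit is cut off here; for orientation, a loudly-hypothetical sketch of the accounting gap being described (used_reserve is an illustrative flag, not the actual hugetlb fix):

    	page = alloc_buddy_huge_page(h, nid);	/* fallback succeeded */
    	if (page && used_reserve) {
    		/* the allocation consumed a reservation, so the fallback
    		 * must release it too, else resv_huge_pages stays inflated */
    		h->resv_huge_pages--;
    	}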
>
> Since commit a0b8cab3 ("mm: remove lru parameter from __pagevec_lru_add
> and remove parts of pagevec API") there's no user of this function anymore,
> so remove it.
>
> Signed-off-by: Yaowei Bai <baiyao...@cmss.chinamobile.com>
> ---
Acked-by: Hillf Danton <hillf...@alibaba-inc.com>
> include/linux/mmzon
>
> Instead of the condition, we could have:
>
> __entry->pfn = page ? page_to_pfn(page) : -1;
>
>
> But if there's no reason to do the tracepoint if page is NULL, then
> this patch is fine. I'm just throwing out this idea.
>
we trace only if page is valid
---
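For reference, the shape of the floated alternative inside a hypothetical event definition (event name and prototype invented for illustration, not an existing tracepoint):

    TRACE_EVENT(mm_page_event,	/* hypothetical event */
    	TP_PROTO(struct page *page),
    	TP_ARGS(page),
    	TP_STRUCT__entry(
    		__field(unsigned long, pfn)
    	),
    	TP_fast_assign(
    		/* record -1 rather than suppressing the event */
    		__entry->pfn = page ? page_to_pfn(page) : -1;
    	),
    	TP_printk("pfn=%lu", __entry->pfn)
    );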
work with
> non-aligned addresses, but if we're going to round the start down,
> then rounding the end down as well like that is also buggy.
>
> unsigned long start = addr;
> unsigned long size = PAGE_SIZE * numpages;
> unsigned long end = start + size;
>
> if (WARN_ON_ONCE(!IS_ALIGNED(addr, PAGE_SIZE))) {
> start &= PAGE_MASK;
> end = PAGE_ALIGN(end);
> }
>
> would be m
> Signed-off-by: David Rientjes <rient...@google.com>
> ---
Acked-by: Hillf Danton <hillf...@alibaba-inc.com>
> fs/proc/base.c | 10 ++++++++++
> 1 file changed, 10 insertions(+)
>
> diff --git a/fs/proc/base.c b/fs/proc/base.c
> --- a/fs/proc/base.c
> +++ b/fs/proc/base.c
> @@ -1032,6 +1032,16 @@ static ssize_t oom_adj_read(stru
> @@ -1135,20 +1135,12 @@ void do_page_add_anon_rmap(struct page *page,
> bool compound = flags & RMAP_COMPOUND;
> bool first;
>
> - if (PageTransCompound(page)) {
> + if (compound) {
> + atomic_t *mapcount;
> VM_BUG_ON_PAGE(!PageLocked(page), page);
Andrew, please correct me if I miss/mess anything.
> > This hunk is already in the next tree, see below please.
> >
>
> Ah, the whole series to add shmem like code to handle hole punch/fault
> races is in the next tree. It has been determined that most of this
> series is not necessary. For
>
> Hugh Dickins pointed out problems with the new hugetlbfs fallocate
> hole punch code. These problems are in the routine remove_inode_hugepages
> and mostly occur in the case where there are holes in the range of
> pages to be removed. These holes could be the result of a previous hole
>
> On Fri 30-10-15 09:36:26, Michal Hocko wrote:
> > On Fri 30-10-15 12:10:15, Hillf Danton wrote:
> > [...]
> > > > + for_each_zone_zonelist_nodemask(zone, z, ac->zonelist,
> > > > ac->high_zoneidx, ac->nodemask) {
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3191,8 +3191,23 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> */
> if (__zone_watermark_ok(zone, order, min_wmark_pages(zone),
> ac->high_zoneidx, alloc_flags,
> +/*
> + * Number of backoff steps for potentially reclaimable pages if the direct reclaim
> + * cannot make any progress. Each step will reduce 1/MAX_STALL_BACKOFF of the
> + * reclaimable memory.
> + */
> +#define MAX_STALL_BACKOFF 16
> +
> static inline struct page *
>
> > > This is a multi-stage process, first we save and replace page table
> > > entry with special HMM entry, also flushing tlb in the process. If
> > > we run into non allocated entry we either use the zero page or we
> > > allocate new page. For swapped entries we try to swap them in.
> > >
> >
> > > + if (cnt_hmm_entry) {
> > > + int ret;
> > > +
> > > + ret = hmm_mm_fork(src_mm, dst_mm, dst_vma,
> > > + dst_pmd, start, end);
> >
> > Given start, s/end/addr/, no?
>
> No, end is the right upper limit here.
>
Then in the first loop, hmm_mm_fork
>
> This is a multi-stage process, first we save and replace page table
> entry with special HMM entry, also flushing tlb in the process. If
> we run into non allocated entry we either use the zero page or we
> allocate new page. For swapped entries we try to swap them in.
>
Please elaborate why
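For readers skimming the thread, a loudly-hypothetical paraphrase of the staging the quoted changelog describes (every helper name below is invented; the real patch works on pte ranges under the page table lock):

    	for (addr = start; addr < end; addr += PAGE_SIZE, ptep++) {
    		if (pte_none(*ptep))
    			hmm_use_zero_or_alloc(addr);	/* hypothetical */
    		else if (is_swap_pte(*ptep))
    			hmm_swap_in(addr);		/* hypothetical */
    		else
    			hmm_set_special_entry(ptep);	/* save + replace */
    	}
    	flush_tlb_range(vma, start, end);	/* flushed as part of stage one */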
>
> When migrating anonymous memory from system memory to device memory
> CPU pte are replaced with special HMM swap entry so that page fault,
> get user page (gup), fork, ... are properly redirected to HMM helpers.
>
> This patch only adds the new swap type entry and hooks HMM helpers
>
>
> -int copy_page_range(struct mm_struct *dst, struct mm_struct *src,
> - struct vm_area_struct *vma);
> +int copy_page_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> + struct vm_area_struct *dst_vma,
> + struct vm_area_struct
> On Thu, 2015-10-15 at 14:41 +0200, Lukasz Pawelczyk wrote:
>
> > No, not a typo. A regular bug. Thanks for spotting it. Also sync
> > mechanism before freeing was missing:
>
>
> Hotfix, will be integrated with the next respin:
>
> diff --git a/security/smack/smack.h b/security/smack/smack.h
>
> +static inline void smack_userns_free(struct user_namespace *ns)
> +{
> + struct smack_ns *snsp = ns->security;
> + struct smack_known *skp;
> + struct smack_known_ns *sknp, *n;
> +
> + list_for_each_entry_safe(sknp, n, &snsp->smk_mapped, smk_list_ns) {
> + skp =
> >
> > In particular, I think that you'll find that you will have to change
> > the heuristics in __alloc_pages_slowpath() where we currently do
> >
> > if ((did_some_progress && order <= PAGE_ALLOC_COSTLY_ORDER) || ..
> >
> > when the "did_some_progress" logic changes that radically.
> >
set PG_head after PG_reserved.
>
> Signed-off-by: Kirill A. Shutemov <kirill.shute...@linux.intel.com>
> Reported-by: Sasha Levin <sasha.le...@oracle.com>
> ---
Acked-by: Hillf Danton <hillf...@alibaba-inc.com>
>
> Andrew, this patch can be folded into "page-flags: define PG_reserved
> behavior on compound pages".
>
> ---
> mm/huge
> Include asm/pgtable.h to get the definition for pud_t to fix:
>
> include/linux/hugetlb.h:203:29: error: unknown type name 'pud_t'
>
But that type is already used in 4.3-rc4
117 struct page *follow_huge_pud(struct mm_struct *mm, unsigned long address,
118 pud_t
>
> From: Changsheng Liu
>
> After the user configures CONFIG_MOVABLE_NODE,
> when memory is hot added, should_add_memory_movable() returns 0
> because all zones, including the movable zone, are empty,
> so the memory that was hot added will be added to the normal zone
> and the normal zone will be
> From: Xunlei Pang
>
> Currently, get_unbound_pool() uses kzalloc() to allocate the
> worker pool. Actually, we can use the right node to do the
> allocation, achieving local memory access.
>
> This patch selects target node first, and uses kzalloc_node()
> instead.
>
> Signed-off-by: Xunlei
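The shape of that change, sketched with the node selection condensed into a hypothetical helper (only kzalloc_node() is the real API here):

    	int node = wq_select_unbound_pool_node(attrs);	/* hypothetical helper */

    	/* allocate the pool on its home node for local memory access */
    	pool = kzalloc_node(sizeof(*pool), GFP_KERNEL, node);
    	if (!pool)
    		return NULL;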
> >> @@ -109,8 +109,8 @@ static void ext4_finish_bio(struct bio *bio)
> >> if (bio->bi_error)
> >> buffer_io_error(bh);
> >> } while ((bh = bh->b_this_page) != head);
> >> - bit_spin_unlock(BH_Uptodate_Lock, &head->b_state);
> >>
> @@ -109,8 +109,8 @@ static void ext4_finish_bio(struct bio *bio)
> if (bio->bi_error)
> buffer_io_error(bh);
> } while ((bh = bh->b_this_page) != head);
> - bit_spin_unlock(BH_Uptodate_Lock, &head->b_state);
>
>
>
> Signed-off-by: Sergei Zviagintsev
> ---
> ipc/kdbus/names.c | 5 +----
> 1 file changed, 1 insertion(+), 4 deletions(-)
>
> diff --git a/ipc/kdbus/names.c b/ipc/kdbus/names.c
> index bf44ca3f12b6..6b31b38ac2ad 100644
> --- a/ipc/kdbus/names.c
> +++ b/ipc/kdbus/names.c
> @@ -438,10 +438,7
> @@ -730,15 +730,21 @@ static void kdbus_meta_export_caps(struct kdbus_meta_caps *out,
>
> /*
> * This translates the effective capabilities of 'cred' into the given
> - * user-namespace. If the given user-namespace is a child-namespace of
> - * the user-namespace of
> + if (unlikely(p->flags & PF_KTHREAD))
> + continue;
Given the result of "grep -nr PF_KTHREAD linux-next/mm", it looks
as though a helper function, like current_is_kswapd(), is needed:
int task_is_kthread(struct task_struct *task)
Other than that,
Acked-by: Hillf Danton <hillf...@alibaba-inc.com>
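The suggested helper would be small; a sketch mirroring current_is_kswapd() (name and placement are the reviewer's proposal, not mainline):

    static inline bool task_is_kthread(struct task_struct *task)
    {
    	return task->flags & PF_KTHREAD;
    }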
nt is NULL.
>
> Signed-off-by: Daniel Thompson <daniel.thomp...@linaro.org>
> ---
Acked-by: Hillf Danton <hillf...@alibaba-inc.com>
>
> Notes:
> Changes in v3:
>
> * Added comments to describe how raise_nmi() and nmi_cpu_backtrace()
> interact with backtrace_mask (Russell King).
>
> Changes in v2:
>
> In CSRatlas7, Cortex-A7 uses this proc to communicate with Cortex-M3.
> But M3 doesn't have to be a slave, it can boot independently or depend
> on Linux to load firmware for it.
>
> We reserve a memory region for data and resource descriptors in DRAM.
>
> Signed-off-by: Wei Chen
> Signed-off-by:
> > Better if dump_stack() is added in a separate patch, given that
> > it is not mentioned in commit message.
>
> Adding dump_stack() is mentioned in passing ("Some small changes to the
> generic code are required to support this.") but you're right that the
> reason for the change is not
>
> Currently on ARM when <SysRq-L> is triggered from an interrupt handler
> (e.g. a SysRq issued using UART or kbd) the main CPU will wedge for ten
> seconds with interrupts masked before issuing a backtrace for every CPU
> except itself.
>
> The new backtrace code introduced by commit 96f0e00378d4
hugetlbfs.
>
> v3:
> Fixed issue with region_chg to recheck if there are sufficient
> entries in the cache after acquiring lock.
> v2:
> Fixed leak in resv_map_release discovered by Hillf Danton.
> Used LONG_MAX as indicator of truncate function for region_del.
>
> v1:
> Add a cache of region descriptors to the resv_map for use by
> region_add in case hole punch deletes entries
> > > diff --git a/Documentation/sysrq.txt b/Documentation/sysrq.txt
> > > --- a/Documentation/sysrq.txt
> > > +++ b/Documentation/sysrq.txt
> > > @@ -75,7 +75,8 @@ On all - write a character to /proc/sysrq-trigger.
> > > e.g.:
> > >
> > > 'e' - Send a SIGTERM to all processes, except for init.
> > > -'f'
> Sysrq+f is used to kill a process either for debug or when the VM is
> otherwise unresponsive.
>
> It is not intended to trigger a panic when no process may be killed.
>
> Avoid panicking the system for sysrq+f when no processes are killed.
>
> Suggested-by: Michal Hocko <mho...@suse.cz>
> Signed-off-by:
> fallocate hole punch will want to remove a specific range of pages.
> The existing region_truncate() routine deletes all region/reserve
> map entries after a specified offset. region_del() will provide
> this same functionality if the end of region is specified as -1.
> Hence, region_del() can
>
> fallocate hole punch will want to remove a specific range of
> pages. When pages are removed, their associated entries in
> the region/reserve map will also be removed. This will break
> an assumption in the region_chg/region_add calling sequence.
> If a new region descriptor must be allocated,
> Subject: [patch v2 1/3] mm, oom: organize oom context into struct
[patch v2 2/3] mm, oom: organize oom context into struct
[patch v2 3/3] mm, oom: organize oom context into struct
I am wondering if a redelivery is needed for the same 3 subject lines.
Hillf
>
> There are essential elements to an
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1326,6 +1326,7 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
>
> for (scan = 0; scan < nr_to_scan && !list_empty(src); scan++) {
> struct page *page;
> + struct zone *zone;
> int nr_pages;
> - /* Reclaim above the high watermark. */
> - sc->nr_to_reclaim = max(SWAP_CLUSTER_MAX, high_wmark_pages(zone));
> + /* Aim to reclaim above all the zone high watermarks */
> + for (z = 0; z <= end_zone; z++) {
> + zone = pgdat->node_zones + end_zone;
s/end_zone/z/ ?
> @@ -1319,6 +1322,7 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
> struct list_head *src = &lruvec->lists[lru];
> unsigned long nr_taken = 0;
> unsigned long scan;
> + LIST_HEAD(pages_skipped);
>
> for (scan = 0; scan < nr_to_scan && !list_empty(src); scan++) {
> @@ -774,6 +764,21 @@ typedef struct pglist_data {
> ZONE_PADDING(_pad1_)
> spinlock_t lru_lock;
>
> + /* Fields commonly accessed by the page reclaim scanner */
> + struct lruvec lruvec;
> +
> + /* Evictions & activations on the inactive file list */
>
> Hi all:
>
> I met a kernel problem with random segmentation faults (x86_64). In my
> testcase, the size of local variables exceeds 20MB.
> When I run the testcase, it causes a segmentation fault (because the default
> stack size limit is 8192KB).
> When I increase the stack size limit to
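The hit truncates here; for anyone reproducing this, one way to raise the limit from inside such a testcase before entering the large frame (sizes arbitrary; raising the hard limit may require privilege):

    #include <sys/resource.h>

    static int raise_stack_limit(void)
    {
    	/* 64MB soft and hard stack limit, the programmatic "ulimit -s" */
    	struct rlimit rl = { .rlim_cur = 64UL << 20, .rlim_max = 64UL << 20 };

    	return setrlimit(RLIMIT_STACK, &rl);
    }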
>
> From: NeilBrown <ne...@suse.de>
>
> ovl_follow_link currently calls ->put_link on an error path.
> However ->put_link is about to change in a way that it will be
> impossible to call it from ovl_follow_link.
>
> So rearrange the code to avoid the need for that error path.
> Specifically: move the kmalloc()
It was introduced in commit f2ca09f381a59
(ARM: 8311/1: Don't use is_module_addr in setting page attributes)
We have no need to check start twice, but see if end is also in range.
Signed-off-by: Hillf Danton <hillf...@alibaba-inc.com>
Acked-by: Laura Abbott
---
--- a/arch/arm/mm/pageattr.c    Mon May 4 10:33:49 2015
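The check being fixed, as a hedged reconstruction from the description above (the bug was re-testing start where end should be bounded):

    	if (start < MODULES_VADDR || start >= MODULES_END)
    		return -EINVAL;

    	/* previously this line re-tested start instead of end */
    	if (end < MODULES_VADDR || end >= MODULES_END)
    		return -EINVAL;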