on the protection line to mark the end of each zone.
Let's revert it to avoid breaking userspace testing or applications.
Cc: # 5.8.x
Reported-by: Sonny Rao
Signed-off-by: Baoquan He
---
mm/vmstat.c | 12 ++--
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/mm/vmstat.c b/mm
Hi Mike,
On 07/23/20 at 11:21am, Mike Kravetz wrote:
> On 7/23/20 2:11 AM, Baoquan He wrote:
...
> >> But is the kernel expected to warn for all such situations where the user
> >> requested resources could not be allocated completely ? Otherwise, it
> >> does no
On 08/10/20 at 05:19pm, Mike Kravetz wrote:
> On 8/9/20 7:17 PM, Baoquan He wrote:
> > On 08/07/20 at 05:12pm, Wei Yang wrote:
> >> Let's always increase surplus_huge_pages so that free_huge_page
> >> could decrease it at free time.
> >>
> >&g
On 08/07/20 at 05:12pm, Wei Yang wrote:
> Let's always increase surplus_huge_pages so that free_huge_page
> could decrease it at free time.
>
> Signed-off-by: Wei Yang
> ---
> mm/hugetlb.c | 14 ++
> 1 file changed, 6 insertions(+), 8 deletions(-)
>
> diff --git a/mm/hugetlb.c
On 08/07/20 at 10:28pm, Wei Yang wrote:
> On Fri, Aug 07, 2020 at 08:49:51PM +0800, Baoquan He wrote:
> >On 08/07/20 at 05:12pm, Wei Yang wrote:
> >> list_first_entry() never returns NULL, even when the list is empty.
> >>
> >> Let's make sure the behavi
> INIT_LIST_HEAD(&page->lru);
> set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
> - spin_lock(&hugetlb_lock);
> set_hugetlb_cgroup(page, NULL);
> set_hugetlb_cgroup_rsvd(page, NULL);
> + spin_lock(&hugetlb_lock);
Looks good to me.
Reviewed-by: Baoquan He
> h->nr
On 08/07/20 at 05:12pm, Wei Yang wrote:
> Function dequeue_huge_page_node_exact() iterates the free list and
> returns the first non-isolated one.
>
> Instead of breaking and then checking the loop variable, we can return
> from the loop directly. This removes a redundant check.
>
> Signed-off-by:
> - list_move(&page->lru, &h->hugepage_activelist);
> + list_add(&page->lru, &h->hugepage_activelist);
Looks good to me.
Reviewed-by: Baoquan He
> /* Fall through */
> }
> hugetlb_cgroup_commit_charge(idx, pages_per_huge_page(h), h_cg, page);
> --
> 2.20.1 (Apple Git-117)
>
>
return VM_FAULT_OOM;
Right, it seems to be a relic from Mike's i_mmap_rwsem handling patches.
Reviewed-by: Baoquan He
> }
>
> /*
> --
> 2.20.1 (Apple Git-117)
>
>
On 08/07/20 at 05:12pm, Wei Yang wrote:
> Migration and hwpoison entries are a subset of non_swap_entry().
>
> Remove the redundant check on non_swap_entry().
>
> Signed-off-by: Wei Yang
Hmm, I have posted a patch to do the same thing, and it got reviewed by
people.
meters to classify these
> two cases.
>
> Just use regions_needed to separate them.
>
> Signed-off-by: Wei Yang
Nice clean up.
Reviewed-by: Baoquan He
> ---
> mm/hugetlb.c | 33 +
> 1 file changed, 17 insertions(+), 16 deletions(-)
>
resv->region_cache_count++;
> - }
> + list_splice(&allocated_regions, &resv->region_cache);
> + resv->region_cache_count += to_allocate;
Looks good to me.
Reviewed-by: Baoquan He
> }
>
> return 0;
> --
> 2.20.1 (Apple Git-117)
>
>
On 08/07/20 at 05:12pm, Wei Yang wrote:
> list_first_entry() never returns NULL, even when the list is empty.
>
> Let's make the behavior explicit by using list_first_entry_or_null(),
> otherwise the list could be corrupted.
>
> Signed-off-by: Wei Yang
> ---
> mm/hugetlb.c | 3 ++-
> 1 file changed,
coalesce_file_region, not sure if there's any reason we need to do that,
maybe Mike can give a judgement. Personally,
Reviewed-by: Baoquan He
> - return;
> }
> }
>
> --
> 2.20.1 (Apple Git-117)
>
>
lock.memblock_type.regions +
> memblock.memblock_type.cnt);\
> +/**
> + * for_each_mem_region - itereate over registered memory regions
~~~~~
I wonder why 'registered' memory is emphasized.
Other than that confusion, this patch looks good to me.
Reviewed-by: Baoquan He
2 +-
> arch/arm64/kernel/setup.c| 2 +-
> drivers/irqchip/irq-gic-v3-its.c | 2 +-
> include/linux/memblock.h | 12 +++--
> mm/memblock.c| 46 +++-
> 5 files changed, 17 insertions(+), 47 deletions(-)
Reviewed-by: Baoquan He
> - for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, NULL) {
> - start_pfn = min_t(unsigned long, start_pfn, limit_pfn);
> - end_pfn = min_t(unsigned long, end_pfn, limit_pfn);
> - pages += end_pfn - start_pfn;
> - }
> -
> - return PFN_PHYS(pages);
> -}
Reviewed-by: Baoquan He
insertions(+), 26 deletions(-)
Applied this patch on top of 5.8, crashkernel reservation works well.
And the code change looks good.
Reviewed-by: Baoquan He
>
> diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
> index d8de4053c5e8..d7ced6982524 100644
> --- a/arch
isk_size)
> @@ -312,12 +308,6 @@ static void __init reserve_initrd(void)
>
> initrd_start = 0;
>
> - mapped_size = memblock_mem_size(max_pfn_mapped);
> - if (ramdisk_size >= (mapped_size>>1))
> - panic("initrd too large to hand
| 7 ++-
> arch/s390/mm/page-states.c | 6 ++
> arch/sh/mm/init.c| 9 +++--
> mm/memblock.c| 6 ++
> mm/sparse.c | 10 --
> 9 files changed, 35 insertions(+), 51 deletions(-)
>
Reviewed-by: Baoquan He
_region_memory_end_pfn(reg) -
> -memblock_region_memory_base_pfn(reg);
> + unsigned long total_pages = PHYS_PFN(memblock_phys_mem_size());
Reviewed-by: Baoquan He
>
> return (total_pages * CONFIG_CMA_SIZE_PERCENTAGE / 100) << PAGE_SHIFT;
> }
> --
> 2.26.2
>
| 1 +
> arch/arm64/kernel/machine_kexec_file.c | 6 ++
> arch/s390/kernel/crash_dump.c | 8
> include/linux/memblock.h | 18 ++
> mm/page_alloc.c| 3 +--
> 5 files changed, 22 insertions(+), 14 deletions(-)
Revi
ernel.
>
> Signed-off-by: Mike Rapoport
> ---
> arch/s390/kernel/setup.c | 4 ++--
> include/linux/memblock.h | 12 +---
> mm/memblock.c| 13 +++--
> 3 files changed, 14 insertions(+), 15 deletions(-)
Nice clean up.
Reviewed-by: Baoquan He
>
memblock = {
> .current_limit = MEMBLOCK_ALLOC_ANYWHERE,
> };
>
> +#define for_each_memblock_type(i, memblock_type, rgn)
> \
> + for (i = 0, rgn = &memblock_type->regions[0]; \
> + i < memblock_type->cnt;\
> + i++, rgn = &memblock_type->regions[i])
> +
Reviewed-by: Baoquan He
y trigger a
> * lockdep splat, so defer it here.
>*/
> dump_page(unmovable, "unmovable page");
>
> - return ret;
> + return -EBUSY;
Reviewed-by: Baoquan He
On 07/29/20 at 03:37pm, David Hildenbrand wrote:
> On 29.07.20 15:24, Baoquan He wrote:
> > On 06/30/20 at 04:26pm, David Hildenbrand wrote:
> >> Inside has_unmovable_pages(), we have a comment describing how unmovable
> >> data could end up in ZONE_MOVABLE - via &q
y trigger a
> + * lockdep splat, so defer it here.
> + */
> + dump_page(unmovable, "unmovable page");
>
> return ret;
> }
Otherwise, the patch looks good to me.
Reviewed-by: Baoquan He
On 07/28/20 at 04:07pm, David Hildenbrand wrote:
> On 28.07.20 15:48, Baoquan He wrote:
> > On 06/30/20 at 04:26pm, David Hildenbrand wrote:
> >> Let's move the split comment regarding bootmem allocations and memory
> >> holes, especially in the context of ZONE_M
On 07/28/20 at 09:46am, Mike Kravetz wrote:
> On 7/28/20 6:24 AM, Baoquan He wrote:
> > Hi Muchun,
> >
> > On 07/28/20 at 11:49am, Muchun Song wrote:
> >> In the reservation routine, we only check whether the cpuset meets
> >> the memory allocation re
On 07/28/20 at 05:15pm, Mike Rapoport wrote:
> On Tue, Jul 28, 2020 at 07:02:54PM +0800, Baoquan He wrote:
> > On 07/28/20 at 08:11am, Mike Rapoport wrote:
> > > From: Mike Rapoport
> > >
> > > numa_clear_kernel_node_hotplug() function first traverses numa_
> - if (is_migrate_isolate_page(page))
> - goto out;
> + if (is_migrate_isolate_page(page)) {
> + spin_unlock_irqrestore(&zone->lock, flags);
> + return -EBUSY;
Good catch, the fix looks good to me.
Reviewed-by: Baoquan He
> + }
>
> /*
>* FIXME: Now, memory hotplug doesn't call shrink_slab() by itself.
> --
> 2.26.2
>
>
en
> + * specifying "movable_core".
It should be 'movablecore'; we don't have a kernel parameter
'movable_core'.
Otherwise, this looks good to me. Especially since the code comment below
was added a very long time ago and is obsolete.
Reviewed-b
Hi Muchun,
On 07/28/20 at 11:49am, Muchun Song wrote:
> In the reservation routine, we only check whether the cpuset meets
> the memory allocation requirements. But we ignore the mempolicy of
> MPOL_BIND case. If someone mmap hugetlb succeeds, but the subsequent
> memory allocation may fail due
On 07/28/20 at 08:11am, Mike Rapoport wrote:
> From: Mike Rapoport
>
> numa_clear_kernel_node_hotplug() function first traverses numa_meminfo
> regions to set node ID in memblock.reserved and than traverses
> memblock.reserved to update reserved_nodemask to include node IDs that were
> set in
On 07/23/20 at 11:21am, Mike Kravetz wrote:
> On 7/23/20 2:11 AM, Baoquan He wrote:
> > On 07/23/20 at 11:46am, Anshuman Khandual wrote:
> >>
> >>
> >> On 07/23/2020 08:52 AM, Baoquan He wrote:
> >>> A customer complained that no message is logged
try() in is_hugetlb_entry_migration() and
is_hugetlb_entry_hwpoisoned() is redundant.
Let's remove it to optimize code.
Signed-off-by: Baoquan He
Reviewed-by: Mike Kravetz
Reviewed-by: David Hildenbrand
Reviewed-by: Anshuman Khandual
---
v2->v3:
Updated patch log according to Anshuman's comment.
mm/hugetlb
On 07/23/20 at 11:46am, Anshuman Khandual wrote:
>
>
> On 07/23/2020 08:52 AM, Baoquan He wrote:
> > A customer complained that no message is logged when the number of
> > persistent huge pages is not changed to the exact value written to
> > the sysf
On 07/23/20 at 10:36am, Anshuman Khandual wrote:
>
>
> On 07/23/2020 08:52 AM, Baoquan He wrote:
> > The checks is_migration_entry() and is_hwpoison_entry() are stricter
> > than non_swap_entry(), means they have covered the conditional check
> > whic
On 07/23/20 at 10:47am, Anshuman Khandual wrote:
>
>
> On 07/23/2020 08:52 AM, Baoquan He wrote:
> > Change 'pecify' to 'Specify'.
> >
> > Signed-off-by: Baoquan He
> > Reviewed-by: Mike Kravetz
> > Reviewed-by: David Hildenbrand
> > ---
> >
Change 'pecify' to 'Specify'.
Signed-off-by: Baoquan He
Reviewed-by: Mike Kravetz
Reviewed-by: David Hildenbrand
---
Documentation/admin-guide/mm/hugetlbpage.rst | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst
b/Documentation
code.
Signed-off-by: Baoquan He
Reviewed-by: Mike Kravetz
Reviewed-by: David Hildenbrand
---
mm/hugetlb.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 3569e731e66b..c14837854392 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3748,7
partially
satisfied.
Log a message if the code was unsuccessful in fully satisfying a
request. This includes both increasing and decreasing the number
of persistent huge pages.
Signed-off-by: Baoquan He
---
mm/hugetlb.c | 15 ++-
1 file changed, 14 insertions(+), 1 deletion(-)
diff
old patch 1/5 in v1 post, which was thought to be a typo but is
actually another kind of abbreviation.
Updated the log of patch 4, which was rephrased by Mike. Also moved the
added message-logging code to after the hugetlb_lock is dropped, as
suggested by Mike.
Baoquan He (4):
mm/hugetlb.c: m
Just like its neighbour is_hugetlb_entry_migration() has done.
Signed-off-by: Baoquan He
Reviewed-by: Mike Kravetz
Reviewed-by: David Hildenbrand
---
mm/hugetlb.c | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index f24acb3af741
Hi Mike,
On 07/20/20 at 05:38pm, Mike Kravetz wrote:
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index 467894d8332a..1dfb5d9e4e06 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -2661,7 +2661,7 @@ static int adjust_pool_surplus(struct hstate *h,
> > nodemask_t *nodes_allowed,
>
On 07/20/20 at 05:38pm, Mike Kravetz wrote:
> On 7/19/20 11:26 PM, Baoquan He wrote:
> > A customer complained that no message is printed out when the kernel
> > fails to allocate the explicitly specified number of persistent huge pages. That
> > specifying can be done by writ
On 07/20/20 at 03:32pm, Mike Kravetz wrote:
> On 7/19/20 11:26 PM, Baoquan He wrote:
> > The local variable is for global reservation of region.
> >
> > Signed-off-by: Baoquan He
> > ---
> > mm/hugetlb.c | 24
> > 1 file
code.
Signed-off-by: Baoquan He
---
mm/hugetlb.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index a58f976a9dd9..467894d8332a 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3748,7 +3748,7 @@ bool is_hugetlb_entry_migration(pte_t pte
-off-by: Baoquan He
---
mm/hugetlb.c | 13 -
1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 467894d8332a..1dfb5d9e4e06 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2661,7 +2661,7 @@ static int adjust_pool_surplus(struct hstate *h
Change 'pecify' to 'Specify'.
Signed-off-by: Baoquan He
---
Documentation/admin-guide/mm/hugetlbpage.rst | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst
b/Documentation/admin-guide/mm/hugetlbpage.rst
index 015a5f7d7854
The local variable is for the global reservation of the region.
Signed-off-by: Baoquan He
---
mm/hugetlb.c | 24
1 file changed, 12 insertions(+), 12 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index f24acb3af741..191a585bb315 100644
--- a/mm/hugetlb.c
+++ b/mm
Patches 1~4 are small cleanups.
Patch 5 adds a warning message when the kernel fails to increase or
decrease the expected number of persistent huge pages via writes to
/proc/sys/vm/nr_hugepages or
/sys/kernel/mm/hugepages/hugepages-xxx/nr_hugepages.
Baoquan He (5):
mm/hugetlb.c: Fix typo of glb_reserve
Just like its neighbour is_hugetlb_entry_migration() has done.
Signed-off-by: Baoquan He
---
mm/hugetlb.c | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 191a585bb315..a58f976a9dd9 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
, field) \
> + vmcoreinfo_append_str("OFFSET(%s.%s)=%lu\n", #name, #field, \
> + (unsigned long)offsetof(name, field))
Acked-by: Baoquan He
> #define VMCOREINFO_LENGTH(name, value) \
> vmcoreinfo_append_str(&
On 06/29/20 at 07:27am, Dave Hansen wrote:
> On 6/28/20 11:52 PM, Baoquan He wrote:
> > On 06/25/20 at 05:34pm, Dave Hansen wrote:
> >>
> >> From: Dave Hansen
> >>
> >> I went to go add a new RECLAIM_* mode for the zone_reclaim_mode
> >> sy
On 06/25/20 at 05:34pm, Dave Hansen wrote:
>
> From: Dave Hansen
>
> I went to go add a new RECLAIM_* mode for the zone_reclaim_mode
> sysctl. Like a good kernel developer, I also went to go update the
> documentation. I noticed that the bits in the documentation didn't
> match the bits in
On 06/24/20 at 11:46am, Wei Yang wrote:
> On Wed, Jun 24, 2020 at 09:47:37AM +0800, Baoquan He wrote:
> >On 06/23/20 at 05:21pm, Dan Williams wrote:
> >> On Tue, Jun 23, 2020 at 2:43 AM Wei Yang
> >> wrote:
> >> >
> >> > For early sect
On 06/24/20 at 09:47am, Baoquan He wrote:
> On 06/23/20 at 05:21pm, Dan Williams wrote:
> > On Tue, Jun 23, 2020 at 2:43 AM Wei Yang
> > wrote:
> > >
> > > For early sections, we assume their memmap will never be partially
> > > removed. But current
On 06/23/20 at 05:21pm, Dan Williams wrote:
> On Tue, Jun 23, 2020 at 2:43 AM Wei Yang
> wrote:
> >
> > For early sections, we assume their memmap will never be partially
> > removed. But current behavior breaks this.
>
> Where do we assume that?
>
> The primary use case for this was mapping
On 06/17/20 at 06:03pm, Nitin Gupta wrote:
> Proactive compaction uses per-node/zone "fragmentation score" which
> is always in the range [0, 100], so use an unsigned type for these scores
> as well as for related constants.
>
> Signed-off-by: Nitin Gupta
Reviewed-by: Baoq
chan Kim
> Signed-off-by: Jaewon Kim
> Acked-by: Vlastimil Babka
Reviewed-by: Baoquan He
> ---
> v4: change description only; typo and log
> v3: change log in description to one having reserved_highatomic
> change comment in code
> v2: factor out common part
> v
On 06/17/20 at 06:03pm, Nitin Gupta wrote:
> Proactive compaction uses per-node/zone "fragmentation score" which
> is always in the range [0, 100], so use an unsigned type for these scores
> as well as for related constants.
>
> Signed-off-by: Nitin Gupta
> ---
> include/linux/compaction.h | 4 ++--
>
-id in VMCOREINFO brings
> some uniformity for automation tools.
>
> Signed-off-by: Vijay Balakrishna
Looks good to me, thanks.
Acked-by: Baoquan He
> ---
> Changes since v2:
> -
> - v1 was sent out as a single patch which can be seen here:
> http://
On 06/17/20 at 12:46am, Jaewon Kim wrote:
...
> > > >>> i.e)
> > > >>> In following situation, watermark check fails (9MB - 8MB < 4MB)
> > > >>> though there are
> > > >>> enough free (9MB - 4MB > 4MB). If this is really matter, we need to
> > > >>> count highatomic
> > > >>> free accurately.
> >
On 06/16/20 at 04:30pm, 김재원 wrote:
> >>> > > <4>[ 6207.637627] [3: Binder:9343_3:22875] Normal free:10908kB
> >>> > > min:6192kB low:44388kB high:47060kB active_anon:409160kB
> >>> > > inactive_anon:325924kB active_file:235820kB inactive_file:276628kB
> >>> > > unevictable:2444kB
On 06/13/20 at 10:08pm, Jaewon Kim wrote:
...
> > > This is an example of ALLOC_HARDER allocation failure.
> > >
> > > <4>[ 6207.637280] [3: Binder:9343_3:22875] Binder:9343_3: page
> > > allocation failure: order:0, mode:0x480020(GFP_ATOMIC), nodemask=(null)
> > > <4>[ 6207.637311] [3:
On 06/13/20 at 11:51am, Jaewon Kim wrote:
> zone_watermark_fast was introduced by commit 48ee5f3696f6 ("mm,
> page_alloc: shortcut watermark checks for order-0 pages"). The commit
> simply checks whether the number of free pages is bigger than the
> watermark, without additional calculation such as reducing the watermark.
On 06/04/20 at 05:01pm, Vijay Balakrishna wrote:
> Make kernel GNU build-id available in VMCOREINFO. Having
> build-id in VMCOREINFO facilitates presenting appropriate kernel
> namelist image with debug information file to kernel crash dump
> analysis tools. Currently VMCOREINFO lacks uniquely
On 06/09/20 at 06:51pm, Jaewon Kim wrote:
> zone_watermark_fast was introduced by commit 48ee5f3696f6 ("mm,
> page_alloc: shortcut watermark checks for order-0 pages"). The commit
> simply checks whether the number of free pages is bigger than the
> watermark, without additional calculation such as reducing the watermark.
more dangerous?
>
> So, here, let's simplify the logic to improve code readability. If
> KEXEC_SIG_FORCE is enabled or kexec lockdown is in effect, signature verification
> is mandated. Otherwise, we lift the bar for any kernel image.
>
> Signed-off-by: Lianbo Jiang
Looks good
On 06/01/20 at 02:42pm, Mike Rapoport wrote:
> On Thu, May 28, 2020 at 10:15:10AM -0500, Steve Wahl wrote:
> > On Thu, May 28, 2020 at 05:07:31PM +0800, Baoquan He wrote:
> > > On 05/26/20 at 01:49pm, David Hildenbrand wrote:
> > > > On 26.05.20 13:32, Mike Rapoport
On 05/28/20 at 04:59pm, Baoquan He wrote:
> a...@linux-foundation.org, c...@lca.pw, mho...@kernel.org,
> steve.w...@hpe.com,
> Bcc: b...@redhat.com
> Subject: Re: [PATCH] mm/compaction: Fix the incorrect hole in
> fast_isolate_freepages()
> Reply-To:
> In-Reply-To: <
On 05/26/20 at 01:49pm, David Hildenbrand wrote:
> On 26.05.20 13:32, Mike Rapoport wrote:
> > Hello Baoquan,
> >
> > On Tue, May 26, 2020 at 04:45:43PM +0800, Baoquan He wrote:
> >> On 05/22/20 at 05:20pm, Mike Rapoport wrote:
> >>> Hello Baoquan,
>
David Hildenbrand wrote:
> On 26.05.20 13:32, Mike Rapoport wrote:
> > Hello Baoquan,
> >
> > On Tue, May 26, 2020 at 04:45:43PM +0800, Baoquan He wrote:
> >> On 05/22/20 at 05:20pm, Mike Rapoport wrote:
> >>> Hello Baoquan,
> >>>
> >>>
On 05/22/20 at 05:20pm, Mike Rapoport wrote:
> Hello Baoquan,
>
> On Fri, May 22, 2020 at 03:25:24PM +0800, Baoquan He wrote:
> > On 05/22/20 at 03:01pm, Baoquan He wrote:
> > >
> > > So let's add these unavailable ranges into memblock and reserve them
> >
On 05/21/20 at 05:38pm, Chen Zhou wrote:
> This patch series enable reserving crashkernel above 4G in arm64.
>
> There are following issues in arm64 kdump:
> 1. We use crashkernel=X to reserve crashkernel below 4G, which will fail
> when there is no enough low memory.
> 2. Currently,
On 05/21/20 at 05:38pm, Chen Zhou wrote:
> Crashkernel=X tries to reserve memory for the crash dump kernel under
> 4G. If crashkernel=X,low is specified simultaneously, first reserve the
> specified size of low memory for crash kdump kernel devices and then reserve
> memory above 4G.
Wondering why
On 05/21/20 at 05:38pm, Chen Zhou wrote:
> In preparation for supporting reserve_crashkernel_low in arm64 as
> x86_64 does, move reserve_crashkernel_low() into kernel/crash_core.c.
> BTW, move x86 CRASH_ALIGN to 2M.
The reason is?
>
> Note, in arm64, we reserve low memory if and only if
On 05/22/20 at 03:01pm, Baoquan He wrote:
> > > As I said, the unavailable range includes firmware reserved ranges, and
> > > holes inside one boot memory section, if that boot memory section has a
> > > usable memory range, and firmware reserved ranges, and h
On 05/21/20 at 08:18pm, Mike Rapoport wrote:
> On Thu, May 21, 2020 at 11:52:25PM +0800, Baoquan He wrote:
> > On 05/21/20 at 12:26pm, Mike Rapoport wrote:
> > > > For this kind of e820 reserved range, it won't be added to memblock
> > > > allocator.
>
23000-0x33a32fff]
> > reserved
> > [0.00] BIOS-e820: [mem 0x33a33000-0x33a42fff] ACPI
> > NVS
> > [0.00] BIOS-e820: [mem 0x33a43000-0x0000000033a52fff] ACPI
> > data
> > [0.00] BIOS-e820: [mem 0x00
On 05/21/20 at 10:36am, Mel Gorman wrote:
> On Thu, May 21, 2020 at 09:44:07AM +0800, Baoquan He wrote:
> > After investigation, it turns out that this is introduced by commit of
> > linux-next: commit f6edbdb71877 ("mm: memmap_init: iterate over memblock
> > regions
008fff] reserved
[0.00] BIOS-e820: [mem 0xfed8-0xfed80fff] reserved
[0.00] BIOS-e820: [mem 0x0001-0x00087eff] usable
[0.00] BIOS-e820: [mem 0x00087f00-0x00087fff] reserved
Reported-by: Qian Cai
Signed-off-by: Baoquan He
---
Kdump is implemented based on kexec; however, some files that are only
related to crash dumping are missing, so add them to the KDUMP entry.
Signed-off-by: Baoquan He
Acked-by: Dave Young
---
MAINTAINERS | 5 +
1 file changed, 5 insertions(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index 83cf5c43242a
On 05/20/20 at 05:14pm, Dave Young wrote:
> Hi Baoquan,
> On 05/20/20 at 04:05pm, Baoquan He wrote:
> > Kdump is implemented based on kexec, however some files are only
> > related to crash dumping and missing, add them to KDUMP entry.
> >
> > Signed-off-by: Baoqua
Kdump is implemented based on kexec; however, some files that are only
related to crash dumping are missing, so add them to the KDUMP entry.
Signed-off-by: Baoquan He
---
MAINTAINERS | 5 +
1 file changed, 5 insertions(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index 83cf5c43242a..2f9eefd33114 100644
On 04/15/20 at 02:04pm, Kristen Carlson Accardi wrote:
...
> diff --git a/arch/x86/boot/compressed/misc.c b/arch/x86/boot/compressed/misc.c
> index 9652d5c2afda..2e108fdc7757 100644
> --- a/arch/x86/boot/compressed/misc.c
> +++ b/arch/x86/boot/compressed/misc.c
> @@ -26,9 +26,6 @@
> * it is not
>
> + zone->watermark_boost = 0;
> zone->_watermark[WMARK_LOW] = min_wmark_pages(zone) + tmp;
> zone->_watermark[WMARK_HIGH] = min_wmark_pages(zone) + tmp * 2;
> - zone->watermark_boost = 0;
Yeah, watermark_boost is a temporary
On 05/05/20 at 09:20am, Qian Cai wrote:
>
>
> > On May 5, 2020, at 8:43 AM, Baoquan He wrote:
> >
> > Hi,
> >
> > On 04/24/20 at 09:45am, Qian Cai wrote:
> >>
> >>
> >>> On Apr 23, 2020, at 11:43 PM, Baoquan He wrote:
>
On 05/10/20 at 02:22pm, Rafael Aquini wrote:
> > > diff --git a/Documentation/admin-guide/kernel-parameters.txt
> > > b/Documentation/admin-guide/kernel-parameters.txt
> > > index 7bc83f3d9bdf..4a69fe49a70d 100644
> > > --- a/Documentation/admin-guide/kernel-parameters.txt
> > > +++
On 05/09/20 at 09:10pm, Randy Dunlap wrote:
> On 5/9/20 7:59 PM, Baoquan He wrote:
> > Read admin-guide/tainted-kernels.rst, but still do not get what 'G' means.
>
> I interpret 'G' as GPL (strictly it means that no proprietary module has
> been loaded). B
> + if (*s == '-') {
> + panic_on_taint_exclusive = true;
> + continue;
> + }
> +
> + for (i = 0; i < TAINT_FLAGS_COUNT; i++) {
> + if (toupper(*s) == taint_flags[i].c_true) {
> + set_bit(i, &panic_on_taint);
> + break;
> + }
> + }
I have read admin-guide/tainted-kernels.rst, but I still do not get what
'G' means.
If I specify 'panic_on_taint="G"' or 'panic_on_taint="-G"' on the cmdline,
what is the expected behaviour?
Except for the above minor nitpicks, this patch looks good to me, thanks.
Reviewed-by: Baoquan He
Thanks
Baoquan
Hi,
On 04/24/20 at 09:45am, Qian Cai wrote:
>
>
> > On Apr 23, 2020, at 11:43 PM, Baoquan He wrote:
> >
> > On 04/23/20 at 05:25pm, Qian Cai wrote:
> >> Compaction starts to crash below on linux-next today. The faulty page
> >> belongs to Node
On 09/30/19 at 05:14am, Eric W. Biederman wrote:
> Baoquan He writes:
> >> needs a little better description. I know it is not a lot on modern
> >> systems but reserving an extra 1M of memory to avoid having to special
> >> case it later seems in need of calling
On 09/20/19 at 12:05am, Kairui Song wrote:
> Currently, kernel fails to boot on some HyperV VMs when using EFI.
> And it's a potential issue on all platforms.
>
> It's caused by broken kernel relocation on EFI systems, when the below three
> conditions are met:
>
> 1. Kernel image is not loaded to
On 09/24/19 at 03:16pm, Michal Hocko wrote:
> On Tue 24-09-19 21:04:58, Baoquan He wrote:
> > On 09/24/19 at 02:27pm, Michal Hocko wrote:
> > > On Tue 24-09-19 19:11:51, Baoquan He wrote:
> > > > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > > &g
On 09/24/19 at 02:27pm, Michal Hocko wrote:
> On Tue 24-09-19 19:11:51, Baoquan He wrote:
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index f3c15bb07cce..84e3fdb1ccb4 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
>
ed data in function mem_cgroup_track_foreign_dirty_slowpath().
Fix it by returning directly if memcg is disabled, rather than trying to
record the foreign writebacks with dirty pages.
Fixes: 97b27821b485 ("writeback, memcg: Implement foreign dirty flushing")
Signed-off-by: Baoquan He
---
v1-
On 09/23/19 at 04:30pm, Baoquan He wrote:
> ---
> include/linux/memcontrol.h | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index ad8f1a397ae4..fa53f9d51205 100644
> --- a/include/linux
ed data in function mem_cgroup_track_foreign_dirty_slowpath().
Fix it by returning directly if memcg is disabled, rather than trying to
record the foreign writebacks with dirty pages.
Fixes: 97b27821b485 ("writeback, memcg: Implement foreign dirty flushing")
Signed-off-by: Baoquan He
---
include/linux