On 04.06.20 15:39, Vamshi K Sthambamkadi wrote:
> On x86_32, while onlining highmem sections, default_kernel_zone_for_pfn()
> defaults the target zone to ZONE_NORMAL (movable_node_enabled = 0). Onlining
> of the pages succeeds, but the highmem pages end up in ZONE_NORMAL.
> 
> As a consequence, these pages are treated as lowmem, and their addresses
> are calculated via lowmem_page_address(), which overflows the 32-bit
> virtual address space, leading to kernel panics and an unusable system.
> 
> Change default_kernel_zone_for_pfn() to also intersect the highmem pfn
> range and pick the default zone accordingly.

We discussed this recently [1] and decided that we don't really care
about memory hotplug on 32-bit anymore (especially since user space
could still configure a different zone and make things crash). There was
a patch from Michal in [1]; it looks like it has not been picked up yet.

@Andrew, can we queue Michal's patch?

[1] https://lkml.kernel.org/r/[email protected]

> 
> Signed-off-by: Vamshi K Sthambamkadi <[email protected]>
> ---
>  mm/memory_hotplug.c | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index c4d5c45..30f101a 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -725,8 +725,13 @@ static struct zone *default_kernel_zone_for_pfn(int nid, unsigned long start_pfn
>  {
>       struct pglist_data *pgdat = NODE_DATA(nid);
>       int zid;
> +     int nr_zones = ZONE_NORMAL;
>  
> -     for (zid = 0; zid <= ZONE_NORMAL; zid++) {
> +#ifdef CONFIG_HIGHMEM
> +     nr_zones = ZONE_HIGHMEM;
> +#endif
> +
> +     for (zid = 0; zid <= nr_zones; zid++) {
>               struct zone *zone = &pgdat->node_zones[zid];
>  
>               if (zone_intersects(zone, start_pfn, nr_pages))
> 


-- 
Thanks,

David / dhildenb
