Luckily, we have no users left, so we can get rid of it.
Cc: Greg Kroah-Hartman
Cc: "Rafael J. Wysocki"
Cc: Andrew Morton
Cc: Pavel Tatashin
Cc: Michal Hocko
Cc: Dan Williams
Cc: Oscar Salvador
Cc: Qian Cai
Cc: Anshuman Khandual
Cc: Pingfan Liu
Signed-off-by: David Hildenbrand
Cc: Arun KS
Signed-off-by: David Hildenbrand
---
arch/powerpc/platforms/pseries/cmm.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/arch/powerpc/platforms/pseries/cmm.c
b/arch/powerpc/platforms/pseries/cmm.c
index 235fd7fe9df1..a6ec2bbb1f91 100644
--- a/arch/powerpc
Cc: Christian Brauner
Cc: Gao Xiang
Cc: Greg Hackmann
Cc: David Howells
Signed-off-by: David Hildenbrand
---
arch/powerpc/platforms/pseries/Kconfig | 1 +
arch/powerpc/platforms/pseries/cmm.c | 132 ++---
include/uapi/linux/magic.h | 1 +
3 files changed, 120 insertions(+
the thread and a concurrent OOM notifier.
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Michael Ellerman
Cc: Andrew Morton
Cc: Pavel Tatashin
Cc: Richard Fontana
Cc: Allison Randal
Cc: Thomas Gleixner
Cc: Arun KS
Signed-off-by: David Hildenbrand
---
arch/powerpc/platforms/pseries/cmm.c | 35
Cc: Paul Mackerras
Cc: Michael Ellerman
Cc: Andrew Morton
Cc: Pavel Tatashin
Cc: Richard Fontana
Cc: Allison Randal
Cc: Thomas Gleixner
Cc: Arun KS
Signed-off-by: David Hildenbrand
---
arch/powerpc/platforms/pseries/cmm.c | 97 +---
1 file changed, 1 insertion(+), 96 deletion
Cc: Richard Fontana
Cc: Allison Randal
Cc: Thomas Gleixner
Cc: Arun KS
Signed-off-by: David Hildenbrand
---
arch/powerpc/platforms/pseries/cmm.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/platforms/pseries/cmm.c
b/arch/powerpc/platforms/pseries
Cc: Thomas Gleixner
Cc: Arun KS
Signed-off-by: David Hildenbrand
---
arch/powerpc/platforms/pseries/cmm.c | 163 ++-
1 file changed, 36 insertions(+), 127 deletions(-)
diff --git a/arch/powerpc/platforms/pseries/cmm.c
b/arch/powerpc/platforms/pseries/cmm.c
index 738eb1681
No need to initialize rc. Also, let's return 0 directly when succeeding.
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Michael Ellerman
Cc: Greg Kroah-Hartman
Cc: Michal Hocko
Cc: Konstantin Khlebnikov
Cc: Andrew Morton
Cc: Arun KS
Cc: Thomas Gleixner
Signed-off-by: David Hildenbrand
h actual HW that has this feature.
Cc: Alexander Duyck
Cc: Alexander Potapenko
Cc: Alexey Kardashevskiy
Cc: Allison Randal
Cc: Andrew Morton
Cc: Anshuman Khandual
Cc: Anshuman Khandual
Cc: Arun KS
Cc: Benjamin Herrenschmidt
Cc: Christian Brauner
Cc: Dan Williams
Cc: David Hildenbrand
Cc: David
If we don't set the rc, we will return "0", making it look like we
succeeded.
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Michael Ellerman
Cc: Andrew Morton
Cc: Pavel Tatashin
Cc: Richard Fontana
Cc: Allison Randal
Cc: Thomas Gleixner
Cc: Arun KS
Signed-off-by: David Hildenbrand
Cc: Greg Kroah-Hartman
Cc: Thomas Gleixner
Cc: Arun KS
Signed-off-by: David Hildenbrand
---
arch/powerpc/platforms/pseries/cmm.c | 5 +
1 file changed, 5 insertions(+)
diff --git a/arch/powerpc/platforms/pseries/cmm.c
b/arch/powerpc/platforms/pseries/cmm.c
index b33251d75927..572651a5c87b 100644
--- a/arch
On 06.10.19 10:56, David Hildenbrand wrote:
We currently try to shrink a single zone when removing memory. We use the
zone of the first page of the memory we are removing. If that memmap was
never initialized (e.g., memory was never onlined), we will read garbage
and can trigger kernel BUGs (due
On 23.10.19 09:26, David Hildenbrand wrote:
On 22.10.19 23:54, Dan Williams wrote:
Hi David,
Thanks for tackling this!
Thanks for having a look :)
[...]
I am probably a little bit too careful (but I don't want to break things).
In most places (besides KVM and vfio that are nuts
Cc: Thomas Gleixner
Signed-off-by: David Hildenbrand
---
mm/usercopy.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/mm/usercopy.c b/mm/usercopy.c
index 660717a1ea5c..80f254024c97 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -199,9 +199,9 @@ static inline void check_
Cc: Anthony Yznaga
Cc: Michal Hocko
Cc: Oscar Salvador
Cc: Dan Williams
Cc: Mel Gorman
Cc: Mike Rapoport
Cc: Anshuman Khandual
Cc: Matt Sickler
Cc: Kees Cook
Suggested-by: Michal Hocko
Signed-off-by: David Hildenbrand
---
drivers/hv/hv_balloon.c| 6 ++
drivers/xen/balloon.c | 7 +++
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: "H. Peter Anvin"
Signed-off-by: David Hildenbrand
---
arch/x86/mm/ioremap.c | 13 ++---
1 file changed, 10 insertions(+), 3 deletions(-)
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index a39dcdb5ae34..db6913b48edf 100644
Cc: Christophe Leroy
Cc: "Aneesh Kumar K.V"
Cc: Allison Randal
Cc: Nicholas Piggin
Cc: Thomas Gleixner
Signed-off-by: David Hildenbrand
---
arch/powerpc/mm/pgtable.c | 10 ++
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtab
Cc: "Aneesh Kumar K.V"
Cc: Christophe Leroy
Cc: Nicholas Piggin
Cc: Andrew Morton
Cc: Mike Rapoport
Cc: YueHaibing
Signed-off-by: David Hildenbrand
---
arch/powerpc/mm/book3s64/hash_utils.c | 10 ++
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/m
Cc: Michael Ellerman
Signed-off-by: David Hildenbrand
---
arch/powerpc/kvm/book3s_64_mmu_radix.c | 14 --
1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c
b/arch/powerpc/kvm/book3s_64_mmu_radix.c
index 2d415c36a61d..05397c0561fc 100644
have an initialized memmap (and don't have ZONE_DEVICE memory).
Rewrite is_invalid_reserved_pfn() similar to kvm_is_reserved_pfn() to make
sure the function produces the same result once we stop setting ZONE_DEVICE
pages PG_reserved.
Cc: Alex Williamson
Cc: Cornelia Huck
Signed-off-by: David Hildenbrand
Cc: KarimAllah Ahmed
Signed-off-by: David Hildenbrand
---
virt/kvm/kvm_main.c | 10 --
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index e9eb666eb6e8..9d18cc67d124 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -151,9 +151,15 @@
Cc: Wanpeng Li
Cc: Jim Mattson
Cc: Joerg Roedel
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: "H. Peter Anvin"
Cc: KarimAllah Ahmed
Cc: Michal Hocko
Cc: Dan Williams
Signed-off-by: David Hildenbrand
---
arch/x86/kvm/mmu.c | 29 +
1 file changed,
Cc: Michal Hocko
Cc: Oscar Salvador
Cc: Pavel Tatashin
Cc: Dan Williams
Cc: Anshuman Khandual
Signed-off-by: David Hildenbrand
---
mm/memory_hotplug.c | 26 --
1 file changed, 24 insertions(+), 2 deletions(-)
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index
er.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: k...@vger.kernel.org
Cc: linux-hyp...@vger.kernel.org
Cc: de...@driverdev.osuosl.org
Cc: xen-de...@lists.xenproject.org
Cc: x...@kernel.org
Cc: Alexander Duyck
David Hildenbrand (10):
mm/memory_hotplug: Don't allow to online/offline memory blocks w
On 24.10.19 05:53, Anshuman Khandual wrote:
On 10/22/2019 10:42 PM, David Hildenbrand wrote:
Our onlining/offlining code is unnecessarily complicated. Only memory
blocks added during boot can have holes. Hotplugged memory never has
holes. That memory is already online.
Why hot plugged memory
On 23.10.19 21:39, Dan Williams wrote:
> On Wed, Oct 23, 2019 at 10:28 AM David Hildenbrand wrote:
>>
>>>> I dislike this for three reasons
>>>>
>>>> a) It does not protect against any races, really, it does not improve
>>&
>> I dislike this for three reasons
>>
>> a) It does not protect against any races, really, it does not improve things.
>> b) We do have the exact same problem with pfn_to_online_page(). As long as we
>>don't hold the memory hotplug lock, memory can get offlined and remove
>> any time. Racy.
On 23.10.19 18:25, Kees Cook wrote:
> On Wed, Oct 23, 2019 at 10:20:14AM +0200, David Hildenbrand wrote:
>> On 22.10.19 19:12, David Hildenbrand wrote:
>>> Right now, ZONE_DEVICE memory is always set PG_reserved. We want to
>>> change that.
>>>
>>> L
On 22.10.19 19:12, David Hildenbrand wrote:
Right now, ZONE_DEVICE memory is always set PG_reserved. We want to
change that.
Let's make sure that the logic in the function won't change. Once we no
longer set these pages to reserved, we can rework this function to
perform separate checks
On 22.10.19 19:12, David Hildenbrand wrote:
Right now, ZONE_DEVICE memory is always set PG_reserved. We want to
change that.
The pages are obtained via get_user_pages_fast(). I assume, these
could be ZONE_DEVICE pages. Let's just exclude them as well explicitly.
Cc: Rob Springer
Cc: Todd
On 22.10.19 23:54, Dan Williams wrote:
> Hi David,
>
> Thanks for tackling this!
Thanks for having a look :)
[...]
>> I am probably a little bit too careful (but I don't want to break things).
>> In most places (besides KVM and vfio that are nuts), the
>> pfn_to_online_page() check could most
On 22.10.19 19:55, Matt Sickler wrote:
Right now, ZONE_DEVICE memory is always set PG_reserved. We want to change that.
The pages are obtained via get_user_pages_fast(). I assume, these could be
ZONE_DEVICE pages. Let's just exclude them as well explicitly.
I'm not sure what ZONE_DEVICE
Cc: Kees Cook
Cc: Andrew Morton
Cc: Kate Stewart
Cc: Allison Randal
Cc: "Isaac J. Manjarres"
Cc: Qian Cai
Cc: Thomas Gleixner
Signed-off-by: David Hildenbrand
---
mm/usercopy.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/mm/usercopy.c b/mm/userco
2019/10/21/736
[2] https://lkml.org/lkml/2019/10/21/1034
Cc: Michal Hocko
Cc: Dan Williams
Cc: kvm-...@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: k...@vger.kernel.org
Cc: linux-hyp...@vger.kernel.org
Cc: de...@driverdev.osuosl.org
Cc: xen-de...@lists.xenproject.org
Cc: x...@k
Cc: Anshuman Khandual
Suggested-by: Michal Hocko
Signed-off-by: David Hildenbrand
---
drivers/hv/hv_balloon.c| 6 ++
drivers/xen/balloon.c | 7 +++
include/linux/page-flags.h | 8 +---
mm/memory_hotplug.c| 17 +++--
mm/page_alloc.c| 11 --
Cc: Andy Lutomirski
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: "H. Peter Anvin"
Signed-off-by: David Hildenbrand
---
arch/x86/mm/ioremap.c | 13 ++---
1 file changed, 10 insertions(+), 3 deletions(-)
diff --git a/arch/x86/mm/ioremap.c
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Michael Ellerman
Cc: Christophe Leroy
Cc: "Aneesh Kumar K.V"
Cc: Allison Randal
Cc: Nicholas Piggin
Cc: Thomas Gleixner
Signed-off-by: David Hildenbrand
---
arch/powerpc/mm/pgtable.c | 10 ++
1 file changed, 6 insertions(+), 4 deletion
Signed-off-by: David Hildenbrand
---
drivers/staging/gasket/gasket_page_table.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/staging/gasket/gasket_page_table.c
b/drivers/staging/gasket/gasket_page_table.c
index f6d715787da8..d43fed58bf65 100644
--- a/drivers/staging
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Michael Ellerman
Cc: "Aneesh Kumar K.V"
Cc: Christophe Leroy
Cc: Nicholas Piggin
Cc: Andrew Morton
Cc: Mike Rapoport
Cc: YueHaibing
Signed-off-by: David Hildenbrand
---
arch/powerpc/mm/book3s64/hash_utils.c | 10 ++
1 file changed, 6
or accessed).
Cc: Alex Williamson
Cc: Cornelia Huck
Signed-off-by: David Hildenbrand
---
drivers/vfio/vfio_iommu_type1.c | 10 --
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 2ada8e6cdb88
Signed-off-by: David Hildenbrand
---
arch/powerpc/kvm/book3s_64_mmu_radix.c | 14 --
1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c
b/arch/powerpc/kvm/book3s_64_mmu_radix.c
index 2d415c36a61d..05397c0561fc 100644
--- a/arch/powerpc/kvm
Cc: Dan Carpenter
Cc: Nishka Dasgupta
Cc: Madhumitha Prabakaran
Cc: Fabio Estevam
Cc: Matt Sickler
Cc: Jeremy Sowden
Signed-off-by: David Hildenbrand
---
drivers/staging/kpc2000/kpc_dma/fileops.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/staging/kpc20
Signed-off-by: David Hildenbrand
---
mm/memory_hotplug.c | 26 --
1 file changed, 24 insertions(+), 2 deletions(-)
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 561371ead39a..7210f4375279 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1447,10 +1447,19
or accessed).
Cc: Paolo Bonzini
Cc: "Radim Krčmář"
Cc: Michal Hocko
Cc: Dan Williams
Cc: KarimAllah Ahmed
Signed-off-by: David Hildenbrand
---
virt/kvm/kvm_main.c | 10 --
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_ma
Cc: "H. Peter Anvin"
Cc: KarimAllah Ahmed
Cc: Michal Hocko
Cc: Dan Williams
Signed-off-by: David Hildenbrand
---
arch/x86/kvm/mmu.c | 30 ++
1 file changed, 18 insertions(+), 12 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 24c23c
On 06.10.19 10:56, David Hildenbrand wrote:
Let's poison the pages similar to when adding new memory in
sparse_add_section(). Also call remove_pfn_range_from_zone() from
memunmap_pages(), so we can poison the memmap from there as well.
While at it, calculate the pfn in memunmap_pages() only
On 15.10.19 13:50, David Hildenbrand wrote:
On 15.10.19 13:47, Michal Hocko wrote:
On Tue 15-10-19 13:42:03, David Hildenbrand wrote:
[...]
-static bool pfn_range_valid_gigantic(struct zone *z,
- unsigned long start_pfn, unsigned long nr_pages)
-{
- unsigned long i
On 15.10.19 13:47, Michal Hocko wrote:
On Tue 15-10-19 13:42:03, David Hildenbrand wrote:
[...]
-static bool pfn_range_valid_gigantic(struct zone *z,
- unsigned long start_pfn, unsigned long nr_pages)
-{
- unsigned long i, end_pfn = start_pfn + nr_pages
On 15.10.19 11:21, Anshuman Khandual wrote:
alloc_gigantic_page() implements an allocation method where it scans over
various zones looking for a large contiguous memory block which could not
have been allocated through the buddy allocator. A subsequent patch which
tests arch page table helpers
On 06.10.19 10:56, David Hildenbrand wrote:
> We currently try to shrink a single zone when removing memory. We use the
> zone of the first page of the memory we are removing. If that memmap was
> never initialized (e.g., memory was never onlined), we will read garbage
> and can tr
On 06.10.19 10:56, David Hildenbrand wrote:
Let's limit shrinking to !ZONE_DEVICE so we can fix the current code. We
should never try to touch the memmap of offline sections where we could
have uninitialized memmaps and could trigger BUGs when calling
page_to_nid() on poisoned pages
On 06.10.19 10:56, David Hildenbrand wrote:
> We might use the nid of memmaps that were never initialized. For
> example, if the memmap was poisoned, we will crash the kernel in
> pfn_to_nid() right now. Let's use the calculated boundaries of the separate
> zones instead. This now
On 06.10.19 10:56, David Hildenbrand wrote:
From: "Aneesh Kumar K.V"
With an altmap, the memmap falling into the reserved altmap space are
not initialized and, therefore, contain a garbage NID and a garbage
zone. Make sure to read the NID/zone from a memmap that was initialized.
All mails popped up on the mm list.
>
> On Sun, 06. Oct 10:56, David Hildenbrand wrote:
>> From: "Aneesh Kumar K.V"
>>
>> With an altmap, the memmap falling into the reserved altmap space are
>> not initialized and, therefore, contain a garbage
ge_from_zone()"
- Stop shrinking ZONE_DEVICE
- Reshuffle patches, moving all fixes to the front. Add Fixes: tags.
- Change subject/description of various patches
- Minor changes (too many to mention)
Cc: Aneesh Kumar K.V
Cc: Andrew Morton
Cc: Dan Williams
Cc: Michal Hocko
Aneesh Kumar K.V (2
Let's drop the basically unused section stuff and simplify.
Also, let's use a shorter variant to calculate the number of pages to
the next section boundary.
Cc: Andrew Morton
Cc: Oscar Salvador
Cc: Michal Hocko
Cc: Pavel Tatashin
Cc: Dan Williams
Cc: Wei Yang
Signed-off-by: David Hildenbrand
Get rid of the unnecessary local variables.
Cc: Andrew Morton
Cc: Oscar Salvador
Cc: David Hildenbrand
Cc: Michal Hocko
Cc: Pavel Tatashin
Cc: Dan Williams
Cc: Wei Yang
Signed-off-by: David Hildenbrand
---
mm/memory_hotplug.c | 15 ++-
1 file changed, 6 insertions(+), 9
If we have holes, the holes will automatically get detected and removed
once we remove the next bigger/smaller section. The extra checks can
go.
Cc: Andrew Morton
Cc: Oscar Salvador
Cc: Michal Hocko
Cc: David Hildenbrand
Cc: Pavel Tatashin
Cc: Dan Williams
Cc: Wei Yang
Signed-off-by: David Hildenbrand
With shrink_pgdat_span() out of the way, we now always have a valid
zone.
Cc: Andrew Morton
Cc: Oscar Salvador
Cc: David Hildenbrand
Cc: Michal Hocko
Cc: Pavel Tatashin
Cc: Dan Williams
Cc: Wei Yang
Signed-off-by: David Hildenbrand
---
mm/memory_hotplug.c | 4 ++--
1 file changed, 2
Let's poison the pages similar to when adding new memory in
sparse_add_section(). Also call remove_pfn_range_from_zone() from
memunmap_pages(), so we can poison the memmap from there as well.
While at it, calculate the pfn in memunmap_pages() only once.
Cc: Andrew Morton
Cc: David Hildenbrand
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-s...@vger.kernel.org
Cc: linux...@vger.kernel.org
Fixes: d0dc12e86b31 ("mm/memory_hotplug: optimize memory hotplug")
Signed-off-by: David Hildenbrand
---
arch/arm64/mm/mmu.c| 4 +---
arch/ia64/mm/init.c| 4 +---
arch/powerpc/mm/mem.c | 3
zed with 0 and the node with the
right value. So the zone might be wrong but not garbage. After that
commit, both the zone and the node will be garbage when touching
uninitialized memmaps.
Cc: Andrew Morton
Cc: Oscar Salvador
Cc: David Hildenbrand
Cc: Michal Hocko
Cc: Pavel Tatashin
Cc: Da
Cc: Oscar Salvador
Cc: David Hildenbrand
Cc: Michal Hocko
Cc: Pavel Tatashin
Cc: Dan Williams
Cc: Wei Yang
Fixes: d0dc12e86b31 ("mm/memory_hotplug: optimize memory hotplug")
Reported-by: Aneesh Kumar K.V
Signed-off-by: David Hildenbrand
---
mm/m
Cc: Mike Rapoport
Cc: Dan Williams
Cc: Alexander Duyck
Cc: Pavel Tatashin
Cc: Alexander Potapenko
Reviewed-by: Pankaj Gupta
Reviewed-by: David Hildenbrand
Signed-off-by: Aneesh Kumar K.V
Signed-off-by: David Hildenbrand
---
mm/page_alloc.c | 8
1 file changed, 4 insertions(+), 4 deletion
0274087dd0] c04c2a6c ksys_write+0x7c/0x140
[c00274087e20] c000bbd0 system_call+0x5c/0x68
Cc: Dan Williams
Cc: Andrew Morton
Cc: Jason Gunthorpe
Cc: Logan Gunthorpe
Cc: Ira Weiny
Signed-off-by: Aneesh Kumar K.V
[ minimize code changes, rephrase description ]
Signed-off-by: David Hildenbrand
On 05.10.19 08:13, Aneesh Kumar K.V wrote:
> On 10/4/19 2:33 PM, David Hildenbrand wrote:
>> On 04.10.19 11:00, David Hildenbrand wrote:
>>> On 03.10.19 18:48, Aneesh Kumar K.V wrote:
>>>> On 10/1/19 8:33 PM, David Hildenbrand wrote:
>>>>&g
On 04.10.19 11:03, David Hildenbrand wrote:
> On 04.10.19 11:00, David Hildenbrand wrote:
>> On 03.10.19 18:48, Aneesh Kumar K.V wrote:
>>> On 10/1/19 8:33 PM, David Hildenbrand wrote:
>>>> On 01.10.19 16:57, David Hildenbrand wrote:
>>>>> On 01.10.1
On 04.10.19 11:00, David Hildenbrand wrote:
> On 03.10.19 18:48, Aneesh Kumar K.V wrote:
>> On 10/1/19 8:33 PM, David Hildenbrand wrote:
>>> On 01.10.19 16:57, David Hildenbrand wrote:
>>>> On 01.10.19 16:40, David Hildenbrand wrote:
>>>>> From: &qu
On 03.10.19 18:48, Aneesh Kumar K.V wrote:
> On 10/1/19 8:33 PM, David Hildenbrand wrote:
>> On 01.10.19 16:57, David Hildenbrand wrote:
>>> On 01.10.19 16:40, David Hildenbrand wrote:
>>>> From: "Aneesh Kumar K.V"
>>>>
>>>&g
On 02.10.19 02:06, kbuild test robot wrote:
> Hi David,
>
> I love your patch! Perhaps something to improve:
>
> [auto build test WARNING on mmotm/master]
>
> url:
> https://github.com/0day-ci/linux/commits/David-Hildenbrand/mm-memory_hotplug-Shrink-zones-before-
On 01.10.19 16:57, David Hildenbrand wrote:
> On 01.10.19 16:40, David Hildenbrand wrote:
>> From: "Aneesh Kumar K.V"
>>
>> With altmap, all the resource pfns are not initialized. While initializing
>> pfn, altmap reserve space is skipped. Hence when removing
On 01.10.19 16:40, David Hildenbrand wrote:
> From: "Aneesh Kumar K.V"
>
> With altmap, all the resource pfns are not initialized. While initializing
> pfn, altmap reserve space is skipped. Hence when removing pfn from zone
> skip pfns that were never initialized.
&
Signed-off-by: Aneesh Kumar K.V
[ move all pfn-related declarations into a single line ]
Signed-off-by: David Hildenbrand
---
mm/memremap.c | 13 -
1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/mm/memremap.c b/mm/memremap.c
index 557e53c6fb46..026788b2ac69 100644
--- a/mm/memremap.
Cc: Michal Hocko
Aneesh Kumar K.V (2):
mm/memunmap: Use the correct start and end pfn when removing pages
from zone
mm/memmap_init: Update variable name in memmap_init_zone
David Hildenbrand (8):
mm/memory_hotplug: Don't access uninitialized memmaps in
shrink_pgdat_span()
mm/memory_hotplug:
On 25.09.19 09:37, David Hildenbrand wrote:
> On 10.09.19 18:39, David Hildenbrand wrote:
>> We can simply store the pages in a list (page->lru), no need for a
>> separate data structure (+ complicated handling). This is how most
>> other balloon drivers store allocated
On 10.09.19 18:39, David Hildenbrand wrote:
> We can simply store the pages in a list (page->lru), no need for a
> separate data structure (+ complicated handling). This is how most
> other balloon drivers store allocated pages without additional tracking
> data.
>
> F
Cc: Pavel Tatashin
Cc: Thomas Gleixner
Cc: Andrew Morton
Cc: Vlastimil Babka
Signed-off-by: David Hildenbrand
---
Only compile-tested. I hope the page_to_phys() thingy is correct and that I
didn't mess up anything else or overlook an important reason why the array
is needed.
I stumbled over this while looking at
On 04.09.19 07:25, Alastair D'Silva wrote:
> On Mon, 2019-09-02 at 09:28 +0200, David Hildenbrand wrote:
>> On 02.09.19 01:54, Alastair D'Silva wrote:
>>> On Tue, 2019-08-27 at 09:13 +0200, David Hildenbrand wrote:
>>>> On 27.08.19 08:39, Alastair D'Silva wrote:
>
On 02.09.19 01:54, Alastair D'Silva wrote:
> On Tue, 2019-08-27 at 09:13 +0200, David Hildenbrand wrote:
>> On 27.08.19 08:39, Alastair D'Silva wrote:
>>> On Tue, 2019-08-27 at 08:28 +0200, Michal Hocko wrote:
>>>> On Tue 27-08-19 15:20:46, Alastair D'Silva wrote
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-s...@vger.kernel.org
Cc: linux...@vger.kernel.org
Fixes: d0dc12e86b31 ("mm/memory_hotplug: optimize memory hotplug")
Signed-off-by: David Hildenbrand
---
arch/arm64/mm/mmu.c| 4 +---
arch/ia64/mm/init.c| 4 +---
arch/powerpc/mm/mem.c | 3
Cc: Wei Yang
Cc: Qian Cai
Cc: Jason Gunthorpe
Cc: Logan Gunthorpe
Cc: Ira Weiny
Cc: linux-arm-ker...@lists.infradead.org
Cc: linux-i...@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-s...@vger.kernel.org
Cc: linux...@vger.kernel.org
Signed-off-by: David Hildenbrand
---
arch/
On 27.08.19 08:39, Alastair D'Silva wrote:
> On Tue, 2019-08-27 at 08:28 +0200, Michal Hocko wrote:
>> On Tue 27-08-19 15:20:46, Alastair D'Silva wrote:
>>> From: Alastair D'Silva
>>>
>>> It is possible for firmware to allocate memory ranges outside
>>> the range of physical memory that we
Cc: linux-arm-ker...@lists.infradead.org
Cc: linux-i...@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-s...@vger.kernel.org
Cc: linux...@vger.kernel.org
Signed-off-by: David Hildenbrand
---
arch/arm64/mm/mmu.c| 4 +---
arch/ia64/mm/init.c| 4 +---
arch/powerpc/mm/mem.c
On 21.08.19 17:40, David Hildenbrand wrote:
> No longer in use, let's drop it. We no longer access the zone of
> possibly never onlined memory (and therefore don't read garbage in
> these scenarios).
>
> Cc: Catalin Marinas
> Cc: Will Deacon
> Cc: Tony Luck
> Cc: Fe
Cc: linux...@vger.kernel.org
Signed-off-by: David Hildenbrand
---
arch/arm64/mm/mmu.c| 4 +---
arch/ia64/mm/init.c| 4 +---
arch/powerpc/mm/mem.c | 3 +--
arch/s390/mm/init.c| 4 +---
arch/sh/mm/init.c | 4 +---
arch/x86/mm/init_32.c
> -	rc &= is_mem_section_removable(pfn, PAGES_PER_SECTION);
> + rc = rc && is_mem_section_removable(pfn, PAGES_PER_SECTION);
> phys_addr += MIN_MEMORY_BLOCK_SIZE;
> }
>
> - return rc ? true : false;
> + return rc;
> }
>
> static int dlpar_add_lmb(struct drmem_lmb *);
>
Yeah, why not
Reviewed-by: David Hildenbrand
--
Thanks,
David / dhildenb
On 02.08.19 09:18, David Hildenbrand wrote:
> On 02.08.19 01:10, Leonardo Bras wrote:
>> Changes the return variable to bool (as the return value) and
>> avoids doing a ternary operation before returning.
>>
>> Also, since rc will always be true, there is no need to do
&
, 18 insertions(+), 8 deletions(-)
More LOC but seems to be the right thing to do
Reviewed-by: David Hildenbrand
>
> diff --git a/arch/powerpc/platforms/pseries/hotplug-memory.c
> b/arch/powerpc/platforms/pseries/hotplug-memory.c
> index 46d0d35b9ca4..8e700390f3d6 100644
> --- a/ar
On 02.08.19 01:10, Leonardo Bras wrote:
> Changes the return variable to bool (as the return value) and
> avoids doing a ternary operation before returning.
>
> Also, since rc will always be true, there is no need to do
> rc &= bool, as (true && X) will result in X.
>
> Signed-off-by: Leonardo
On 16.07.19 10:46, Oscar Salvador wrote:
> On Mon, Jul 15, 2019 at 01:10:33PM +0200, David Hildenbrand wrote:
>> On 01.07.19 12:27, Michal Hocko wrote:
>>> On Mon 01-07-19 11:36:44, Oscar Salvador wrote:
>>>> On Mon, Jul 01, 2019 at 10:51:44AM +0200, Michal Hocko wrot