On Thu, Oct 29, 2020 at 05:27:17PM +0100, David Hildenbrand wrote:
> Let's revert what we did in case something goes wrong and we return an
> error.
Dumb question, but shouldn't we do this for other arches as well?
--
Oscar Salvador
SUSE L3
On Thu, Oct 29, 2020 at 05:27:16PM +0100, David Hildenbrand wrote:
> Let's print a warning similar to the one in arch_add_linear_mapping() instead
> of WARN_ON_ONCE() and eventually crashing the kernel.
>
> Cc: Michael Ellerman
> Cc: Benjamin Herrenschmidt
> Cc: Paul Mackerras
> Cc: Rashmica Gupta
>
On Thu, Oct 29, 2020 at 08:57:32AM -0700, Yang Shi wrote:
> IMHO, we don't have to modify those two places at all. They are used
> to rebalance the anon lru active/inactive ratio even if we did not try
> to evict anon pages at all, so "total_swap_pages" is used instead of
> checking swappiness and
On 2020-10-16 15:42, Michal Hocko wrote:
OK, I finally managed to convince my Friday brain to think and grasped
what the code is intended to do. The loop is hairy and we want to
prevent spurious EIO when all the pages are on a proper node. So
the check has to be done inside the loop. Anyway
On 2020-10-16 14:31, Michal Hocko wrote:
I do not like the fix though. The code is really confusing. Why should
we check for flags in each iteration of the loop when it cannot change?
Also, why should we take the ptl lock in the first place when the loop is
broken out of immediately?
About
On 2020-10-15 14:15, Shijie Luo wrote:
When flags don't have the MPOL_MF_MOVE or MPOL_MF_MOVE_ALL bits, the code
breaks out of the loop, and passing the original pte - 1 to
pte_unmap_unlock() seems like a bad idea.
Signed-off-by: Shijie Luo
Signed-off-by: linmiaohe
---
mm/mempolicy.c | 6 +-
1 file changed, 5
On 2020-10-07 18:17, Dave Hansen wrote:
From: Dave Hansen
Reclaim-based migration is attempting to optimize data placement in
memory based on the system topology. If the system changes, so must
the migration ordering.
The implementation here is pretty simple and entirely unoptimized. On any
On 2020-09-22 19:03, Andrew Morton wrote:
On Tue, 22 Sep 2020 15:56:36 +0200 Oscar Salvador
wrote:
This patchset is the latest version of the soft offline rework patchset
targeted for v5.9.
Thanks.
Where do we now stand with the followon patches:
On 2020-09-19 02:23, Andrew Morton wrote:
On Fri, 18 Sep 2020 09:58:22 +0200 osalva...@suse.de wrote:
I just found out yesterday that the patchset Naoya sent has diverged
from mine in some aspects that led to some bugs [1].
This was due to a misunderstanding so no blame here.
So, patch#8 and
On 2020-08-06 20:49, nao.horigu...@gmail.com wrote:
From: Oscar Salvador
This patch changes the way we set and handle in-use poisoned pages.
Until now, poisoned pages were released to the buddy allocator, trusting
that the checks that take place prior to handing out the page would act as a
safety net
On 2020-09-17 17:27, HORIGUCHI NAOYA wrote:
Sorry, I modified the patches based on the different assumption from
yours.
I first thought of taking the page off after confirming the error page
is freed back to buddy. This approach leaves the possibility of reusing
the error page (which is
On 2020-09-16 18:30, osalva...@suse.de wrote:
On 2020-09-16 16:46, Aristeu Rozanski wrote:
Hi Oscar,
On Wed, Sep 16, 2020 at 04:09:30PM +0200, Oscar Salvador wrote:
On Wed, Sep 16, 2020 at 09:53:58AM -0400, Aristeu Rozanski wrote:
Can you try the other patch I posted in response to Naoya?
On 2020-09-16 16:46, Aristeu Rozanski wrote:
Hi Oscar,
On Wed, Sep 16, 2020 at 04:09:30PM +0200, Oscar Salvador wrote:
On Wed, Sep 16, 2020 at 09:53:58AM -0400, Aristeu Rozanski wrote:
Can you try the other patch I posted in response to Naoya?
Same thing:
[ 369.195056] Soft offlining pfn
On 2020-09-16 20:34, David Hildenbrand wrote:
When adding separate memory blocks via add_memory*() and onlining them
immediately, the metadata (especially the memmap) of the next block will be
placed onto one of the just added+onlined blocks. This creates a chain
of unmovable allocations: If
On 2020-09-16 19:58, Aristeu Rozanski wrote:
On Wed, Sep 16, 2020 at 06:34:52PM +0200, osalva...@suse.de wrote:
Fat fingers, sorry:
Ok, this is something different.
The race you saw previously is kinda normal as there is a race window
between spotting a freepage and taking it off the buddy
On 2020-09-08 09:56, Oscar Salvador wrote:
The important bit of this patchset is patch#1, which is a fix to take
HWPoison pages off a buddy freelist, since it can lead us to having HWPoison
pages back in the game without anyone noticing it.
So fix it (we did that already for
On 2020-09-09 12:54, Vlastimil Babka wrote:
Thanks! I expect no performance change while no isolation is in progress,
as there are no new tests added in alloc/free paths. During page isolation
there's a single drain instead of once-per-pageblock, which is a benefit.
But the pcplists are
On 2020-08-04 03:49, Qian Cai wrote:
Well, each iteration will mmap/munmap, so there should be no leaking.
https://gitlab.com/cailca/linux-mm/-/blob/master/random.c#L376
It also seems to me madvise(MADV_SOFT_OFFLINE) does start to fragment
memory somehow, because after this "madvise: Cannot
On 2020-07-20 10:27, osalva...@suse.de wrote:
On 2020-07-17 08:55, HORIGUCHI NAOYA wrote:
I ran Qian Cai's test program (https://github.com/cailca/linux-mm) on a
small (4GB memory) VM, and weirdly found that (1) the target hugepages
are not always dissolved and (2) dissolved hugepages are
On 2020-07-17 08:55, HORIGUCHI NAOYA wrote:
I ran Qian Cai's test program (https://github.com/cailca/linux-mm) on a
small (4GB memory) VM, and weirdly found that (1) the target hugepages
are not always dissolved and (2) dissolved hugepages are still counted
in "HugePages_Total:". See below:
On 2020-07-16 14:38, Oscar Salvador wrote:
From: David Woodhouse
Sorry for the noise.
This should not be here.
I dunno how this patch sneaked in.
Please ignore it.
On 2019-10-11 23:32, Qian Cai wrote:
# /opt/ltp/runtest/bin/move_pages12
move_pages12.c:263: INFO: Free RAM 258988928 kB
move_pages12.c:281: INFO: Increasing 2048kB hugepages pool on node 0 to 4
move_pages12.c:291: INFO: Increasing 2048kB hugepages pool on node 8 to 4
move_pages12.c:207:
On 2019-09-11 08:22, Naoya Horiguchi wrote:
I found another panic ...
Hi Naoya,
Thanks for giving it a try. Are these testcases public?
I will definitely take a look and try to solve these cases.
Thanks!
This testcase is testing the corner case where hugepage migration fails
by allocation
On 2019-07-24 22:11, Dan Williams wrote:
On Tue, Jun 25, 2019 at 12:53 AM Oscar Salvador
wrote:
This patch introduces MHP_MEMMAP_DEVICE and MHP_MEMMAP_MEMBLOCK flags,
and prepares the callers that add memory to take a "flags" parameter.
This "flags" parameter will be evaluated later on in
On 2019-05-07 01:39, Dan Williams wrote:
Towards enabling memory hotplug to track partial population of a
section, introduce 'struct mem_section_usage'.
A pointer to a 'struct mem_section_usage' instance replaces the existing
pointer to a 'pageblock_flags' bitmap. Effectively it adds one more
On 2019-05-07 01:39, Dan Williams wrote:
Prepare for hot{plug,remove} of sub-ranges of a section by tracking a
sub-section active bitmask, each bit representing a PMD_SIZE span of the
architecture's memory hotplug section size.
The implication of a partially populated section is that
On 2018-12-17 16:29, Michal Hocko wrote:
On Mon 17-12-18 16:06:51, Oscar Salvador wrote:
[...]
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a6e7bfd18cde..18d41e85f672 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8038,11 +8038,12 @@ bool has_unmovable_pages(struct zone *zone,
On 2018-12-11 11:18, Michal Hocko wrote:
Currently, if we fail to isolate a single page, we put all already
isolated pages back to their LRU and we bail out from the function.
This is quite suboptimal, as this will force us to start over again
because scan_movable_pages will give us the same
On 2018-12-11 09:50, Oscar Salvador wrote:
- } else {
- pr_warn("failed to isolate pfn %lx\n", pfn);
- dump_page(page, "isolation failed");
- put_page(page);
- /* Because we don't have big
This commit adds shake_page() for mlocked pages to make sure that the target
page is flushed out from the LRU cache. Without this shake_page(), a
subsequent delete_from_lru_cache() (from me_pagecache_clean()) fails to
isolate it and the page will finally return back to the LRU list. So this
scenario
> Btw. the way how we drop all the work on the first page that we
> cannot
> isolate is just goofy. Why don't we simply migrate all that we
> already
> have on the list and go on? Something for a followup cleanup though.
Indeed, that is just wrong.
I will try to send a followup cleanup to fix
On 2018-12-03 12:16, David Hildenbrand wrote:
Let's use the easier to read (and not mess up) variants:
- Use DEVICE_ATTR_RO
- Use DEVICE_ATTR_WO
- Use DEVICE_ATTR_RW
instead of the more generic DEVICE_ATTR() we're using right now.
We have to rename most callback functions. By fixing the
On 2018-12-03 11:03, Michal Hocko wrote:
Debugged-by: Oscar Salvador
Cc: stable
Signed-off-by: Michal Hocko
Bit by bit memory-hotplug is getting trained :-)
Reviewed-by: Oscar Salvador
> Signed-off-by: Michal Hocko
[...]
> + do {
> + for (pfn = start_pfn; pfn;)
> + {
> + /* start memory hot removal */
Should we change that comment? I mean, this is not really the hot-removal
stage.
Maybe "start memory migration" suits better?
On Tue, 2018-11-20 at 14:43 +0100, Michal Hocko wrote:
> From: Michal Hocko
>
> do_migrate_range has been limiting the number of pages to migrate to
> 256
> for some reason which is not documented.
When looking back at old memory-hotplug commits one feels pretty sad
about the brevity of the
On Fri, 2018-11-16 at 14:41 -0800, Dave Hansen wrote:
> On 11/16/18 2:12 AM, Oscar Salvador wrote:
> > Physical memory hotadd has to allocate a memmap (struct page array)
> > for
> > the newly added memory section. Currently, kmalloc is used for
> > those
> > allocations.
>
> Did you literally
On Mon, 2018-11-12 at 21:28 +, Pavel Tatashin wrote:
> >
> > This collides with the refactoring of hmm, to be done in terms of
> > devm_memremap_pages(). I'd rather not introduce another common
> > function *beneath* hmm and devm_memremap_pages() and rather make
> > devm_memremap_pages() the
> This collides with the refactoring of hmm, to be done in terms of
> devm_memremap_pages(). I'd rather not introduce another common
> function *beneath* hmm and devm_memremap_pages() and rather make
> devm_memremap_pages() the common function.
Hi Dan,
That is true.
Previous version of this
On Fri, 2018-11-16 at 12:22 +0100, Michal Hocko wrote:
> On Fri 16-11-18 11:47:01, osalvador wrote:
> > On Fri, 2018-11-16 at 09:30 +0100, Michal Hocko wrote:
> > > From: Michal Hocko
> > > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > > index a919ba5cb3c8
On Fri, 2018-11-16 at 10:57 +0100, Michal Hocko wrote:
> On Thu 15-11-18 13:37:35, Andrew Morton wrote:
> [...]
> > Worse, the situations in which managed_zone() != populated_zone()
> > are
> > rare(?), so it will take a long time for problems to be discovered,
> > I
> > expect.
>
> We would
On Fri, 2018-11-16 at 09:30 +0100, Michal Hocko wrote:
> From: Michal Hocko
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index a919ba5cb3c8..ec2c7916dc2d 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -7845,6 +7845,7 @@ bool has_unmovable_pages(struct zone *zone,
> struct page
On Fri, 2018-11-16 at 09:30 +0100, Michal Hocko wrote:
> From: Michal Hocko
>
> The memory offlining failure reporting is inconsistent and
> insufficient.
> Some error paths simply do not report the failure to the log at all.
> When we do report there are no details about the reason of the
>
On Fri, 2018-11-16 at 09:30 +0100, Michal Hocko wrote:
> From: Michal Hocko
>
> This function is never called from a context which would provide
> misaligned pfn range so drop the pointless check.
>
> Signed-off-by: Michal Hocko
I vaguely remember that someone reported a problem about
On Wed, 2018-11-07 at 08:35 +0100, Michal Hocko wrote:
> On Wed 07-11-18 07:35:18, Balbir Singh wrote:
> > The check seems to be quite aggressive and in a loop that iterates
> > pages, but has nothing to do with the page, did you mean to make
> > the check
> >
> > zone_idx(page_zone(page)) ==
On Tue, 2018-11-06 at 10:55 +0100, Michal Hocko wrote:
> From: Michal Hocko
>
> Reported-and-tested-by: Baoquan He
> Acked-by: Baoquan He
> Fixes: "mm, memory_hotplug: make has_unmovable_pages more robust")
> Signed-off-by: Michal Hocko
Looks good to me.
Reviewed-by: Oscar Salvador
Oscar
From: Oscar Salvador
unregister_memory_section() calls remove_memory_section()
with three arguments:
* node_id
* section
* phys_device
Neither node_id nor phys_device are used.
Let us drop them from the function.
Signed-off-by: Oscar Salvador
---
drivers/base/memory.c | 5 ++---
1 file
From: Oscar Salvador
This patchset does some cleanups and refactoring in the memory-hotplug code.
The first and the second patch are pretty straightforward, as they
only remove unused arguments/checks.
The third one refactors unregister_mem_sect_under_nodes.
This is needed to have a proper
From: Oscar Salvador
unregister_mem_sect_under_nodes() tries to allocate a nodemask_t
in order to check within the loop which nodes have already been unlinked,
so we do not repeat the operation on them.
NODEMASK_ALLOC calls kmalloc() if NODES_SHIFT > 8, otherwise
it just declares a nodemask_t
From: Oscar Salvador
Before calling unregister_mem_sect_under_nodes(),
remove_memory_section() already checks if we got a valid
memory_block.
No need to check that again in unregister_mem_sect_under_nodes().
Signed-off-by: Oscar Salvador
---
drivers/base/node.c | 4
1 file changed, 4
From: Oscar Salvador
This patchset is about cleaning up/refactoring a few functions
from the memory-hotplug code.
The first and the second patch are pretty straightforward, as they
only remove unused arguments/checks.
The third one changes the layout of unregister_mem_sect_under_nodes a bit.
From: Oscar Salvador
With the assumption that the relationship between
memory_block <-> node is 1:1, we can refactor this function a bit.
This assumption is being taken from register_mem_sect_under_node()
code.
register_mem_sect_under_node() takes the mem_blk's nid, and compares it
to the
From: Oscar Salvador
This tries to fix [1], which was reported by David Hildenbrand, and also
does some cleanups/refactoring.
I am sending this as an RFC to see if the direction I am going in is right
before spending more time on it.
And also to gather feedback about hmm/zone_device stuff.
The code
From: Oscar Salvador
This patch is only a preparation for the follow-up patches.
The idea is to remove the zone parameter and pass the nid instead.
The zone parameter was needed because down the chain we call
__remove_zone, which adjusts the spanned pages of a zone/node.
online_pages()
From: Oscar Salvador
This patch refactors shrink_zone_span and shrink_pgdat_span functions.
In case that find_smallest/biggest_section do not return any pfn,
it means that the zone/pgdat has no online sections left, so we can
set the respective values to 0:
zone case:
From: Oscar Salvador
Currently, we decrement zone/node spanned_pages when we
__remove__ the memory.
This is not really great.
Incrementing of spanned pages is done in online_pages() path,
decrementing spanned pages should be moved to offline_pages().
This, besides making the core more
From: Oscar Salvador
Moving the #ifdefs out of the function makes it easier to follow.
Signed-off-by: Oscar Salvador
Acked-by: Michal Hocko
Reviewed-by: Pavel Tatashin
---
mm/page_alloc.c | 50 +-
1 file changed, 37 insertions(+), 13
From: Pavel Tatashin
__paginginit is the same thing as __meminit except for platforms without
sparsemem, there it is defined as __init.
Remove __paginginit and use __meminit. Use __ref in one single function
that merges __meminit and __init sections: setup_usemap().
Signed-off-by: Pavel
From: Oscar Salvador
Let us move the code guarded by CONFIG_DEFERRED_STRUCT_PAGE_INIT
into an inline function.
Not having an ifdef in the function makes the code more readable.
Signed-off-by: Oscar Salvador
Acked-by: Michal Hocko
Reviewed-by: Pavel Tatashin
---
mm/page_alloc.c | 26
From: Oscar Salvador
Currently, whenever a new node is created/re-used from the memhotplug path,
we call free_area_init_node()->free_area_init_core().
But there is some code that we do not really need to run when we are coming
from such path.
free_area_init_core() performs the following
From: Pavel Tatashin
zone->node is configured only when CONFIG_NUMA=y, so it is a good idea to
have inline functions to access this field in order to avoid ifdefs in
C files.
Signed-off-by: Pavel Tatashin
Signed-off-by: Oscar Salvador
Reviewed-by: Oscar Salvador
Acked-by: Michal Hocko
---
From: Oscar Salvador
Changes:
v5 -> v6:
- Added patch from Pavel that removes __paginginit
- Convert all __meminit(old __paginginit) to __init
for functions we do not need after initialization.
- Move definition of free_area_init_core_hotplug
to
From: Oscar Salvador
The __paginginit macro is being used to mark functions for:
a) Functions that we do not need to keep once the system is fully
initialized with regard to memory.
b) Functions that will be needed for the memory-hotplug code,
and because of that we need to keep them after
From: Oscar Salvador
is_dev_zone() is using zone_id() to check if the zone is ZONE_DEVICE.
zone_id() looks pretty much the same as zone_idx(), and while the use of
zone_idx() is quite spread in the kernel, zone_id() is only being
used by is_dev_zone().
This patch removes zone_id() and makes