On 11/2/23 16:46, Paolo Bonzini wrote:
> On Thu, Nov 2, 2023 at 4:38 PM Sean Christopherson wrote:
>> Actually, looking at this again, there's not actually a hard dependency on THP.
>> A THP-enabled kernel _probably_ gives a higher probability of using hugepages, but mostly because
CKUP @
>> __update_freelist_slow+0x74/0x90
>
> Sorry, the bug can be fixed by this patch from Vlastimil Babka:
>
> https://lore.kernel.org/all/83ff4b9e-94f1-8b35-1233-3dd414ea4...@suse.cz/
The current -next should be fixed; the fix was folded into the preparatory
commit, which
folios are also unevictable. To enforce
that expectation, make mapping_set_unmovable() also set AS_UNEVICTABLE.
Also incorporate comment update suggested by Matthew.
Fixes: 3424873596ce ("mm: Add AS_UNMOVABLE to mark mapping as completely unmovable")
Signed-off-by: Vlastimil Babka
On 9/6/23 01:56, Sean Christopherson wrote:
> On Fri, Sep 01, 2023, Vlastimil Babka wrote:
>> As Kirill pointed out, mapping can be removed under us due to
>> truncation. Test it under folio lock as already done for the async
>> compaction / dirty folio case. To preven
On 7/25/23 14:51, Matthew Wilcox wrote:
> On Tue, Jul 25, 2023 at 01:24:03PM +0300, Kirill A . Shutemov wrote:
>> On Tue, Jul 18, 2023 at 04:44:53PM -0700, Sean Christopherson wrote:
>> > diff --git a/mm/compaction.c b/mm/compaction.c
>> > index dbc9f86b1934..a3d2b132df52 100644
>> > ---
folios are also unevictable - it is the
case for guest memfd folios.
Also incorporate comment update suggested by Matthew.
Fixes: 3424873596ce ("mm: Add AS_UNMOVABLE to mark mapping as completely unmovable")
Signed-off-by: Vlastimil Babka
---
Feel free to squash into 3424873596ce.
mm/co
On 7/26/23 13:20, Nikunj A. Dadhania wrote:
> Hi Sean,
>
> On 7/24/2023 10:30 PM, Sean Christopherson wrote:
>> On Mon, Jul 24, 2023, Nikunj A. Dadhania wrote:
>>> On 7/19/2023 5:14 AM, Sean Christopherson wrote:
This is the next iteration of implementing fd-based (instead of vma-based)
On 7/19/23 01:44, Sean Christopherson wrote:
> Signed-off-by: Sean Christopherson
Process-wise this will probably be frowned upon when done separately, so I'd
fold it into the patch using the export, which seems to be the next one.
> ---
> security/security.c | 1 +
> 1 file changed, 1 insertion(+)
>
On 7/11/23 12:35, Leon Romanovsky wrote:
>
> On Mon, Feb 27, 2023 at 09:35:59AM -0800, Suren Baghdasaryan wrote:
>
> <...>
>
>> Laurent Dufour (1):
>> powerc/mm: try VMA lock-based page fault handling first
>
> Hi,
>
> This series and specifically the commit above broke docker over PPC.
>
On 5/24/23 02:29, David Rientjes wrote:
> On Tue, 23 May 2023, Vlastimil Babka wrote:
>
>> As discussed at LSF/MM [1] [2] and with no objections raised there,
>> deprecate the SLAB allocator. Rename the user-visible option so that
>> users with CONFIG_SLAB=y get a new
On 5/23/23 11:22, Geert Uytterhoeven wrote:
> Hi Vlastimil,
>
> Thanks for your patch!
>
> On Tue, May 23, 2023 at 11:12 AM Vlastimil Babka wrote:
>> As discussed at LSF/MM [1] [2] and with no objections raised there,
>> deprecate the SLAB allocator. Rename
with CONFIG_SLAB=y remove the line so those also
switch to SLUB. Regressions due to the switch should be reported to
linux-mm and slab maintainers.
[1] https://lore.kernel.org/all/4b9fc9c6-b48c-198f-5f80-811a44737...@suse.cz/
[2] https://lwn.net/Articles/932201/
Signed-off-by: Vlastimil Babka
---
arch/arc
On 1/9/23 21:53, Suren Baghdasaryan wrote:
> rw_semaphore is a sizable structure of 40 bytes and consumes
> considerable space for each vm_area_struct. However vma_lock has
> two important specifics which can be used to replace rw_semaphore
> with a simpler structure:
> 1. Readers never wait. They
ple, a VM running with VFIO could run into the memlock limit and
> fail to run. However, we essentially had the same behavior already in
> commit 17839856fd58 ("gup: document and work around "COW can break either
> way" issue") which got merged into some enterprise distros, and there were
> not any such complaints. So most probably, we're fine.
>
> Signed-off-by: David Hildenbrand
Reviewed-by: Vlastimil Babka
pud() also handles it
> correctly, for example, splitting the huge zeropage on FAULT_FLAG_UNSHARE
> such that we can handle FAULT_FLAG_UNSHARE on the PTE level.
>
> This change is a requirement for reliable long-term R/O pinning in
> COW mappings.
>
> Signed-off-by: David Hildenb
; Let's just split (->zap) + fallback in that case.
>
> This is a preparation for more generic FAULT_FLAG_UNSHARE support in
> COW mappings.
>
> Signed-off-by: David Hildenbrand
Reviewed-by: Vlastimil Babka
Nits:
> ---
> mm/memory.c | 24 +++-
>
ate mappings last.
>
> While at it, use folio-based functions instead of page-based functions
> where we touch the code either way.
>
> Signed-off-by: David Hildenbrand
Reviewed-by: Vlastimil Babka
preparation for reliable R/O long-term pinning of pages in
> private mappings, whereby we want to make sure that we will never break
> COW in a read-only private mapping.
>
> Signed-off-by: David Hildenbrand
Reviewed-by: Vlastimil Babka
> ---
> mm/memory.c | 8
>
-
> mm/huge_memory.c | 3 ---
> mm/hugetlb.c | 5 -
> mm/memory.c | 23 ---
> 3 files changed, 20 insertions(+), 11 deletions(-)
Reviewed-by: Vlastimil Babka
te it. So let's prepare for non-anon
> tests by renaming to "cow".
>
> Signed-off-by: David Hildenbrand
Acked-by: Vlastimil Babka
On 9/28/22 04:28, Suren Baghdasaryan wrote:
> On Sun, Sep 11, 2022 at 2:35 AM Vlastimil Babka wrote:
>>
>> On 9/2/22 01:26, Suren Baghdasaryan wrote:
>> >
>> >>
>> >> Two complaints so far:
>> >> - I don't like the vma_mark_locked
On 9/2/22 01:26, Suren Baghdasaryan wrote:
> On Thu, Sep 1, 2022 at 1:58 PM Kent Overstreet
> wrote:
>>
>> On Thu, Sep 01, 2022 at 10:34:48AM -0700, Suren Baghdasaryan wrote:
>> > Resending to fix the issue with the In-Reply-To tag in the original
>> > submission at [4].
>> >
>> > This is a proof
On 3/29/22 18:43, David Hildenbrand wrote:
> Let's test that __HAVE_ARCH_PTE_SWP_EXCLUSIVE works as expected.
>
> Signed-off-by: David Hildenbrand
Acked-by: Vlastimil Babka
> ---
> mm/debug_vm_pgtable.c | 15 +++
> 1 file changed, 15 insertions(+)
>
es were never really reliable, especially
> when taking one on a shared page and then writing to the page (e.g., GUP
> after fork()). FOLL_GET, including R/W references, were never really
> reliable once fork was involved (e.g., GUP before fork(),
> GUP during fork()). KSM steps bac
On 11/29/21 23:08, Zi Yan wrote:
> On 23 Nov 2021, at 12:32, Vlastimil Babka wrote:
>
>> On 11/23/21 17:35, Zi Yan wrote:
>>> On 19 Nov 2021, at 10:15, Zi Yan wrote:
>>>>>> From what my understanding, cma required alignment of
>>>>>> max(
On 11/23/21 17:35, Zi Yan wrote:
> On 19 Nov 2021, at 10:15, Zi Yan wrote:
From my understanding, cma required alignment of
max(MAX_ORDER - 1, pageblock_order), because when MIGRATE_CMA was
introduced,
__free_one_page() does not prevent merging two different pageblocks,
On 11/15/21 20:37, Zi Yan wrote:
> From: Zi Yan
>
> Hi David,
>
> You suggested to make alloc_contig_range() deal with pageblock_order instead of MAX_ORDER - 1 and get rid of MAX_ORDER - 1 dependency in virtio_mem[1]. This
> patchset is my attempt to achieve that. Please take a look and
On 11/8/20 7:57 AM, Mike Rapoport wrote:
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1428,21 +1428,19 @@ static bool is_debug_pagealloc_cache(struct kmem_cache *cachep)
return false;
}
-#ifdef CONFIG_DEBUG_PAGEALLOC
static void slab_kernel_map(struct kmem_cache *cachep, void *objp, int
On 11/3/20 5:20 PM, Mike Rapoport wrote:
From: Mike Rapoport
Subject should have "on DEBUG_PAGEALLOC" ?
The design of DEBUG_PAGEALLOC presumes that __kernel_map_pages() must never
fail. With this assumption it wouldn't be safe to allow general usage of
this function.
Moreover, some
,invalid}_noflush().
Still, add a pr_warn() so that future changes in set_memory APIs will not
silently break hibernation.
Signed-off-by: Mike Rapoport
Acked-by: Rafael J. Wysocki
Reviewed-by: David Hildenbrand
Acked-by: Kirill A. Shutemov
Acked-by: Vlastimil Babka
The bool param is a bit
when page
allocation debug is enabled.
Signed-off-by: Mike Rapoport
Reviewed-by: David Hildenbrand
Acked-by: Kirill A. Shutemov
Acked-by: Vlastimil Babka
But, the "enable" param is hideous. I would rather have map and unmap variants
(and just did the same split for page
On 10/8/20 11:49 AM, Christophe Leroy wrote:
In a 10-year-old commit
(https://github.com/linuxppc/linux/commit/d069cb4373fe0d451357c4d3769623a7564dfa9f),
powerpc 8xx has
made the handling of PTE accessed bit conditional to CONFIG_SWAP.
Since then, this has been extended to some other powerpc
On 4/21/20 10:39 AM, Nicolai Stange wrote:
> Hi
>
> [adding some drivers/char/random folks + LKML to CC]
>
> Vlastimil Babka writes:
>
>> On 4/17/20 6:53 PM, Michal Suchánek wrote:
>>> Hello,
>>
>> Hi, thanks for reproducing on lat
On 4/17/20 6:53 PM, Michal Suchánek wrote:
> Hello,
Hi, thanks for reproducing on latest upstream!
> instrumenting the kernel with the following patch
>
> ---
> mm/slub.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index d6787bbe0248..d40995d5f8ff 100644
>
sted-by: Sachin Sant
Reported-by: PUVICHAKRAVARTHY RAMACHANDRAN
Tested-by: Bharata B Rao
Debugged-by: Srikar Dronamraju
Signed-off-by: Vlastimil Babka
Fixes: a561ce00b09e ("slub: fall back to node_to_mem_node() node if allocating on memoryless node")
Cc: sta...@vger.kernel.org
Cc: Mel Gorman
On 3/20/20 8:46 AM, Srikar Dronamraju wrote:
> * Vlastimil Babka [2020-03-19 15:10:19]:
>
>> On 3/19/20 3:05 PM, Srikar Dronamraju wrote:
>> > * Vlastimil Babka [2020-03-19 14:47:58]:
>> >
>>
>> No, but AFAICS, such node values are already han
On 3/20/20 4:42 AM, Bharata B Rao wrote:
> On Thu, Mar 19, 2020 at 02:47:58PM +0100, Vlastimil Babka wrote:
>> diff --git a/mm/slub.c b/mm/slub.c
>> index 17dc00e33115..7113b1f9cd77 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -1973,8 +1973,6 @@ static void
On 3/19/20 3:05 PM, Srikar Dronamraju wrote:
> * Vlastimil Babka [2020-03-19 14:47:58]:
>
>> 8<
>> diff --git a/mm/slub.c b/mm/slub.c
>> index 17dc00e33115..7113b1f9cd77 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -1973,8 +1973,6 @@
On 3/19/20 2:26 PM, Sachin Sant wrote:
>
>
>> On 19-Mar-2020, at 6:53 PM, Vlastimil Babka wrote:
>>
>> On 3/19/20 9:52 AM, Sachin Sant wrote:
>>>
>>>> OK how about this version? It's somewhat ugly, but important is that the
>>>> fast
On 3/19/20 9:52 AM, Sachin Sant wrote:
>
>> OK how about this version? It's somewhat ugly, but important is that the fast
>> path case (c->page exists) is unaffected and another common case (c->page is
>> NULL, but node is NUMA_NO_NODE) is just one extra check - impossible to
>> avoid at
>> some
On 3/19/20 1:32 AM, Michael Ellerman wrote:
> Seems like a nice solution to me
Thanks :)
>> 8<
>> diff --git a/mm/slub.c b/mm/slub.c
>> index 17dc00e33115..1d4f2d7a0080 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -1511,7 +1511,7 @@ static inline struct page *alloc_slab_page(struct
kernel.org/linux-next/3381cd91-ab3d-4773-ba04-e7a072a63...@linux.vnet.ibm.com/
[2]
https://lore.kernel.org/linux-mm/fff0e636-4c36-ed10-281c-8cdb0687c...@virtuozzo.com/
[3] https://lore.kernel.org/linux-mm/20200317092624.gb22...@in.ibm.com/
[4]
https://lore.kernel.org/linux-mm/088b5996-faae-8a56-ef9c-5b5
On 3/18/20 5:06 PM, Bharata B Rao wrote:
> On Wed, Mar 18, 2020 at 03:42:19PM +0100, Vlastimil Babka wrote:
>> This is a PowerPC platform with following NUMA topology:
>>
>> available: 2 nodes (0-1)
>> node 0 cpus:
>> node 0 size: 0 MB
>> node 0 free: 0 MB
b5996-faae-8a56-ef9c-5b567125a...@suse.cz/
Reported-by: Sachin Sant
Reported-by: Bharata B Rao
Debugged-by: Srikar Dronamraju
Signed-off-by: Vlastimil Babka
Cc: Mel Gorman
Cc: Michael Ellerman
Cc: Michal Hocko
Cc: Christopher Lameter
Cc: linuxppc-dev@lists.ozlabs.org
Cc: Joonsoo Kim
Cc: Pek
ssing the pgdat
>> structure. Fix the same for node_spanned_pages() too.
>>
>> Cc: Andrew Morton
>> Cc: linux...@kvack.org
>> Cc: Mel Gorman
>> Cc: Michael Ellerman
>> Cc: Sachin Sant
>> Cc: Michal Hocko
>> Cc: Christopher Lameter
>>
On 3/18/20 4:20 AM, Srikar Dronamraju wrote:
> * Vlastimil Babka [2020-03-17 17:45:15]:
>>
>> Yes, that Kirill's patch was about the memcg shrinker map allocation. But the
>> patch hunk that Bharata posted as a "hack" that fixes the problem, it follows
>>
On 3/17/20 5:25 PM, Srikar Dronamraju wrote:
> * Vlastimil Babka [2020-03-17 16:56:04]:
>
>>
>> I wonder why do you get a memory leak while Sachin in the same situation [1]
>> gets a crash? I don't understand anything anymore.
>
> Sachin was testing on linux-next
On 3/17/20 12:53 PM, Bharata B Rao wrote:
> On Tue, Mar 17, 2020 at 02:56:28PM +0530, Bharata B Rao wrote:
>> Case 1: 2 node NUMA, node0 empty
>>
>> # numactl -H
>> available: 2 nodes (0-1)
>> node 0 cpus:
>> node 0 size: 0 MB
>> node 0 free: 0 MB
>> node 1 cpus: 0
On 3/17/20 3:51 PM, Srikar Dronamraju wrote:
> * Vlastimil Babka [2020-03-17 14:53:26]:
>
>> >> >
>> >> > Mitigate this by allocating the new slab from the node_numa_mem.
>> >>
>> >> Are you sure this is really needed and the othe
On 3/17/20 2:45 PM, Srikar Dronamraju wrote:
> * Vlastimil Babka [2020-03-17 14:34:25]:
>
>> On 3/17/20 2:17 PM, Srikar Dronamraju wrote:
>> > Currently while allocating a slab for an offline node, we use its
>> > associated node_numa_mem to search for a
On 3/16/20 10:06 AM, Michal Hocko wrote:
> On Thu 12-03-20 17:41:58, Vlastimil Babka wrote:
> [...]
>> with nid present in:
>> N_POSSIBLE - pgdat might not exist, node_to_mem_node() must return some
>> online
>
> I would rather have a dummy pgdat for those
On 3/17/20 2:17 PM, Srikar Dronamraju wrote:
> Currently while allocating a slab for an offline node, we use its
> associated node_numa_mem to search for a partial slab. If we don't find
> a partial slab, we try allocating a slab from the offline node using
> __alloc_pages_node. However this is
On 3/13/20 12:04 PM, Srikar Dronamraju wrote:
>> I lost all the memory about it. :)
>> Anyway, how about this?
>>
>> 1. make node_present_pages() safer
>> static inline unsigned long node_present_pages(int nid)
>> {
>>     if (!node_online(nid))
>>         return 0;
>>     return NODE_DATA(nid)->node_present_pages;
>> }
>>
>
>
On 3/13/20 12:12 PM, Srikar Dronamraju wrote:
> * Michael Ellerman [2020-03-13 21:48:06]:
>
>> Sachin Sant writes:
>> >> The patch below might work. Sachin can you test this? I tried faking up
>> >> a system with a memoryless node zero but couldn't get it to even start
>> >> booting.
>> >>
>>
On 3/12/20 5:13 PM, Srikar Dronamraju wrote:
> * Vlastimil Babka [2020-03-12 14:51:38]:
>
>> > * Vlastimil Babka [2020-03-12 10:30:50]:
>> >
>> >> On 3/12/20 9:23 AM, Sachin Sant wrote:
>> >> >> On 12-Mar-2020, at 10:57 AM, Srikar Dronamra
On 3/12/20 2:14 PM, Srikar Dronamraju wrote:
> * Vlastimil Babka [2020-03-12 10:30:50]:
>
>> On 3/12/20 9:23 AM, Sachin Sant wrote:
>> >> On 12-Mar-2020, at 10:57 AM, Srikar Dronamraju
>> >> wrote:
>> >> * Michal Hocko [2020-03-11 12:57:35]:
>
On 3/12/20 9:23 AM, Sachin Sant wrote:
>
>
>> On 12-Mar-2020, at 10:57 AM, Srikar Dronamraju
>> wrote:
>>
>> * Michal Hocko [2020-03-11 12:57:35]:
>>
>>> On Wed 11-03-20 16:32:35, Srikar Dronamraju wrote:
A Powerpc system with multiple possible nodes and with CONFIG_NUMA
enabled
ts.infradead.org
> Cc: linuxppc-dev@lists.ozlabs.org
> Cc: linux-s...@vger.kernel.org
> Cc: de...@driverdev.osuosl.org
> Cc: linux...@kvack.org
> Cc: linux-ker...@vger.kernel.org
> Signed-off-by: Anshuman Khandual
Reviewed-by: Vlastimil Babka
Thanks.
On 3/2/20 7:47 AM, Anshuman Khandual wrote:
> There are many places where all basic VMA access flags (read, write, exec)
> are initialized or checked against as a group. One such example is during
> page fault. Existing vma_is_accessible() wrapper already creates the notion
> of VMA accessibility
On 3/2/20 7:47 AM, Anshuman Khandual wrote:
> There are many platforms with the exact same value for VM_DATA_DEFAULT_FLAGS.
> This creates a default value for VM_DATA_DEFAULT_FLAGS in line with the
> existing VM_STACK_DEFAULT_FLAGS. While here, also define some more macros
> with standard VMA access
On 2/27/20 5:00 PM, Sachin Sant wrote:
>
>
>> On 27-Feb-2020, at 5:42 PM, Michal Hocko wrote:
>>
>> A very good hint indeed. I would do this
>> diff --git a/include/linux/topology.h b/include/linux/topology.h
>> index eb2fe6edd73c..d9f1b6737e4d 100644
>> --- a/include/linux/topology.h
>> +++
On 2/26/20 10:45 PM, Vlastimil Babka wrote:
>
>
> if (node == NUMA_NO_NODE)
> page = alloc_pages(flags, order);
> else
> page = __alloc_pages_node(node, flags, order);
>
> So yeah looks like SLUB's kmalloc_node() is supposed to behave like the
> page all
On 2/26/20 7:41 PM, Michal Hocko wrote:
> On Wed 26-02-20 18:25:28, Christopher Lameter wrote:
>> On Mon, 24 Feb 2020, Michal Hocko wrote:
>>
>>> Hmm, nasty. Is there any reason why kmalloc_node behaves differently
>>> from the page allocator?
>>
>> The page allocator will do the same thing if you
el.org
> Cc: linux-a...@vger.kernel.org
> Cc: linux...@kvack.org
> Signed-off-by: Anshuman Khandual
Meh, why is there _page in the function's name... but too many users to bother
changing it now, I guess.
Acked-by: Vlastimil Babka
rg
> Acked-by: Geert Uytterhoeven
> Acked-by: Guo Ren
> Signed-off-by: Anshuman Khandual
Acked-by: Vlastimil Babka
: linux-ker...@vger.kernel.org
> Cc: linux...@kvack.org
> Signed-off-by: Anshuman Khandual
Some comment for the function wouldn't hurt, but perhaps it is self-explanatory
enough.
Acked-by: Vlastimil Babka
On 8/20/19 4:30 AM, Christoph Hellwig wrote:
> On Mon, Aug 19, 2019 at 07:46:00PM +0200, David Sterba wrote:
>> Another thing that is lost is the slub debugging support for all
>> architectures, because get_zeroed_pages lacking the red zones and sanity
>> checks.
>>
>> I find working with raw
On 3/6/19 8:00 PM, Alexandre Ghiti wrote:
> This condition allows to define alloc_contig_range, so simplify
> it into a more accurate naming.
>
> Suggested-by: Vlastimil Babka
> Signed-off-by: Alexandre Ghiti
Acked-by: Vlastimil Babka
(you could have sent this with
On 3/1/19 2:21 PM, Alexandre Ghiti wrote:
> I collected mistakes here: domain name expired and no mailing list added :)
> Really sorry about that, I missed the whole discussion (if any).
> Could someone forward it to me (if any) ? Thanks !
Bounced you David and Mike's discussion (4 messages
On 2/27/19 3:47 PM, Aneesh Kumar K.V wrote:
> This patch adds PF_MEMALLOC_NOCMA which makes sure any allocation in that context
> is marked non-movable and hence cannot be satisfied by the CMA region.
>
> This is useful with get_user_pages_longterm where we want to take a page pin by
> migrating
to make it more accurate: this value being false
> does not mean that the system cannot use gigantic pages, it just means that
> runtime allocation of gigantic pages is not supported, one can still
> allocate boottime gigantic pages if the architecture supports it.
>
> Sig
On 2/13/19 8:30 PM, Dave Hansen wrote:
>> -#if (defined(CONFIG_MEMORY_ISOLATION) && defined(CONFIG_COMPACTION)) || defined(CONFIG_CMA)
>> +#ifdef CONFIG_COMPACTION_CORE
>> static __init int gigantic_pages_init(void)
>> {
>> /* With compaction or CMA we can allocate gigantic pages at
On 1/17/19 7:39 PM, Alexandre Ghiti wrote:
> From: Alexandre Ghiti
>
> On systems without CMA or (MEMORY_ISOLATION && COMPACTION) activated but
> that support gigantic pages, boottime reserved gigantic pages can not be
> freed at all. This patch simply enables the possibility to hand back
>
On 10/16/18 9:43 PM, Joel Fernandes wrote:
> On Tue, Oct 16, 2018 at 01:29:52PM +0200, Vlastimil Babka wrote:
>> On 10/16/18 12:33 AM, Joel Fernandes wrote:
>>> On Mon, Oct 15, 2018 at 02:42:09AM -0700, Christoph Hellwig wrote:
>>>> On Fri, Oct 12, 2018 at 06:31:58PM
On 10/16/18 12:33 AM, Joel Fernandes wrote:
> On Mon, Oct 15, 2018 at 02:42:09AM -0700, Christoph Hellwig wrote:
>> On Fri, Oct 12, 2018 at 06:31:58PM -0700, Joel Fernandes (Google) wrote:
>>> Android needs to mremap large regions of memory during memory management
>>> related operations.
>>
>>
mapping: clear buffers allocated with FORCE_CONTIGUOUS flag").
>
> Signed-off-by: Marek Szyprowski
Acked-by: Vlastimil Babka
ndard gfp flags and callers can pass __GFP_ZERO to get zeroed buffer,
> what has already been an issue: see commit dd65a941f6ba ("arm64:
> dma-mapping: clear buffers allocated with FORCE_CONTIGUOUS flag").
>
> Signed-off-by: Marek Szyprowski
Acked-by: Vlastimil Babka
On 08/05/2016 09:24 AM, Srikar Dronamraju wrote:
* Vlastimil Babka <vba...@suse.cz> [2016-08-05 08:45:03]:
@@ -5493,10 +5493,10 @@ static void __paginginit free_area_init_core(struct pglist_data *pgdat)
}
/* Account for reserved pages */
-
On 08/04/2016 07:12 PM, Srikar Dronamraju wrote:
Expand the scope of the existing dma_reserve to accommodate other memory
reserves too. Accordingly rename variable dma_reserve to
nr_memory_reserve.
set_memory_reserve also takes a new parameter that helps to identify if
the current value needs
On 08/03/2016 07:20 AM, Balbir Singh wrote:
On Tue, 2016-08-02 at 18:49 +0530, Srikar Dronamraju wrote:
Fadump kernel reserves a significant number of memory blocks. On a multi-node
machine, with CONFIG_DEFERRED_STRUCT_PAGE_INIT support, fadump kernel fails to
boot. Fix this by disabling deferred
fo:
>
> Acked-by: Mel Gorman <mgor...@techsingularity.net>
> Signed-off-by: Li Zhang <zhlci...@linux.vnet.ibm.com>
Acked-by: Vlastimil Babka <vba...@suse.cz>
___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev
On 03/03/2016 08:01 AM, Li Zhang wrote:
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -293,13 +293,20 @@ static inline bool update_defer_init(pg_data_t *pgdat,
> unsigned long pfn, unsigned long zone_end,
> unsigned long
() with an offline node will now be checked for
DEBUG_VM builds. Since it's not fatal if the node has been previously online,
and this patch may expose some existing buggy callers, change the VM_BUG_ON
in __alloc_pages_node() to VM_WARN_ON.
Signed-off-by: Vlastimil Babka vba...@suse.cz
Acked-by: David
() is
left for the next patch which can in turn expose more existing buggy callers.
Signed-off-by: Vlastimil Babka vba...@suse.cz
Acked-by: Johannes Weiner han...@cmpxchg.org
Cc: Mel Gorman mgor...@suse.de
Cc: David Rientjes rient...@google.com
Cc: Greg Thelen gthe...@google.com
Cc: Aneesh Kumar K.V
-by: Christoph Lameter c...@linux.com
Signed-off-by: Vlastimil Babka vba...@suse.cz
Acked-by: David Rientjes rient...@google.com
Acked-by: Mel Gorman mgor...@techsingularity.net
Acked-by: Christoph Lameter c...@linux.com
---
v3: better commit message
include/linux/gfp.h | 5 +++--
1 file changed, 3
mlockall()
invocation.
munlock() will unconditionally clear both vma flags. munlockall()
unconditionally clears both VMA flags on all VMAs and in the
mm->def_flags field.
Signed-off-by: Eric B Munson emun...@akamai.com
Cc: Michal Hocko mho...@suse.cz
Cc: Vlastimil Babka vba...@suse.cz
The logic seems
On 07/30/2015 07:41 PM, Johannes Weiner wrote:
On Thu, Jul 30, 2015 at 06:34:31PM +0200, Vlastimil Babka wrote:
numa_mem_id() is able to handle allocation from CPUs on memory-less nodes,
so it's a more robust fallback than the currently used numa_node_id().
Won't it fall through to the next
.
Signed-off-by: Eric B Munson emun...@akamai.com
Cc: Michal Hocko mho...@suse.cz
Cc: Vlastimil Babka vba...@suse.cz
Acked-by: Vlastimil Babka vba...@suse.cz
On 31.7.2015 23:25, David Rientjes wrote:
On Thu, 30 Jul 2015, Vlastimil Babka wrote:
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index aa58a32..56355f2 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2469,7 +2469,7 @@ khugepaged_alloc_page(struct page **hpage, gfp_t gfp
On 07/30/2015 07:58 PM, Christoph Lameter wrote:
On Thu, 30 Jul 2015, Vlastimil Babka wrote:
--- a/mm/slob.c
+++ b/mm/slob.c
void *page;
-#ifdef CONFIG_NUMA
-if (node != NUMA_NO_NODE)
-page = alloc_pages_exact_node(node, gfp, order);
-else
-#endif
numa_mem_id() is able to handle allocation from CPUs on memory-less nodes,
so it's a more robust fallback than the currently used numa_node_id().
Suggested-by: Christoph Lameter c...@linux.com
Signed-off-by: Vlastimil Babka vba...@suse.cz
Acked-by: David Rientjes rient...@google.com
Acked-by: Mel
in alloc_pages_node() is
left for the next patch which can in turn expose more existing buggy callers.
Signed-off-by: Vlastimil Babka vba...@suse.cz
Cc: Mel Gorman mgor...@suse.de
Cc: David Rientjes rient...@google.com
Cc: Greg Thelen gthe...@google.com
Cc: Aneesh Kumar K.V aneesh.ku
On 07/29/2015 12:45 PM, Michal Hocko wrote:
In a much less
likely corner case, it is not possible in the current setup to request
all current VMAs be VM_LOCKONFAULT and all future be VM_LOCKED.
Vlastimil has already pointed that out. MCL_FUTURE doesn't clear
MCL_CURRENT. I was quite
On 07/28/2015 01:17 PM, Michal Hocko wrote:
[I am sorry but I didn't get to this sooner.]
On Mon 27-07-15 10:54:09, Eric B Munson wrote:
Now that VM_LOCKONFAULT is a modifier to VM_LOCKED and
cannot be specified independently, it might make more sense to mirror
that relationship to
On 07/27/2015 04:54 PM, Eric B Munson wrote:
On Mon, 27 Jul 2015, Vlastimil Babka wrote:
We do actually have an MCL_LOCKED, we just call it MCL_CURRENT. Would
you prefer that I match the name in mlock2() (add MLOCK_CURRENT
instead)?
Hm it's similar but not exactly the same, because
On 07/27/2015 03:35 PM, Eric B Munson wrote:
On Mon, 27 Jul 2015, Vlastimil Babka wrote:
On 07/24/2015 11:28 PM, Eric B Munson wrote:
...
Changes from V4:
Drop all architectures for new sys call entries except x86[_64] and MIPS
Drop munlock2 and munlockall2
Make VM_LOCKONFAULT a modifier
On 07/24/2015 11:28 PM, Eric B Munson wrote:
...
Changes from V4:
Drop all architectures for new sys call entries except x86[_64] and MIPS
Drop munlock2 and munlockall2
Make VM_LOCKONFAULT a modifier to VM_LOCKED only to simplify book keeping
Adjust tests to match
Hi, thanks for considering
On 07/23/2015 10:27 PM, David Rientjes wrote:
On Thu, 23 Jul 2015, Christoph Lameter wrote:
The only possible downside would be existing users of
alloc_pages_node() that are calling it with an offline node. Since it's a
VM_BUG_ON() that would catch that, I think it should be changed to a