Suggested-by: Peter Xu
Signed-off-by: Mike Kravetz
---
RFC->v1 Created zap_vma_pages to zap entire vma (Christoph Hellwig)
Did not add Acked-by's as routine was changed.
arch/arm64/kernel/vdso.c| 6 ++---
arch/powerpc/kernel/vdso.c | 4 +---
arch/powe
vma, vmaddr, size);
> > + zap_vma_page_range(vma, vmaddr, size);
>
> And then just call zap_page_range_single directly for those that
> don't want to zap the entire vma.
Thanks!
This sounds like a good idea and I will incorporate in a new patch.
--
Mike Kravetz
On 12/19/22 13:06, Michal Hocko wrote:
> On Fri 16-12-22 11:20:12, Mike Kravetz wrote:
> > zap_page_range was originally designed to unmap pages within an address
> > range that could span multiple vmas. While working on [1], it was
> > discovered that all callers of zap_pa
in line with other exported routines that operate within a vma.
We can then remove zap_page_range.
Also, change madvise_dontneed_single_vma to use this new routine.
[1]
https://lore.kernel.org/linux-mm/20221114235507.294320-2-mike.krav...@oracle.com/
Suggested-by: Peter Xu
Signed-off-by: Mike
On 10/30/22 15:45, Peter Xu wrote:
> On Fri, Oct 28, 2022 at 11:11:08AM -0700, Mike Kravetz wrote:
> > + } else {
> > + if (is_hugetlb_entry_migration(entry)) {
> > + spin_unlock(ptl);
> > + hugetlb_vma_unlock_read(vma)
be overwritten by architectures, and already
handles special cases such as hugepd entries.
[1]
https://lore.kernel.org/linux-mm/cover.1661240170.git.baolin.w...@linux.alibaba.com/
Suggested-by: David Hildenbrand
Signed-off-by: Mike Kravetz
---
v5 -Remove leftover hugetlb_vma_unlock_read
v4
be overwritten by architectures, and already
handles special cases such as hugepd entries.
[1]
https://lore.kernel.org/linux-mm/cover.1661240170.git.baolin.w...@linux.alibaba.com/
Suggested-by: David Hildenbrand
Signed-off-by: Mike Kravetz
---
v4 -Remove vma (pmd sharing) locking as this can
On 10/27/22 15:34, Peter Xu wrote:
> On Wed, Oct 26, 2022 at 05:34:04PM -0700, Mike Kravetz wrote:
> > On 10/26/22 17:59, Peter Xu wrote:
>
> If we want to use the vma read lock to protect here as the slow gup path,
> then please check again with below [1] - I think we'll al
On 10/26/22 17:59, Peter Xu wrote:
> Hi, Mike,
>
> On Sun, Sep 18, 2022 at 07:13:48PM -0700, Mike Kravetz wrote:
> > +struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
> > + unsigned long address, unsigned int flags)
> >
w that as well. It'd be easier to just limit the hugetlbfs max
> blocksize to 4GB. It's very unlikely anything else will use such large
> blocksizes and having to introduce new user interfaces for it doesn't sound
> right.
I was not around hugetlbfs when the decision was made to set 'blocksize =
pagesize'. However, I must say that it does seem to make sense as you
can only add or remove entire hugetlb pages from a hugetlbfs file. So,
the hugetlb page size does seem to correspond to the meaning of filesystem
blocksize.
Does any application code make use of this? I cannot make a guess.
--
Mike Kravetz
pdated version of the patch was posted here,
https://lore.kernel.org/linux-mm/20220921202702.106069-1-mike.krav...@oracle.com/
Sorry about that,
--
Mike Kravetz
>
> Kernel attempted to read user page (34) - exploit attempt? (uid: 0)
> BUG: Kernel NULL pointer dereference on read at 0x
be overwritten by architectures, and already
handles special cases such as hugepd entries.
[1]
https://lore.kernel.org/linux-mm/cover.1661240170.git.baolin.w...@linux.alibaba.com/
Suggested-by: David Hildenbrand
Signed-off-by: Mike Kravetz
---
v3 -Change WARN_ON_ONCE() to BUILD_BUG
On 09/05/22 06:34, Christophe Leroy wrote:
>
>
> Le 02/09/2022 à 21:03, Mike Kravetz a écrit :
> > diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> > index fe4944f89d34..275e554dd365 100644
> > --- a/include/linux/hugetlb.h
> > +++ b/include/linu
be overwritten by architectures, and already
handles special cases such as hugepd entries.
[1]
https://lore.kernel.org/linux-mm/cover.1661240170.git.baolin.w...@linux.alibaba.com/
Suggested-by: David Hildenbrand
Signed-off-by: Mike Kravetz
---
v2 -Added WARN_ON_ONCE() and updated comment
On 08/19/22 21:22, Michael Ellerman wrote:
> Mike Kravetz writes:
> > On 08/16/22 22:43, Andrew Morton wrote:
> >> On Wed, 17 Aug 2022 03:31:37 + "Wang, Haiyue"
> >> wrote:
> >>
> >> > > > }
> >> >
le entries with hugetlb
- hugetlb pmds can be shared instead of copied
In any case, completely eliminating the copy at fork time should speed
things up.
Signed-off-by: Mike Kravetz
Acked-by: Muchun Song
Acked-by: David Hildenbrand
---
mm/memory.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
tags.
Baolin Wang (1):
arm64/hugetlb: Implement arm64 specific hugetlb_mask_last_page
Mike Kravetz (3):
hugetlb: skip to end of PT page mapping when pte not present
hugetlb: do not update address in huge_pmd_unshare
hugetlb: Lazy page table copies in fork()
arch/arm64/mm/hugetl
hugetlb_mask_last_page to update address if pmd is unshared.
Signed-off-by: Mike Kravetz
Acked-by: Muchun Song
Reviewed-by: Baolin Wang
---
include/linux/hugetlb.h | 4 ++--
mm/hugetlb.c| 46 +
mm/rmap.c | 4 ++--
3 files changed, 23 insertions
an ARM64 specific hugetlb_mask_last_page() to help this case.
[1]
https://lore.kernel.org/linux-mm/20220527225849.284839-1-mike.krav...@oracle.com/
Signed-off-by: Baolin Wang
Signed-off-by: Mike Kravetz
Acked-by: Muchun Song
---
arch/arm64/mm/hugetlbpage.c | 20
1 file
.
Signed-off-by: Mike Kravetz
Tested-by: Baolin Wang
Reviewed-by: Baolin Wang
Acked-by: Muchun Song
Reported-by: kernel test robot
---
include/linux/hugetlb.h | 1 +
mm/hugetlb.c| 56 +
2 files changed, 52 insertions(+), 5 deletions(-)
diff --git
m/next
> xilinx-xlnx/master]
> [If your patch is applied to the wrong git tree, kindly drop us a note.
> And when submitting patch, we suggest to use '--base' as documented in
> https://git-scm.com/docs/git-format-patch]
>
> url:
> https://github.com/intel-lab-lkp/linux/co
On 06/17/22 10:15, Peter Xu wrote:
> Hi, Mike,
>
> On Thu, Jun 16, 2022 at 02:05:15PM -0700, Mike Kravetz wrote:
> > @@ -6877,6 +6896,39 @@ pte_t *huge_pte_offset(struct mm_struct *mm,
> > return (pte_t *)pmd;
> > }
> >
> > +/*
> > + * Return
hugetlb_mask_last_page to update address if pmd is unshared.
Signed-off-by: Mike Kravetz
Acked-by: Muchun Song
Reviewed-by: Baolin Wang
---
include/linux/hugetlb.h | 4 ++--
mm/hugetlb.c| 47 ++---
mm/rmap.c | 4 ++--
3 files changed, 24 insertions
le entries with hugetlb
- hugetlb pmds can be shared instead of copied
In any case, completely eliminating the copy at fork time should speed
things up.
Signed-off-by: Mike Kravetz
Acked-by: Muchun Song
Acked-by: David Hildenbrand
---
mm/memory.c | 2 +-
1 file changed, 1 insertion(+), 1 deletio
an ARM64 specific hugetlb_mask_last_page() to help this case.
[1]
https://lore.kernel.org/linux-mm/20220527225849.284839-1-mike.krav...@oracle.com/
Signed-off-by: Baolin Wang
Signed-off-by: Mike Kravetz
---
arch/arm64/mm/hugetlbpage.c | 20
1 file changed, 20 insertions
.
Signed-off-by: Mike Kravetz
Tested-by: Baolin Wang
Reviewed-by: Baolin Wang
---
include/linux/hugetlb.h | 1 +
mm/hugetlb.c| 62 +
2 files changed, 58 insertions(+), 5 deletions(-)
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
4 specific hugetlb_mask_last_page
Mike Kravetz (3):
hugetlb: skip to end of PT page mapping when pte not present
hugetlb: do not update address in huge_pmd_unshare
hugetlb: Lazy page table copies in fork()
arch/arm64/mm/hugetlbpage.c | 20 +++
include/linux/hugetlb.h | 5 +-
mm
ld not easily be added to the first
if (folio_test_hugetlb(folio)) block in this routine. However, it
is fine to add here.
Looks good. Thanks for all these changes,
Reviewed-by: Mike Kravetz
--
Mike Kravetz
d-off-by: Baolin Wang
> ---
> mm/rmap.c | 24 ++--
> 1 file changed, 18 insertions(+), 6 deletions(-)
With the addition of !CONFIG_HUGETLB_PAGE stubs,
Reviewed-by: Mike Kravetz
--
Mike Kravetz
gt;>>
>>>>> So this is a preparation patch, which changes the
>>>>> huge_ptep_clear_flush()
>>>>> to return the original pte to help to nuke a hugetlb page table.
>>>>>
>>>>> Signed-off-by: Baolin Wang
>>>>>
On 5/5/22 20:39, Baolin Wang wrote:
>
> On 5/6/2022 7:53 AM, Mike Kravetz wrote:
>> On 4/29/22 01:14, Baolin Wang wrote:
>>> On some architectures (like ARM64), it can support CONT-PTE/PMD size
>>> hugetlb, which means it can support not only PMD/PUD size hugetlb:
&g
eval be overwritten here with
>>> pteval = swp_entry_to_pte(make_hwpoison_entry(subpage))?
>>> IOW, what sense does it make to save the returned pteval from
>>> huge_ptep_clear_flush(), when it is never being used anywhere?
>>
>> Please see previous code, we'll use the original pte value to check if
>> it is uffd-wp armed, and if need to mark it dirty though the hugetlbfs
>> is set noop_dirty_folio().
>>
>> pte_install_uffd_wp_if_needed(vma, address, pvmw.pte, pteval);
>
> Uh, ok, that wouldn't work on s390, but we also don't have
> CONFIG_PTE_MARKER_UFFD_WP / HAVE_ARCH_USERFAULTFD_WP set, so
> I guess we will be fine (for now).
>
> Still, I find it a bit unsettling that pte_install_uffd_wp_if_needed()
> would work on a potential hugetlb *pte, directly de-referencing it
> instead of using huge_ptep_get().
>
> The !pte_none(*pte) check at the beginning would be broken in the
> hugetlb case for s390 (not sure about other archs, but I think s390
> might be the only exception strictly requiring huge_ptep_get()
> for de-referencing hugetlb *pte pointers).
>
Adding Peter Xu mostly for above as he is working on uffd_wp.
>>
>> /* Set the dirty flag on the folio now the pte is gone. */
>> if (pte_dirty(pteval))
>> folio_mark_dirty(folio);
>
> Ok, that should work fine, huge_ptep_clear_flush() will return
> a pteval properly de-referenced and converted with huge_ptep_get(),
> and that would contain the hugetlb pmd/pud dirty information.
>
--
Mike Kravetz
erhaps add
a VM_BUG_ON() to make sure the passed huge page is poisoned? This
would be in the same 'if block' where we call
adjust_range_if_pmd_sharing_possible.
--
Mike Kravetz
> which means now we will unmap only one pte entry for a CONT-PTE or
> CONT-PMD size poisoned hugetlb page,
set_pte_at(mm, address, pvmw.pte, pteval);
> + if (folio_test_hugetlb(folio))
> + set_huge_pte_at(mm, address, pvmw.pte,
> pteval);
And, we will use that pteval for ALL the PTE/PMDs here. So, we would set
the dirty or young bit in ALL PTE/PMDs.
Could that cause any issues? May be more of a question for the arm64 people.
--
Mike Kravetz
ush() today is hugetlb_cow/wp() in
mm/hugetlb.c. Any reason why you did not change that code? At least
cast the return of huge_ptep_clear_flush() to void with a comment?
Not absolutely necessary.
Acked-by: Mike Kravetz
--
Mike Kravetz
definitions. This makes huge pte creation much cleaner and easier
> to follow.
>
> Cc: Catalin Marinas
> Cc: Will Deacon
> Cc: Michael Ellerman
> Cc: Paul Mackerras
> Cc: "David S. Miller"
> Cc: Mike Kravetz
> Cc: Andrew Morton
> Cc: lin
Sorry, no suggestion for how to make a beautiful generic implementation.
This patch is straightforward.
Acked-by: Mike Kravetz
--
Mike Kravetz
Cc: Palmer Dabbelt
> Cc: Heiko Carstens
> Cc: Vasily Gorbik
> Cc: Christian Borntraeger
> Cc: Yoshinori Sato
> Cc: Rich Felker
> Cc: "David S. Miller"
> Cc: Thomas Gleixner
> Cc: Ingo Molnar
> Cc: Borislav Petkov
> Cc: "H. Peter Anvin"
On 5/10/20 8:14 PM, Anshuman Khandual wrote:
> On 05/09/2020 03:52 AM, Mike Kravetz wrote:
>> On 5/7/20 8:07 PM, Anshuman Khandual wrote:
>>
>> Did you try building without CONFIG_HUGETLB_PAGE defined? I'm guessing
>
> Yes I did for multiple platforms (s390, ar
Cc: Palmer Dabbelt
> Cc: Heiko Carstens
> Cc: Vasily Gorbik
> Cc: Christian Borntraeger
> Cc: Yoshinori Sato
> Cc: Rich Felker
> Cc: "David S. Miller"
> Cc: Thomas Gleixner
> Cc: Ingo Molnar
> Cc: Borislav Petkov
> Cc: "H. Peter Anvin"
> C
ed by some
architectures to set up ALL huge pages sizes.
Signed-off-by: Mike Kravetz
Acked-by: Mina Almasry
Reviewed-by: Peter Xu
Acked-by: Gerald Schaefer [s390]
Acked-by: Will Deacon
---
arch/arm64/mm/hugetlbpage.c | 15 ---
arch/powerpc/mm/hugetlbpage.c | 15 ---
processing "hugepagesz=".
After this, calls to size_to_hstate() in arch specific code can be
removed and hugetlb_add_hstate can be called without worrying about
warning messages.
Signed-off-by: Mike Kravetz
Acked-by: Mina Almasry
Acked-by: Gerald Schaefer [s390]
Acked-by: Will Deacon
Test
the bootmem allocator required
for gigantic allocations is not available at this time.
Signed-off-by: Mike Kravetz
Acked-by: Gerald Schaefer [s390]
Acked-by: Will Deacon
Tested-by: Sandipan Das
---
.../admin-guide/kernel-parameters.txt | 40 +++--
Documentation/admin-guide/mm
independent routine.
- Clean up command line processing to follow desired semantics and
document those semantics.
[1] https://lore.kernel.org/linux-mm/20200305033014.1152-1-longpe...@huawei.com
Mike Kravetz (4):
hugetlbfs: add arch_hugetlb_valid_size
hugetlbfs: move hugepagesz= parsing to arch
of the "hugepagesz=" in arch specific code to a common
routine in arch independent code.
Signed-off-by: Mike Kravetz
Acked-by: Gerald Schaefer [s390]
Acked-by: Will Deacon
---
arch/arm64/mm/hugetlbpage.c | 17 +
arch/powerpc/mm/hugetlbpage.c | 20 +---
arc
On 4/27/20 1:18 PM, Andrew Morton wrote:
> On Mon, 27 Apr 2020 12:09:47 -0700 Mike Kravetz
> wrote:
>
>> Previously, a check for hugepages_supported was added before processing
>> hugetlb command line parameters. On some architectures such as powerpc,
>> hugep
On 4/27/20 10:25 AM, Mike Kravetz wrote:
> On 4/26/20 10:04 PM, Sandipan Das wrote:
>> On 18/04/20 12:20 am, Mike Kravetz wrote:
>>> Now that architectures provide arch_hugetlb_valid_size(), parsing
>>> of "hugepagesz=" can be done in architecture indep
On 4/26/20 10:04 PM, Sandipan Das wrote:
> Hi Mike,
>
> On 18/04/20 12:20 am, Mike Kravetz wrote:
>> Now that architectures provide arch_hugetlb_valid_size(), parsing
>> of "hugepagesz=" can be done in architecture independent code.
>> Create a single
On 4/22/20 3:42 AM, Aneesh Kumar K.V wrote:
> Mike Kravetz writes:
>
>> The routine hugetlb_add_hstate prints a warning if the hstate already
>> exists. This was originally done as part of kernel command line
>> parsing. If 'hugepagesz=' was specified mor
On 4/20/20 1:29 PM, Anders Roxell wrote:
> On Mon, 20 Apr 2020 at 20:23, Mike Kravetz wrote:
>> On 4/20/20 8:34 AM, Qian Cai wrote:
>>>
>>> Reverted this series fixed many undefined behaviors on arm64 with the
>>> config,
>> While rearranging the code
On 4/20/20 8:34 AM, Qian Cai wrote:
>
>
>> On Apr 17, 2020, at 2:50 PM, Mike Kravetz wrote:
>>
>> Longpeng(Mike) reported a weird message from hugetlb command line processing
>> and proposed a solution [1]. While the proposed patch does address the
>> spe
and into
an arch independent routine.
- Clean up command line processing to follow desired semantics and
document those semantics.
[1] https://lore.kernel.org/linux-mm/20200305033014.1152-1-longpe...@huawei.com
Mike Kravetz (4):
hugetlbfs: add arch_hugetlb_valid_size
hugetlbfs: move hugepagesz
allocator required
for gigantic allocations is not available at this time.
Signed-off-by: Mike Kravetz
---
.../admin-guide/kernel-parameters.txt | 40 +++--
Documentation/admin-guide/mm/hugetlbpage.rst | 35
mm/hugetlb.c | 159 ++
3 files
of the "hugepagesz=" in arch specific code to a common
routine in arch independent code.
Signed-off-by: Mike Kravetz
---
arch/arm64/mm/hugetlbpage.c | 17 +
arch/powerpc/mm/hugetlbpage.c | 20 +---
arch/riscv/mm/hugetlbpage.c | 26 +-
ar
processing "hugepagesz=".
After this, calls to size_to_hstate() in arch specific code can be
removed and hugetlb_add_hstate can be called without worrying about
warning messages.
Signed-off-by: Mike Kravetz
Acked-by: Mina Almasry
---
arch/arm64/mm/hugetlbpage.c | 16
arch/powe
ed by some
architectures to set up ALL huge pages sizes.
Signed-off-by: Mike Kravetz
Acked-by: Mina Almasry
Reviewed-by: Peter Xu
---
arch/arm64/mm/hugetlbpage.c | 15 ---
arch/powerpc/mm/hugetlbpage.c | 15 ---
arch/riscv/mm/hugetlbpage.c | 16
ar
On 4/10/20 1:37 PM, Peter Xu wrote:
> On Wed, Apr 01, 2020 at 11:38:19AM -0700, Mike Kravetz wrote:
>> With all hugetlb page processing done in a single file clean up code.
>> - Make code match desired semantics
>> - Update documentation with semantics
>> - Make all w
On 4/10/20 12:16 PM, Peter Xu wrote:
> On Wed, Apr 01, 2020 at 11:38:16AM -0700, Mike Kravetz wrote:
>> diff --git a/arch/arm64/include/asm/hugetlb.h
>> b/arch/arm64/include/asm/hugetlb.h
>> index 2eb6c234d594..81606223494f 100644
>> --- a/arch/arm64/include/asm/hu
code and into
an arch independent routine.
- Clean up command line processing to follow desired semantics and
document those semantics.
[1] https://lore.kernel.org/linux-mm/20200305033014.1152-1-longpe...@huawei.com
Mike Kravetz (4):
hugetlbfs: add arch_hugetlb_valid_size
hugetlbfs: move
of the "hugepagesz=" in arch specific code to a common
routine in arch independent code.
Signed-off-by: Mike Kravetz
---
arch/arm64/include/asm/hugetlb.h | 2 ++
arch/arm64/mm/hugetlbpage.c| 17 +
arch/powerpc/include/asm/hugetlb.h | 3 +++
arch/powerpc/mm/hugetlbpage.c
processing "hugepagesz=".
After this, calls to size_to_hstate() in arch specific code can be
removed and hugetlb_add_hstate can be called without worrying about
warning messages.
Signed-off-by: Mike Kravetz
---
arch/arm64/mm/hugetlbpage.c | 16
arch/powerpc/mm/hugetlbpage.c |
() before processing parameters.
- Add comments to code
- Describe some of the subtle interactions
- Describe semantics of command line arguments
Signed-off-by: Mike Kravetz
---
.../admin-guide/kernel-parameters.txt | 35 ---
Documentation/admin-guide/mm/hugetlbpage.rst | 44
ed by some
architectures to set up ALL huge pages sizes.
Signed-off-by: Mike Kravetz
---
arch/arm64/mm/hugetlbpage.c | 15 ---
arch/powerpc/mm/hugetlbpage.c | 15 ---
arch/riscv/mm/hugetlbpage.c | 16
arch/s390/mm/hugetlbpage.c| 18 --
On 3/18/20 4:36 PM, Dave Hansen wrote:
> On 3/18/20 3:52 PM, Mike Kravetz wrote:
>> Sounds good. I'll incorporate those changes into a v2, unless someone
>> else with has a different opinion.
>>
>> BTW, this patch should not really change the way the code works today.
On 3/23/20 8:47 PM, Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
wrote:
>
>
> On 2020/3/24 8:43, Mina Almasry wrote:
>> On Wed, Mar 18, 2020 at 3:07 PM Mike Kravetz wrote:
>>> +default_hugepagesz - Specify the default huge page size. This parameter
On 3/23/20 5:01 PM, Mina Almasry wrote:
> On Wed, Mar 18, 2020 at 3:07 PM Mike Kravetz wrote:
>>
>> The routine hugetlb_add_hstate prints a warning if the hstate already
>> exists. This was originally done as part of kernel command line
>> parsing. If 'hugepagesz=' w
On 3/19/20 12:00 AM, Christophe Leroy wrote:
>
> Le 18/03/2020 à 23:06, Mike Kravetz a écrit :
>> The architecture independent routine hugetlb_default_setup sets up
>> the default huge pages size. It has no way to verify if the passed
>> value is valid, so it accepts it
On 3/19/20 12:04 AM, Christophe Leroy wrote:
>
>
> Le 18/03/2020 à 23:06, Mike Kravetz a écrit :
>> Now that architectures provide arch_hugetlb_valid_size(), parsing
>> of "hugepagesz=" can be done in architecture independent code.
>> Create a single
On 3/18/20 5:20 PM, Randy Dunlap wrote:
> Hi Mike,
>
> On 3/18/20 3:06 PM, Mike Kravetz wrote:
>> With all hugetlb page processing done in a single file clean up code.
>> - Make code match desired semantics
>> - Update documentation with semantics
>> - Make
ifferent opinion.
BTW, this patch should not really change the way the code works today.
It is mostly a movement of code. Unless I am missing something, the
existing code will always allow setup of PMD_SIZE hugetlb pages.
--
Mike Kravetz
On 3/18/20 3:09 PM, Will Deacon wrote:
> On Wed, Mar 18, 2020 at 03:06:31PM -0700, Mike Kravetz wrote:
>> The architecture independent routine hugetlb_default_setup sets up
>> the default huge pages size. It has no way to verify if the passed
>> value is valid, so it ac
some of the subtle interactions
- Describe semantics of command line arguments
Signed-off-by: Mike Kravetz
---
Documentation/admin-guide/mm/hugetlbpage.rst | 26 +++
mm/hugetlb.c | 78 +++-
2 files changed, 87 insertions(+), 17 deletions
] https://lore.kernel.org/linux-mm/20200305033014.1152-1-longpe...@huawei.com
Mike Kravetz (4):
hugetlbfs: add arch_hugetlb_valid_size
hugetlbfs: move hugepagesz= parsing to arch independent code
hugetlbfs: remove hugetlb_add_hstate() warning for existing hstate
hugetlbfs: clean up command
of the "hugepagesz=" in arch specific code to a common
routine in arch independent code.
Signed-off-by: Mike Kravetz
---
arch/arm64/include/asm/hugetlb.h | 2 ++
arch/arm64/mm/hugetlbpage.c| 19 ++-
arch/powerpc/include/asm/hugetlb.h | 3 +++
arch/powerpc/mm/hugetlbpage.c
ed by some
architectures to set up ALL huge pages sizes.
Signed-off-by: Mike Kravetz
---
arch/arm64/mm/hugetlbpage.c | 15 ---
arch/powerpc/mm/hugetlbpage.c | 15 ---
arch/riscv/mm/hugetlbpage.c | 16
arch/s390/mm/hugetlbpage.c| 18 --
processing "hugepagesz=".
After this, calls to size_to_hstate() in arch specific code can be
removed and hugetlb_add_hstate can be called without worrying about
warning messages.
Signed-off-by: Mike Kravetz
---
arch/arm64/mm/hugetlbpage.c | 16
arch/powerpc/mm/hugetlbpage.c |
etlbfs: per mount huge page sizes")
> Cc: sta...@vger.kernel.org
> Signed-off-by: Christophe Leroy
As hugetlb.h evolved over time, I suspect nobody imagined a configuration
with CONFIG_HUGETLB_PAGE and not CONFIG_HUGETLBFS. This patch does address
the build issues. So,
Review
On 3/17/20 9:47 AM, Christophe Leroy wrote:
>
>
> Le 17/03/2020 à 17:40, Mike Kravetz a écrit :
>> On 3/17/20 1:43 AM, Christophe Leroy wrote:
>>>
>>>
>>> Le 17/03/2020 à 09:25, Baoquan He a écrit :
>>>> On 03/17/20 at 08:04am, Christo
use cases which never
use the filesystem interface. However, hugetlb support is so intertwined
with hugetlbfs, I am thinking there would be issues trying to use them
separately. I will look into this further.
--
Mike Kravetz
On 5/28/19 2:49 AM, Wanpeng Li wrote:
> Cc Paolo,
> Hi all,
> On Wed, 14 Feb 2018 at 06:34, Mike Kravetz wrote:
>>
>> On 02/12/2018 06:48 PM, Michael Ellerman wrote:
>>> Andrew Morton writes:
>>>
>>>> On Thu, 08 Feb 2018 12:30:45 + Punit
;
> Signed-off-by: Alexandre Ghiti
> Acked-by: David S. Miller [sparc]
Thanks for all the updates
Reviewed-by: Mike Kravetz
--
Mike Kravetz
;
> Signed-off-by: Alexandre Ghiti
> Acked-by: David S. Miller [sparc]
Reviewed-by: Mike Kravetz
--
Mike Kravetz
On 3/1/19 5:21 AM, Alexandre Ghiti wrote:
> On 03/01/2019 07:25 AM, Alex Ghiti wrote:
>> On 2/28/19 5:26 PM, Mike Kravetz wrote:
>>> On 2/28/19 12:23 PM, Dave Hansen wrote:
>>>> On 2/28/19 11:50 AM, Mike Kravetz wrote:
>>>&g
that runtime allocation of gigantic pages is not supported,
> one can still allocate boottime gigantic pages if the architecture supports
> it.
>
> Signed-off-by: Alexandre Ghiti
Thank you for doing this!
Reviewed-by: Mike Kravetz
> --- a/include/linux/gfp.h
> +++ b/i
S_ENABLED(CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION) &&
> + PageHuge(page)) {
How about using hugepage_migration_supported instead? It would automatically
catch those non-migratable huge page sizes. Something like:
On 07/05/2018 04:07 AM, Alexandre Ghiti wrote:
> arm, arm64, mips, parisc, sh, x86 architectures use the
> same version of hugetlb_free_pgd_range, so move this generic
> implementation into asm-generic/hugetlb.h.
>
Just one small issue below. Not absolutely necessary to fix.
Revie
On 07/26/2018 04:46 AM, Michael Ellerman wrote:
> Mike Kravetz writes:
>
>> On 07/20/2018 11:37 AM, Alex Ghiti wrote:
>>> Does anyone have any suggestion about those patches ?
>>
>> I only took a quick look. From the hugetlb perspective, I like the
>> i
On 07/05/2018 04:07 AM, Alexandre Ghiti wrote:
> ia64, mips, parisc, powerpc, sh, sparc, x86 architectures use the
> same version of huge_ptep_get, so move this generic implementation into
> asm-generic/hugetlb.h.
>
Reviewed-by: Mike Kravetz
--
Mike Kravetz
> Signed-off-by:
On 07/05/2018 04:07 AM, Alexandre Ghiti wrote:
> arm, ia64, sh, x86 architectures use the same version
> of huge_ptep_set_access_flags, so move this generic implementation
> into asm-generic/hugetlb.h.
>
Reviewed-by: Mike Kravetz
--
Mike Kravetz
> Signed-off-by: Alexandre Ghiti
> the above architectures, but the modification was not straightforward
> and hence has not been done.
>
Just one small comment, otherwise
Reviewed-by: Mike Kravetz
> Signed-off-by: Alexandre Ghiti
> ---
> arch/arm/include/asm/hugetlb-3level.h| 6 --
>
On 07/05/2018 04:07 AM, Alexandre Ghiti wrote:
> arm, arm64, ia64, parisc, powerpc, sh, sparc, x86 architectures
> use the same version of huge_pte_none, so move this generic
> implementation into asm-generic/hugetlb.h.
>
Reviewed-by: Mike Kravetz
--
Mike Kravetz
> Signed-of
On 07/05/2018 04:07 AM, Alexandre Ghiti wrote:
> arm, arm64, powerpc, sparc, x86 architectures use the same version of
> prepare_hugepage_range, so move this generic implementation into
> asm-generic/hugetlb.h.
>
Reviewed-by: Mike Kravetz
--
Mike Kravetz
> Signed-off-by:
On 07/05/2018 04:07 AM, Alexandre Ghiti wrote:
> arm, arm64, ia64, mips, parisc, powerpc, sh, sparc, x86
> architectures use the same version of huge_pte_wrprotect, so move
> this generic implementation into asm-generic/hugetlb.h.
>
Reviewed-by: Mike Kravetz
--
Mike Kravetz
On 07/05/2018 04:07 AM, Alexandre Ghiti wrote:
> arm, x86 architectures use the same version of
> huge_ptep_clear_flush, so move this generic implementation into
> asm-generic/hugetlb.h.
>
Reviewed-by: Mike Kravetz
--
Mike Kravetz
> Signed-off-by: Alexandre Ghiti
> ---
&
On 07/05/2018 04:07 AM, Alexandre Ghiti wrote:
> arm, ia64, sh, x86 architectures use the
> same version of huge_ptep_get_and_clear, so move this generic
> implementation into asm-generic/hugetlb.h.
>
Reviewed-by: Mike Kravetz
--
Mike Kravetz
> Signed-off-by: Alexandre Ghiti
On 07/05/2018 04:07 AM, Alexandre Ghiti wrote:
> arm, ia64, mips, powerpc, sh, x86 architectures use the
> same version of set_huge_pte_at, so move this generic
> implementation into asm-generic/hugetlb.h.
>
Just one comment below, otherwise:
Reviewed-by: Mike Kravetz
viewed-by: Mike Kravetz
--
Mike Kravetz
> Signed-off-by: Alexandre Ghiti
> ---
> arch/arm64/include/asm/hugetlb.h | 2 +-
> include/asm-generic/hugetlb.h| 2 +-
> 2 files changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/include/asm/hugetlb.h
> b/a
ies.
--
Mike Kravetz
> On 07/09/2018 02:16 PM, Michal Hocko wrote:
>> [CC hugetlb guys -
>> http://lkml.kernel.org/r/20180705110716.3919-1-a...@ghiti.fr]
>>
>> On Thu 05-07-18 11:07:05, Alexandre Ghiti wrote:
>>> In order to reduce copy/paste of functions acros
level"
This patch will disable that functionality. So, at a minimum this is a
'heads up'. If there are actual use cases that depend on this, then more
work/discussions will need to happen. From the e-mail thread on PGD_SIZE
support, I can not tell if there is a real use case or this is
lized in this function
[-Wmaybe-uninitialized]
You have added a way of getting out of that big if/else if statement without
setting mhp. mhp will be examined later in the code, so this is indeed a bug.
Like Aneesh, I am not sure if there is great benefit in this patch.
You added this change in functionality only for powerpc. IMO, it would be
best if behavior was consistent in all architectures. So, if we change it
for powerpc we may want to change everywhere.
--
Mike Kravetz