> -Original Message-
> From: owner-linux...@kvack.org On Behalf
> Of pi...@codeaurora.org
> Sent: Wednesday, March 3, 2021 6:34 AM
> To: Nitin Gupta
> Cc: linux-kernel@vger.kernel.org; a...@linux-foundation.org; linux-
> m...@kvack.org; linux-fsde...@vger.k
...@codeaurora.org;
> iamjoonsoo@lge.com; sh_...@163.com; mateusznos...@gmail.com;
> b...@redhat.com; Nitin Gupta ; vba...@suse.cz;
> yzai...@google.com; keesc...@chromium.org; mcg...@kernel.org;
> mgor...@techsingularity.net
> Cc: pintu.p...@gmail.com
> Subject: [PATCH] m
Fix a compile error when COMPACTION_HPAGE_ORDER is assigned
to HUGETLB_PAGE_ORDER. The correct way to check whether this
constant is defined is to check for CONFIG_HUGETLBFS.
Signed-off-by: Nitin Gupta
To: Andrew Morton
Reported-by: Nathan Chancellor
Tested-by: Nathan Chancellor
---
mm/compaction.c
On 6/22/20 9:57 PM, Nathan Chancellor wrote:
> On Mon, Jun 22, 2020 at 09:32:12PM -0700, Nitin Gupta wrote:
>> On 6/22/20 7:26 PM, Nathan Chancellor wrote:
>>> On Tue, Jun 16, 2020 at 01:45:27PM -0700, Nitin Gupta wrote:
>>>> For some applications, we need
On 6/22/20 7:26 PM, Nathan Chancellor wrote:
> On Tue, Jun 16, 2020 at 01:45:27PM -0700, Nitin Gupta wrote:
>> For some applications, we need to allocate almost all memory as
>> hugepages. However, on a running system, higher-order allocations can
>> fail if the memory is fra
On 6/18/20 6:41 AM, Baoquan He wrote:
> On 06/17/20 at 06:03pm, Nitin Gupta wrote:
>> Proactive compaction uses a per-node/zone "fragmentation score" which
>> is always in the range [0, 100], so use an unsigned type for these scores
>> as well as for related constants.
Proactive compaction uses a per-node/zone "fragmentation score" which
is always in the range [0, 100], so use an unsigned type for these scores
as well as for related constants.
Signed-off-by: Nitin Gupta
---
include/linux/compaction.h | 4 ++--
kernel/sysctl.c| 2 +-
mm/co
On 6/17/20 1:53 PM, Andrew Morton wrote:
On Tue, 16 Jun 2020 13:45:27 -0700 Nitin Gupta wrote:
For some applications, we need to allocate almost all memory as
hugepages. However, on a running system, higher-order allocations can
fail if the memory is fragmented. Linux kernel currently does
erred maximum number of times
with HPAGE_FRAG_CHECK_INTERVAL_MSEC of wait between each check
(=> ~30 seconds between retries).
[1] https://patchwork.kernel.org/patch/11098289/
[2] https://lore.kernel.org/linux-mm/20161230131412.gi13...@dhcp22.suse.cz/
[3] https://lwn.net/Articles/817905/
Signed-off-by: Nit
On 6/16/20 2:46 AM, Oleksandr Natalenko wrote:
> Hello.
>
> Please see the notes inline.
>
> On Mon, Jun 15, 2020 at 07:36:14AM -0700, Nitin Gupta wrote:
>> For some applications, we need to allocate almost all memory as
>> hugepages. However, on a running system, h
On 6/15/20 7:25 AM, Oleksandr Natalenko wrote:
> On Mon, Jun 15, 2020 at 10:29:01AM +0200, Oleksandr Natalenko wrote:
>> Just to let you know, this fails to compile for me with THP disabled on
>> v5.8-rc1:
>>
>> CC mm/compaction.o
>> In file included from ./include/linux/dev_printk.h:14,
>>
On 6/9/20 12:23 PM, Khalid Aziz wrote:
> On Mon, 2020-06-01 at 12:48 -0700, Nitin Gupta wrote:
>> For some applications, we need to allocate almost all memory as
>> hugepages. However, on a running system, higher-order allocations can
>> fail if the memory is fragmented. L
On Mon, Jun 1, 2020 at 12:48 PM Nitin Gupta wrote:
>
> For some applications, we need to allocate almost all memory as
> hugepages. However, on a running system, higher-order allocations can
> fail if the memory is fragmented. Linux kernel currently does on-demand
> compaction as
this based upon their workload. More comments below.
>
Tunables like the one this patch introduces, and similar ones like
'swappiness', will always require some experimentation from the user.
> On Mon, 2020-05-18 at 11:14 -0700, Nitin Gupta wrote:
> > For some applications, we need to
On Wed, May 27, 2020 at 3:18 AM Vlastimil Babka wrote:
>
> On 5/18/20 8:14 PM, Nitin Gupta wrote:
> > For some applications, we need to allocate almost all memory as
> > hugepages. However, on a running system, higher-order allocations can
> > fail if the memory is
On Thu, May 28, 2020 at 2:50 AM Vlastimil Babka wrote:
>
> On 5/28/20 11:15 AM, Holger Hoffstätte wrote:
> >
> > On 5/18/20 8:14 PM, Nitin Gupta wrote:
> > [patch v5 :)]
> >
> > I've been successfully using this in my tree and it works great, but a
> > f
INTERVAL_MSEC of wait between each check
(=> ~30 seconds between retries).
[1] https://patchwork.kernel.org/patch/11098289/
Signed-off-by: Nitin Gupta
To: Mel Gorman
To: Michal Hocko
To: Vlastimil Babka
CC: Matthew Wilcox
CC: Andrew Morton
CC: Mike Kravetz
CC: Joonsoo Kim
CC: Dav
On Tue, 2019-08-20 at 10:46 +0200, Vlastimil Babka wrote:
> > This patch is largely based on ideas from Michal Hocko posted here:
> > https://lore.kernel.org/linux-mm/20161230131412.gi13...@dhcp22.suse.cz/
> >
> > Testing done (on x86):
> > - Set
On Thu, 2019-08-22 at 09:51 +0100, Mel Gorman wrote:
> As unappealing as it sounds, I think it is better to try to improve the
> allocation latency itself instead of trying to hide the cost in a kernel
> thread. It's far harder to implement as compaction is not easy but it
> would be more obvious
On Mon, 2019-09-16 at 13:16 -0700, David Rientjes wrote:
> On Fri, 16 Aug 2019, Nitin Gupta wrote:
>
> > For some applications we need to allocate almost all memory as
> > hugepages. However, on a running system, higher order allocations can
> > fail if the memory is
On Thu, 2019-09-12 at 17:11 +0530, Bharath Vedartham wrote:
> Hi Nitin,
> On Wed, Sep 11, 2019 at 10:33:39PM +, Nitin Gupta wrote:
> > On Wed, 2019-09-11 at 08:45 +0200, Michal Hocko wrote:
> > > On Tue 10-09-19 22:27:53, Nitin Gupta wrote:
> > > [...]
> > &
On Wed, 2019-09-11 at 08:45 +0200, Michal Hocko wrote:
> On Tue 10-09-19 22:27:53, Nitin Gupta wrote:
> [...]
> > > On Tue 10-09-19 13:07:32, Nitin Gupta wrote:
> > > > For some applications we need to allocate almost all memory as
> > > > hugepages.
>
> -Original Message-
> From: owner-linux...@kvack.org On Behalf
> Of Michal Hocko
> Sent: Tuesday, September 10, 2019 1:19 PM
> To: Nitin Gupta
> Cc: a...@linux-foundation.org; vba...@suse.cz;
> mgor...@techsingularity.net; dan.j.willi...@intel.com;
> khalid.
ain scenarios to reduce hugepage allocation latencies. This callback
interface allows drivers to drive compaction based on their own policies,
such as the current level of external fragmentation for a particular order,
system load, etc.
Signed-off-by: Nitin Gupta
---
include/linux/compaction.h |
On Mon, 2019-08-26 at 12:47 +0100, Mel Gorman wrote:
> On Thu, Aug 22, 2019 at 09:57:22PM +0000, Nitin Gupta wrote:
> > > Note that proactive compaction may reduce allocation latency but
> > > it is not
> > > free either. Even though the scanning and migratio
> -Original Message-
> From: owner-linux...@kvack.org On Behalf
> Of Mel Gorman
> Sent: Thursday, August 22, 2019 1:52 AM
> To: Nitin Gupta
> Cc: a...@linux-foundation.org; vba...@suse.cz; mho...@suse.com;
> dan.j.willi...@intel.com; Yu Zhao ; Matthew Wilcox
> -Original Message-
> From: owner-linux...@kvack.org On Behalf
> Of Matthew Wilcox
> Sent: Tuesday, August 20, 2019 3:21 PM
> To: Nitin Gupta
> Cc: a...@linux-foundation.org; vba...@suse.cz;
> mgor...@techsingularity.net; mho...@suse.com;
> dan.j.willi...@i
> -Original Message-
> From: Vlastimil Babka
> Sent: Tuesday, August 20, 2019 1:46 AM
> To: Nitin Gupta ; a...@linux-foundation.org;
> mgor...@techsingularity.net; mho...@suse.com;
> dan.j.willi...@intel.com
> Cc: Yu Zhao ; Matthew Wilcox ;
> Qian Cai ; Andrey Rya
tion till extfrag < extfrag_low for order-9.
The patch has plenty of rough edges, but I'm posting it early to see if I'm
going in the right direction and to get some early feedback.
Signed-off-by: Nitin Gupta
---
include/linux/compaction.h | 12 ++
mm/compaction.c
On 01/25/2018 01:13 PM, Mel Gorman wrote:
> On Thu, Jan 25, 2018 at 11:41:03AM -0800, Nitin Gupta wrote:
>>>> It's not really about memory scarcity but a more efficient use of it.
>>>> Applications may want hugepage benefits without requiring any changes to
>
On 01/24/2018 04:47 PM, Zi Yan wrote:
With this change, whenever an application issues MADV_DONTNEED on a
memory region, the region is marked as "space-efficient". For such
regions, a hugepage is not immediately allocated on first write.
>>> Kirill didn't like it in the previous
On 1/19/18 4:49 AM, Michal Hocko wrote:
> On Thu 18-01-18 15:33:16, Nitin Gupta wrote:
>> From: Nitin Gupta <nitin.m.gu...@oracle.com>
>>
>> Currently, if the THP enabled policy is "always", or the mode
>> is "madvise" and a region is marked a
On 12/15/17 2:01 AM, Kirill A. Shutemov wrote:
> On Thu, Dec 14, 2017 at 05:28:52PM -0800, Nitin Gupta wrote:
>> diff --git a/mm/madvise.c b/mm/madvise.c
>> index 751e97a..b2ec07b 100644
>> --- a/mm/madvise.c
>> +++ b/mm/madvise.c
>> @@ -508,6 +508,7 @@ static
On 12/15/17 2:00 AM, Kirill A. Shutemov wrote:
> On Thu, Dec 14, 2017 at 05:28:52PM -0800, Nitin Gupta wrote:
>> Currently, if the THP enabled policy is "always", or the mode
>> is "madvise" and a region is marked as MADV_HUGEPAGE, a hugepage
>> is al
For a PUD hugepage entry, we need to propagate bits [32:22]
of the virtual address to resolve at 4M granularity. However,
the current code was incorrectly propagating bits [29:19].
This bug can cause incorrect data to be returned for pages
backed by 16G hugepages.
Signed-off-by: Nitin Gupta
;kirill.shute...@linux.intel.com>
>> Cc: Minchan Kim <minc...@kernel.org>
>> Cc: Nitin Gupta <ngu...@vflare.org>
>> Cc: Sergey Senozhatsky <sergey.senozhatsky.w...@gmail.com>
> Acked-by: Minchan Kim <minc...@kernel.org>
>
> Nitin:
>
>
ble for CONFIG_X86_5LEVEL=y
>> configuration to define zsmalloc data structures.
>>
>> The patch introduces MAX_POSSIBLE_PHYSMEM_BITS to cover such a case.
>> It is also well suited to handle the PAE special case.
>>
>> Signed-off-by: Kirill A. Shutemov
>> Cc: Minchan Kim
On Fri, Oct 20, 2017 at 12:59 PM, Kirill A. Shutemov
wrote:
> With boot-time switching between paging mode we will have variable
> MAX_PHYSMEM_BITS.
>
> Let's use the maximum variable possible for CONFIG_X86_5LEVEL=y
> configuration to define zsmalloc data
On Mon, Oct 16, 2017 at 7:44 AM, Kirill A. Shutemov
<kir...@shutemov.name> wrote:
> On Fri, Oct 13, 2017 at 05:00:12PM -0700, Nitin Gupta wrote:
>> On Fri, Sep 29, 2017 at 7:08 AM, Kirill A. Shutemov
>> <kirill.shute...@linux.intel.com> wrote:
>> > With boot-t
define zsmalloc data structures.
>
> The patch introduces MAX_POSSIBLE_PHYSMEM_BITS to cover such a case.
> It is also well suited to handle the PAE special case.
>
> Signed-off-by: Kirill A. Shutemov <kirill.shute...@linux.intel.com>
> Cc: Minchan Kim <minc...@kernel.org>
> Cc: Ni
Flatten out nested code structure in huge_pte_offset()
and huge_pte_alloc().
Signed-off-by: Nitin Gupta <nitin.m.gu...@oracle.com>
---
arch/sparc/mm/hugetlbpage.c | 54 +
1 file changed, 20 insertions(+), 34 deletions(-)
diff --git a/arch/sp
get_user_pages() is used to do direct IO. It already
handles the case where the address range is backed
by PMD huge pages. This patch now adds the case where
the range could be backed by PUD huge pages.
Signed-off-by: Nitin Gupta <nitin.m.gu...@oracle.com>
---
arch/sparc/include/asm/pgtabl
Cc: Anthony Yznaga <anthony.yzn...@oracle.com>
Reviewed-by: Bob Picco <bob.pi...@oracle.com>
Signed-off-by: Nitin Gupta <nitin.m.gu...@oracle.com>
---
arch/sparc/include/asm/hugetlb.h| 7
arch/sparc/include/asm/page_64.h| 3 +-
arch/sparc/include/asm/pgtable_64.h |
Signed-off-by: Nitin Gupta <nitin.m.gu...@oracle.com>
---
arch/sparc/include/asm/hugetlb.h| 7
arch/sparc/include/asm/page_64.h| 3 +-
arch/sparc/include/asm/pgtable_64.h | 5 +++
arch/sparc/include/asm/tsb.h| 36 ++
arch/sparc/kernel/tsb.S
On 07/20/2017 01:04 PM, David Miller wrote:
> From: Nitin Gupta <nitin.m.gu...@oracle.com>
> Date: Thu, 13 Jul 2017 14:53:24 -0700
>
>> Testing:
>>
>> Tested with the stream benchmark which allocates 48G of
>> arrays backed by 16G hugepages and d
com>
Signed-off-by: Nitin Gupta <nitin.m.gu...@oracle.com>
---
arch/sparc/mm/init_64.c | 25 -
1 file changed, 24 insertions(+), 1 deletion(-)
diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
index 3c40ebd..fed73f1 100644
--- a/arch/sparc/mm/init_64.c
+++
hugepage sizes are available.
case 2: default_hugepagesz=[64K|256M|2G]
When specifying only a default_hugepagesz parameter, the default
hugepage size isn't really changed and it stays at 8M. This is again
different from x86_64.
Orabug: 25869946
Reviewed-by: Bob Picco
Signed-off-by: Nitin
Signed-off-by: Nitin Gupta <nitin.m.gu...@oracle.com>
---
arch/sparc/include/asm/page_64.h| 3 +-
arch/sparc/include/asm/pgtable_64.h | 5 +++
arch/sparc/include/asm/tsb.h| 30 +++
arch/sparc/kernel/tsb.S | 2 +-
arch/sparc/mm/hugetlbpage.c
The function assumes that each PMD points to the head of a
huge page. This is not correct, as a PMD can point to the
start of any 8M region within a, say, 256M hugepage. The
fix ensures that it points to the correct head of any PMD
huge page.
Cc: Julian Calaby <julian.cal...@gmail.com>
Signed-off-by:
Hi Julian,
On 6/22/17 3:53 AM, Julian Calaby wrote:
On Thu, Jun 22, 2017 at 7:50 AM, Nitin Gupta <nitin.m.gu...@oracle.com> wrote:
The function assumes that each PMD points to head of a
huge page. This is not correct as a PMD can point to
start of any 8M region with a, say 256M, hu
Signed-off-by: Nitin Gupta <nitin.m.gu...@oracle.com>
---
Changelog v3 vs v2:
- Fixed email headers so the subject shows up correctly
Changelog v2 vs v1:
- Remove redundant brgez,pn (Bob Picco)
- Remove unnecessary label rename from 700 to 701 (Rob Gardner)
- Add patch description (Paul)
Please ignore this patch series. I will resend again with correct email
headers.
Nitin
On 6/19/17 2:48 PM, Nitin Gupta wrote:
Adds support for 16GB hugepage size. To use this page size
use kernel parameters as:
default_hugepagesz=16G hugepagesz=16G hugepages=10
Testing:
Tested