> -Original Message-
> From: owner-linux...@kvack.org On Behalf
> Of pi...@codeaurora.org
> Sent: Wednesday, March 3, 2021 6:34 AM
> To: Nitin Gupta
> Cc: linux-kernel@vger.kernel.org; a...@linux-foundation.org; linux-
> m...@kvack.org; linux-fsde...@vger.k
...@codeaurora.org;
> iamjoonsoo@lge.com; sh_...@163.com; mateusznos...@gmail.com;
> b...@redhat.com; Nitin Gupta ; vba...@suse.cz;
> yzai...@google.com; keesc...@chromium.org; mcg...@kernel.org;
> mgor...@techsingularity.net
> Cc: pintu.p...@gmail.com
> Subject: [PATCH] m
Fix a compile error seen when COMPACTION_HPAGE_ORDER is assigned
to HUGETLB_PAGE_ORDER. The correct way to check whether this
constant is defined is to check for CONFIG_HUGETLBFS.
Signed-off-by: Nitin Gupta
To: Andrew Morton
Reported-by: Nathan Chancellor
Tested-by: Nathan Chancellor
---
mm/compaction.c
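The guard pattern the fix describes can be sketched as follows. The config symbol is defined locally and the order value is a stand-in so the fragment compiles on its own; this is an illustration of the shape of the fix, not the exact mm/compaction.c code.

```c
#include <assert.h>

/* Stand-in definitions so the guard compiles standalone. In the
 * kernel, HUGETLB_PAGE_ORDER only exists under CONFIG_HUGETLBFS,
 * which is why that is the symbol to test. */
#define CONFIG_HUGETLBFS 1
#define HUGETLB_PAGE_ORDER 9

#if defined(CONFIG_HUGETLBFS)
#define COMPACTION_HPAGE_ORDER HUGETLB_PAGE_ORDER
#else
#define COMPACTION_HPAGE_ORDER 9   /* assumed fallback: PMD order */
#endif
```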
On 6/22/20 9:57 PM, Nathan Chancellor wrote:
> On Mon, Jun 22, 2020 at 09:32:12PM -0700, Nitin Gupta wrote:
>> On 6/22/20 7:26 PM, Nathan Chancellor wrote:
>>> On Tue, Jun 16, 2020 at 01:45:27PM -0700, Nitin Gupta wrote:
>>>> For some applications, we need to
On 6/22/20 7:26 PM, Nathan Chancellor wrote:
> On Tue, Jun 16, 2020 at 01:45:27PM -0700, Nitin Gupta wrote:
>> For some applications, we need to allocate almost all memory as
>> hugepages. However, on a running system, higher-order allocations can
>> fail if the memory is fra
On 6/18/20 6:41 AM, Baoquan He wrote:
> On 06/17/20 at 06:03pm, Nitin Gupta wrote:
>> Proactive compaction uses per-node/zone "fragmentation score" which
>> is always in range [0, 100], so use unsigned type of these scores
>> as well as for related constants.
&
Proactive compaction uses a per-node/zone "fragmentation score" which
is always in the range [0, 100], so use an unsigned type for these
scores as well as for the related constants.
Signed-off-by: Nitin Gupta
---
include/linux/compaction.h | 4 ++--
kernel/sysctl.c| 2 +-
mm/co
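As a rough illustration of why unsigned arithmetic is safe here, a node-level score can be built as a size-weighted average of per-zone scores, each already in [0, 100]. The weighting scheme and names below are assumptions for illustration, not the exact kernel formula.

```c
#include <assert.h>

struct zone_info {
    unsigned int score;           /* per-zone fragmentation score, 0..100 */
    unsigned long present_pages;
};

/* Size-weighted average of zone scores; every intermediate value is
 * non-negative and the result stays within [0, 100], so unsigned
 * types are sufficient throughout. */
static unsigned int node_frag_score(const struct zone_info *zones, int nr)
{
    unsigned long weighted = 0, pages = 0;
    int i;

    for (i = 0; i < nr; i++) {
        weighted += (unsigned long)zones[i].score * zones[i].present_pages;
        pages += zones[i].present_pages;
    }
    return pages ? (unsigned int)(weighted / pages) : 0;
}
```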
On 6/17/20 1:53 PM, Andrew Morton wrote:
On Tue, 16 Jun 2020 13:45:27 -0700 Nitin Gupta wrote:
For some applications, we need to allocate almost all memory as
hugepages. However, on a running system, higher-order allocations can
fail if the memory is fragmented. Linux kernel currently does
active_compact_node() is deferred a maximum number of times,
with HPAGE_FRAG_CHECK_INTERVAL_MSEC of wait between each check
(=> ~30 seconds between retries).
[1] https://patchwork.kernel.org/patch/11098289/
[2] https://lore.kernel.org/linux-mm/20161230131412.gi13...@dhcp22.suse.cz/
[3] https://lwn.net/A
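The "~30 seconds between retries" arithmetic can be made concrete. The two constants below are assumptions chosen to be consistent with that figure; the real values live in mm/compaction.c.

```c
#include <assert.h>

/* Assumed values, consistent with "~30 seconds between retries". */
#define HPAGE_FRAG_CHECK_INTERVAL_MSEC 500u
#define PROACTIVE_MAX_DEFER            64u

/* Worst-case gap between two compaction attempts: the check is
 * deferred PROACTIVE_MAX_DEFER times, each deferral sleeping one
 * check interval. */
static unsigned int max_retry_gap_msec(void)
{
    return HPAGE_FRAG_CHECK_INTERVAL_MSEC * PROACTIVE_MAX_DEFER;
}
```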
On 6/16/20 2:46 AM, Oleksandr Natalenko wrote:
> Hello.
>
> Please see the notes inline.
>
> On Mon, Jun 15, 2020 at 07:36:14AM -0700, Nitin Gupta wrote:
>> For some applications, we need to allocate almost all memory as
>> hugepages. However, on a running system, h
On 6/15/20 7:25 AM, Oleksandr Natalenko wrote:
> On Mon, Jun 15, 2020 at 10:29:01AM +0200, Oleksandr Natalenko wrote:
>> Just to let you know, this fails to compile for me with THP disabled on
>> v5.8-rc1:
>>
>> CC mm/compaction.o
>> In file included from ./include/linux/dev_printk.h:14,
>>
On 6/9/20 12:23 PM, Khalid Aziz wrote:
> On Mon, 2020-06-01 at 12:48 -0700, Nitin Gupta wrote:
>> For some applications, we need to allocate almost all memory as
>> hugepages. However, on a running system, higher-order allocations can
>> fail if the memory is fragmented. L
On Mon, Jun 1, 2020 at 12:48 PM Nitin Gupta wrote:
>
> For some applications, we need to allocate almost all memory as
> hugepages. However, on a running system, higher-order allocations can
> fail if the memory is fragmented. Linux kernel currently does on-demand
> compaction as
ased upon their workload. More comments below.
>
Tunables like the one this patch introduces, and similar ones like 'swappiness',
will always require some experimentation from the user.
> On Mon, 2020-05-18 at 11:14 -0700, Nitin Gupta wrote:
> > For some applications, we
On Wed, May 27, 2020 at 3:18 AM Vlastimil Babka wrote:
>
> On 5/18/20 8:14 PM, Nitin Gupta wrote:
> > For some applications, we need to allocate almost all memory as
> > hugepages. However, on a running system, higher-order allocations can
> > fail if the memory is
On Thu, May 28, 2020 at 2:50 AM Vlastimil Babka wrote:
>
> On 5/28/20 11:15 AM, Holger Hoffstätte wrote:
> >
> > On 5/18/20 8:14 PM, Nitin Gupta wrote:
> > [patch v5 :)]
> >
> > I've been successfully using this in my tree and it works great, but a
> &
r of times
with HPAGE_FRAG_CHECK_INTERVAL_MSEC of wait between each check
(=> ~30 seconds between retries).
[1] https://patchwork.kernel.org/patch/11098289/
Signed-off-by: Nitin Gupta
To: Mel Gorman
To: Michal Hocko
To: Vlastimil Babka
CC: Matthew Wilcox
CC: Andrew Morton
CC: Mik
On Tue, 2019-08-20 at 10:46 +0200, Vlastimil Babka wrote:
> > This patch is largely based on ideas from Michal Hocko posted here:
> > https://lore.kernel.org/linux-mm/20161230131412.gi13...@dhcp22.suse.cz/
> >
> > Testing done (on x86):
> > - Set /sys/kernel/mm/compaction/order-9/extfrag_{low,hi
On Thu, 2019-08-22 at 09:51 +0100, Mel Gorman wrote:
> As unappealing as it sounds, I think it is better to try improve the
> allocation latency itself instead of trying to hide the cost in a kernel
> thread. It's far harder to implement as compaction is not easy but it
> would be more obvious what
On Mon, 2019-09-16 at 13:16 -0700, David Rientjes wrote:
> On Fri, 16 Aug 2019, Nitin Gupta wrote:
>
> > For some applications we need to allocate almost all memory as
> > hugepages. However, on a running system, higher order allocations can
> > fail if the memory is
On Thu, 2019-09-12 at 17:11 +0530, Bharath Vedartham wrote:
> Hi Nitin,
> On Wed, Sep 11, 2019 at 10:33:39PM +, Nitin Gupta wrote:
> > On Wed, 2019-09-11 at 08:45 +0200, Michal Hocko wrote:
> > > On Tue 10-09-19 22:27:53, Nitin Gupta wrote:
> > > [...]
> > &
On Wed, 2019-09-11 at 08:45 +0200, Michal Hocko wrote:
> On Tue 10-09-19 22:27:53, Nitin Gupta wrote:
> [...]
> > > On Tue 10-09-19 13:07:32, Nitin Gupta wrote:
> > > > For some applications we need to allocate almost all memory as
> > > > hugepages.
>
> -Original Message-
> From: owner-linux...@kvack.org On Behalf
> Of Michal Hocko
> Sent: Tuesday, September 10, 2019 1:19 PM
> To: Nitin Gupta
> Cc: a...@linux-foundation.org; vba...@suse.cz;
> mgor...@techsingularity.net; dan.j.willi...@intel.com;
> khalid.
certain scenarios to reduce hugepage allocation latencies. This callback
interface allows drivers to drive compaction based on their own policies
like the current level of external fragmentation for a particular order,
system load etc.
Signed-off-by: Nitin Gupta
---
include/linux/compactio
On Mon, 2019-08-26 at 12:47 +0100, Mel Gorman wrote:
> On Thu, Aug 22, 2019 at 09:57:22PM +0000, Nitin Gupta wrote:
> > > Note that proactive compaction may reduce allocation latency but
> > > it is not
> > > free either. Even though the scanning and migratio
> -Original Message-
> From: owner-linux...@kvack.org On Behalf
> Of Mel Gorman
> Sent: Thursday, August 22, 2019 1:52 AM
> To: Nitin Gupta
> Cc: a...@linux-foundation.org; vba...@suse.cz; mho...@suse.com;
> dan.j.willi...@intel.com; Yu Zhao ; Matthew Wilcox
> -Original Message-
> From: owner-linux...@kvack.org On Behalf
> Of Matthew Wilcox
> Sent: Tuesday, August 20, 2019 3:21 PM
> To: Nitin Gupta
> Cc: a...@linux-foundation.org; vba...@suse.cz;
> mgor...@techsingularity.net; mho...@suse.com;
> dan.j.willi...@i
> -Original Message-
> From: Vlastimil Babka
> Sent: Tuesday, August 20, 2019 1:46 AM
> To: Nitin Gupta ; a...@linux-foundation.org;
> mgor...@techsingularity.net; mho...@suse.com;
> dan.j.willi...@intel.com
> Cc: Yu Zhao ; Matthew Wilcox ;
> Qian Cai ; Andrey Rya
tion till extfrag < extfrag_low for order-9.
The patch has plenty of rough edges, but I'm posting it early to see if
I'm going in the right direction and to get some early feedback.
Signed-off-by: Nitin Gupta
---
include/linux/compaction.h | 12 ++
mm/compaction.c
On 01/25/2018 01:13 PM, Mel Gorman wrote:
> On Thu, Jan 25, 2018 at 11:41:03AM -0800, Nitin Gupta wrote:
>>>> It's not really about memory scarcity but a more efficient use of it.
>>>> Applications may want hugepage benefits without requiring any changes to
&g
On 01/24/2018 04:47 PM, Zi Yan wrote:
With this change, whenever an application issues MADV_DONTNEED on a
memory region, the region is marked as "space-efficient". For such
regions, a hugepage is not immediately allocated on first write.
>>> Kirill didn't like it in the previous ve
On 1/19/18 4:49 AM, Michal Hocko wrote:
> On Thu 18-01-18 15:33:16, Nitin Gupta wrote:
>> From: Nitin Gupta
>>
>> Currently, if the THP enabled policy is "always", or the mode
>> is "madvise" and a region is marked as MADV_HUGEPAGE, a hugepage
>&
On 12/15/17 2:01 AM, Kirill A. Shutemov wrote:
> On Thu, Dec 14, 2017 at 05:28:52PM -0800, Nitin Gupta wrote:
>> diff --git a/mm/madvise.c b/mm/madvise.c
>> index 751e97a..b2ec07b 100644
>> --- a/mm/madvise.c
>> +++ b/mm/madvise.c
>> @@ -508,6 +508,7 @@ static
On 12/15/17 2:00 AM, Kirill A. Shutemov wrote:
> On Thu, Dec 14, 2017 at 05:28:52PM -0800, Nitin Gupta wrote:
>> Currently, if the THP enabled policy is "always", or the mode
>> is "madvise" and a region is marked as MADV_HUGEPAGE, a hugepage
>> is alloc
For a PUD hugepage entry, we need to propagate bits [32:22]
from the virtual address to resolve at 4M granularity. However,
the current code was incorrectly propagating bits [29:19].
This bug can cause incorrect data to be returned for pages
backed with 16G hugepages.
Signed-off-by: Nitin Gupta
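The bit range in the fix can be illustrated directly: bits [32:22] of the virtual address form an 11-bit field whose unit is 4M (2^22 bytes). The helper name below is made up for the sketch.

```c
#include <assert.h>

/* Bits [32:22] of the virtual address, as named in the fix: an
 * 11-bit field at 4M granularity. The buggy code used bits [29:19]
 * instead. */
static unsigned long vaddr_bits_32_22(unsigned long vaddr)
{
    return (vaddr >> 22) & ((1UL << 11) - 1);
}
```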
possible for CONFIG_X86_5LEVEL=y
>> configuration to define zsmalloc data structures.
>>
>> The patch introduces MAX_POSSIBLE_PHYSMEM_BITS to cover such case.
>> It also suits well to handle PAE special case.
>>
>> Signed-off-by: Kirill A. Shutemov
>> Cc: Minchan K
On Fri, Oct 20, 2017 at 12:59 PM, Kirill A. Shutemov
wrote:
> With boot-time switching between paging mode we will have variable
> MAX_PHYSMEM_BITS.
>
> Let's use the maximum variable possible for CONFIG_X86_5LEVEL=y
> configuration to define zsmalloc data structures.
>
> The patch introduces MAX_
On Mon, Oct 16, 2017 at 7:44 AM, Kirill A. Shutemov
wrote:
> On Fri, Oct 13, 2017 at 05:00:12PM -0700, Nitin Gupta wrote:
>> On Fri, Sep 29, 2017 at 7:08 AM, Kirill A. Shutemov
>> wrote:
>> > With boot-time switching between paging mode we will have variab
> The patch introduces MAX_POSSIBLE_PHYSMEM_BITS to cover such case.
> It also suits well to handle PAE special case.
>
> Signed-off-by: Kirill A. Shutemov
> Cc: Minchan Kim
> Cc: Nitin Gupta
> Cc: Sergey Senozhatsky
> ---
> arch/x86/include/asm/pgtable-3level_types.h
Flatten out nested code structure in huge_pte_offset()
and huge_pte_alloc().
Signed-off-by: Nitin Gupta
---
arch/sparc/mm/hugetlbpage.c | 54 +
1 file changed, 20 insertions(+), 34 deletions(-)
diff --git a/arch/sparc/mm/hugetlbpage.c b/arch/sparc/mm
get_user_pages() is used to do direct IO. It already
handles the case where the address range is backed
by PMD huge pages. This patch now adds the case where
the range could be backed by PUD huge pages.
Signed-off-by: Nitin Gupta
---
arch/sparc/include/asm/pgtable_64.h | 15 +++--
arch
Cc: Anthony Yznaga
Reviewed-by: Bob Picco
Signed-off-by: Nitin Gupta
---
arch/sparc/include/asm/hugetlb.h| 7
arch/sparc/include/asm/page_64.h| 3 +-
arch/sparc/include/asm/pgtable_64.h | 5 +++
arch/sparc/include/asm/tsb.h| 36 ++
arch/sparc/kernel/head_64
Signed-off-by: Nitin Gupta
---
arch/sparc/include/asm/hugetlb.h| 7
arch/sparc/include/asm/page_64.h| 3 +-
arch/sparc/include/asm/pgtable_64.h | 5 +++
arch/sparc/include/asm/tsb.h| 36 ++
arch/sparc/kernel/tsb.S | 2 +-
arch/sparc/kernel
On 07/20/2017 01:04 PM, David Miller wrote:
> From: Nitin Gupta
> Date: Thu, 13 Jul 2017 14:53:24 -0700
>
>> Testing:
>>
>> Tested with the stream benchmark which allocates 48G of
>> arrays backed by 16G hugepages and does RW operation on
>> them in
nd 1G hugepage
sizes are available.
case 2: default_hugepagesz=[64K|256M|2G]
When specifying only a default_hugepagesz parameter, the default
hugepage size isn't really changed and it stays at 8M. This is again
different from x86_64.
Orabug: 25869946
Reviewed-by: Bob Picco
Signed-off
Signed-off-by: Nitin Gupta
---
arch/sparc/include/asm/page_64.h| 3 +-
arch/sparc/include/asm/pgtable_64.h | 5 +++
arch/sparc/include/asm/tsb.h| 30 +++
arch/sparc/kernel/tsb.S | 2 +-
arch/sparc/mm/hugetlbpage.c | 74
The function assumes that each PMD points to the head of a
huge page. This is not correct, as a PMD can point to the
start of any 8M region within a, say, 256M hugepage. The
fix ensures that it points to the correct head of any PMD
huge page.
Cc: Julian Calaby
Signed-off-by: Nitin Gupta
---
Changes since
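The head-page computation this fix is after can be sketched as a power-of-two round-down of the pfn. PAGE_SHIFT of 13 (8K sparc64 base pages) and the helper name are assumptions.

```c
#include <assert.h>

#define PAGE_SHIFT 13   /* sparc64 8K base pages (assumption) */

/* A PMD may point at any 8M region inside a larger (e.g. 256M)
 * hugepage; rounding the pfn down to a multiple of the hugepage's
 * page count recovers the head page. */
static unsigned long hugepage_head_pfn(unsigned long pfn,
                                       unsigned int hpage_shift)
{
    unsigned long pages_per_hpage = 1UL << (hpage_shift - PAGE_SHIFT);

    return pfn & ~(pages_per_hpage - 1);
}
```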
Hi Julian,
On 6/22/17 3:53 AM, Julian Calaby wrote:
On Thu, Jun 22, 2017 at 7:50 AM, Nitin Gupta wrote:
The function assumes that each PMD points to head of a
huge page. This is not correct as a PMD can point to
start of any 8M region with a, say 256M, hugepage. The
fix ensures that it
The function assumes that each PMD points to the head of a
huge page. This is not correct, as a PMD can point to the
start of any 8M region within a, say, 256M hugepage. The
fix ensures that it points to the correct head of any PMD
huge page.
Signed-off-by: Nitin Gupta
---
arch/sparc/mm/gup.c | 2 ++
1
Signed-off-by: Nitin Gupta
---
Changelog v3 vs v2:
- Fixed email headers so the subject shows up correctly
Changelog v2 vs v1:
- Remove redundant brgez,pn (Bob Picco)
- Remove unnecessary label rename from 700 to 701 (Rob Gardner)
- Add patch description (Paul)
- Add 16G case to get_user_pages
Please ignore this patch series. I will resend again with correct email
headers.
Nitin
On 6/19/17 2:48 PM, Nitin Gupta wrote:
Adds support for 16GB hugepage size. To use this page size
use kernel parameters as:
default_hugepagesz=16G hugepagesz=16G hugepages=10
Testing:
Tested with the
Signed-off-by: Nitin Gupta
---
Changelog v2 vs v1:
- Remove redundant brgez,pn (Bob Picco)
- Remove unnecessary label rename from 700 to 701 (Rob Gardner)
- Add patch description (Paul)
- Add 16G case to get_user_pages()
arch/sparc/include/asm/page_64.h| 3 +-
arch/sparc/include/asm
On 5/24/17 8:45 PM, David Miller wrote:
> From: Paul Gortmaker
> Date: Wed, 24 May 2017 23:34:42 -0400
>
>> [[PATCH] sparc64: Add 16GB hugepage support] On 24/05/2017 (Wed 17:29) Nitin
>> Gupta wrote:
>>
>>> Orabug: 25362942
>>>
>>> Signed-o
Orabug: 25362942
Signed-off-by: Nitin Gupta
---
arch/sparc/include/asm/page_64.h| 3 +-
arch/sparc/include/asm/pgtable_64.h | 5 +++
arch/sparc/include/asm/tsb.h| 35 +-
arch/sparc/kernel/tsb.S | 2 +-
arch/sparc/mm/hugetlbpage.c | 74
An incorrect huge page alignment check caused
mmap failure for 64K pages when MAP_FIXED is used
with an address not aligned to HPAGE_SIZE.
Orabug: 25885991
Signed-off-by: Nitin Gupta
---
arch/sparc/include/asm/hugetlb.h | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a
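The corrected check can be stated in a few lines: a MAP_FIXED address needs to be aligned only to the region's own hugepage size (here 64K), not to the larger HPAGE_SIZE. Sizes and the helper name below are illustrative.

```c
#include <assert.h>

/* Alignment check parameterised by the actual hugepage size of the
 * mapping, instead of being hard-wired to HPAGE_SIZE. */
static int hugepage_addr_aligned(unsigned long addr, unsigned long hpage_size)
{
    return (addr & (hpage_size - 1)) == 0;
}
```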
Make sure the start address is aligned to a PMD_SIZE
boundary when freeing the page table backing a hugepage
region. The issue was causing segfaults when a region
backed by 64K pages was unmapped, since such a region
is in general not PMD_SIZE aligned.
Signed-off-by: Nitin Gupta
---
arch/sparc/mm
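The fix amounts to rounding the region start down to a PMD boundary before walking the tables. PMD_SIZE of 8M matches sparc64 with 8K pages but is an assumption here.

```c
#include <assert.h>

#define PMD_SIZE (1UL << 23)   /* 8M: assumed sparc64 PMD coverage */

/* A region backed by 64K pages is generally not PMD-aligned; align
 * the start down before freeing the backing page tables. */
static unsigned long pmd_align_down(unsigned long addr)
{
    return addr & ~(PMD_SIZE - 1);
}
```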
The memory corruption was happening due to incorrect
TLB/TSB flushing of hugepages.
Reported-by: David S. Miller
Signed-off-by: Nitin Gupta
---
arch/sparc/mm/tlb.c | 6 +++---
arch/sparc/mm/tsb.c | 4 ++--
2 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/arch/sparc/mm/tlb.c b
Signed-off-by: Nitin Gupta
---
arch/sparc/include/asm/page_64.h | 3 ++-
arch/sparc/mm/hugetlbpage.c | 7 +++
arch/sparc/mm/init_64.c | 4
3 files changed, 13 insertions(+), 1 deletion(-)
diff --git a/arch/sparc/include/asm/page_64.h b/arch/sparc/include/asm/page_64.h
Signed-off-by: Nitin Gupta
---
arch/sparc/mm/hugetlbpage.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/sparc/mm/hugetlbpage.c b/arch/sparc/mm/hugetlbpage.c
index 323bc6b..3016850 100644
--- a/arch/sparc/mm/hugetlbpage.c
+++ b/arch/sparc/mm/hugetlbpage.c
@@ -261,7
Patch "sparc64: Add 64K page size support"
unconditionally used __flush_huge_tsb_one_entry()
which is available only when hugetlb support is
enabled.
Another issue was incorrect TSB flushing for 64K
pages in flush_tsb_user().
Signed-off-by: Nitin Gupta
---
arch/sparc/mm/hugetlbp
This patch depends on:
[v6] sparc64: Multi-page size support
- Testing
Tested on Sonoma by running stream benchmark instance which allocated
48G worth of 64K pages.
boot params: default_hugepagesz=64K hugepagesz=64K hugepages=1310720
Signed-off-by: Nitin Gupta
---
arch/sparc/include/asm
pagesz=256M hugepagesz=256M hugepages=300 hugepagesz=8M
hugepages=1
Signed-off-by: Nitin Gupta
---
Changelog v6 vs v5:
- Fix _flush_huge_tsb_one_entry: add correct offset to base vaddr
Changelog v4 vs v5:
- Enable hugepage initialization on sun4u
Changelog v3 vs v4:
- Remove incorrect WARN
pagesz=256M hugepagesz=256M hugepages=300 hugepagesz=8M
hugepages=1
Signed-off-by: Nitin Gupta
---
Changelog v4 vs v5:
- Enable hugepage initialization on sun4u (this patch has been
tested only on sun4v).
Changelog v3 vs v4:
- Remove incorrect WARN_ON in __flush_huge_tsb_one_entry()
Cha
On 12/27/2016 09:34 AM, David Miller wrote:
> From: Nitin Gupta
> Date: Tue, 13 Dec 2016 10:03:18 -0800
>
>> +static unsigned int sun4u_huge_tte_to_shift(pte_t entry)
>> +{
>> +unsigned long tte_szbits = pte_val(entry) & _PAGE_SZALL_4V;
>> +un
pagesz=256M hugepagesz=256M hugepages=300 hugepagesz=8M
hugepages=1
Signed-off-by: Nitin Gupta
---
Changelog v3 vs v4:
- Remove incorrect WARN_ON in __flush_huge_tsb_one_entry()
Changelog v2 vs v3:
- Remove unused label in tsb.S (David)
- Order local variables from longest to shortes
On 12/11/2016 06:14 PM, David Miller wrote:
> From: David Miller
> Date: Sun, 11 Dec 2016 21:06:30 -0500 (EST)
>
>> Applied.
>
> Actually, I'm reverting.
>
> Just doing a simply "make -s -j128" kernel build on a T4-2 I'm
> getting kernel log warnings:
>
> [2024810.925975] IPv6: ADDRCONF(NETD
pagesz=256M hugepagesz=256M hugepages=300 hugepagesz=8M
hugepages=1
Signed-off-by: Nitin Gupta
---
Changelog v2 vs v3:
- Remove unused label in tsb.S (David)
- Order local variables from longest to shortest line (David)
Changelog v1 vs v2:
- Fix warning due to unused __flush_huge_tsb_one(
SLUB has better debugging support.
Signed-off-by: Nitin Gupta
---
arch/sparc/configs/sparc64_defconfig | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/sparc/configs/sparc64_defconfig
b/arch/sparc/configs/sparc64_defconfig
index 3583d67..0a615b0 100644
--- a/arch
pagesz=256M hugepagesz=256M hugepages=300 hugepagesz=8M
hugepages=1
Signed-off-by: Nitin Gupta
---
Changelog v1 vs v2:
- Fix warning due to unused __flush_huge_tsb_one() when
CONFIG_HUGETLB is not defined.
arch/sparc/include/asm/page_64.h | 3 +-
arch/sparc/include/asm/pgtabl
pagesz=256M hugepagesz=256M hugepages=300 hugepagesz=8M
hugepages=1
Signed-off-by: Nitin Gupta
---
arch/sparc/include/asm/page_64.h | 3 +-
arch/sparc/include/asm/pgtable_64.h | 23 +++--
arch/sparc/include/asm/tlbflush_64.h | 5 +-
arch/sparc/kernel/tsb.S | 21 +
level.
Orabug: 22630259
Signed-off-by: Nitin Gupta
---
Changelog v2 vs v1
- Combine fix for page table freeing with the main trimming patch (Dave)
arch/sparc/include/asm/hugetlb.h| 12 +--
arch/sparc/include/asm/pgtable_64.h |7 ++-
arch/sparc/include/asm/tsb.h|2 +-
arch/
8M pages now allocate page tables till the PMD level only.
So, when freeing the page table for an 8M hugepage backed region,
make sure we don't try to access the non-existent PTE level.
Signed-off-by: Nitin Gupta
---
arch/sparc/include/asm/hugetlb.h | 12 ++---
arch/sparc/mm/hugetlbpage.c |
For PMD aligned (8M) hugepages, we currently allocate
all four page table levels which is wasteful. We now
allocate till PMD level only which saves memory usage
from page tables.
Orabug: 22630259
Signed-off-by: Nitin Gupta
---
Changelog v2 vs v1:
- Move sparc specific declaration of
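The saving can be quantified under sparc64-flavoured assumptions (8K base pages, so one PTE-table page maps 8M): backing each 8M region with a single PMD entry frees one 8K PTE page per 8M of the region.

```c
#include <assert.h>

/* Number of PTE-table pages no longer needed for a hugepage-backed
 * region, assuming 8K pages where one PTE page maps 8M (2^23 bytes). */
static unsigned long pte_pages_saved(unsigned long region_bytes)
{
    return region_bytes >> 23;
}
```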
8M pages now allocate page tables till the PMD level only.
So, when freeing the page table for an 8M hugepage backed region,
make sure we don't try to access the non-existent PTE level.
Signed-off-by: Nitin Gupta
---
arch/sparc/include/asm/hugetlb.h | 8
arch/sparc/mm/hugetlbpage.c
For PMD aligned (8M) hugepages, we currently allocate
all four page table levels which is wasteful. We now
allocate till PMD level only which saves memory usage
from page tables.
Orabug: 22630259
Signed-off-by: Nitin Gupta
---
arch/sparc/include/asm/pgtable_64.h | 7 +++-
arch/sparc/include
flush calls.
Orabug: 22365539, 22643230, 22995196
Signed-off-by: Nitin Gupta
---
Changelog v4 vs v3:
- Fix build error when CONFIG_HUGETLB_PAGE is not defined
- Tested build with randconfig, allyesconfig, allnoconfig
Changelog v3 vs v2:
- Changed patch title to reflect that both map/unmap
flush calls.
Orabug: 22365539, 22643230, 22995196
Signed-off-by: Nitin Gupta
---
Changelog v3 vs v2:
- Changed patch title to reflect that both map/unmap cases
are affected.
- Don't do TLB flush if original PTE wasn't valid (DaveM)
- Use tlb_batch_add() instead of directly ca
7 /* unused */
#define PC110PAD_MINOR 9 /* unused */
/*#define ADB_MOUSE_MINOR 10 FIXME OBSOLETE */
+#define DYNAMIC_MINOR_START 11
#define WATCHDOG_MINOR 130 /* Watchdog timer */
#define TEMP_MINOR 131 /* Temperature Sensor */
#define RTC_MINOR 135
--
1.7.9.5
15 15 15
5: 15 15 15 15 15 10 15 15
6: 15 15 15 15 15 15 10 15
7: 15 15 15 15 15 15 15 10
Signed-off-by: Nitin Gupta
Reviewed-by: Chris Hyser
Reviewed-by: Santosh Shilimkar
---
Changelog v1 -> v2:
- Drop extern keyword for function prototype (Sam Ra
On 10/29/2015 11:50 AM, Sam Ravnborg wrote:
Small nit.
diff --git a/arch/sparc/include/asm/topology_64.h
b/arch/sparc/include/asm/topology_64.h
index 01d1704..ed3dfdd 100644
--- a/arch/sparc/include/asm/topology_64.h
+++ b/arch/sparc/include/asm/topology_64.h
@@ -31,6 +31,9 @@ static inline in
15 15 15
5: 15 15 15 15 15 10 15 15
6: 15 15 15 15 15 15 10 15
7: 15 15 15 15 15 15 15 10
Signed-off-by: Nitin Gupta
Reviewed-by: Chris Hyser
Reviewed-by: Santosh Shilimkar
---
arch/sparc/include/asm/topology_64.h |3 +
arch/sparc/mm/init_64.c |
On 11/12/13, 6:42 PM, Greg KH wrote:
On Wed, Nov 13, 2013 at 12:41:38AM +0900, Minchan Kim wrote:
We spent much time with preventing zram enhance since it have been in staging
and Greg never want to improve without promotion.
It's not "improve", it's "Greg does not want you adding new features
uct page **page,
> unsigned long *obj_idx)
> {
> *page = pfn_to_page(handle >> OBJ_INDEX_BITS);
> - *obj_idx = handle & OBJ_INDEX_MASK;
> + *obj_idx = (handle & OBJ_INDEX_MASK) - 1;
> }
>
> static unsigned long ob
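The quoted `- 1` in the decode suggests an encode side that stores index + 1, so that a handle of 0 can mean "unallocated". The sketch below mirrors that round trip with an assumed index width and made-up helper names, not zsmalloc's real layout.

```c
#include <assert.h>

#define OBJ_INDEX_BITS 11u                        /* assumed width */
#define OBJ_INDEX_MASK ((1UL << OBJ_INDEX_BITS) - 1)

/* Pack pfn and object index into one handle, biasing the index by 1. */
static unsigned long obj_encode(unsigned long pfn, unsigned long obj_idx)
{
    return (pfn << OBJ_INDEX_BITS) | (obj_idx + 1);
}

/* Unpack: shift recovers the pfn, mask-and-decrement the index. */
static void obj_decode(unsigned long handle, unsigned long *pfn,
                       unsigned long *obj_idx)
{
    *pfn = handle >> OBJ_INDEX_BITS;
    *obj_idx = (handle & OBJ_INDEX_MASK) - 1;
}
```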