RE: [PATCH] mm/compaction: remove unused variable sysctl_compact_memory

2021-03-03 Thread Nitin Gupta
> -Original Message- > From: owner-linux...@kvack.org On Behalf > Of pi...@codeaurora.org > Sent: Wednesday, March 3, 2021 6:34 AM > To: Nitin Gupta > Cc: linux-kernel@vger.kernel.org; a...@linux-foundation.org; linux- > m...@kvack.org; linux-fsde...@vger.k

RE: [PATCH] mm/compaction: remove unused variable sysctl_compact_memory

2021-03-02 Thread Nitin Gupta
...@codeaurora.org; > iamjoonsoo@lge.com; sh_...@163.com; mateusznos...@gmail.com; > b...@redhat.com; Nitin Gupta ; vba...@suse.cz; > yzai...@google.com; keesc...@chromium.org; mcg...@kernel.org; > mgor...@techsingularity.net > Cc: pintu.p...@gmail.com > Subject: [PATCH] m

[PATCH] mm: Fix compile error due to COMPACTION_HPAGE_ORDER

2020-06-23 Thread Nitin Gupta
Fix compile error when COMPACTION_HPAGE_ORDER is assigned to HUGETLB_PAGE_ORDER. The correct way to check if this constant is defined is to check for CONFIG_HUGETLBFS. Signed-off-by: Nitin Gupta To: Andrew Morton Reported-by: Nathan Chancellor Tested-by: Nathan Chancellor --- mm/compaction.c

Re: [PATCH v8] mm: Proactive compaction

2020-06-23 Thread Nitin Gupta
On 6/22/20 9:57 PM, Nathan Chancellor wrote: > On Mon, Jun 22, 2020 at 09:32:12PM -0700, Nitin Gupta wrote: >> On 6/22/20 7:26 PM, Nathan Chancellor wrote: >>> On Tue, Jun 16, 2020 at 01:45:27PM -0700, Nitin Gupta wrote: >>>> For some applications, we need

Re: [PATCH v8] mm: Proactive compaction

2020-06-22 Thread Nitin Gupta
On 6/22/20 7:26 PM, Nathan Chancellor wrote: > On Tue, Jun 16, 2020 at 01:45:27PM -0700, Nitin Gupta wrote: >> For some applications, we need to allocate almost all memory as >> hugepages. However, on a running system, higher-order allocations can >> fail if the memory is fra

Re: [PATCH] mm: Use unsigned types for fragmentation score

2020-06-18 Thread Nitin Gupta
On 6/18/20 6:41 AM, Baoquan He wrote: > On 06/17/20 at 06:03pm, Nitin Gupta wrote: >> Proactive compaction uses per-node/zone "fragmentation score" which >> is always in range [0, 100], so use unsigned type of these scores >> as well as for related constants. &

[PATCH] mm: Use unsigned types for fragmentation score

2020-06-17 Thread Nitin Gupta
Proactive compaction uses per-node/zone "fragmentation score" which is always in range [0, 100], so use unsigned type of these scores as well as for related constants. Signed-off-by: Nitin Gupta --- include/linux/compaction.h | 4 ++-- kernel/sysctl.c| 2 +- mm/co

Re: [PATCH v8] mm: Proactive compaction

2020-06-17 Thread Nitin Gupta
On 6/17/20 1:53 PM, Andrew Morton wrote: On Tue, 16 Jun 2020 13:45:27 -0700 Nitin Gupta wrote: For some applications, we need to allocate almost all memory as hugepages. However, on a running system, higher-order allocations can fail if the memory is fragmented. Linux kernel currently does

[PATCH v8] mm: Proactive compaction

2020-06-16 Thread Nitin Gupta
erred maximum number of times with HPAGE_FRAG_CHECK_INTERVAL_MSEC of wait between each check (=> ~30 seconds between retries). [1] https://patchwork.kernel.org/patch/11098289/ [2] https://lore.kernel.org/linux-mm/20161230131412.gi13...@dhcp22.suse.cz/ [3] https://lwn.net/Articles/817905/ Signed-off-by: Nit

Re: [PATCH v7] mm: Proactive compaction

2020-06-16 Thread Nitin Gupta
On 6/16/20 2:46 AM, Oleksandr Natalenko wrote: > Hello. > > Please see the notes inline. > > On Mon, Jun 15, 2020 at 07:36:14AM -0700, Nitin Gupta wrote: >> For some applications, we need to allocate almost all memory as >> hugepages. However, on a running system, h

[PATCH v7] mm: Proactive compaction

2020-06-15 Thread Nitin Gupta
erred maximum number of times with HPAGE_FRAG_CHECK_INTERVAL_MSEC of wait between each check (=> ~30 seconds between retries). [1] https://patchwork.kernel.org/patch/11098289/ [2] https://lore.kernel.org/linux-mm/20161230131412.gi13...@dhcp22.suse.cz/ [3] https://lwn.net/Articles/817905/ Signed-off-by: Nit

Re: [PATCH v6] mm: Proactive compaction

2020-06-15 Thread Nitin Gupta
On 6/15/20 7:25 AM, Oleksandr Natalenko wrote: > On Mon, Jun 15, 2020 at 10:29:01AM +0200, Oleksandr Natalenko wrote: >> Just to let you know, this fails to compile for me with THP disabled on >> v5.8-rc1: >> >> CC mm/compaction.o >> In file included from ./include/linux/dev_printk.h:14, >>

Re: [PATCH v6] mm: Proactive compaction

2020-06-11 Thread Nitin Gupta
On 6/9/20 12:23 PM, Khalid Aziz wrote: > On Mon, 2020-06-01 at 12:48 -0700, Nitin Gupta wrote: >> For some applications, we need to allocate almost all memory as >> hugepages. However, on a running system, higher-order allocations can >> fail if the memory is fragmented. L

Re: [PATCH v6] mm: Proactive compaction

2020-06-09 Thread Nitin Gupta
On Mon, Jun 1, 2020 at 12:48 PM Nitin Gupta wrote: > > For some applications, we need to allocate almost all memory as > hugepages. However, on a running system, higher-order allocations can > fail if the memory is fragmented. Linux kernel currently does on-demand > compaction as

[PATCH v6] mm: Proactive compaction

2020-06-01 Thread Nitin Gupta
erred maximum number of times with HPAGE_FRAG_CHECK_INTERVAL_MSEC of wait between each check (=> ~30 seconds between retries). [1] https://patchwork.kernel.org/patch/11098289/ [2] https://lore.kernel.org/linux-mm/20161230131412.gi13...@dhcp22.suse.cz/ [3] https://lwn.net/Articles/817905/ Signed-off-by: Nit

Re: [PATCH v5] mm: Proactive compaction

2020-05-28 Thread Nitin Gupta
this based upon their workload. More comments below. > Tunables like the one this patch introduces, and similar ones like 'swappiness' will always require some experimentations from the user. > On Mon, 2020-05-18 at 11:14 -0700, Nitin Gupta wrote: > > For some applications, we need to

Re: [PATCH v5] mm: Proactive compaction

2020-05-28 Thread Nitin Gupta
On Wed, May 27, 2020 at 3:18 AM Vlastimil Babka wrote: > > On 5/18/20 8:14 PM, Nitin Gupta wrote: > > For some applications, we need to allocate almost all memory as > > hugepages. However, on a running system, higher-order allocations can > > fail if the memory is

Re: [PATCH v5] mm: Proactive compaction

2020-05-28 Thread Nitin Gupta
On Thu, May 28, 2020 at 2:50 AM Vlastimil Babka wrote: > > On 5/28/20 11:15 AM, Holger Hoffstätte wrote: > > > > On 5/18/20 8:14 PM, Nitin Gupta wrote: > > [patch v5 :)] > > > > I've been successfully using this in my tree and it works great, but a > > f

[PATCH v5] mm: Proactive compaction

2020-05-18 Thread Nitin Gupta
INTERVAL_MSEC of wait between each check (=> ~30 seconds between retries). [1] https://patchwork.kernel.org/patch/11098289/ Signed-off-by: Nitin Gupta To: Mel Gorman To: Michal Hocko To: Vlastimil Babka CC: Matthew Wilcox CC: Andrew Morton CC: Mike Kravetz CC: Joonsoo Kim CC: Dav

[PATCH v4] mm: Proactive compaction

2020-04-28 Thread Nitin Gupta
G_CHECK_INTERVAL_MSEC of wait between each check (=> ~30 seconds between retries). [1] https://patchwork.kernel.org/patch/11098289/ Signed-off-by: Nitin Gupta To: Mel Gorman To: Michal Hocko To: Vlastimil Babka CC: Matthew Wilcox CC: Andrew Morton CC: Mike Kravetz CC: Joonsoo Kim CC: Dav

Re: [RFC] mm: Proactive compaction

2019-09-19 Thread Nitin Gupta
On Tue, 2019-08-20 at 10:46 +0200, Vlastimil Babka wrote: > > This patch is largely based on ideas from Michal Hocko posted here: > > https://lore.kernel.org/linux-mm/20161230131412.gi13...@dhcp22.suse.cz/ > > > > Testing done (on x86): > > - Set

Re: [RFC] mm: Proactive compaction

2019-09-19 Thread Nitin Gupta
On Thu, 2019-08-22 at 09:51 +0100, Mel Gorman wrote: > As unappealing as it sounds, I think it is better to try improve the > allocation latency itself instead of trying to hide the cost in a kernel > thread. It's far harder to implement as compaction is not easy but it > would be more obvious

Re: [RFC] mm: Proactive compaction

2019-09-16 Thread Nitin Gupta
On Mon, 2019-09-16 at 13:16 -0700, David Rientjes wrote: > On Fri, 16 Aug 2019, Nitin Gupta wrote: > > > For some applications we need to allocate almost all memory as > > hugepages. However, on a running system, higher order allocations can > > fail if the memory is

Re: [PATCH] mm: Add callback for defining compaction completion

2019-09-12 Thread Nitin Gupta
On Thu, 2019-09-12 at 17:11 +0530, Bharath Vedartham wrote: > Hi Nitin, > On Wed, Sep 11, 2019 at 10:33:39PM +, Nitin Gupta wrote: > > On Wed, 2019-09-11 at 08:45 +0200, Michal Hocko wrote: > > > On Tue 10-09-19 22:27:53, Nitin Gupta wrote: > > > [...] > > &

Re: [PATCH] mm: Add callback for defining compaction completion

2019-09-11 Thread Nitin Gupta
On Wed, 2019-09-11 at 08:45 +0200, Michal Hocko wrote: > On Tue 10-09-19 22:27:53, Nitin Gupta wrote: > [...] > > > On Tue 10-09-19 13:07:32, Nitin Gupta wrote: > > > > For some applications we need to allocate almost all memory as > > > > hugepages. >

RE: [PATCH] mm: Add callback for defining compaction completion

2019-09-10 Thread Nitin Gupta
> -Original Message- > From: owner-linux...@kvack.org On Behalf > Of Michal Hocko > Sent: Tuesday, September 10, 2019 1:19 PM > To: Nitin Gupta > Cc: a...@linux-foundation.org; vba...@suse.cz; > mgor...@techsingularity.net; dan.j.willi...@intel.com; > khalid.

[PATCH] mm: Add callback for defining compaction completion

2019-09-10 Thread Nitin Gupta
ain scenarios to reduce hugepage allocation latencies. This callback interface allows drivers to drive compaction based on their own policies like the current level of external fragmentation for a particular order, system load etc. Signed-off-by: Nitin Gupta --- include/linux/compaction.h |

Re: [RFC] mm: Proactive compaction

2019-08-27 Thread Nitin Gupta
On Mon, 2019-08-26 at 12:47 +0100, Mel Gorman wrote: > On Thu, Aug 22, 2019 at 09:57:22PM +0000, Nitin Gupta wrote: > > > Note that proactive compaction may reduce allocation latency but > > > it is not > > > free either. Even though the scanning and migratio

Re: [RFC] mm: Proactive compaction

2019-08-22 Thread Nitin Gupta
> -Original Message- > From: owner-linux...@kvack.org On Behalf > Of Mel Gorman > Sent: Thursday, August 22, 2019 1:52 AM > To: Nitin Gupta > Cc: a...@linux-foundation.org; vba...@suse.cz; mho...@suse.com; > dan.j.willi...@intel.com; Yu Zhao ; Matthew Wilcox

RE: [RFC] mm: Proactive compaction

2019-08-21 Thread Nitin Gupta
> -Original Message- > From: owner-linux...@kvack.org On Behalf > Of Matthew Wilcox > Sent: Tuesday, August 20, 2019 3:21 PM > To: Nitin Gupta > Cc: a...@linux-foundation.org; vba...@suse.cz; > mgor...@techsingularity.net; mho...@suse.com; > dan.j.willi...@i

RE: [RFC] mm: Proactive compaction

2019-08-20 Thread Nitin Gupta
> -Original Message- > From: Vlastimil Babka > Sent: Tuesday, August 20, 2019 1:46 AM > To: Nitin Gupta ; a...@linux-foundation.org; > mgor...@techsingularity.net; mho...@suse.com; > dan.j.willi...@intel.com > Cc: Yu Zhao ; Matthew Wilcox ; > Qian Cai ; Andrey Rya

[RFC] mm: Proactive compaction

2019-08-16 Thread Nitin Gupta
tion till extfrag < extfrag_low for order-9. The patch has plenty of rough edges but posting it early to see if I'm going in the right direction and to get some early feedback. Signed-off-by: Nitin Gupta --- include/linux/compaction.h | 12 ++ mm/compaction.c

Re: [PATCH v2] mm: Reduce memory bloat with THP

2018-01-31 Thread Nitin Gupta
On 01/25/2018 01:13 PM, Mel Gorman wrote: > On Thu, Jan 25, 2018 at 11:41:03AM -0800, Nitin Gupta wrote: >>>> It's not really about memory scarcity but a more efficient use of it. >>>> Applications may want hugepage benefits without requiring any changes to >

Re: [PATCH v2] mm: Reduce memory bloat with THP

2018-01-25 Thread Nitin Gupta
On 01/24/2018 04:47 PM, Zi Yan wrote: With this change, whenever an application issues MADV_DONTNEED on a memory region, the region is marked as "space-efficient". For such regions, a hugepage is not immediately allocated on first write. >>> Kirill didn't like it in the previous

Re: [PATCH v2] mm: Reduce memory bloat with THP

2018-01-24 Thread Nitin Gupta
On 1/19/18 4:49 AM, Michal Hocko wrote: > On Thu 18-01-18 15:33:16, Nitin Gupta wrote: >> From: Nitin Gupta <nitin.m.gu...@oracle.com> >> >> Currently, if the THP enabled policy is "always", or the mode >> is "madvise" and a region is marked a

Re: [PATCH] mm: Reduce memory bloat with THP

2017-12-15 Thread Nitin Gupta
On 12/15/17 2:01 AM, Kirill A. Shutemov wrote: > On Thu, Dec 14, 2017 at 05:28:52PM -0800, Nitin Gupta wrote: >> diff --git a/mm/madvise.c b/mm/madvise.c >> index 751e97a..b2ec07b 100644 >> --- a/mm/madvise.c >> +++ b/mm/madvise.c >> @@ -508,6 +508,7 @@ static

Re: [PATCH] mm: Reduce memory bloat with THP

2017-12-15 Thread Nitin Gupta
On 12/15/17 2:00 AM, Kirill A. Shutemov wrote: > On Thu, Dec 14, 2017 at 05:28:52PM -0800, Nitin Gupta wrote: >> Currently, if the THP enabled policy is "always", or the mode >> is "madvise" and a region is marked as MADV_HUGEPAGE, a hugepage >> is al

[PATCH] sparc64: Fix page table walk for PUD hugepages

2017-11-03 Thread Nitin Gupta
For a PUD hugepage entry, we need to propagate bits [32:22] from virtual address to resolve at 4M granularity. However, the current code was incorrectly propagating bits [29:19]. This bug can cause incorrect data to be returned for pages backed with 16G hugepages. Signed-off-by: Nitin Gupta

Re: [PATCH 1/4] mm/zsmalloc: Prepare to variable MAX_PHYSMEM_BITS

2017-10-22 Thread Nitin Gupta
;kirill.shute...@linux.intel.com> >> Cc: Minchan Kim <minc...@kernel.org> >> Cc: Nitin Gupta <ngu...@vflare.org> >> Cc: Sergey Senozhatsky <sergey.senozhatsky.w...@gmail.com> > Acked-by: Minchan Kim <minc...@kernel.org> > > Nitin: > >

Re: [PATCH 1/4] mm/zsmalloc: Prepare to variable MAX_PHYSMEM_BITS

2017-10-20 Thread Nitin Gupta
On Fri, Oct 20, 2017 at 12:59 PM, Kirill A. Shutemov wrote: > With boot-time switching between paging mode we will have variable > MAX_PHYSMEM_BITS. > > Let's use the maximum variable possible for CONFIG_X86_5LEVEL=y > configuration to define zsmalloc data

Re: [PATCH 2/6] mm/zsmalloc: Prepare to variable MAX_PHYSMEM_BITS

2017-10-18 Thread Nitin Gupta
On Mon, Oct 16, 2017 at 7:44 AM, Kirill A. Shutemov <kir...@shutemov.name> wrote: > On Fri, Oct 13, 2017 at 05:00:12PM -0700, Nitin Gupta wrote: >> On Fri, Sep 29, 2017 at 7:08 AM, Kirill A. Shutemov >> <kirill.shute...@linux.intel.com> wrote: >> > With boot-t

Re: [PATCH 2/6] mm/zsmalloc: Prepare to variable MAX_PHYSMEM_BITS

2017-10-13 Thread Nitin Gupta
define zsmalloc data structures. > > The patch introduces MAX_POSSIBLE_PHYSMEM_BITS to cover such case. > It also suits well to handle PAE special case. > > Signed-off-by: Kirill A. Shutemov <kirill.shute...@linux.intel.com> > Cc: Minchan Kim <minc...@kernel.org> > Cc: Ni

[PATCH v6 3/3] sparc64: Cleanup hugepage table walk functions

2017-08-11 Thread Nitin Gupta
Flatten out nested code structure in huge_pte_offset() and huge_pte_alloc(). Signed-off-by: Nitin Gupta <nitin.m.gu...@oracle.com> --- arch/sparc/mm/hugetlbpage.c | 54 + 1 file changed, 20 insertions(+), 34 deletions(-) diff --git a/arch/sp

[PATCH v6 1/3] sparc64: Support huge PUD case in get_user_pages

2017-08-11 Thread Nitin Gupta
get_user_pages() is used to do direct IO. It already handles the case where the address range is backed by PMD huge pages. This patch now adds the case where the range could be backed by PUD huge pages. Signed-off-by: Nitin Gupta <nitin.m.gu...@oracle.com> --- arch/sparc/include/asm/pgtabl

[PATCH v6 2/3] sparc64: Add 16GB hugepage support

2017-08-11 Thread Nitin Gupta
Cc: Anthony Yznaga <anthony.yzn...@oracle.com> Reviewed-by: Bob Picco <bob.pi...@oracle.com> Signed-off-by: Nitin Gupta <nitin.m.gu...@oracle.com> --- arch/sparc/include/asm/hugetlb.h| 7 arch/sparc/include/asm/page_64.h| 3 +- arch/sparc/include/asm/pgtable_64.h |

[PATCH v5 1/3] sparc64: Support huge PUD case in get_user_pages

2017-07-29 Thread Nitin Gupta
get_user_pages() is used to do direct IO. It already handles the case where the address range is backed by PMD huge pages. This patch now adds the case where the range could be backed by PUD huge pages. Signed-off-by: Nitin Gupta <nitin.m.gu...@oracle.com> --- arch/sparc/include/asm/pgtabl

[PATCH v5 2/3] sparc64: Add 16GB hugepage support

2017-07-29 Thread Nitin Gupta
Signed-off-by: Nitin Gupta <nitin.m.gu...@oracle.com> --- arch/sparc/include/asm/hugetlb.h| 7 arch/sparc/include/asm/page_64.h| 3 +- arch/sparc/include/asm/pgtable_64.h | 5 +++ arch/sparc/include/asm/tsb.h| 36 ++ arch/sparc/kernel/tsb.S

[PATCH v5 3/3] sparc64: Cleanup hugepage table walk functions

2017-07-29 Thread Nitin Gupta
Flatten out nested code structure in huge_pte_offset() and huge_pte_alloc(). Signed-off-by: Nitin Gupta <nitin.m.gu...@oracle.com> --- arch/sparc/mm/hugetlbpage.c | 54 + 1 file changed, 20 insertions(+), 34 deletions(-) diff --git a/arch/sp

Re: [PATCH 2/3] sparc64: Add 16GB hugepage support

2017-07-26 Thread Nitin Gupta
On 07/20/2017 01:04 PM, David Miller wrote: > From: Nitin Gupta <nitin.m.gu...@oracle.com> > Date: Thu, 13 Jul 2017 14:53:24 -0700 > >> Testing: >> >> Tested with the stream benchmark which allocates 48G of >> arrays backed by 16G hugepages and d

[PATCH] sparc64: Register hugepages during arch init

2017-07-19 Thread Nitin Gupta
com> Signed-off-by: Nitin Gupta <nitin.m.gu...@oracle.com> --- arch/sparc/mm/init_64.c | 25 - 1 file changed, 24 insertions(+), 1 deletion(-) diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c index 3c40ebd..fed73f1 100644 --- a/arch/sparc/mm/init_64.c +++

[PATCH 2/3] sparc64: Add 16GB hugepage support

2017-07-13 Thread Nitin Gupta
Signed-off-by: Nitin Gupta <nitin.m.gu...@oracle.com> --- arch/sparc/include/asm/page_64.h| 3 +- arch/sparc/include/asm/pgtable_64.h | 5 +++ arch/sparc/include/asm/tsb.h| 30 +++ arch/sparc/kernel/tsb.S | 2 +- arch/sparc/mm/hugetlbpage.c

[PATCH 3/3] sparc64: Cleanup hugepage table walk functions

2017-07-13 Thread Nitin Gupta
Flatten out nested code structure in huge_pte_offset() and huge_pte_alloc(). Signed-off-by: Nitin Gupta <nitin.m.gu...@oracle.com> --- arch/sparc/mm/hugetlbpage.c | 54 + 1 file changed, 20 insertions(+), 34 deletions(-) diff --git a/arch/sp

[PATCH 1/3] sparc64: Support huge PUD case in get_user_pages

2017-07-13 Thread Nitin Gupta
get_user_pages() is used to do direct IO. It already handles the case where the address range is backed by PMD huge pages. This patch now adds the case where the range could be backed by PUD huge pages. Signed-off-by: Nitin Gupta <nitin.m.gu...@oracle.com> --- arch/sparc/include/asm/pgtabl

[PATCH v2] sparc64: Fix gup_huge_pmd

2017-06-22 Thread Nitin Gupta
The function assumes that each PMD points to head of a huge page. This is not correct as a PMD can point to start of any 8M region with a, say 256M, hugepage. The fix ensures that it points to the correct head of any PMD huge page. Cc: Julian Calaby <julian.cal...@gmail.com> Signed-off-by:

Re: [PATCH] sparc64: Fix gup_huge_pmd

2017-06-22 Thread Nitin Gupta
Hi Julian, On 6/22/17 3:53 AM, Julian Calaby wrote: On Thu, Jun 22, 2017 at 7:50 AM, Nitin Gupta <nitin.m.gu...@oracle.com> wrote: The function assumes that each PMD points to head of a huge page. This is not correct as a PMD can point to start of any 8M region with a, say 256M, hu

[PATCH] sparc64: Fix gup_huge_pmd

2017-06-21 Thread Nitin Gupta
The function assumes that each PMD points to head of a huge page. This is not correct as a PMD can point to start of any 8M region with a, say 256M, hugepage. The fix ensures that it points to the correct head of any PMD huge page. Signed-off-by: Nitin Gupta <nitin.m.gu...@oracle.com> ---

[PATCH 3/4] sparc64: Fix gup_huge_pmd

2017-06-20 Thread Nitin Gupta
The function assumes that each PMD points to head of a huge page. This is not correct as a PMD can point to start of any 8M region with a, say 256M, hugepage. The fix ensures that it points to the correct head of any PMD huge page. Signed-off-by: Nitin Gupta <nitin.m.gu...@oracle.com> ---

[PATCH 2/4] sparc64: Support huge PUD case in get_user_pages

2017-06-20 Thread Nitin Gupta
get_user_pages() is used to do direct IO. It already handles the case where the address range is backed by PMD huge pages. This patch now adds the case where the range could be backed by PUD huge pages. Signed-off-by: Nitin Gupta <nitin.m.gu...@oracle.com> --- arch/sparc/include/asm/pgtabl

[PATCH 1/4] sparc64: Add 16GB hugepage support

2017-06-20 Thread Nitin Gupta
Signed-off-by: Nitin Gupta <nitin.m.gu...@oracle.com> --- arch/sparc/include/asm/page_64.h| 3 +- arch/sparc/include/asm/pgtable_64.h | 5 +++ arch/sparc/include/asm/tsb.h| 30 +++ arch/sparc/kernel/tsb.S | 2 +- arch/sparc/mm/hugetlbpage.c

[PATCH 4/4] sparc64: Cleanup hugepage table walk functions

2017-06-20 Thread Nitin Gupta
Flatten out nested code structure in huge_pte_offset() and huge_pte_alloc(). Signed-off-by: Nitin Gupta <nitin.m.gu...@oracle.com> --- arch/sparc/mm/hugetlbpage.c | 54 + 1 file changed, 20 insertions(+), 34 deletions(-) diff --git a/arch/sp

[PATCH v3 3/4] sparc64: Fix gup_huge_pmd

2017-06-19 Thread Nitin Gupta
The function assumes that each PMD points to head of a huge page. This is not correct as a PMD can point to start of any 8M region with a, say 256M, hugepage. The fix ensures that it points to the correct head of any PMD huge page. Signed-off-by: Nitin Gupta <nitin.m.gu...@oracle.com> ---

[PATCH v3 4/4] sparc64: Cleanup hugepage table walk functions

2017-06-19 Thread Nitin Gupta
Flatten out nested code structure in huge_pte_offset() and huge_pte_alloc(). Signed-off-by: Nitin Gupta <nitin.m.gu...@oracle.com> --- arch/sparc/mm/hugetlbpage.c | 54 + 1 file changed, 20 insertions(+), 34 deletions(-) diff --git a/arch/sp

[PATCH v3 2/4] sparc64: Support huge PUD case in get_user_pages

2017-06-19 Thread Nitin Gupta
get_user_pages() is used to do direct IO. It already handles the case where the address range is backed by PMD huge pages. This patch now adds the case where the range could be backed by PUD huge pages. Signed-off-by: Nitin Gupta <nitin.m.gu...@oracle.com> --- arch/sparc/include/asm/pgtabl

[PATCH v3 1/4] sparc64: Add 16GB hugepage support

2017-06-19 Thread Nitin Gupta
Signed-off-by: Nitin Gupta <nitin.m.gu...@oracle.com> --- Changelog v3 vs v2: - Fixed email headers so the subject shows up correctly Changelog v2 vs v1: - Remove redundant brgez,pn (Bob Picco) - Remove unnecessary label rename from 700 to 701 (Rob Gardner) - Add patch description (Paul)

Re: From: Nitin Gupta <nitin.m.gu...@oracle.com>

2017-06-19 Thread Nitin Gupta
Please ignore this patch series. I will resend again with correct email headers. Nitin On 6/19/17 2:48 PM, Nitin Gupta wrote: Adds support for 16GB hugepage size. To use this page size use kernel parameters as: default_hugepagesz=16G hugepagesz=16G hugepages=10 Testing: Tested

[PATCH v2 3/4] sparc64: Fix gup_huge_pmd

2017-06-19 Thread Nitin Gupta
The function assumes that each PMD points to head of a huge page. This is not correct as a PMD can point to start of any 8M region with a, say 256M, hugepage. The fix ensures that it points to the correct head of any PMD huge page. Signed-off-by: Nitin Gupta <nitin.m.gu...@oracle.com> ---
