The commit c8f2f0db1 ("zram: Fix handling of incompressible pages")
introduced a bug which caused a kunmap()'ed buffer to be used in case
of partial writes where the data was found to be incompressible.
This fixes bug 50081:
https://bugzilla.kernel.org/show_bug.cgi?id=50081
Signed-off-by: Nitin Gupta
---
drivers/staging/zsmalloc/zsmalloc-main.c | 177 +-
drivers/staging/zsmalloc/zsmalloc.h | 1 +
2 files changed, 127 insertions(+), 51 deletions(-)
diff --git a/drivers/staging/zsmalloc/zsmalloc-main.c
b/drivers/staging/zsmalloc
an invalid value (-1) to mark such
pages. Lastly, the count field was unused, so was simply removed.
Signed-off-by: Nitin Gupta
---
drivers/staging/zram/zram_drv.c | 80 ++-
drivers/staging/zram/zram_drv.h | 18 +
2 files changed, 31 insertions(+), 67
On 11/27/2012 08:22 AM, Jerome Marchand wrote:
On 11/27/2012 08:26 AM, Nitin Gupta wrote:
For every allocated object, zram maintains the handle, size,
flags and count fields. Of these, only the handle is required
since zsmalloc now provides the object size given the handle.
The flags field
On 11/28/2012 05:33 PM, Minchan Kim wrote:
On Wed, Nov 28, 2012 at 02:15:05PM +0900, Minchan Kim wrote:
Hi Nitin,
On Mon, Nov 26, 2012 at 11:26:07PM -0800, Nitin Gupta wrote:
The commit c8f2f0db1 ("zram: Fix handling of incompressible pages")
introduced a bug which caused a kunmap()'ed buffer to be used in case
of partial writes where the data was found to be incompressible.
Fixes bug 50081:
https://bugzilla.kernel.org/show_bug.cgi?id=50081
Signed-off-by: Nitin Gupta
Reported-by: Mihail Kasadjikov
Reported-by: Tomas M
Reviewed-by: Minchan Kim
---
drivers/staging/zram/zram_drv.c | 39 ---
1 file c
us the object size.
Signed-off-by: Nitin Gupta
---
drivers/staging/zsmalloc/zsmalloc-main.c | 177 +-
drivers/staging/zsmalloc/zsmalloc.h | 1 +
2 files changed, 127 insertions(+), 51 deletions(-)
diff --git a/drivers/staging/zsmalloc/zsmalloc-main.c
b
was needed only to mark a given page as zero-filled.
Instead of this field, we now use an invalid value (-1) to mark such
pages. Lastly, the count field was unused, so was simply removed.
Signed-off-by: Nitin Gupta
Reviewed-by: Jerome Marchand
---
drivers/staging/zram/zram_drv.c | 97
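The slimming described above can be sketched in plain C. This is an illustrative userspace mock, not the driver code: the struct and sentinel names are invented, but it shows the idea of keeping only the handle per page and reusing an invalid handle value (-1) in place of the old zero-page flag.

```c
#include <assert.h>
#include <stdint.h>

/* Mock of the slimmed-down per-page metadata: the old size, flags and
 * count fields are gone; only the allocator handle remains. A reserved
 * invalid handle value marks zero-filled pages. */
#define ZERO_PAGE_HANDLE ((unsigned long)-1)  /* sentinel: page is zero-filled */

struct table_entry {
    unsigned long handle;  /* allocator handle, or ZERO_PAGE_HANDLE */
};

static void mark_zero_filled(struct table_entry *e)
{
    e->handle = ZERO_PAGE_HANDLE;
}

static int is_zero_filled(const struct table_entry *e)
{
    return e->handle == ZERO_PAGE_HANDLE;
}
```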
On 11/28/2012 11:55 PM, Minchan Kim wrote:
Hi Nitin,
On Wed, Nov 28, 2012 at 11:45:06PM -0800, Nitin Gupta wrote:
Changelog v2 vs v1:
- Changelog message now correctly explains the problem
Fixes a bug introduced by commit c8f2f0db1 ("zram: Fix handling
of incompressible pages") wh
On 11/28/2012 11:45 PM, Minchan Kim wrote:
On Mon, Nov 26, 2012 at 11:26:40PM -0800, Nitin Gupta wrote:
Adds zs_get_object_size(handle) which provides the size of
the given object. This is useful since the user (zram etc.)
now do not have to maintain object sizes separately, saving
on some
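The contract behind zs_get_object_size() — the allocator, not the caller, tracks object sizes — can be mocked in userspace like this. The mock_zs_* names and the header-based storage are invented for illustration; the real zsmalloc derives the size from its own metadata, not from a per-object header.

```c
#include <assert.h>
#include <stdlib.h>

/* Mock allocator that records each object's size itself, so callers can
 * ask for the size by handle instead of tracking it separately. */
typedef struct {
    size_t size;           /* size recorded at allocation time */
    unsigned char data[];  /* object payload */
} obj_hdr;

static unsigned long mock_zs_malloc(size_t size)
{
    obj_hdr *h = malloc(sizeof(*h) + size);
    h->size = size;
    return (unsigned long)h;  /* opaque handle returned to the caller */
}

static size_t mock_zs_get_object_size(unsigned long handle)
{
    return ((obj_hdr *)handle)->size;
}

static void mock_zs_free(unsigned long handle)
{
    free((obj_hdr *)handle);
}
```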
kfree()'ed buffer.
Fixes bug 50081:
https://bugzilla.kernel.org/show_bug.cgi?id=50081
Signed-off-by: Nitin Gupta
Reported-by: Mihail Kasadjikov
Reported-by: Tomas M
Reviewed-by: Minchan Kim
---
drivers/staging/zram/zram_drv.c | 39 ---
1 file changed, 24
On Mon, Oct 22, 2012 at 1:43 PM, Greg KH wrote:
> On Wed, Oct 10, 2012 at 05:42:18PM -0700, Nitin Gupta wrote:
>> Change 130f315a (staging: zram: remove special handle of uncompressed page)
>> introduced a bug in the handling of incompressible pages which resulted in
>> memo
On 11/21/2012 12:37 AM, Minchan Kim wrote:
Hi all,
Today, I saw the below complaint from lockdep.
As a matter of fact, I knew it a long time ago but forgot about it.
The reason lockdep complains is that zram now uses GFP_KERNEL
in the reclaim path (e.g., __zram_make_request) :(
I can fix it via replacing
On 11/22/2012 12:31 AM, Minchan Kim wrote:
Hi Nitin,
On Wed, Nov 21, 2012 at 10:06:33AM -0800, Nitin Gupta wrote:
On 11/21/2012 12:37 AM, Minchan Kim wrote:
Hi all,
Today, I saw the below complaint from lockdep.
As a matter of fact, I knew it a long time ago but forgot about it.
The reason lockdep
The commit c8f2f0db1 ("zram: Fix handling of incompressible pages")
introduced a bug which caused a kunmap()'ed buffer to be used in case
of partial writes where the data was found to be incompressible.
This fixes bug 50081:
https://bugzilla.kernel.org/show_bug.cgi?id=50081
Signed-off
On 08/13/2012 07:35 PM, Greg Kroah-Hartman wrote:
> On Wed, Aug 08, 2012 at 03:12:13PM +0900, Minchan Kim wrote:
>> This patchset promotes zram/zsmalloc from staging.
>> Both are very clean and zram is used by many embedded product
>> for a long time.
>>
>> [1-3] are patches not merged into
On 09/07/2012 07:37 AM, Konrad Rzeszutek Wilk wrote:
significant design challenges exist, many of which are already resolved in
the new codebase ("zcache2"). These design issues include:
.. snip..
Before other key mm maintainers read and comment on zcache, I think
it would be most wise to
On 07/02/2012 02:15 PM, Seth Jennings wrote:
> This patch replaces the page table assisted object mapping
> method, which has x86 dependencies, with an arch-independent
> method that does a simple copy into a temporary per-cpu
> buffer.
>
> While a copy seems like it would be worse than mapping
On Wed, Jul 11, 2012 at 1:32 PM, Seth Jennings
wrote:
> On 07/11/2012 01:26 PM, Nitin Gupta wrote:
>> On 07/02/2012 02:15 PM, Seth Jennings wrote:
>>> This patch replaces the page table assisted object mapping
>>> method, which has x86 dependencies, with a arch-indep
On Mon, Sep 17, 2012 at 1:42 PM, Dan Magenheimer
wrote:
>> From: Nitin Gupta [mailto:ngu...@vflare.org]
>> Subject: Re: [RFC] mm: add support for zsmalloc and zcache
>>
>> The problem is that zbud performs well only when a (compressed) page is
>> either PAGE_SIZE/2
into .c
> 5. zsmalloc: promote to mm/
>
> Minchan Kim (2):
> 6. zram: promote zram from staging
> 7. zram: select ZSMALLOC when ZRAM is configured
>
All the changes look good to me. FWIW, for the entire series:
Acked-by: Nitin Gupta
Thanks for all the work.
Nitin
--
On Tue, Dec 11, 2012 at 10:27 AM, Greg KH wrote:
> On Thu, Nov 29, 2012 at 10:45:09PM -0800, Nitin Gupta wrote:
>> Fixes a bug introduced by commit c8f2f0db1 ("zram: Fix handling
>> of incompressible pages") which caused invalid memory references
>> during disk
On 12/18/2012 07:49 PM, Greg KH wrote:
> On Tue, Dec 18, 2012 at 01:12:05PM -0800, Nitin Gupta wrote:
>> On Tue, Dec 11, 2012 at 10:27 AM, Greg KH wrote:
>>> On Thu, Nov 29, 2012 at 10:45:09PM -0800, Nitin Gupta wrote:
>>>> Fixes a bug introduced by commit
On 12/19/2012 07:08 AM, Greg KH wrote:
> On Tue, Dec 18, 2012 at 11:21:28PM -0800, Nitin Gupta wrote:
>> On 12/18/2012 07:49 PM, Greg KH wrote:
>>> On Tue, Dec 18, 2012 at 01:12:05PM -0800, Nitin Gupta wrote:
>>>> On Tue, Dec 11, 2012 at 10:27 AM, Greg KH
>>
On 12/19/2012 08:17 AM, Greg KH wrote:
> On Wed, Dec 19, 2012 at 07:53:36AM -0800, Nitin Gupta wrote:
>> On 12/19/2012 07:08 AM, Greg KH wrote:
>>> On Tue, Dec 18, 2012 at 11:21:28PM -0800, Nitin Gupta wrote:
>>>> On 12/18/2012 07:49 PM, Greg KH wrote:
>>&g
On Wed, Dec 19, 2012 at 9:39 AM, Mitch Harder
wrote:
> On Wed, Dec 19, 2012 at 11:21 AM, Nitin Gupta wrote:
>> On 12/19/2012 08:17 AM, Greg KH wrote:
>>> On Wed, Dec 19, 2012 at 07:53:36AM -0800, Nitin Gupta wrote:
>>>> On 12/19/2012 07:08 AM, Greg KH wrote:
>&
On 12/11/2012 10:27 AM, Greg KH wrote:
> On Thu, Nov 29, 2012 at 10:45:09PM -0800, Nitin Gupta wrote:
>> Fixes a bug introduced by commit c8f2f0db1 ("zram: Fix handling
>> of incompressible pages") which caused invalid memory references
>> during disk write. Invali
On Mon, Jan 14, 2013 at 11:19 AM, Greg KH wrote:
> On Wed, Dec 19, 2012 at 11:39:15AM -0600, Mitch Harder wrote:
>> On Wed, Dec 19, 2012 at 11:21 AM, Nitin Gupta wrote:
>> > On 12/19/2012 08:17 AM, Greg KH wrote:
>> >> On Wed, Dec 19, 2012 at 07:53:36AM -0800, Niti
On 01/20/2013 09:18 PM, Minchan Kim wrote:
On Fri, Jan 18, 2013 at 01:34:18PM -0800, Nitin Gupta wrote:
On Wed, Jan 16, 2013 at 6:12 PM, Minchan Kim wrote:
Lockdep complains about recursive deadlock of zram->init_lock.
[1] made it false positive because we can't request IO to zram
bef
On 01/21/2013 04:07 PM, Minchan Kim wrote:
Now zram allocates a new page with GFP_KERNEL in the zram I/O path
if the IO is partial. Unfortunately, it may cause a deadlock with
the reclaim path, so this patch solves the problem.
Cc: Nitin Gupta
Cc: Jerome Marchand
Cc: sta...@vger.kernel.org
Signed-off
On Tue, Jan 22, 2013 at 3:52 PM, Minchan Kim wrote:
> Now zram allocates a new page with GFP_KERNEL in the zram I/O path
> if the IO is partial. Unfortunately, it may cause a deadlock with
> the reclaim path, so this patch solves the problem.
>
> Cc: Jerome Marchand
> Cc: sta...@vger.kernel.org
d bio page.
- Partial (non-PAGE_SIZE) write with incompressible data: In this
case, reference was made to a kfree()'ed buffer.
Fixes bug 50081:
https://bugzilla.kernel.org/show_bug.cgi?id=50081
Signed-off-by: Nitin Gupta
Reported-by: Mihail Kasadjikov
Reported-by: Tomas M
Reviewed-by: Minchan Kim
--
On Wed, Jan 2, 2013 at 8:53 AM, Nitin Gupta wrote:
> Fixes a bug introduced by commit c8f2f0db1 ("zram: Fix handling
> of incompressible pages") which caused invalid memory references
> during disk write. Invalid references could occur in two cases:
> - Incoming data
On Wed, Jan 16, 2013 at 6:12 PM, Minchan Kim wrote:
> Lockdep complains about recursive deadlock of zram->init_lock.
> [1] made it false positive because we can't request IO to zram
> before setting disksize. Anyway, we should shut lockdep up to
> avoid many reporting from user.
>
> This patch
On 08/13/2012 11:22 PM, Minchan Kim wrote:
> Hi Greg,
>
> On Mon, Aug 13, 2012 at 07:35:30PM -0700, Greg Kroah-Hartman wrote:
>> On Wed, Aug 08, 2012 at 03:12:13PM +0900, Minchan Kim wrote:
>>> This patchset promotes zram/zsmalloc from staging.
>>> Both are very clean and zram is used by many
On 02/11/2013 10:16 AM, Greg Kroah-Hartman wrote:
On Mon, Feb 11, 2013 at 10:07:45AM -0800, Davidlohr Bueso wrote:
On Sun, 2013-02-10 at 21:41 -0800, Greg Kroah-Hartman wrote:
On Sun, Feb 10, 2013 at 08:29:06PM -0800, Davidlohr Bueso wrote:
Instead of having one sysfs file per zram statistic,
On Mon, 2019-09-16 at 13:16 -0700, David Rientjes wrote:
> On Fri, 16 Aug 2019, Nitin Gupta wrote:
>
> > For some applications we need to allocate almost all memory as
> > hugepages. However, on a running system, higher order allocations can
> > fail if the memory is
ain scenarios to reduce hugepage allocation latencies. This callback
interface allows drivers to drive compaction based on their own policies
like the current level of external fragmentation for a particular order,
system load etc.
Signed-off-by: Nitin Gupta
---
include/linux/compaction.h |
> -Original Message-
> From: owner-linux...@kvack.org On Behalf
> Of Michal Hocko
> Sent: Tuesday, September 10, 2019 1:19 PM
> To: Nitin Gupta
> Cc: a...@linux-foundation.org; vba...@suse.cz;
> mgor...@techsingularity.net; dan.j.willi...@intel.com;
> khalid.
On Wed, 2019-09-11 at 08:45 +0200, Michal Hocko wrote:
> On Tue 10-09-19 22:27:53, Nitin Gupta wrote:
> [...]
> > > On Tue 10-09-19 13:07:32, Nitin Gupta wrote:
> > > > For some applications we need to allocate almost all memory as
> > > > hugepages.
>
On Thu, May 28, 2020 at 2:50 AM Vlastimil Babka wrote:
>
> On 5/28/20 11:15 AM, Holger Hoffstätte wrote:
> >
> > On 5/18/20 8:14 PM, Nitin Gupta wrote:
> > [patch v5 :)]
> >
> > I've been successfully using this in my tree and it works great, but a
> > f
On Wed, May 27, 2020 at 3:18 AM Vlastimil Babka wrote:
>
> On 5/18/20 8:14 PM, Nitin Gupta wrote:
> > For some applications, we need to allocate almost all memory as
> > hugepages. However, on a running system, higher-order allocations can
> > fail if the memory is
erred maximum number of times
with HPAGE_FRAG_CHECK_INTERVAL_MSEC of wait between each check
(=> ~30 seconds between retries).
[1] https://patchwork.kernel.org/patch/11098289/
[2] https://lore.kernel.org/linux-mm/20161230131412.gi13...@dhcp22.suse.cz/
[3] https://lwn.net/Articles/817905/
Signed-off-by: Nit
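The deferred-retry behaviour described above (a bounded number of checks with HPAGE_FRAG_CHECK_INTERVAL_MSEC between them, ~30 seconds in total) can be sketched as a plain polling loop. The constants, names, and predicate below are hypothetical, and the sleep is elided so the sketch runs instantly:

```c
#include <assert.h>

#define HPAGE_FRAG_CHECK_INTERVAL_MSEC 5000  /* hypothetical: 6 x 5s ~= 30s */
#define MAX_FRAG_CHECKS 6

/* Poll `check` up to MAX_FRAG_CHECKS times. An in-kernel version would
 * sleep HPAGE_FRAG_CHECK_INTERVAL_MSEC between polls (elided here).
 * Returns 1 if the check succeeded, 0 if the work stays deferred. */
static int poll_with_backoff(int (*check)(void *), void *arg, int *attempts)
{
    for (int i = 1; i <= MAX_FRAG_CHECKS; i++) {
        *attempts = i;
        if (check(arg))
            return 1;
        /* msleep(HPAGE_FRAG_CHECK_INTERVAL_MSEC);  -- in-kernel */
    }
    return 0;
}

/* Demo predicate: succeeds once the countdown in *arg reaches zero. */
static int ready_after(void *arg)
{
    int *left = arg;
    return --(*left) <= 0;
}
```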
this based upon their workload. More comments below.
>
Tunables like the one this patch introduces, and similar ones like 'swappiness'
will always require some experimentation from the user.
> On Mon, 2020-05-18 at 11:14 -0700, Nitin Gupta wrote:
> > For some applications, we need to
On 1/19/18 4:49 AM, Michal Hocko wrote:
> On Thu 18-01-18 15:33:16, Nitin Gupta wrote:
>> From: Nitin Gupta
>>
>> Currently, if the THP enabled policy is "always", or the mode
>> is "madvise" and a region is marked as MADV_HUGEPAGE, a hugepage
&g
On 01/25/2018 01:13 PM, Mel Gorman wrote:
> On Thu, Jan 25, 2018 at 11:41:03AM -0800, Nitin Gupta wrote:
>>>> It's not really about memory scarcity but a more efficient use of it.
>>>> Applications may want hugepage benefits without requiring any changes to
>
On Fri, Oct 20, 2017 at 12:59 PM, Kirill A. Shutemov
wrote:
> With boot-time switching between paging mode we will have variable
> MAX_PHYSMEM_BITS.
>
> Let's use the maximum variable possible for CONFIG_X86_5LEVEL=y
> configuration to define zsmalloc data structures.
>
> The patch introduces
ble for CONFIG_X86_5LEVEL=y
>> configuration to define zsmalloc data structures.
>>
>> The patch introduces MAX_POSSIBLE_PHYSMEM_BITS to cover such case.
>> It also suits well to handle PAE special case.
>>
>> Signed-off-by: Kirill A. Shutemov
>> Cc: Minchan Kim
&
On 01/24/2018 04:47 PM, Zi Yan wrote:
With this change, whenever an application issues MADV_DONTNEED on a
memory region, the region is marked as "space-efficient". For such
regions, a hugepage is not immediately allocated on first write.
>>> Kirill didn't like it in the previous
On Thu, 2019-09-12 at 17:11 +0530, Bharath Vedartham wrote:
> Hi Nitin,
> On Wed, Sep 11, 2019 at 10:33:39PM +, Nitin Gupta wrote:
> > On Wed, 2019-09-11 at 08:45 +0200, Michal Hocko wrote:
> > > On Tue 10-09-19 22:27:53, Nitin Gupta wrote:
> > > [...]
> > &
On Thu, 2019-08-22 at 09:51 +0100, Mel Gorman wrote:
> As unappealing as it sounds, I think it is better to try improve the
> allocation latency itself instead of trying to hide the cost in a kernel
> thread. It's far harder to implement as compaction is not easy but it
> would be more obvious
On Tue, 2019-08-20 at 10:46 +0200, Vlastimil Babka wrote:
> > This patch is largely based on ideas from Michal Hocko posted here:
> > https://lore.kernel.org/linux-mm/20161230131412.gi13...@dhcp22.suse.cz/
> >
> > Testing done (on x86):
> > - Set
On Mon, 2019-08-26 at 12:47 +0100, Mel Gorman wrote:
> On Thu, Aug 22, 2019 at 09:57:22PM +0000, Nitin Gupta wrote:
> > > Note that proactive compaction may reduce allocation latency but
> > > it is not
> > > free either. Even though the scanning and migratio
> -Original Message-
> From: owner-linux...@kvack.org On Behalf
> Of Matthew Wilcox
> Sent: Tuesday, August 20, 2019 3:21 PM
> To: Nitin Gupta
> Cc: a...@linux-foundation.org; vba...@suse.cz;
> mgor...@techsingularity.net; mho...@suse.com;
> dan.j.willi...@i
> -Original Message-
> From: owner-linux...@kvack.org On Behalf
> Of Mel Gorman
> Sent: Thursday, August 22, 2019 1:52 AM
> To: Nitin Gupta
> Cc: a...@linux-foundation.org; vba...@suse.cz; mho...@suse.com;
> dan.j.willi...@intel.com; Yu Zhao ; Matthew Wilcox
G_CHECK_INTERVAL_MSEC of wait between each check
(=> ~30 seconds between retries).
[1] https://patchwork.kernel.org/patch/11098289/
Signed-off-by: Nitin Gupta
To: Mel Gorman
To: Michal Hocko
To: Vlastimil Babka
CC: Matthew Wilcox
CC: Andrew Morton
CC: Mike Kravetz
CC: Joonsoo Kim
CC: Dav
INTERVAL_MSEC of wait between each check
(=> ~30 seconds between retries).
[1] https://patchwork.kernel.org/patch/11098289/
Signed-off-by: Nitin Gupta
To: Mel Gorman
To: Michal Hocko
To: Vlastimil Babka
CC: Matthew Wilcox
CC: Andrew Morton
CC: Mike Kravetz
CC: Joonsoo Kim
CC: Dav
On Mon, Jun 1, 2020 at 12:48 PM Nitin Gupta wrote:
>
> For some applications, we need to allocate almost all memory as
> hugepages. However, on a running system, higher-order allocations can
> fail if the memory is fragmented. Linux kernel currently does on-demand
> compaction as
On 6/17/20 1:53 PM, Andrew Morton wrote:
On Tue, 16 Jun 2020 13:45:27 -0700 Nitin Gupta wrote:
For some applications, we need to allocate almost all memory as
hugepages. However, on a running system, higher-order allocations can
fail if the memory is fragmented. Linux kernel currently does
Proactive compaction uses per-node/zone "fragmentation score" which
is always in range [0, 100], so use unsigned type of these scores
as well as for related constants.
Signed-off-by: Nitin Gupta
---
include/linux/compaction.h | 4 ++--
kernel/sysctl.c | 2 +-
mm/co
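As a rough illustration of why unsigned is the natural type here, a toy fragmentation score bounded to [0, 100] might look like the following. The formula is invented for this sketch; the kernel's actual score in mm/compaction.c is computed differently:

```c
#include <assert.h>

/* Toy per-zone fragmentation score in [0, 100]: 0 means all free memory
 * sits in the largest contiguous block, 100 means none of it does. Since
 * the score can never be negative, unsigned is the natural type for it
 * and for any thresholds compared against it. */
static unsigned int frag_score(unsigned long free_pages,
                               unsigned long largest_block_pages)
{
    if (free_pages == 0)
        return 0;
    if (largest_block_pages > free_pages)
        largest_block_pages = free_pages;  /* clamp inconsistent input */
    return (unsigned int)(100 - (100 * largest_block_pages) / free_pages);
}
```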
On 6/18/20 6:41 AM, Baoquan He wrote:
> On 06/17/20 at 06:03pm, Nitin Gupta wrote:
>> Proactive compaction uses per-node/zone "fragmentation score" which
>> is always in range [0, 100], so use unsigned type of these scores
>> as well as for related constants.
&
On 6/9/20 12:23 PM, Khalid Aziz wrote:
> On Mon, 2020-06-01 at 12:48 -0700, Nitin Gupta wrote:
>> For some applications, we need to allocate almost all memory as
>> hugepages. However, on a running system, higher-order allocations can
>> fail if the memory is fragmented. L
On 6/16/20 2:46 AM, Oleksandr Natalenko wrote:
> Hello.
>
> Please see the notes inline.
>
> On Mon, Jun 15, 2020 at 07:36:14AM -0700, Nitin Gupta wrote:
>> For some applications, we need to allocate almost all memory as
>> hugepages. However, on a running system, h
erred maximum number of times
with HPAGE_FRAG_CHECK_INTERVAL_MSEC of wait between each check
(=> ~30 seconds between retries).
[1] https://patchwork.kernel.org/patch/11098289/
[2] https://lore.kernel.org/linux-mm/20161230131412.gi13...@dhcp22.suse.cz/
[3] https://lwn.net/Articles/817905/
Signed-off-by: Nit
On 6/22/20 7:26 PM, Nathan Chancellor wrote:
> On Tue, Jun 16, 2020 at 01:45:27PM -0700, Nitin Gupta wrote:
>> For some applications, we need to allocate almost all memory as
>> hugepages. However, on a running system, higher-order allocations can
>> fail if the memory is fra
On 6/22/20 9:57 PM, Nathan Chancellor wrote:
> On Mon, Jun 22, 2020 at 09:32:12PM -0700, Nitin Gupta wrote:
>> On 6/22/20 7:26 PM, Nathan Chancellor wrote:
>>> On Tue, Jun 16, 2020 at 01:45:27PM -0700, Nitin Gupta wrote:
>>>> For some applications, we need
Fix compile error when COMPACTION_HPAGE_ORDER is assigned
to HUGETLB_PAGE_ORDER. The correct way to check if this
constant is defined is to check for CONFIG_HUGETLBFS.
Signed-off-by: Nitin Gupta
To: Andrew Morton
Reported-by: Nathan Chancellor
Tested-by: Nathan Chancellor
---
mm/compaction.c
On 6/15/20 7:25 AM, Oleksandr Natalenko wrote:
> On Mon, Jun 15, 2020 at 10:29:01AM +0200, Oleksandr Natalenko wrote:
>> Just to let you know, this fails to compile for me with THP disabled on
>> v5.8-rc1:
>>
>> CC mm/compaction.o
>> In file included from ./include/linux/dev_printk.h:14,
>>
erred maximum number of times
with HPAGE_FRAG_CHECK_INTERVAL_MSEC of wait between each check
(=> ~30 seconds between retries).
[1] https://patchwork.kernel.org/patch/11098289/
[2] https://lore.kernel.org/linux-mm/20161230131412.gi13...@dhcp22.suse.cz/
[3] https://lwn.net/Articles/817905/
Signed-off-by: Nit
pagesz=256M hugepagesz=256M hugepages=300 hugepagesz=8M
hugepages=1
Signed-off-by: Nitin Gupta
---
Changelog v2 vs v3:
- Remove unused label in tsb.S (David)
- Order local variables from longest to shortest line (David)
Changelog v1 vs v2:
- Fix warning due to unused __flush_huge_tsb_one(
SLUB has better debugging support.
Signed-off-by: Nitin Gupta
---
arch/sparc/configs/sparc64_defconfig | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/sparc/configs/sparc64_defconfig
b/arch/sparc/configs/sparc64_defconfig
index 3583d67..0a615b0 100644
--- a/arch
pagesz=256M hugepagesz=256M hugepages=300 hugepagesz=8M
hugepages=1
Signed-off-by: Nitin Gupta
---
Changelog v3 vs v4:
- Remove incorrect WARN_ON in __flush_huge_tsb_one_entry()
Changelog v2 vs v3:
- Remove unused label in tsb.S (David)
- Order local variables from longest to shortes
On 12/27/2016 09:34 AM, David Miller wrote:
> From: Nitin Gupta
> Date: Tue, 13 Dec 2016 10:03:18 -0800
>
>> +static unsigned int sun4u_huge_tte_to_shift(pte_t entry)
>> +{
>> +unsigned long tte_szbits = pte_val(entry) & _PAGE_SZALL_4V;
>> +un
pagesz=256M hugepagesz=256M hugepages=300 hugepagesz=8M
hugepages=1
Signed-off-by: Nitin Gupta
---
Changelog v4 vs v5:
- Enable hugepage initialization on sun4u (this patch has been
tested only on sun4v).
Changelog v3 vs v4:
- Remove incorrect WARN_ON in __flush_huge_tsb_one_entry()
Cha
Orabug: 25362942
Signed-off-by: Nitin Gupta
---
arch/sparc/include/asm/page_64.h| 3 +-
arch/sparc/include/asm/pgtable_64.h | 5 +++
arch/sparc/include/asm/tsb.h | 35 +-
arch/sparc/kernel/tsb.S | 2 +-
arch/sparc/mm/hugetlbpage.c | 74
On 5/24/17 8:45 PM, David Miller wrote:
> From: Paul Gortmaker
> Date: Wed, 24 May 2017 23:34:42 -0400
>
>> [[PATCH] sparc64: Add 16GB hugepage support] On 24/05/2017 (Wed 17:29) Nitin
>> Gupta wrote:
>>
>>> Orabug: 25362942
>>>
>>> Signed-o
An incorrect huge page alignment check caused
mmap failure for 64K pages when MAP_FIXED is used
with an address not aligned to HPAGE_SIZE.
Orabug: 25885991
Signed-off-by: Nitin Gupta
---
arch/sparc/include/asm/hugetlb.h | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git
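Conceptually, the fix is to validate the address against the alignment of the page size actually being mapped, rather than always against HPAGE_SIZE. A sketch under assumed sizes (8M standing in for sparc64's HPAGE_SIZE, plus the 64K huge page; the helper name is hypothetical):

```c
#include <assert.h>

/* Hypothetical sizes for illustration only. The conceptual bug: a
 * MAP_FIXED address for a 64K huge page mapping was rejected unless it
 * was HPAGE_SIZE (8M) aligned, even though 64K alignment suffices. */
#define HPAGE_SIZE (8UL << 20)   /* 8M */
#define HPAGE_64K  (64UL << 10)  /* 64K */

/* Check alignment against the page size actually requested. */
static int addr_ok_for_size(unsigned long addr, unsigned long page_size)
{
    return (addr & (page_size - 1)) == 0;
}
```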
pagesz=256M hugepagesz=256M hugepages=300 hugepagesz=8M
hugepages=1
Signed-off-by: Nitin Gupta
---
arch/sparc/include/asm/page_64.h | 3 +-
arch/sparc/include/asm/pgtable_64.h | 23 +++--
arch/sparc/include/asm/tlbflush_64.h | 5 +-
arch/sparc/kernel/tsb.S | 21 +
pagesz=256M hugepagesz=256M hugepages=300 hugepagesz=8M
hugepages=1
Signed-off-by: Nitin Gupta
---
Changelog v1 vs v2:
- Fix warning due to unused __flush_huge_tsb_one() when
CONFIG_HUGETLB is not defined.
arch/sparc/include/asm/page_64.h | 3 +-
arch/sparc/include/asm/pgtabl
Make sure the start address is aligned to PMD_SIZE
boundary when freeing page table backing a hugepage
region. The issue was causing segfaults when a region
backed by 64K pages was unmapped since such a region
is in general not PMD_SIZE aligned.
Signed-off-by: Nitin Gupta
---
arch/sparc/mm
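The fix boils down to rounding the region to PMD_SIZE boundaries before walking and freeing the page tables. A minimal sketch of the arithmetic, assuming sparc64's 8M PMD span (the helper names are invented):

```c
#include <assert.h>

/* PMD_SIZE mirrors sparc64's 8M PMD span (an assumption for this
 * sketch). A region backed by 64K pages is in general not PMD_SIZE
 * aligned, so the start must be rounded down (and the end rounded up)
 * before the walk that frees page-table pages. */
#define PMD_SIZE (8UL << 20)
#define PMD_MASK (~(PMD_SIZE - 1))

static unsigned long pmd_align_down(unsigned long addr)
{
    return addr & PMD_MASK;
}

static unsigned long pmd_align_up(unsigned long addr)
{
    return (addr + PMD_SIZE - 1) & PMD_MASK;
}
```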
get_user_pages() is used to do direct IO. It already
handles the case where the address range is backed
by PMD huge pages. This patch now adds the case where
the range could be backed by PUD huge pages.
Signed-off-by: Nitin Gupta
---
arch/sparc/include/asm/pgtable_64.h | 15 ++--
arch
Flatten out nested code structure in huge_pte_offset()
and huge_pte_alloc().
Signed-off-by: Nitin Gupta
---
arch/sparc/mm/hugetlbpage.c | 54 +
1 file changed, 20 insertions(+), 34 deletions(-)
diff --git a/arch/sparc/mm/hugetlbpage.c b/arch/sparc/mm
Signed-off-by: Nitin Gupta
---
Changelog v2 vs v1:
- Remove redundant brgez,pn (Bob Picco)
- Remove unnecessary label rename from 700 to 701 (Rob Gardner)
- Add patch description (Paul)
- Add 16G case to get_user_pages()
arch/sparc/include/asm/page_64.h| 3 +-
arch/sparc/include/asm
The function assumes that each PMD points to head of a
huge page. This is not correct as a PMD can point to
start of any 8M region within a, say, 256M hugepage. The
fix ensures that it points to the correct head of any PMD
huge page.
Signed-off-by: Nitin Gupta
---
arch/sparc/mm/gup.c | 2 ++
1
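The head-page fix is, at its core, an alignment computation: a PMD inside a large hugepage points at some 8M sub-region, and the head is recovered by aligning the address down to the hugepage size. A minimal sketch with assumed sizes (8M PMD span, 256M hugepage; the helper is hypothetical):

```c
#include <assert.h>

/* Sizes assumed for illustration: a PMD maps 8M, and a large hugepage
 * spans 256M. A PMD entry inside a 256M page may point at any of its
 * 8M sub-regions, so the head cannot be taken from the PMD directly;
 * it must be computed by aligning down to the hugepage size. */
#define PMD_SPAN  (8UL << 20)
#define HUGE_256M (256UL << 20)

static unsigned long huge_page_head(unsigned long pmd_addr,
                                    unsigned long huge_size)
{
    return pmd_addr & ~(huge_size - 1);
}
```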
Please ignore this patch series. I will resend again with correct email
headers.
Nitin
On 6/19/17 2:48 PM, Nitin Gupta wrote:
Adds support for 16GB hugepage size. To use this page size
use kernel parameters as:
default_hugepagesz=16G hugepagesz=16G hugepages=10
Testing:
Tested
get_user_pages() is used to do direct IO. It already
handles the case where the address range is backed
by PMD huge pages. This patch now adds the case where
the range could be backed by PUD huge pages.
Signed-off-by: Nitin Gupta
---
arch/sparc/include/asm/pgtable_64.h | 15 ++--
arch
Signed-off-by: Nitin Gupta
---
Changelog v3 vs v2:
- Fixed email headers so the subject shows up correctly
Changelog v2 vs v1:
- Remove redundant brgez,pn (Bob Picco)
- Remove unnecessary label rename from 700 to 701 (Rob Gardner)
- Add patch description (Paul)
- Add 16G case to get_user_pages
Flatten out nested code structure in huge_pte_offset()
and huge_pte_alloc().
Signed-off-by: Nitin Gupta
---
arch/sparc/mm/hugetlbpage.c | 54 +
1 file changed, 20 insertions(+), 34 deletions(-)
diff --git a/arch/sparc/mm/hugetlbpage.c b/arch/sparc/mm
The function assumes that each PMD points to head of a
huge page. This is not correct as a PMD can point to
start of any 8M region within a, say, 256M hugepage. The
fix ensures that it points to the correct head of any PMD
huge page.
Signed-off-by: Nitin Gupta
---
arch/sparc/mm/gup.c | 2 ++
1
Flatten out nested code structure in huge_pte_offset()
and huge_pte_alloc().
Signed-off-by: Nitin Gupta
---
arch/sparc/mm/hugetlbpage.c | 54 +
1 file changed, 20 insertions(+), 34 deletions(-)
diff --git a/arch/sparc/mm/hugetlbpage.c b/arch/sparc/mm
Signed-off-by: Nitin Gupta
---
arch/sparc/include/asm/page_64.h| 3 +-
arch/sparc/include/asm/pgtable_64.h | 5 +++
arch/sparc/include/asm/tsb.h | 30 +++
arch/sparc/kernel/tsb.S | 2 +-
arch/sparc/mm/hugetlbpage.c | 74
get_user_pages() is used to do direct IO. It already
handles the case where the address range is backed
by PMD huge pages. This patch now adds the case where
the range could be backed by PUD huge pages.
Signed-off-by: Nitin Gupta
---
arch/sparc/include/asm/pgtable_64.h | 15 ++--
arch
The function assumes that each PMD points to head of a
huge page. This is not correct as a PMD can point to
start of any 8M region within a, say, 256M hugepage. The
fix ensures that it points to the correct head of any PMD
huge page.
Signed-off-by: Nitin Gupta
---
arch/sparc/mm/gup.c | 2 ++
1
The function assumes that each PMD points to head of a
huge page. This is not correct as a PMD can point to
start of any 8M region within a, say, 256M hugepage. The
fix ensures that it points to the correct head of any PMD
huge page.
Signed-off-by: Nitin Gupta
---
arch/sparc/mm/gup.c | 2 ++
1
Hi Julian,
On 6/22/17 3:53 AM, Julian Calaby wrote:
On Thu, Jun 22, 2017 at 7:50 AM, Nitin Gupta wrote:
The function assumes that each PMD points to head of a
huge page. This is not correct as a PMD can point to
start of any 8M region within a, say, 256M hugepage. The
fix ensures
The function assumes that each PMD points to head of a
huge page. This is not correct as a PMD can point to
start of any 8M region within a, say, 256M hugepage. The
fix ensures that it points to the correct head of any PMD
huge page.
Cc: Julian Calaby
Signed-off-by: Nitin Gupta
---
Changes since
pagesz=256M hugepagesz=256M hugepages=300 hugepagesz=8M
hugepages=1
Signed-off-by: Nitin Gupta
---
Changelog v6 vs v5:
- Fix _flush_huge_tsb_one_entry: add correct offset to base vaddr
Changelog v4 vs v5:
- Enable hugepage initialization on sun4u
Changelog v3 vs v4:
- Remove incorrect W
Patch "sparc64: Add 64K page size support"
unconditionally used __flush_huge_tsb_one_entry()
which is available only when hugetlb support is
enabled.
Another issue was incorrect TSB flushing for 64K
pages in flush_tsb_user().
Signed-off-by: Nitin Gupta
---
arch/sparc/mm/hugetlbp
This patch depends on:
[v6] sparc64: Multi-page size support
- Testing
Tested on Sonoma by running stream benchmark instance which allocated
48G worth of 64K pages.
boot params: default_hugepagesz=64K hugepagesz=64K hugepages=1310720
Signed-off-by: Nitin Gupta
---
arch/sparc/include/asm
Signed-off-by: Nitin Gupta
---
arch/sparc/include/asm/page_64.h | 3 ++-
arch/sparc/mm/hugetlbpage.c | 7 +++
arch/sparc/mm/init_64.c | 4
3 files changed, 13 insertions(+), 1 deletion(-)
diff --git a/arch/sparc/include/asm/page_64.h b/arch/sparc/include/asm/page_64.h