[PATCH] mn10300: Use is_vmalloc_addr

2017-09-30 Thread Min-Hua Chen
Use is_vmalloc_addr() to check whether an address is a vmalloc address
instead of checking VMALLOC_START and VMALLOC_END manually.

Signed-off-by: Min-Hua Chen <orca.c...@gmail.com>
---
 arch/mn10300/kernel/gdb-stub.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/mn10300/kernel/gdb-stub.c b/arch/mn10300/kernel/gdb-stub.c
index a128c57..f69026a 100644
--- a/arch/mn10300/kernel/gdb-stub.c
+++ b/arch/mn10300/kernel/gdb-stub.c
@@ -441,7 +441,7 @@ static const unsigned char gdbstub_insn_sizes[256] =
 static int __gdbstub_mark_bp(u8 *addr, int ix)
 {
/* vmalloc area */
-   if (((u8 *) VMALLOC_START <= addr) && (addr < (u8 *) VMALLOC_END))
+   if (is_vmalloc_addr((void *)addr))
goto okay;
/* SRAM, SDRAM */
if (((u8 *) 0x80000000UL <= addr) && (addr < (u8 *) 0xa0000000UL))
-- 
2.7.4
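
For context, is_vmalloc_addr() wraps exactly this kind of range check
behind one helper (the in-kernel definition lives in include/linux/mm.h).
A minimal userspace model, with made-up VMALLOC_START/VMALLOC_END values:

#include <stdbool.h>
#include <stdio.h>

#define VMALLOC_START 0x70000000UL	/* hypothetical layout */
#define VMALLOC_END   0x7c000000UL

static bool is_vmalloc_addr(const void *x)
{
	unsigned long addr = (unsigned long)x;

	return addr >= VMALLOC_START && addr < VMALLOC_END;
}

int main(void)
{
	printf("%d\n", is_vmalloc_addr((void *)0x71000000UL));	/* 1 */
	printf("%d\n", is_vmalloc_addr((void *)0x10000000UL));	/* 0 */
	return 0;
}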



Re: [PATCHv3 1/2] arm: fix non-section-aligned low memory mapping

2015-06-10 Thread Min-Hua Chen
On Wed, Jun 10, 2015 at 11:40:59PM +0100, Russell King - ARM Linux wrote:
> On Thu, Jun 11, 2015 at 02:59:32AM +0800, Min-Hua Chen wrote:
> > In current design, the memblock.current_limit is set to
> > a section-aligned value in sanity_check_meminfo().
> > 
> > However, the section-aligned memblock may become non-section-aligned
> > after arm_memblock_init(). For example, the first section-aligned
> > memblock is 0x00000000-0x01000000 and sanity_check_meminfo sets
> > current_limit to 0x01000000. After arm_memblock_init, two memory blocks
> > [0x00c00000 - 0x00d00000] and [0x00ff0000 - 0x01000000] are reserved
> > by memblock_reserve(), and the original memory block
> > [0x00000000-0x01000000] becomes:
> 
> There isn't a problem with memblock_reserve().  That just marks the
> memory as reserved, it doesn't steal the memory from the lowmem
> mappings - in fact, it is still expected that reserved memory
> claimed in this way will be mapped.
> 
> Somehow, I don't think this is what you're doing though, because you
> go on to describe a problem which can only happen if you steal memory
> after arm_memblock_init() has returned.

Yes, you are right. The problem is not caused by memblock_reserve().
It's caused by the memory reserving code in early_init_fdt_scan_reserved_mem(),
which is in arm_memblock_init().

The memory reservation code in of_reserved_mem.c allows reserved
memory blocks to have a "no-map" property. When a reserved-memory
node is marked "no-map", its mapping is removed by memblock_remove(),
just as arm_memblock_steal() does.
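
To make the splitting concrete, a small userspace model of a no-map
carve-out, using the example addresses from the quoted commit message
(16MB block, 1MB carve-out, 2MB sections assumed):

#include <stdio.h>

struct region { unsigned long start, end; };

int main(void)
{
	struct region mem   = { 0x00000000UL, 0x01000000UL };	/* 16MB block */
	struct region nomap = { 0x00c00000UL, 0x00d00000UL };	/* "no-map" carve-out */

	/* memblock_remove()-style split: two pieces survive, and the
	 * second one no longer starts on a 2MB section boundary */
	printf("[%#010lx-%#010lx]\n", mem.start, nomap.start);
	printf("[%#010lx-%#010lx]\n", nomap.end, mem.end);
	return 0;
}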

> Don't do this.  There is a specific point in the boot sequence where you
> are permitted to steal memory, which is done inside arm_memblock_init().
> Stealing outside of that is not permitted.
> 
> arm_memblock_steal() is written to BUG_ON() if you attempt to do this
> outside of the permissible code paths.
> 
> -- 
> FTTC broadband for 0.8mile line: currently at 10.5Mbps down 400kbps up
> according to speedtest.net.


[PATCHv3 0/2] creating non-section aligned lowmem mappings

2015-06-10 Thread Min-Hua Chen
Hi,

I found problems when I reserved non-section-aligned memory blocks
via the device tree. The problem is that memblock_set_current_limit()
is pointed at the first section-aligned memblock early on (in
sanity_check_meminfo()), but that memblock may later be split into
non-section-aligned memblocks by the memblock_reserve() calls in
arm_memblock_init().


*** BLURB HERE ***

Min-Hua Chen (2):
  arm: fix non-section-aligned low memory mapping
  arm: use max_lowmem_limit in find_limit()

 arch/arm/mm/init.c |2 +-
 arch/arm/mm/mmu.c  |   48 ++--
 2 files changed, 15 insertions(+), 35 deletions(-)

-- 
1.7.10.4



[PATCHv3 2/2] arm: use max_lowmem_limit in find_limit()

2015-06-10 Thread Min-Hua Chen
In commit 1c2f87c22566cd057bc8cde10c37ae9da1a1bb76, max_low is
set from memblock_get_current_limit(). However, memblock.current_limit
can be changed by memblock_set_current_limit() at any point before
find_limits().

It's better to use arm_lowmem_limit as max_low for two reasons:
first, arm_lowmem_limit cannot be changed through a public API; second,
high_memory is derived from arm_lowmem_limit, making it the natural
limit of the low memory area in bootmem_init().

Signed-off-by: Min-Hua Chen <orca.c...@gmail.com>
---
 arch/arm/mm/init.c |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index be92fa0..b4f9513 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -89,7 +89,7 @@ __tagtable(ATAG_INITRD2, parse_tag_initrd2);
 static void __init find_limits(unsigned long *min, unsigned long *max_low,
   unsigned long *max_high)
 {
-   *max_low = PFN_DOWN(memblock_get_current_limit());
+   *max_low = PFN_DOWN(arm_lowmem_limit);
*min = PFN_UP(memblock_start_of_DRAM());
*max_high = PFN_DOWN(memblock_end_of_DRAM());
 }
-- 
1.7.10.4
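
For reference, the PFN conversions used in find_limits() round in
opposite directions; the definitions below match include/linux/pfn.h,
and the limit value is only an example:

#include <stdio.h>

#define PAGE_SHIFT  12
#define PAGE_SIZE   (1UL << PAGE_SHIFT)
#define PFN_UP(x)   (((x) + PAGE_SIZE - 1) >> PAGE_SHIFT)
#define PFN_DOWN(x) ((x) >> PAGE_SHIFT)

int main(void)
{
	unsigned long arm_lowmem_limit = 0x30000000UL;	/* example value */

	/* max_low rounds down so the last page is fully below the limit */
	printf("max_low = %lu\n", PFN_DOWN(arm_lowmem_limit));
	/* a start address rounds up instead, as *min does */
	printf("min     = %lu\n", PFN_UP(0x20000800UL));
	return 0;
}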



[PATCHv3 1/2] arm: fix non-section-aligned low memory mapping

2015-06-10 Thread Min-Hua Chen
In the current design, memblock.current_limit is set to
a section-aligned value in sanity_check_meminfo().

However, a section-aligned memblock may become non-section-aligned
after arm_memblock_init(). For example, the first section-aligned
memblock is 0x00000000-0x01000000 and sanity_check_meminfo sets
current_limit to 0x01000000. After arm_memblock_init, two memory blocks
[0x00c00000 - 0x00d00000] and [0x00ff0000 - 0x01000000] are reserved
by memblock_reserve(), and the original memory block
[0x00000000-0x01000000] becomes:

[0x00000000-0x00c00000]
[0x00d00000-0x00ff0000]

When creating the low memory mapping for [0x00d00000-0x00ff0000],
since the memory block is non-section-aligned, a second-level
page table must be created. But current_limit is set to 0x01000000,
so it is possible to allocate an unmapped memory block.

call flow:

setup_arch
 + sanity_check_meminfo
 + arm_memblock_init
 + paging_init
+ map_lowmem
+ bootmem_init

Move the memblock_set_current_limit() logic to map_lowmem(), where we
point memblock.current_limit at the first section-aligned memblock.
Since map_lowmem() is called after arm_memblock_init(), the memblock
layout can no longer change, so the first section-aligned limit is
valid during map_lowmem(). This fixes the problem described above.

Signed-off-by: Min-Hua Chen <orca.c...@gmail.com>
---
 arch/arm/mm/mmu.c |   48 ++--
 1 file changed, 14 insertions(+), 34 deletions(-)

diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 4e6ef89..73e64ab 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -1068,7 +1068,6 @@ phys_addr_t arm_lowmem_limit __initdata = 0;
 
 void __init sanity_check_meminfo(void)
 {
-   phys_addr_t memblock_limit = 0;
int highmem = 0;
phys_addr_t vmalloc_limit = __pa(vmalloc_min - 1) + 1;
struct memblock_region *reg;
@@ -1110,43 +1109,10 @@ void __init sanity_check_meminfo(void)
else
arm_lowmem_limit = block_end;
}
-
-   /*
-* Find the first non-section-aligned page, and point
-* memblock_limit at it. This relies on rounding the
-* limit down to be section-aligned, which happens at
-* the end of this function.
-*
-* With this algorithm, the start or end of almost any
-* bank can be non-section-aligned. The only exception
-* is that the start of the bank 0 must be section-
-* aligned, since otherwise memory would need to be
-* allocated when mapping the start of bank 0, which
-* occurs before any free memory is mapped.
-*/
-   if (!memblock_limit) {
-   if (!IS_ALIGNED(block_start, SECTION_SIZE))
-   memblock_limit = block_start;
-   else if (!IS_ALIGNED(block_end, SECTION_SIZE))
-   memblock_limit = arm_lowmem_limit;
-   }
-
}
}
 
high_memory = __va(arm_lowmem_limit - 1) + 1;
-
-   /*
-* Round the memblock limit down to a section size.  This
-* helps to ensure that we will allocate memory from the
-* last full section, which should be mapped.
-*/
-   if (memblock_limit)
-   memblock_limit = round_down(memblock_limit, SECTION_SIZE);
-   if (!memblock_limit)
-   memblock_limit = arm_lowmem_limit;
-
-   memblock_set_current_limit(memblock_limit);
 }
 
 static inline void prepare_page_table(void)
@@ -1331,6 +1297,7 @@ static void __init map_lowmem(void)
struct memblock_region *reg;
phys_addr_t kernel_x_start = round_down(__pa(_stext), SECTION_SIZE);
phys_addr_t kernel_x_end = round_up(__pa(__init_end), SECTION_SIZE);
+   phys_addr_t section_memblock_limit = 0;
 
/* Map all the lowmem memory banks. */
for_each_memblock(memory, reg) {
@@ -1384,6 +1351,19 @@ static void __init map_lowmem(void)
create_mapping(map);
}
}
+
+   /*
+* The first memblock MUST be section-size-aligned. Otherwise
+* there is no valid low memory mapping to create 2nd level
+* page tables.
+* After the first mapping is created, other 2nd level
+* page tables can be created from the memory allocated
+* from the first memblock.
+*/
+   if (!section_memblock_limit) {
+   section_memblock_limit = end;
+   memblock_set_current_limit(section_memblock_limit);
+   }
-- 
1.7.10.4
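
A quick userspace check of the example above, showing why the split
blocks force second-level page tables (SECTION_SIZE is taken as 2MB
here, as with LPAE; classic ARM uses 1MB):

#include <stdio.h>

#define SECTION_SIZE	 (1UL << 21)	/* 2MB, assumed for the example */
#define IS_ALIGNED(x, a) (((x) & ((a) - 1)) == 0)

int main(void)
{
	unsigned long blocks[][2] = {
		{ 0x00000000UL, 0x00c00000UL },
		{ 0x00d00000UL, 0x00ff0000UL },
	};

	for (int i = 0; i < 2; i++)
		printf("[%#010lx-%#010lx] start %saligned, end %saligned\n",
		       blocks[i][0], blocks[i][1],
		       IS_ALIGNED(blocks[i][0], SECTION_SIZE) ? "" : "un",
		       IS_ALIGNED(blocks[i][1], SECTION_SIZE) ? "" : "un");
	return 0;
}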


Re: [PATCH v2] arm: improve non-section-aligned low memory mapping

2015-05-07 Thread Min-Hua Chen
On Wed, May 06, 2015 at 11:32:49AM +0100, Russell King - ARM Linux wrote:
> On Sun, Apr 26, 2015 at 04:47:08PM +0800, Min-Hua Chen wrote:
> > @@ -1384,6 +1351,15 @@ static void __init map_lowmem(void)
> > create_mapping(map);
> > }
> > }
> > +
> > +   /*
> > +* Find the first section-aligned memblock and set
> > +* memblock_limit at it.
> > +*/
> > +   if (!section_memblock_limit && !(end & ~SECTION_MASK)) {
> > +   section_memblock_limit = end;
> > +   memblock_set_current_limit(section_memblock_limit);
> > +   }
> 
> I've suggested an alternative solution to this (which just means changing
> the alignment of the memblock limit to 2x SECTION_SIZE).

Sorry, I do not understand your suggestion very well. Do you mean the
alignment check should be 2x SECTION_SIZE?

if (!section_memblock_limit && !(end & (2 * SECTION_SIZE - 1))) {

I found that this solution is based on the fact that the first memory
block is always SECTION_SIZE-aligned, so we do not have to check the
alignment of the first memblock:

if (!section_memblock_limit) {
section_memblock_limit = end;
memblock_set_current_limit(section_memblock_limit);
}
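
The difference between the two checks, in a standalone form
(SECTION_SHIFT of 21 is assumed; 14MB is section-aligned but not
2x-section-aligned):

#include <stdio.h>

#define SECTION_SHIFT 21			/* assumed: 2MB sections */
#define SECTION_SIZE  (1UL << SECTION_SHIFT)
#define SECTION_MASK  (~(SECTION_SIZE - 1))

int main(void)
{
	unsigned long end = 0x00e00000UL;	/* 14MB */

	/* posted check: is 'end' section-aligned? */
	printf("section-aligned:    %d\n", !(end & ~SECTION_MASK));
	/* suggested variant: aligned to 2x SECTION_SIZE (4MB)? */
	printf("2x-section-aligned: %d\n", !(end & (2 * SECTION_SIZE - 1)));
	return 0;
}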
 
> -- 
> FTTC broadband for 0.8mile line: currently at 10.5Mbps down 400kbps up
> according to speedtest.net.


[PATCH v2] arm: improve non-section-aligned low memory mapping

2015-04-26 Thread Min-Hua Chen
Fix space errors.

In the current design, memblock.current_limit is set to
a section-aligned value in sanity_check_meminfo().

However, a section-aligned memblock may become non-section-aligned
after arm_memblock_init(). For example, the first section-aligned
memblock is 0x00000000-0x01000000 and sanity_check_meminfo sets
current_limit to 0x01000000. After arm_memblock_init, two memory blocks
[0x00c00000 - 0x00d00000] and [0x00ff0000 - 0x01000000] are reserved
by memblock_reserve(), and the original memory block
[0x00000000-0x01000000] becomes:

[0x00000000-0x00c00000]
[0x00d00000-0x00ff0000]

When creating the low memory mapping for [0x00d00000-0x00ff0000],
since the memory block is non-section-aligned, a second-level
page table must be created. But current_limit is set to 0x01000000,
so it is possible to allocate an unmapped memory block.

call flow:

setup_arch
 + sanity_check_meminfo
 + arm_memblock_init
 + paging_init
+ map_lowmem
+ bootmem_init

Move the memblock_set_current_limit() logic to map_lowmem(), where we
point memblock.current_limit at the first section-aligned memblock.
Since map_lowmem() is called after arm_memblock_init(), the memblock
layout can no longer change, so the first section-aligned limit is
valid during map_lowmem(). This fixes the problem described above.

Another change is the implementation of find_limits().
In commit 1c2f87c22566cd057bc8cde10c37ae9da1a1bb76, max_low is
set from memblock_get_current_limit(). However, memblock.current_limit
can be changed by memblock_set_current_limit() at any point before
find_limits().

It's better to use arm_lowmem_limit as max_low for two reasons:
first, arm_lowmem_limit cannot be changed through a public API; second,
high_memory is derived from arm_lowmem_limit, making it the natural
limit of the low memory area in bootmem_init().

thanks,
Min-Hua

Signed-off-by: Min-Hua Chen <orca.c...@gmail.com>
---
 arch/arm/mm/init.c |2 +-
 arch/arm/mm/mmu.c  |   44 ++--
 2 files changed, 11 insertions(+), 35 deletions(-)

diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index 2495c8c..6a618f9 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -138,7 +138,7 @@ void show_mem(unsigned int filter)
 static void __init find_limits(unsigned long *min, unsigned long *max_low,
   unsigned long *max_high)
 {
-   *max_low = PFN_DOWN(memblock_get_current_limit());
+   *max_low = PFN_DOWN(arm_lowmem_limit);
*min = PFN_UP(memblock_start_of_DRAM());
*max_high = PFN_DOWN(memblock_end_of_DRAM());
 }
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 4e6ef89..dbc484d 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -1068,7 +1068,6 @@ phys_addr_t arm_lowmem_limit __initdata = 0;
 
 void __init sanity_check_meminfo(void)
 {
-   phys_addr_t memblock_limit = 0;
int highmem = 0;
phys_addr_t vmalloc_limit = __pa(vmalloc_min - 1) + 1;
struct memblock_region *reg;
@@ -1110,43 +1109,10 @@ void __init sanity_check_meminfo(void)
else
arm_lowmem_limit = block_end;
}
-
-   /*
-* Find the first non-section-aligned page, and point
-* memblock_limit at it. This relies on rounding the
-* limit down to be section-aligned, which happens at
-* the end of this function.
-*
-* With this algorithm, the start or end of almost any
-* bank can be non-section-aligned. The only exception
-* is that the start of the bank 0 must be section-
-* aligned, since otherwise memory would need to be
-* allocated when mapping the start of bank 0, which
-* occurs before any free memory is mapped.
-*/
-   if (!memblock_limit) {
-   if (!IS_ALIGNED(block_start, SECTION_SIZE))
-   memblock_limit = block_start;
-   else if (!IS_ALIGNED(block_end, SECTION_SIZE))
-   memblock_limit = arm_lowmem_limit;
-   }
-
}
}
 
high_memory = __va(arm_lowmem_limit - 1) + 1;
-
-   /*
-* Round the memblock limit down to a section size.  This
-* helps to ensure that we will allocate memory from the
-* last full section, which should be mapped.
-*/
-   if (memblock_limit)
-   memblock_limit = round_down(memblock_limit, SECTION_SIZE);
-   if (!memblock_limit)
-   memblock_limit = arm_lowmem_limit;
-
-   memblock_set_current_limit(memblock_limit);
 }
 
 static inline void prepare_page_table(void)
@@ -1331,6 +1297,7 @@ static void __init map_lowmem(void)
 	struct memblock_region *reg;
 	phys_addr_t kernel_x_start = round_down(__pa(_stext), SECTION_SIZE);
 	phys_addr_t kernel_x_end = round_up(__pa(__init_end), SECTION_SIZE);
+	phys_addr_t section_memblock_limit = 0;
 
 	/* Map all the lowmem memory banks. */
 	for_each_memblock(memory, reg) {
@@ -1384,6 +1351,15 @@ static void __init map_lowmem(void)
 			create_mapping(map);
 		}
 	}
+
+	/*
+	 * Find the first section-aligned memblock and set
+	 * memblock_limit at it.
+	 */
+	if (!section_memblock_limit && !(end & ~SECTION_MASK)) {
+		section_memblock_limit = end;
+		memblock_set_current_limit(section_memblock_limit);
+	}
-- 
1.7.10.4

[PATCH] arm: improve non-section-aligned low memory mapping

2015-04-26 Thread Min-Hua Chen
From d8dbec3573b02afd8a23fe10f92bc0d324b0c951 Mon Sep 17 00:00:00 2001
From: Min-Hua Chen <orca.c...@gmail.com>
Date: Sun, 26 Apr 2015 15:07:44 +0800
Subject: [PATCH] arm: improve non-section-aligned low memory mapping

In the current design, memblock.current_limit is set to
a section-aligned value in sanity_check_meminfo().

However, a section-aligned memblock may become non-section-aligned
after arm_memblock_init(). For example, the first section-aligned
memblock is 0x00000000-0x01000000 and sanity_check_meminfo sets
current_limit to 0x01000000. After arm_memblock_init, two memory blocks
[0x00c00000 - 0x00d00000] and [0x00ff0000 - 0x01000000] are reserved
by memblock_reserve(), and the original memory block
[0x00000000-0x01000000] becomes:

[0x00000000-0x00c00000]
[0x00d00000-0x00ff0000]

When creating the low memory mapping for [0x00d00000-0x00ff0000],
since the memory block is non-section-aligned, a second-level
page table must be created. But current_limit is set to 0x01000000,
so it is possible to allocate an unmapped memory block.

call flow:

setup_arch
 + sanity_check_meminfo
 + arm_memblock_init
 + paging_init
+ map_lowmem
+ bootmem_init

Move the memblock_set_current_limit() logic to map_lowmem(), where we
point memblock.current_limit at the first section-aligned memblock.
Since map_lowmem() is called after arm_memblock_init(), the memblock
layout can no longer change, so the first section-aligned limit is
valid during map_lowmem(). This fixes the problem described above.

Another change is the implementation of find_limits().
In commit 1c2f87c22566cd057bc8cde10c37ae9da1a1bb76, max_low is
set from memblock_get_current_limit(). However, memblock.current_limit
can be changed by memblock_set_current_limit() at any point before
find_limits().

It's better to use arm_lowmem_limit as max_low for two reasons:
first, arm_lowmem_limit cannot be changed through a public API; second,
high_memory is derived from arm_lowmem_limit, making it the natural
limit of the low memory area in bootmem_init().

Signed-off-by: Min-Hua Chen <orca.c...@gmail.com>
---
 arch/arm/mm/init.c |2 +-
 arch/arm/mm/mmu.c  |   44 ++--
 2 files changed, 11 insertions(+), 35 deletions(-)

diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index 2495c8c..6a618f9 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -138,7 +138,7 @@ void show_mem(unsigned int filter)
 static void __init find_limits(unsigned long *min, unsigned long *max_low,
unsigned long *max_high)
 {
- *max_low = PFN_DOWN(memblock_get_current_limit());
+ *max_low = PFN_DOWN(arm_lowmem_limit);
  *min = PFN_UP(memblock_start_of_DRAM());
  *max_high = PFN_DOWN(memblock_end_of_DRAM());
 }
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 4e6ef89..dbc484d 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -1068,7 +1068,6 @@ phys_addr_t arm_lowmem_limit __initdata = 0;

 void __init sanity_check_meminfo(void)
 {
- phys_addr_t memblock_limit = 0;
  int highmem = 0;
  phys_addr_t vmalloc_limit = __pa(vmalloc_min - 1) + 1;
  struct memblock_region *reg;
@@ -1110,43 +1109,10 @@ void __init sanity_check_meminfo(void)
  else
  arm_lowmem_limit = block_end;
  }
-
- /*
- * Find the first non-section-aligned page, and point
- * memblock_limit at it. This relies on rounding the
- * limit down to be section-aligned, which happens at
- * the end of this function.
- *
- * With this algorithm, the start or end of almost any
- * bank can be non-section-aligned. The only exception
- * is that the start of the bank 0 must be section-
- * aligned, since otherwise memory would need to be
- * allocated when mapping the start of bank 0, which
- * occurs before any free memory is mapped.
- */
- if (!memblock_limit) {
- if (!IS_ALIGNED(block_start, SECTION_SIZE))
- memblock_limit = block_start;
- else if (!IS_ALIGNED(block_end, SECTION_SIZE))
- memblock_limit = arm_lowmem_limit;
- }
-
  }
  }

  high_memory = __va(arm_lowmem_limit - 1) + 1;
-
- /*
- * Round the memblock limit down to a section size.  This
- * helps to ensure that we will allocate memory from the
- * last full section, which should be mapped.
- */
- if (memblock_limit)
- memblock_limit = round_down(memblock_limit, SECTION_SIZE);
- if (!memblock_limit)
- memblock_limit = arm_lowmem_limit;
-
- memblock_set_current_limit(memblock_limit);
 }

 static inline void prepare_page_table(void)
@@ -1331,6 +1297,7 @@ static void __init map_lowmem(void)
  struct memblock_region *reg;
  phys_addr_t kernel_x_start = round_down(__pa(_stext), SECTION_SIZE);
  phys_addr_t kernel_x_end = round_up(__pa(__init_end), SECTION_SIZE);
+ phys_addr_t section_memblock_limit = 0;

  /* Map all the lowmem memory banks. */
  for_each_memblock(memory, reg) {
@@ -1384,6 +1351,15 @@ static void __init map_lowmem(void)
create_mapping(map);
  }
  }
+
+ /*
+ * Find the first section-aligned memblock and set
+ * memblock_limit at it.
+ */
+ if (!section_memblock_limit && !(end & ~SECTION_MASK)) {
+ section_memblock_limit = end;
+ memblock_set_current_limit(section_memblock_limit);
+ }
-- 
1.7.10.4


Re: [PATCH] arm64: add ioremap physical address information

2015-01-06 Thread Min-Hua Chen
On Tue, Jan 6, 2015 at 10:17 PM, Will Deacon <will.dea...@arm.com> wrote:
> On Fri, Dec 26, 2014 at 04:52:10PM +0000, Min-Hua Chen wrote:
>> In /proc/vmallocinfo, it's good to show the physical address
>> of each ioremap in vmallocinfo. Add physical address information
>> in arm64 ioremap.
>>
>> 0xffffc900047f2000-0xffffc900047f4000    8192 _nv013519rm+0x57/0xa0
>> [nvidia] phys=f8100000 ioremap
>> 0xffffc900047f4000-0xffffc900047f6000    8192 _nv013519rm+0x57/0xa0
>> [nvidia] phys=f8008000 ioremap
>> 0xffffc90004800000-0xffffc90004821000  135168 e1000_probe+0x22c/0xb95
>> [e1000e] phys=f4300000 ioremap
>> 0xffffc900049c0000-0xffffc900049e1000  135168 _nv013521rm+0x4d/0xd0
>> [nvidia] phys=e0140000 ioremap
>>
>> Signed-off-by: Min-Hua Chen <orca.c...@gmail.com>
>> ---
>
> Thanks, this looks useful for debugging.
>
>   Acked-by: Will Deacon <will.dea...@arm.com>
>
> I assume this can wait for 3.20?

Sure, thanks.

Min-Hua

>
> Will
>
>
>>  arch/arm64/mm/ioremap.c |1 +
>>  1 file changed, 1 insertion(+)
>>
>> diff --git a/arch/arm64/mm/ioremap.c b/arch/arm64/mm/ioremap.c
>> index cbb99c8..01e88c8 100644
>> --- a/arch/arm64/mm/ioremap.c
>> +++ b/arch/arm64/mm/ioremap.c
>> @@ -62,6 +62,7 @@ static void __iomem *__ioremap_caller(phys_addr_t phys_addr, size_t size,
>>  if (!area)
>>  return NULL;
>>  addr = (unsigned long)area->addr;
>> +area->phys_addr = phys_addr;
>>
>>  err = ioremap_page_range(addr, addr + size, phys_addr, prot);
>>  if (err) {
>> --
>> 1.7.10.4
>>


[PATCH] arm64: add ioremap physical address information

2014-12-26 Thread Min-Hua Chen
In /proc/vmallocinfo, it is useful to show the physical address
of each ioremap mapping. Add the physical address information
to the arm64 ioremap code.

0xffffc900047f2000-0xffffc900047f4000    8192 _nv013519rm+0x57/0xa0
[nvidia] phys=f8100000 ioremap
0xffffc900047f4000-0xffffc900047f6000    8192 _nv013519rm+0x57/0xa0
[nvidia] phys=f8008000 ioremap
0xffffc90004800000-0xffffc90004821000  135168 e1000_probe+0x22c/0xb95
[e1000e] phys=f4300000 ioremap
0xffffc900049c0000-0xffffc900049e1000  135168 _nv013521rm+0x4d/0xd0
[nvidia] phys=e0140000 ioremap

Signed-off-by: Min-Hua Chen <orca.c...@gmail.com>
---
 arch/arm64/mm/ioremap.c |1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/mm/ioremap.c b/arch/arm64/mm/ioremap.c
index cbb99c8..01e88c8 100644
--- a/arch/arm64/mm/ioremap.c
+++ b/arch/arm64/mm/ioremap.c
@@ -62,6 +62,7 @@ static void __iomem *__ioremap_caller(phys_addr_t phys_addr, size_t size,
 if (!area)
 return NULL;
 addr = (unsigned long)area->addr;
+area->phys_addr = phys_addr;

 err = ioremap_page_range(addr, addr + size, phys_addr, prot);
 if (err) {
--
1.7.10.4
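
A userspace sketch of what recording the physical address enables: a
dump can then print the virtual range, size, and phys= for each ioremap
mapping. The structure and values are illustrative, not the kernel's
vm_struct:

#include <stdio.h>

struct vm_info {
	unsigned long long virt_start, virt_end;
	unsigned long long phys_addr;	/* 0 if not an ioremap */
};

int main(void)
{
	struct vm_info maps[] = {
		{ 0xffffc900047f2000ULL, 0xffffc900047f4000ULL, 0xf8100000ULL },
		{ 0xffffc90004800000ULL, 0xffffc90004821000ULL, 0xf4300000ULL },
	};

	for (int i = 0; i < 2; i++) {
		printf("0x%llx-0x%llx %8llu", maps[i].virt_start,
		       maps[i].virt_end,
		       maps[i].virt_end - maps[i].virt_start);
		if (maps[i].phys_addr)	/* set once at mapping time */
			printf(" phys=%llx ioremap", maps[i].phys_addr);
		printf("\n");
	}
	return 0;
}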


Re: [PATCH] arm64: setup return path for el1_undef

2014-12-23 Thread Min-Hua Chen
On Tue, Dec 23, 2014 at 11:57 PM, Catalin Marinas
<catalin.mari...@arm.com> wrote:
> On Tue, Dec 23, 2014 at 03:15:10PM +0000, Min-Hua Chen wrote:
>> Setup return path for el1_undef since el1_undef may
>> be handled by handlers.
>
> Did you find a real issue, or was it just code inspection?

Thanks for your reply. It was just a code inspection.

Min-Hua

>
>> asmlinkage void __exception do_undefinstr(struct pt_regs *regs)
>> {
>> siginfo_t info;
>> void __user *pc = (void __user *)instruction_pointer(regs);
>>
>> /* check for AArch32 breakpoint instructions */
>> if (!aarch32_break_handler(regs))
>> return;
>>
>> if (call_undef_hook(regs) == 0)
>> return;
>>
>> ...
>> }
>> Signed-off-by: Min-Hua Chen 
>> ---
>>  arch/arm64/kernel/entry.S |3 ++-
>>  1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
>> index fd4fa37..86ebec5 100644
>> --- a/arch/arm64/kernel/entry.S
>> +++ b/arch/arm64/kernel/entry.S
>> @@ -313,7 +313,8 @@ el1_undef:
>>   */
>>  enable_dbg
>>  	mov	x0, sp
>> -	b	do_undefinstr
>> +	bl	do_undefinstr
>> +	kernel_exit 1
>>  el1_dbg:
>>  /*
>>   * Debug exception handling
>
> I don't think this is needed. The code is pretty convoluted but for an
> EL1 undefined exception we should never return from do_undefinstr(). The
> call_undef_hook() function returns 1 if !user_mode(regs) and this should
> cause a kernel panic. Basically we do not allow any kind of undefined
> instructions in the arm64 kernel.
>
> --
> Catalin


[PATCH] arm64: setup return path for el1_undef

2014-12-23 Thread Min-Hua Chen
Set up a return path for el1_undef, since the exception may be
handled by one of the handlers, in which case do_undefinstr() returns.

asmlinkage void __exception do_undefinstr(struct pt_regs *regs)
{
siginfo_t info;
void __user *pc = (void __user *)instruction_pointer(regs);

/* check for AArch32 breakpoint instructions */
if (!aarch32_break_handler(regs))
return;

if (call_undef_hook(regs) == 0)
return;

...
}

Signed-off-by: Min-Hua Chen <orca.c...@gmail.com>
---
 arch/arm64/kernel/entry.S |3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index fd4fa37..86ebec5 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -313,7 +313,8 @@ el1_undef:
  */
 enable_dbg
 	mov	x0, sp
-	b	do_undefinstr
+	bl	do_undefinstr
+	kernel_exit 1
 el1_dbg:
 /*
  * Debug exception handling
-- 
1.7.10.4
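
In C terms, the change from b to bl is the difference between a tail
call and an ordinary call. A sketch of the control flow the patch
assumes (the hook case where do_undefinstr() returns):

#include <stdio.h>

static void do_undefinstr(void) { puts("undef handled by a hook"); }
static void kernel_exit(void)   { puts("return from the exception"); }

int main(void)
{
	/* 'b do_undefinstr' never comes back here, so nothing may
	 * follow it; 'bl do_undefinstr' returns, so an explicit
	 * exception return (kernel_exit 1) must follow - which is
	 * what the patch adds */
	do_undefinstr();
	kernel_exit();
	return 0;
}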


[PATCH] slub: fix confusing error messages in check_slab

2014-11-24 Thread Min-Hua Chen
In check_slab(), s->name is passed incorrectly to the error
messages: the format strings have no conversion for it, so the
remaining arguments are shifted and the reported values are wrong
when the object check fails. Fix this by removing s->name.

Signed-off-by: Min-Hua Chen <orca.c...@gmail.com>
---
 mm/slub.c |4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index ae7b9f1..5da9f9f 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -849,12 +849,12 @@ static int check_slab(struct kmem_cache *s,
struct page *page)
 maxobj = order_objects(compound_order(page), s->size, s->reserved);
 if (page->objects > maxobj) {
 slab_err(s, page, "objects %u > max %u",
-s->name, page->objects, maxobj);
+ page->objects, maxobj);
 return 0;
 }
 if (page->inuse > page->objects) {
 slab_err(s, page, "inuse %u > max %u",
-s->name, page->inuse, page->objects);
+ page->inuse, page->objects);
 return 0;
 }
 /* Slab_pad_check fixes things up after itself */
-- 
1.7.10.4
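
The bug in miniature: with printf-style formatting (which slab_err()
uses underneath), one extra argument shifts everything after it, so the
numbers printed are wrong:

#include <stdio.h>

int main(void)
{
	const char *name = "kmalloc-64";	/* the stray s->name argument */
	unsigned int objects = 3, maxobj = 2;

	/* buggy call: the first %u consumes (a truncation of) the
	 * pointer, 'objects' lands in the second %u, and 'maxobj' is
	 * silently dropped */
	printf("objects %u > max %u\n",
	       (unsigned int)(unsigned long)name, objects);
	/* fixed call: arguments line up with the format string */
	printf("objects %u > max %u\n", objects, maxobj);
	return 0;
}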


Does kernel assume PHYS_OFFSET must be SECTION_SIZE aligned?

2014-11-17 Thread Min-Hua Chen
Hi,

I have a question about kernel_x_start and kernel_x_end in map_lowmem.
Suppose the start address of DRAM is 0x20000000, PHYS_OFFSET is
0x20100000 (1MB above the DRAM start), _stext is 0xc0008000, and
SECTION_SIZE is 0x200000 (2MB). Let's say the memory between 0x20000000
and 0x20100000 is used by some H/W and is not available to the kernel.

According to the current implementation:

unsigned long kernel_x_start = round_down(__pa(_stext), SECTION_SIZE);
unsigned long kernel_x_end = round_up(__pa(__init_end), SECTION_SIZE);

Then we'll get kernel_x_start = round_down(__pa(0xc0008000), SECTION_SIZE)
                              = round_down(0x20108000, SECTION_SIZE)
                              = 0x20000000

In this case, 0x20000000 is not available for kernel memory.

Does the kernel assume PHYS_OFFSET must be SECTION_SIZE-aligned, or
should we get kernel_x_start by rounding down _stext first and then
converting the virtual address to a physical address?

phys_addr_t kernel_x_start = __pa(round_down(_stext, SECTION_SIZE));
phys_addr_t kernel_x_end = __pa(round_up(__init_end, SECTION_SIZE));

This gets kernel_x_start = __pa(round_down(0xc0008000, SECTION_SIZE))
                         = __pa(0xc0000000)
                         = 0x20100000

thanks,
Min-Hua
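
A worked comparison of the two orderings, assuming PAGE_OFFSET is
0xc0000000 and using the PHYS_OFFSET and SECTION_SIZE values above:

#include <stdio.h>

#define PAGE_OFFSET	0xc0000000UL
#define PHYS_OFFSET	0x20100000UL
#define SECTION_SIZE	0x00200000UL	/* 2MB */
#define __pa(x)		((x) - PAGE_OFFSET + PHYS_OFFSET)
#define round_down(x, a) ((x) & ~((a) - 1))

int main(void)
{
	unsigned long stext = 0xc0008000UL;

	/* current code: convert first, then round - lands at
	 * 0x20000000, below the memory that is actually available */
	printf("%#lx\n", round_down(__pa(stext), SECTION_SIZE));
	/* questioned order: round the virtual address, then convert -
	 * lands at PHYS_OFFSET, 0x20100000 */
	printf("%#lx\n", __pa(round_down(stext, SECTION_SIZE)));
	return 0;
}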


[PATCH] arm: use phys_addr_t to describe physical address

2014-10-30 Thread Min-Hua Chen
Hi,

Use phys_addr_t to describe a physical address. When LPAE
is enabled, a physical address can be wider than 32 bits, so
we have to use phys_addr_t to handle that case.

Signed-off-by: Min-Hua Chen <orca.c...@gmail.com>
---
 arch/arm/mm/mmu.c |4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 9f98cec..858aa11 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -1335,8 +1335,8 @@ static void __init kmap_init(void)
 static void __init map_lowmem(void)
 {
 struct memblock_region *reg;
-unsigned long kernel_x_start = round_down(__pa(_stext), SECTION_SIZE);
-unsigned long kernel_x_end = round_up(__pa(__init_end), SECTION_SIZE);
+phys_addr_t kernel_x_start = round_down(__pa(_stext), SECTION_SIZE);
+phys_addr_t kernel_x_end = round_up(__pa(__init_end), SECTION_SIZE);

 /* Map all the lowmem memory banks. */
 for_each_memblock(memory, reg) {
--
1.7.10.4

thanks,
Min-Hua
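
A small illustration of why the width matters under LPAE (a hedged
user-space sketch, not kernel code): with LPAE, phys_addr_t becomes a
64-bit type while unsigned long stays 32 bits, so an address above 4GB
stored in an unsigned long is silently truncated:

#include <stdint.h>
#include <stdio.h>

typedef uint64_t phys_addr_t;	/* the width LPAE selects on 32-bit ARM */

int main(void)
{
	phys_addr_t phys = 0x100000000ULL;	/* a memory bank above 4GB */
	uint32_t truncated = (uint32_t)phys;	/* models a 32-bit unsigned long */

	printf("phys      = %#llx\n", (unsigned long long)phys);
	printf("truncated = %#x\n", truncated);	/* prints 0: the bank vanishes */
	return 0;
}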


[PATCH] arm64: Fix data type for physical address

2014-10-08 Thread Min-Hua Chen
Use phys_addr_t for the physical address in alloc_init_pud(). Although
phys_addr_t and unsigned long are both 64-bit on arm64, it is better
to use phys_addr_t to describe physical addresses.

Signed-off-by: Min-Hua Chen <orca.c...@gmail.com>
---
 arch/arm64/mm/mmu.c |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 6894ef3..c649ba5 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -202,7 +202,7 @@ static void __init alloc_init_pmd(pud_t *pud, unsigned long addr,
 }

 static void __init alloc_init_pud(pgd_t *pgd, unsigned long addr,
-  unsigned long end, unsigned long phys,
+  unsigned long end, phys_addr_t phys,
   int map_io)
 {
 pud_t *pud;
-- 
1.7.10.4
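
On arm64 the change is about intent rather than behaviour, since both
types are 64-bit there. For reference, this is roughly how the kernel
picks the width of phys_addr_t (a paraphrased sketch of
include/linux/types.h, not a verbatim quote):

#ifdef CONFIG_PHYS_ADDR_T_64BIT
typedef u64 phys_addr_t;
#else
typedef u32 phys_addr_t;
#endif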


[PATCH] slub: fix coding style problems

2014-10-02 Thread Min-Hua Chen
Fix the most obvious coding style problems reported by
checkpatch.pl -f mm/slub.c.
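
For reference, checkpatch's file mode (-f) checks a whole source file
rather than a patch; the report came from running, at the top of the
kernel tree:

	./scripts/checkpatch.pl -f mm/slub.c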

Signed-off-by: Min-Hua Chen <orca.c...@gmail.com>
---
 mm/slub.c |  121 -
 1 file changed, 63 insertions(+), 58 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 3e8afcc..7ea162f 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -93,25 +93,25 @@
  *
  * Overloading of page flags that are otherwise used for LRU management.
  *
- * PageActive The slab is frozen and exempt from list processing.
- * This means that the slab is dedicated to a purpose
- * such as satisfying allocations for a specific
- * processor. Objects may be freed in the slab while
- * it is frozen but slab_free will then skip the usual
- * list operations. It is up to the processor holding
- * the slab to integrate the slab into the slab lists
- * when the slab is no longer needed.
+ * PageActive	The slab is frozen and exempt from list processing.
+ *		This means that the slab is dedicated to a purpose
+ *		such as satisfying allocations for a specific
+ *		processor. Objects may be freed in the slab while
+ *		it is frozen but slab_free will then skip the usual
+ *		list operations. It is up to the processor holding
+ *		the slab to integrate the slab into the slab lists
+ *		when the slab is no longer needed.
  *
- * One use of this flag is to mark slabs that are
- * used for allocations. Then such a slab becomes a cpu
- * slab. The cpu slab may be equipped with an additional
- * freelist that allows lockless access to
- * free objects in addition to the regular freelist
- * that requires the slab lock.
+ *		One use of this flag is to mark slabs that are
+ *		used for allocations. Then such a slab becomes a cpu
+ *		slab. The cpu slab may be equipped with an additional
+ *		freelist that allows lockless access to
+ *		free objects in addition to the regular freelist
+ *		that requires the slab lock.
  *
 * PageError	Slab requires special handling due to debug
- * options set. This moves	slab handling out of
- * the fast path and disables lockless freelists.
+ *		options set. This moves	slab handling out of
+ *		the fast path and disables lockless freelists.
  */

 static inline int kmem_cache_debug(struct kmem_cache *s)
@@ -230,7 +230,7 @@ static inline void stat(const struct kmem_cache *s, enum stat_item si)
 }

 /********************************************************************
- * Core slab cache functions
+ *		Core slab cache functions
 ********************************************************************/

 /* Verify that a pointer has an address that is valid within a slab page */
@@ -355,9 +355,11 @@ static __always_inline void slab_unlock(struct page *page)
 __bit_spin_unlock(PG_locked, &page->flags);
 }

-static inline void set_page_slub_counters(struct page *page, unsigned long counters_new)
+static inline void set_page_slub_counters(struct page *page,
+  unsigned long counters_new)
 {
 struct page tmp;
+
 tmp.counters = counters_new;
 /*
  * page->counters can cover frozen/inuse/objects as well
@@ -371,14 +373,14 @@ static inline void set_page_slub_counters(struct
page *page, unsigned long count
 }

 /* Interrupts must be disabled (for the fallback code to work right) */
-static inline bool __cmpxchg_double_slab(struct kmem_cache *s, struct page *page,
-void *freelist_old, unsigned long counters_old,
-void *freelist_new, unsigned long counters_new,
-const char *n)
+static inline bool __cmpxchg_double_slab(struct kmem_cache *s,
+struct page *page, void *freelist_old,
+unsigned long counters_old, void *freelist_new,
+unsigned long counters_new, const char *n)
 {
 VM_BUG_ON(!irqs_disabled());
 #if defined(CONFIG_HAVE_CMPXCHG_DOUBLE) && \
-defined(CONFIG_HAVE_ALIGNED_STRUCT_PAGE)
+defined(CONFIG_HAVE_ALIGNED_STRUCT_PAGE)
 if (s->flags & __CMPXCHG_DOUBLE) {
 if (cmpxchg_double(&page->freelist, &page->counters,
freelist_old, counters_old,
@@ -414,7 +416,7 @@ static inline bool cmpxchg_double_slab(struct
kmem_cache *s, struct page *page,
 const char *n)
 {
 #if defined(CONFIG_HAVE_CMPXCHG_DOUBLE) && \
-defined(CONFIG_HAVE_ALIGNED_STRUCT_PAGE)
+defined(CONFIG_HAVE_ALIGNED_STRUCT_PAGE)
 if (s->flags & __CMPXCHG_DOUBLE) {
 if (cmpxchg_double(&page->freelist, &page->counters,
freelist_old, counters_old,
@@ -550,6 +552,7 @@ static void print_track(const char *s, struct track *t)
 #ifdef CONFIG_STACKTRACE
 {
 int i;
+
 for (i = 0; i <

[PATCH] [RESEND]arm64: Use phys_addr_t type for physical address

2014-10-02 Thread Min-Hua Chen
Change the type of the physical address from unsigned long to
phys_addr_t, and make valid_phys_addr_range more readable.

Signed-off-by: Min-Hua Chen <orca.c...@gmail.com>
---
 arch/arm64/include/asm/io.h |2 +-
 arch/arm64/mm/mmap.c|2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/io.h b/arch/arm64/include/asm/io.h
index e0ecdcf..f771e8b 100644
--- a/arch/arm64/include/asm/io.h
+++ b/arch/arm64/include/asm/io.h
@@ -243,7 +243,7 @@ extern void __iomem *ioremap_cache(phys_addr_t phys_addr, size_t size);
  * (PHYS_OFFSET and PHYS_MASK taken into account).
  */
 #define ARCH_HAS_VALID_PHYS_ADDR_RANGE
-extern int valid_phys_addr_range(unsigned long addr, size_t size);
+extern int valid_phys_addr_range(phys_addr_t addr, size_t size);
 extern int valid_mmap_phys_addr_range(unsigned long pfn, size_t size);

 extern int devmem_is_allowed(unsigned long pfn);
diff --git a/arch/arm64/mm/mmap.c b/arch/arm64/mm/mmap.c
index 8ed6cb1..1d73662 100644
--- a/arch/arm64/mm/mmap.c
+++ b/arch/arm64/mm/mmap.c
@@ -102,7 +102,7 @@ EXPORT_SYMBOL_GPL(arch_pick_mmap_layout);
  * You really shouldn't be using read() or write() on /dev/mem.  This might go
  * away in the future.
  */
-int valid_phys_addr_range(unsigned long addr, size_t size)
+int valid_phys_addr_range(phys_addr_t addr, size_t size)
 {
 if (addr < PHYS_OFFSET)
 return 0;
-- 
1.7.10.4
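
To see why the parameter width matters on a platform where unsigned
long is narrower than a physical address, here is a hedged user-space
sketch (made-up PHYS_OFFSET and addresses, illustration only):
truncating the address at the function boundary can flip the result of
the range check.

#include <stdint.h>
#include <stdio.h>

#define PHYS_OFFSET 0x80000000u

/* Old-style signature: models a 32-bit unsigned long parameter. */
static int valid_old(uint32_t addr) { return addr >= PHYS_OFFSET; }
/* New-style signature: keeps the full physical-address width. */
static int valid_new(uint64_t addr) { return addr >= PHYS_OFFSET; }

int main(void)
{
	uint64_t phys = 0x100000000ULL;	/* 4GB: plausible RAM on an LPAE-like box */

	printf("old: %d\n", valid_old((uint32_t)phys));	/* 0 - wrongly rejected */
	printf("new: %d\n", valid_new(phys));	/* 1 - correctly accepted */
	return 0;
}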


Re: [PATCH] Use phys_addr_t type for physical address (arm64)

2014-10-02 Thread Min-Hua Chen
Change the type of the physical address from unsigned long to
phys_addr_t, and make valid_phys_addr_range more readable.

Signed-off-by: Min-Hua Chen <orca.c...@gmail.com>
---
 arch/arm64/include/asm/io.h |2 +-
 arch/arm64/mm/mmap.c|2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/io.h b/arch/arm64/include/asm/io.h
index e0ecdcf..f771e8b 100644
--- a/arch/arm64/include/asm/io.h
+++ b/arch/arm64/include/asm/io.h
@@ -243,7 +243,7 @@ extern void __iomem *ioremap_cache(phys_addr_t phys_addr, size_t size);
  * (PHYS_OFFSET and PHYS_MASK taken into account).
  */
 #define ARCH_HAS_VALID_PHYS_ADDR_RANGE
-extern int valid_phys_addr_range(unsigned long addr, size_t size);
+extern int valid_phys_addr_range(phys_addr_t addr, size_t size);
 extern int valid_mmap_phys_addr_range(unsigned long pfn, size_t size);

 extern int devmem_is_allowed(unsigned long pfn);
diff --git a/arch/arm64/mm/mmap.c b/arch/arm64/mm/mmap.c
index 8ed6cb1..1d73662 100644
--- a/arch/arm64/mm/mmap.c
+++ b/arch/arm64/mm/mmap.c
@@ -102,7 +102,7 @@ EXPORT_SYMBOL_GPL(arch_pick_mmap_layout);
  * You really shouldn't be using read() or write() on /dev/mem.  This might go
  * away in the future.
  */
-int valid_phys_addr_range(unsigned long addr, size_t size)
+int valid_phys_addr_range(phys_addr_t addr, size_t size)
 {
 if (addr < PHYS_OFFSET)
 return 0;
-- 
1.7.10.4

On Thu, Oct 2, 2014 at 5:27 PM, Will Deacon <will.dea...@arm.com> wrote:
> On Wed, Oct 01, 2014 at 03:26:55PM +0100, Min-Hua Chen wrote:
>> I found that valid_phys_addr_range does not use
>> phys_addr_t to describe physical address.
>> Is it better to change the type from unsigned long to phys_addr_t?
>
> Yes, that looks like a sensible change. Both types are 64-bit on arm64, so
> it shouldn't affect functionality. Can you resend as a proper patch (with
> commit message) please?
>
> Will
>
>> Signed-off-by: Min-Hua Chen <orca.c...@gmail.com>
>> diff --git a/arch/arm64/include/asm/io.h b/arch/arm64/include/asm/io.h
>> index e0ecdcf..f771e8b 100644
>> --- a/arch/arm64/include/asm/io.h
>> +++ b/arch/arm64/include/asm/io.h
>> @@ -243,7 +243,7 @@ extern void __iomem *ioremap_cache(phys_addr_t phys_addr, size_t size);
>>   * (PHYS_OFFSET and PHYS_MASK taken into account).
>>   */
>>  #define ARCH_HAS_VALID_PHYS_ADDR_RANGE
>> -extern int valid_phys_addr_range(unsigned long addr, size_t size);
>> +extern int valid_phys_addr_range(phys_addr_t addr, size_t size);
>>  extern int valid_mmap_phys_addr_range(unsigned long pfn, size_t size);
>>
>>  extern int devmem_is_allowed(unsigned long pfn);
>> diff --git a/arch/arm64/mm/mmap.c b/arch/arm64/mm/mmap.c
>> index 8ed6cb1..1d73662 100644
>> --- a/arch/arm64/mm/mmap.c
>> +++ b/arch/arm64/mm/mmap.c
>> @@ -102,7 +102,7 @@ EXPORT_SYMBOL_GPL(arch_pick_mmap_layout);
>>   * You really shouldn't be using read() or write() on /dev/mem.  This might go
>>   * away in the future.
>>   */
>> -int valid_phys_addr_range(unsigned long addr, size_t size)
>> +int valid_phys_addr_range(phys_addr_t addr, size_t size)
>>  {
>>  if (addr < PHYS_OFFSET)
>>  return 0;
>> --
>> 1.7.10.4
>>

[PATCH] Use phys_addr_t type for physical address (arm64)

2014-10-01 Thread Min-Hua Chen
Hi,

I found that valid_phys_addr_range does not use
phys_addr_t to describe a physical address.
Would it be better to change the type from unsigned long to phys_addr_t?

Thanks,
Min-Hua


Signed-off-by: Min-Hua Chen <orca.c...@gmail.com>
diff --git a/arch/arm64/include/asm/io.h b/arch/arm64/include/asm/io.h
index e0ecdcf..f771e8b 100644
--- a/arch/arm64/include/asm/io.h
+++ b/arch/arm64/include/asm/io.h
@@ -243,7 +243,7 @@ extern void __iomem *ioremap_cache(phys_addr_t phys_addr, size_t size);
  * (PHYS_OFFSET and PHYS_MASK taken into account).
  */
 #define ARCH_HAS_VALID_PHYS_ADDR_RANGE
-extern int valid_phys_addr_range(unsigned long addr, size_t size);
+extern int valid_phys_addr_range(phys_addr_t addr, size_t size);
 extern int valid_mmap_phys_addr_range(unsigned long pfn, size_t size);

 extern int devmem_is_allowed(unsigned long pfn);
diff --git a/arch/arm64/mm/mmap.c b/arch/arm64/mm/mmap.c
index 8ed6cb1..1d73662 100644
--- a/arch/arm64/mm/mmap.c
+++ b/arch/arm64/mm/mmap.c
@@ -102,7 +102,7 @@ EXPORT_SYMBOL_GPL(arch_pick_mmap_layout);
  * You really shouldn't be using read() or write() on /dev/mem.  This might go
  * away in the future.
  */
-int valid_phys_addr_range(unsigned long addr, size_t size)
+int valid_phys_addr_range(phys_addr_t addr, size_t size)
 {
 if (addr < PHYS_OFFSET)
 return 0;
-- 
1.7.10.4