Re: [PATCHv2] ARM: ioremap: Fix static vm area boundary checking.

2014-05-16 Thread Richard Lee
On Fri, May 16, 2014 at 3:17 PM, li.xi...@freescale.com
 wrote:
>> Subject: Re: [PATCHv2] ARM: ioremap: Fix static vm area boundary checking.
>>
>> On Fri, May 16, 2014 at 1:28 AM, Nicolas Pitre 
>> wrote:
>> > On Thu, 15 May 2014, Richard Lee wrote:
>> >
>> >> Static vm area boundary check:
>> >>
>> >>   paddr1 --->|   |
>> >>  |   |
>> >>  |---| <-\--- svm->vm->addr(is page aligned)
>> >>   paddr2 --->|   ||
>> >>  | --| <--|-- svm->vm->phys_addr
>> >>  |   ||
>> >>   paddr3 --->|   ||
>> >>  |   ||
>> >>  |---| <--|-- next page boundary
>> >>  |   ||
>> >>   paddr4 --->|   || <- svm->vm->size(including guard page)
>> >>  |   ||
>> >>  |   ||
>> >> max paddr_end -->|---| <--|-- svm->vm's phys_addr_end
>> >>  | / ||
>> >>   paddr5 --->| guard ||
>> >>  | page  ||
>> >>      | / ||
>> >>   ---  <-/--- svm->vm->addr + svm->vm_size
>> >>
>> >> <1> If the paddr == paddr1, then continue;
>> >> <2> If the paddr == paddr2~paddr4 and paddr_end > phys_addr_end,
>> >> then continue;
>> >> <3> if the paddr >= paddr5 then continue;
>> >>
>> >> Signed-off-by: Richard Lee 
>> >
>> > instead of doing this repeatedly, why not simply ensure the recorded
>> > static vm information is already page aligned in the first place?
>> >
>>
>> Sorry, I'm not very sure what vm information you meant here.
>>
>> And the 'svm->vm->addr' and 'svm->vm->size' are all page aligned
>> already, but we cannot make sure that the 'paddr' and
>> 'svm->vm->phys_addr' are page aligned.
>>
>
> The paddr and svm->vm->phys_addr here are already pfn aligned.
> What we need to fix is their boundary handling.
>

Sorry, I'm new to this, and I'll be away from the open-source mailing
list while changing jobs, so I won't have a chance to finish this.
Could you help fix this issue, please?

And thanks for your help on how to approach the open-source list.
Thanks very much,

   Richard,



Re: [PATCHv2] ARM: ioremap: Fix static vm area boundary checking.

2014-05-15 Thread Richard Lee
On Fri, May 16, 2014 at 1:28 AM, Nicolas Pitre  wrote:
> On Thu, 15 May 2014, Richard Lee wrote:
>
>> Static vm area boundary check:
>>
>>   paddr1 --->|   |
>>  |   |
>>  |---| <-\--- svm->vm->addr(is page aligned)
>>   paddr2 --->|   ||
>>  | --| <--|-- svm->vm->phys_addr
>>  |   ||
>>   paddr3 --->|   ||
>>  |   ||
>>  |---| <--|-- next page boundary
>>  |   ||
>>   paddr4 --->|   || <- svm->vm->size(including guard page)
>>  |   ||
>>  |   ||
>> max paddr_end -->|---| <--|-- svm->vm's phys_addr_end
>>  | / ||
>>   paddr5 --->| guard ||
>>  | page  ||
>>  | / ||
>>   ---  <-/--- svm->vm->addr + svm->vm_size
>>
>> <1> If the paddr == paddr1, then continue;
>> <2> If the paddr == paddr2~paddr4 and paddr_end > phys_addr_end,
>> then continue;
>> <3> if the paddr >= paddr5 then continue;
>>
>> Signed-off-by: Richard Lee 
>
> instead of doing this repeatedly, why not simply ensure the recorded
> static vm information is already page aligned in the first place?
>

Sorry, I'm not very sure what vm information you meant here.

And the 'svm->vm->addr' and 'svm->vm->size' are all page aligned
already, but we cannot make sure that the 'paddr' and
'svm->vm->phys_addr' are page aligned.
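The point above is easier to see with numbers. With a static mapping whose phys_addr sits 0x400 bytes into its page, the pre-patch check is too strict in one direction and too loose in the other (vm->size still counts the trailing guard page). A minimal userspace sketch of the old check, assuming 4KB pages and made-up addresses:

```c
#include <stdint.h>
#include <stddef.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  ((uint64_t)1 << PAGE_SHIFT)

/* The check before the patch: raw physical-range containment,
 * with vm_size still counting the trailing guard page. */
static int old_check_matches(uint64_t vm_phys, size_t vm_size,
                             uint64_t paddr, size_t size)
{
    return !(vm_phys > paddr ||
             paddr + size - 1 > vm_phys + vm_size - 1);
}
```

With vm_phys = 0x20001400 and vm_size = 0x2000 (one mapped page plus the guard page), paddr = 0x20001000 is rejected even though its backing page is mapped, while paddr = 0x20002000 (inside the guard page) is accepted.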


Thanks,

BRs
Richard Lee

>
>
>> ---
>>
>>
>> Change in V2:
>>  - PAGE_SIZE --> PAGE_MASK
>>  - remove the 'size' page size align operation.
>>
>>
>>
>>
>>
>>
>>  arch/arm/mm/ioremap.c | 40 ++--
>>  1 file changed, 38 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/arm/mm/ioremap.c b/arch/arm/mm/ioremap.c
>> index be69333..f235ab7 100644
>> --- a/arch/arm/mm/ioremap.c
>> +++ b/arch/arm/mm/ioremap.c
>> @@ -47,16 +47,52 @@ static struct static_vm 
>> *find_static_vm_paddr(phys_addr_t paddr,
>>  {
>>   struct static_vm *svm;
>>   struct vm_struct *vm;
>> + size_t offset;
>> +
>> + offset = paddr & ~PAGE_MASK;
>>
>>   list_for_each_entry(svm, &static_vmlist, list) {
>> + phys_addr_t paddr_end, phys_addr_end;
>> + size_t vmoff;
>> +
>>   vm = &svm->vm;
>>   if (!(vm->flags & VM_ARM_STATIC_MAPPING))
>>   continue;
>>   if ((vm->flags & VM_ARM_MTYPE_MASK) != VM_ARM_MTYPE(mtype))
>>   continue;
>>
>> - if (vm->phys_addr > paddr ||
>> - paddr + size - 1 > vm->phys_addr + vm->size - 1)
>> + /* Static vm area boundary check:
>> +  *
>> +  *   paddr1 --->|   |
>> +  *  |   |
>> +  *  |---| <-\--- svm->vm->addr(page 
>> aligned)
>> +  *   paddr2 --->|   ||
>> +  *  | --| <--|-- svm->vm->phys_addr
>> +  *  |   ||
>> +  *   paddr3 --->|   ||
>> +  *  |   ||
>> +  *  |---| <--|-- next page boundary
>> +  *  |   ||
>> +  *   paddr4 --->|   || <- svm->vm->size,
>> +  *  |   || including guard page
>> +  *  |   ||
>> +  * max paddr_end -->|---| <--|-- svm->vm's phys_addr_end
>> +  *  | / ||
>> +  *   paddr5 --->| guard ||
>> +  *  | page  ||
>> +  *  | / ||
>> +  *   ---  <-/-- svm->vm->addr + 
>> svm->vm_size
>> +  *
>> +  * <1> If paddr == paddr1, then continue;
>> +  * <2> If paddr == paddr2~paddr4 and paddr_end > phys_addr_end,
>> +  * then continue;
>> +  * <3> if paddr >= paddr5 then continue;
>> +  */
>> + vmoff = vm->phys_addr & ~PAGE_MASK;
>> + phys_addr_end = vm->phys_addr + vm->size - PAGE_SIZE - vmoff;
>> + paddr_end = paddr + size - offset;
>> + if (__phys_to_pfn(vm->phys_addr) > __phys_to_pfn(paddr) ||
>> + paddr_end - 1 > phys_addr_end - 1)
>>   continue;
>>
>>   return svm;
>> --
>> 1.8.4
>>
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


[PATCHv2] ARM: ioremap: Fix static vm area boundary checking.

2014-05-15 Thread Richard Lee
Static vm area boundary check:

  paddr1 --->|   |
 |   |
 |---| <-\--- svm->vm->addr(is page aligned)
  paddr2 --->|   ||
 | --| <--|-- svm->vm->phys_addr
 |   ||
  paddr3 --->|   ||
 |   ||
 |---| <--|-- next page boundary
 |   ||
  paddr4 --->|   || <- svm->vm->size(including guard page)
 |   ||
 |   ||
max paddr_end -->|---| <--|-- svm->vm's phys_addr_end
 | / ||
  paddr5 --->| guard ||
 | page  ||
 | / ||
  ---  <-/--- svm->vm->addr + svm->vm_size

<1> If the paddr == paddr1, then continue;
<2> If the paddr == paddr2~paddr4 and paddr_end > phys_addr_end,
    then continue;
<3> if the paddr >= paddr5 then continue;
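The three rules above can be modeled in plain userspace C. This is only a sketch of the arithmetic the patch adds to find_static_vm_paddr() (PAGE_SHIFT=12 and the helper name fits_static_vm are assumptions, not kernel code):

```c
#include <stdint.h>
#include <stddef.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  ((uint64_t)1 << PAGE_SHIFT)
#define PAGE_MASK  (~(PAGE_SIZE - 1))

/* Model of the patched check: 1 if [paddr, paddr+size) is served by
 * the static mapping, 0 if the caller should 'continue' on. */
static int fits_static_vm(uint64_t vm_phys, size_t vm_size,
                          uint64_t paddr, size_t size)
{
    uint64_t vmoff     = vm_phys & ~PAGE_MASK;  /* sub-page offsets */
    uint64_t off       = paddr   & ~PAGE_MASK;
    uint64_t phys_end  = vm_phys + vm_size - PAGE_SIZE - vmoff;
    uint64_t paddr_end = paddr + size - off;

    if ((vm_phys >> PAGE_SHIFT) > (paddr >> PAGE_SHIFT))   /* rule <1> */
        return 0;
    if (paddr_end - 1 > phys_end - 1)                      /* rules <2>/<3> */
        return 0;
    return 1;
}
```

Note the pfn comparison accepts a paddr below an unaligned phys_addr as long as both share the same page frame, which the old check could not do.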

Signed-off-by: Richard Lee 
---


Change in V2:
 - PAGE_SIZE --> PAGE_MASK
 - remove the 'size' page size align operation.






 arch/arm/mm/ioremap.c | 40 ++--
 1 file changed, 38 insertions(+), 2 deletions(-)

diff --git a/arch/arm/mm/ioremap.c b/arch/arm/mm/ioremap.c
index be69333..f235ab7 100644
--- a/arch/arm/mm/ioremap.c
+++ b/arch/arm/mm/ioremap.c
@@ -47,16 +47,52 @@ static struct static_vm *find_static_vm_paddr(phys_addr_t 
paddr,
 {
struct static_vm *svm;
struct vm_struct *vm;
+   size_t offset;
+
+   offset = paddr & ~PAGE_MASK;
 
	list_for_each_entry(svm, &static_vmlist, list) {
+   phys_addr_t paddr_end, phys_addr_end;
+   size_t vmoff;
+
	vm = &svm->vm;
if (!(vm->flags & VM_ARM_STATIC_MAPPING))
continue;
if ((vm->flags & VM_ARM_MTYPE_MASK) != VM_ARM_MTYPE(mtype))
continue;
 
-   if (vm->phys_addr > paddr ||
-   paddr + size - 1 > vm->phys_addr + vm->size - 1)
+   /* Static vm area boundary check:
+*
+*   paddr1 --->|   |
+*  |   |
+*  |---| <-\--- svm->vm->addr(page aligned)
+*   paddr2 --->|   ||
+*  | --| <--|-- svm->vm->phys_addr
+*  |   ||
+*   paddr3 --->|   ||
+*  |   ||
+*  |---| <--|-- next page boundary
+*  |   ||
+*   paddr4 --->|   || <- svm->vm->size,
+*  |   || including guard page
+*  |   ||
+* max paddr_end -->|---| <--|-- svm->vm's phys_addr_end
+*  | / ||
+*   paddr5 --->| guard ||
+*  | page  ||
+*  | / ||
+*   ---  <-/-- svm->vm->addr + svm->vm_size
+*
+* <1> If paddr == paddr1, then continue;
+* <2> If paddr == paddr2~paddr4 and paddr_end > phys_addr_end,
+* then continue;
+* <3> if paddr >= paddr5 then continue;
+*/
+   vmoff = vm->phys_addr & ~PAGE_MASK;
+   phys_addr_end = vm->phys_addr + vm->size - PAGE_SIZE - vmoff;
+   paddr_end = paddr + size - offset;
+   if (__phys_to_pfn(vm->phys_addr) > __phys_to_pfn(paddr) ||
+   paddr_end - 1 > phys_addr_end - 1)
continue;
 
return svm;
-- 
1.8.4



Re: [PATCH] ARM: ioremap: Fix static vm area boundary checking.

2014-05-15 Thread Richard Lee
 On Thu, May 15, 2014 at 4:55 PM, li.xi...@freescale.com
 wrote:
>> Subject: [PATCH] ARM: ioremap: Fix static vm area boundary checking.
>>
>> Static vm area boundary check:
>>
>>   paddr1 --->|   |
>>  |   |
>>  |---| <-\--- svm->vm->addr(is page aligned)
>>   paddr2 --->|   ||
>>  | --| <--|-- svm->vm->phys_addr
>>  |   ||
>>   paddr3 --->|   ||
>>  |   ||
>>  |---| <--|-- next page boundary
>>  |   ||
>>   paddr4 --->|   || <- svm->vm->size(including guard page)
>>  |   ||
>>  |   ||
>> max paddr_end -->|---| <--|-- svm->vm's phys_addr_end
>>  | / ||
>>   paddr5 --->| guard ||
>>  | page  ||
>>  | / ||
>>   ---  <-/--- svm->vm->addr + svm->vm_size
>>
>> <1> If the paddr == paddr1, then continue;
>> <2> If the paddr == paddr2~paddr4 and paddr_end > phys_addr_end,
>> then continue;
>> <3> if the paddr >= paddr5 then continue;
>>
>> Signed-off-by: Richard Lee 
>> ---
>>  arch/arm/mm/ioremap.c | 44 ++--
>>  1 file changed, 42 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/arm/mm/ioremap.c b/arch/arm/mm/ioremap.c
>> index be69333..2fa41f4 100644
>> --- a/arch/arm/mm/ioremap.c
>> +++ b/arch/arm/mm/ioremap.c
>> @@ -47,16 +47,56 @@ static struct static_vm *find_static_vm_paddr(phys_addr_t
>> paddr,
>>  {
>>   struct static_vm *svm;
>>   struct vm_struct *vm;
>> + size_t offset;
>> +
>> + /*
>> +  * Make sure the mapping size is page aligned.
>> +  */
>> + size = PAGE_ALIGN((paddr & ~PAGE_SIZE) + size);
>> + offset = paddr & ~PAGE_SIZE;
>>
>
> Shouldn't it be PAGE_MASK ?
>

Yes, it is. I'll fix it.

Thanks,

Best regards,
Richard Lee
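The bug being pointed out is easy to demonstrate: `~PAGE_SIZE` only clears bit 12, while `~PAGE_MASK` keeps exactly the low 12 bits, which is what an in-page offset needs. A quick userspace check (4KB pages assumed; the helper names are illustrative):

```c
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  ((uint64_t)1 << PAGE_SHIFT)  /* 0x1000 */
#define PAGE_MASK  (~(PAGE_SIZE - 1))           /* ...fffff000 */

/* Buggy: ~PAGE_SIZE clears only bit 12, leaving the rest of the address. */
static uint64_t in_page_offset_wrong(uint64_t paddr)
{
    return paddr & ~PAGE_SIZE;
}

/* Intended: ~PAGE_MASK is 0xfff, i.e. the offset within the page. */
static uint64_t in_page_offset(uint64_t paddr)
{
    return paddr & ~PAGE_MASK;
}
```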


>
>>   list_for_each_entry(svm, &static_vmlist, list) {
>> + phys_addr_t paddr_end, phys_addr_end;
>> + size_t vmoff;
>> +
>>   vm = &svm->vm;
>>   if (!(vm->flags & VM_ARM_STATIC_MAPPING))
>>   continue;
>>   if ((vm->flags & VM_ARM_MTYPE_MASK) != VM_ARM_MTYPE(mtype))
>>   continue;
>>
>> - if (vm->phys_addr > paddr ||
>> - paddr + size - 1 > vm->phys_addr + vm->size - 1)
>> + /* Static vm area boundary check:
>> +  *
>> +  *   paddr1 --->|   |
>> +  *  |   |
>> +  *  |---| <-\--- svm->vm->addr(page 
>> aligned)
>> +  *   paddr2 --->|   ||
>> +  *  | --| <--|-- svm->vm->phys_addr
>> +  *  |   ||
>> +  *   paddr3 --->|   ||
>> +  *  |   ||
>> +  *  |---| <--|-- next page boundary
>> +  *  |   ||
>> +  *   paddr4 --->|   || <- svm->vm->size,
>> +  *  |   || including guard page
>> +  *  |   ||
>> +  * max paddr_end -->|---| <--|-- svm->vm's phys_addr_end
>> +  *  | / ||
>> +  *   paddr5 --->| guard ||
>> +  *  | page  ||
>> +  *  | / ||
>> +  *   ---  <-/-- svm->vm->addr + 
>> svm->vm_size
>> +  *
>> +  * <1> If paddr == paddr1, then continue;
>> +  * <2> If paddr == paddr2~paddr4 and paddr_end > phys_addr_end,
>> +  * then continue;
>> +  * <3> if paddr >= paddr5 then continue;
>> +  */
>> + vmoff = vm->phys_addr & ~PAGE_SIZE;
>
> And here.
>
> Thanks,
> BRs
> Xiubo
>
>> + phys_addr_end = vm->phys_addr + vm->size - PAGE_SIZE - vmoff;
>> + paddr_end = paddr + size - offset;
>> + if (__phys_to_pfn(vm->phys_addr) > __phys_to_pfn(paddr) ||
>> + paddr_end - 1 > phys_addr_end - 1)
>>   continue;
>>
>>   return svm;
>> --
>> 1.8.4
>>
>>
>> ___
>> linux-arm-kernel mailing list
>> linux-arm-ker...@lists.infradead.org
>> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel


Re: [PATCHv2 1/2] mm/vmalloc: Add IO mapping space reused interface support.

2014-05-15 Thread Richard Lee
On Thu, May 15, 2014 at 6:56 AM, Andrew Morton
 wrote:
> On Wed, 14 May 2014 16:18:51 +0800 Richard Lee  
> wrote:
>
>> For the IO mapping, the same physical address space may be
>> mapped more than once; for example, in some SoCs:
>>   - 0x20001000 ~ 0x20001400 --> 1KB for Dev1
>>   - 0x20001400 ~ 0x20001800 --> 1KB for Dev2
>>   and the page size is 4KB.
>>
>> Then both Dev1 and Dev2 will do ioremap operations, and the IO
>> vmalloc area's virtual address will be aligned down to 4KB, and
>> the size will be aligned up to 4KB. That is to say, a single
>> 4KB vmalloc area could contain both the Dev1 and Dev2 IO mapping
>> areas at the same time.
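Concretely, after ioremap's rounding both requests collapse onto the same 4KB page, which is what motivates reusing the mapping. A userspace sketch of the rounding (PAGE_SHIFT=12 and the helper name ioremap_window are assumptions):

```c
#include <stdint.h>
#include <stddef.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  ((uint64_t)1 << PAGE_SHIFT)
#define PAGE_MASK  (~(PAGE_SIZE - 1))

/* What ioremap effectively maps for a request [paddr, paddr + size):
 * the start is aligned down to a page, the end aligned up. */
static void ioremap_window(uint64_t paddr, size_t size,
                           uint64_t *start, uint64_t *end)
{
    *start = paddr & PAGE_MASK;
    *end   = (paddr + size + PAGE_SIZE - 1) & PAGE_MASK;
}
```

For the Dev1/Dev2 example above, both requests yield the window [0x20001000, 0x20002000): one page, mapped twice today.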
>
> Unclear.  What happens when a caller does the two ioremaps at present?
> It fails?  Returns the current mapping's address?  Something else?
>

For this case, should the later one wait?
Maybe this patch hasn't considered that case.


>> For this case, we can ioremap only one time, and the later ioremap
>> operation will just return the exist vmalloc area.
>
> I guess an alternative is to establish a new vmap pointing at the same
> physical address.  How does this approach compare to refcounting the
> existing vmap?
>

Yes, I'm also thinking of establishing a new vmap.


>> --- a/include/linux/vmalloc.h
>> +++ b/include/linux/vmalloc.h
>> @@ -1,6 +1,7 @@
>>  #ifndef _LINUX_VMALLOC_H
>>  #define _LINUX_VMALLOC_H
>>
>> +#include <linux/atomic.h>
>>  #include <linux/spinlock.h>
>>  #include <linux/init.h>
>>  #include <linux/list.h>
>> @@ -34,6 +35,7 @@ struct vm_struct {
>>   struct page **pages;
>>   unsigned int nr_pages;
>>   phys_addr_t phys_addr;
>> + atomic_t used;
>>   const void  *caller;
>>  };
>>
>> @@ -100,6 +102,9 @@ static inline size_t get_vm_area_size(const struct 
>> vm_struct *area)
>>   return area->size - PAGE_SIZE;
>>  }
>>
>> +extern struct vm_struct *find_vm_area_paddr(phys_addr_t paddr, size_t size,
>> +  unsigned long *offset,
>> +  unsigned long flags);
>>  extern struct vm_struct *get_vm_area(unsigned long size, unsigned long 
>> flags);
>>  extern struct vm_struct *get_vm_area_caller(unsigned long size,
>>   unsigned long flags, const void 
>> *caller);
>> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
>> index bf233b2..cf0093c 100644
>> --- a/mm/vmalloc.c
>> +++ b/mm/vmalloc.c
>> @@ -1293,6 +1293,7 @@ static void setup_vmalloc_vm(struct vm_struct *vm, 
>> struct vmap_area *va,
>>   vm->addr = (void *)va->va_start;
>>   vm->size = va->va_end - va->va_start;
>>   vm->caller = caller;
>> + atomic_set(&vm->used, 1);
>>   va->vm = vm;
>>   va->flags |= VM_VM_AREA;
>>   spin_unlock(&vmap_area_lock);
>> @@ -1383,6 +1384,84 @@ struct vm_struct *get_vm_area_caller(unsigned long 
>> size, unsigned long flags,
>> NUMA_NO_NODE, GFP_KERNEL, caller);
>>  }
>>
>> +static int vm_area_used_inc(struct vm_struct *area)
>> +{
>> + if (!(area->flags & VM_IOREMAP))
>> + return -EINVAL;
>
> afaict this can never happen?
>

Yes, it is for now.

>> + atomic_add(1, &area->used);
>> +
>> + return atomic_read(&va->vm->used);
>
> atomic_add_return() is neater.  But the return value is in fact never
> used so it could return void.
>

yes, that's fine.
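Andrew's point has a direct userspace analog in C11 atomics (a sketch, not kernel code; the helper names are illustrative): an atomic_add() followed by a separate atomic_read() can observe updates from other CPUs in between, while a single read-modify-write returns the post-increment value atomically, like the kernel's atomic_add_return().

```c
#include <stdatomic.h>

/* Non-atomic pair: another thread may change 'used' between the calls,
 * so the returned value need not reflect this increment. */
static int used_inc_racy(atomic_int *used)
{
    atomic_fetch_add(used, 1);
    return atomic_load(used);
}

/* Single RMW: atomic_fetch_add returns the old value, so +1 gives the
 * post-increment value, as atomic_add_return() does in the kernel. */
static int used_inc(atomic_int *used)
{
    return atomic_fetch_add(used, 1) + 1;
}
```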

>> +}
>> +
>> +static int vm_area_used_dec(const void *addr)
>> +{
>> + struct vmap_area *va;
>> +
>> + va = find_vmap_area((unsigned long)addr);
>> + if (!va || !(va->flags & VM_VM_AREA))
>> + return 0;
>> +
>> + if (!(va->vm->flags & VM_IOREMAP))
>> + return 0;
>> +
>> + atomic_sub(1, &va->vm->used);
>> +
>> + return atomic_read(&va->vm->used);
>
> atomic_sub_return()
>

yes,

>> +}
>> +
>> +/**
>> + *   find_vm_area_paddr  -  find a continuous kernel virtual area using the
>> + *   physical address.
>> + *   @paddr: base physical address
>> + *   @size:  size of the physical area range
>> + *   @offset: the start offset of the vm area
>> + *   @flags: %VM_IOREMAP for I/O mappings
>> + *
>> + *   Search for the kernel VM area whose physical address starts at
>> + *   @paddr; if the existing VM area's size is large enough, return
>> + *   it with its 'used' counter increased, otherwise return NULL.
>> + */
>> +struct vm_struct *find_vm_area_paddr(phys_addr_t paddr, size_t size,
>> +  unsigned long *offset,
>> +  unsigned long flags)
>> +{
>> + struct vmap_area *va;
>> + int off;
>> +
>> + if (!(flags & VM_IOREMAP))
>> + return NULL;
>> +
>> + size = PAGE_ALIGN((paddr & ~PAGE_MASK) + size);
>> +
>> + rcu_read_lock();
>> + list_for_each_entry_rcu(va, &vmap_area_list, list) {
>> + phys_addr_t phys_addr;
>> +
>> + if (!va || !(va->flags & VM_VM_AREA) || !va->vm)
>> + continue;
>> +
>> + if (!(va->vm->flags & VM_IOREMAP))
>> + continue;
>> +
>> + phys_addr = va->vm->phys_addr;
>> +
>> + off = (paddr & PAGE_MASK) - (phys_addr & PAGE_MASK

[PATCH] ARM: ioremap: Fix static vm area boundary checking.

2014-05-15 Thread Richard Lee
Static vm area boundary check:

  paddr1 --->|   |
 |   |
 |---| <-\--- svm->vm->addr(is page aligned)
  paddr2 --->|   ||
 | --| <--|-- svm->vm->phys_addr
 |   ||
  paddr3 --->|   ||
 |   ||
 |---| <--|-- next page boundary
 |   ||
  paddr4 --->|   || <- svm->vm->size(including guard page)
 |   ||
 |   ||
max paddr_end -->|---| <--|-- svm->vm's phys_addr_end
 | / ||
  paddr5 --->| guard ||
 | page  ||
 | / ||
  ---  <-/--- svm->vm->addr + svm->vm_size

<1> If the paddr == paddr1, then continue;
<2> If the paddr == paddr2~paddr4 and paddr_end > phys_addr_end,
    then continue;
<3> if the paddr >= paddr5 then continue;

Signed-off-by: Richard Lee 
---
 arch/arm/mm/ioremap.c | 44 ++--
 1 file changed, 42 insertions(+), 2 deletions(-)

diff --git a/arch/arm/mm/ioremap.c b/arch/arm/mm/ioremap.c
index be69333..2fa41f4 100644
--- a/arch/arm/mm/ioremap.c
+++ b/arch/arm/mm/ioremap.c
@@ -47,16 +47,56 @@ static struct static_vm *find_static_vm_paddr(phys_addr_t 
paddr,
 {
struct static_vm *svm;
struct vm_struct *vm;
+   size_t offset;
+
+   /*
+* Make sure the mapping size is page aligned.
+*/
+   size = PAGE_ALIGN((paddr & ~PAGE_SIZE) + size);
+   offset = paddr & ~PAGE_SIZE;
 
	list_for_each_entry(svm, &static_vmlist, list) {
+   phys_addr_t paddr_end, phys_addr_end;
+   size_t vmoff;
+
	vm = &svm->vm;
if (!(vm->flags & VM_ARM_STATIC_MAPPING))
continue;
if ((vm->flags & VM_ARM_MTYPE_MASK) != VM_ARM_MTYPE(mtype))
continue;
 
-   if (vm->phys_addr > paddr ||
-   paddr + size - 1 > vm->phys_addr + vm->size - 1)
+   /* Static vm area boundary check:
+*
+*   paddr1 --->|   |
+*  |   |
+*  |---| <-\--- svm->vm->addr(page aligned)
+*   paddr2 --->|   ||
+*  | --| <--|-- svm->vm->phys_addr
+*  |   ||
+*   paddr3 --->|   ||
+*  |   ||
+*  |---| <--|-- next page boundary
+*  |   ||
+*   paddr4 --->|   || <- svm->vm->size,
+*  |   || including guard page
+*  |   ||
+* max paddr_end -->|---| <--|-- svm->vm's phys_addr_end
+*  | / ||
+*   paddr5 --->| guard ||
+*  | page  ||
+*  | / ||
+*   ---  <-/-- svm->vm->addr + svm->vm_size
+*
+* <1> If paddr == paddr1, then continue;
+* <2> If paddr == paddr2~paddr4 and paddr_end > phys_addr_end,
+* then continue;
+* <3> if paddr >= paddr5 then continue;
+*/
+   vmoff = vm->phys_addr & ~PAGE_SIZE;
+   phys_addr_end = vm->phys_addr + vm->size - PAGE_SIZE - vmoff;
+   paddr_end = paddr + size - offset;
+   if (__phys_to_pfn(vm->phys_addr) > __phys_to_pfn(paddr) ||
+   paddr_end - 1 > phys_addr_end - 1)
continue;
 
return svm;
-- 
1.8.4




Re: [PATCH] ARM: ioremap: Fix static vm area boundary checking.

2014-05-15 Thread Richard Lee
 On Thu, May 15, 2014 at 4:55 PM, li.xi...@freescale.com
li.xi...@freescale.com wrote:
 Subject: [PATCH] ARM: ioremap: Fix static vm area boundary checking.

 Static vm area boundary check:

   paddr1 --->|   |
  |   |
  |---| <-\--- svm->vm->addr(is page aligned)
   paddr2 --->|   ||
  | --| <--|-- svm->vm->phys_addr
  |   ||
   paddr3 --->|   ||
  |   ||
  |---| <--|-- next page boundary
  |   ||
   paddr4 --->|   || <- svm->vm->size(including guard page)
  |   ||
  |   ||
 max paddr_end -->|---| <--|-- svm->vm's phys_addr_end
  | / ||
   paddr5 --->| guard ||
  | page  ||
  | / ||
   ---  <-/--- svm->vm->addr + svm->vm_size

 <1> If the paddr == paddr1, then continue;
 <2> If the paddr == paddr2~paddr4 and paddr_end > phys_addr_end,
 then continue;
 <3> if the paddr >= paddr5 then continue;

 Signed-off-by: Richard Lee superlibj8...@gmail.com
 ---
  arch/arm/mm/ioremap.c | 44 ++--
  1 file changed, 42 insertions(+), 2 deletions(-)

 diff --git a/arch/arm/mm/ioremap.c b/arch/arm/mm/ioremap.c
 index be69333..2fa41f4 100644
 --- a/arch/arm/mm/ioremap.c
 +++ b/arch/arm/mm/ioremap.c
 @@ -47,16 +47,56 @@ static struct static_vm *find_static_vm_paddr(phys_addr_t
 paddr,
  {
   struct static_vm *svm;
   struct vm_struct *vm;
 + size_t offset;
 +
 + /*
 +  * Make sure the mapping size is page aligned.
 +  */
 + size = PAGE_ALIGN((paddr & ~PAGE_SIZE) + size);
 + offset = paddr & ~PAGE_SIZE;


 Shouldn't it be PAGE_MASK ?


Yes, it is. I'll fix it.

Thanks,

Best regards,
Richard Lee



   list_for_each_entry(svm, &static_vmlist, list) {
 + phys_addr_t paddr_end, phys_addr_end;
 + size_t vmoff;
 +
   vm = svm->vm;
   if (!(vm->flags & VM_ARM_STATIC_MAPPING))
   continue;
   if ((vm->flags & VM_ARM_MTYPE_MASK) != VM_ARM_MTYPE(mtype))
   continue;

 - if (vm->phys_addr > paddr ||
 - paddr + size - 1 > vm->phys_addr + vm->size - 1)
 + /* Static vm area boundary check:
 +  *
 +  *   paddr1 --->|   |
 +  *  |   |
 +  *  |---| <-\--- svm->vm->addr(page aligned)
 +  *   paddr2 --->|   ||
 +  *  | --| <--|-- svm->vm->phys_addr
 +  *  |   ||
 +  *   paddr3 --->|   ||
 +  *  |   ||
 +  *  |---| <--|-- next page boundary
 +  *  |   ||
 +  *   paddr4 --->|   || <- svm->vm->size,
 +  *  |   || including guard page
 +  *  |   ||
 +  * max paddr_end -->|---| <--|-- svm->vm's phys_addr_end
 +  *  | / ||
 +  *   paddr5 --->| guard ||
 +  *  | page  ||
 +  *  | / ||
 +  *   ---  <-/-- svm->vm->addr + svm->vm_size
 +  *
 +  * <1> If paddr == paddr1, then continue;
 +  * <2> If paddr == paddr2~paddr4 and paddr_end > phys_addr_end,
 +  * then continue;
 +  * <3> if paddr >= paddr5 then continue;
 +  */
 + vmoff = vm->phys_addr & ~PAGE_SIZE;

 And here.

 Thanks,
 BRs
 Xiubo

 + phys_addr_end = vm->phys_addr + vm->size - PAGE_SIZE - vmoff;
 + paddr_end = paddr + size - offset;
 + if (__phys_to_pfn(vm->phys_addr) > __phys_to_pfn(paddr) ||
 + paddr_end - 1 > phys_addr_end - 1)
   continue;

   return svm;

   return svm;
 --
 1.8.4


 ___
 linux-arm-kernel mailing list
 linux-arm-ker...@lists.infradead.org
 http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


[PATCHv2] ARM: ioremap: Fix static vm area boundary checking.

2014-05-15 Thread Richard Lee
Static vm area boundary check:

  paddr1 --->|   |
 |   |
 |---| <-\--- svm->vm->addr(is page aligned)
  paddr2 --->|   ||
 | --| <--|-- svm->vm->phys_addr
 |   ||
  paddr3 --->|   ||
 |   ||
 |---| <--|-- next page boundary
 |   ||
  paddr4 --->|   || <- svm->vm->size(including guard page)
 |   ||
 |   ||
max paddr_end -->|---| <--|-- svm->vm's phys_addr_end
 | / ||
  paddr5 --->| guard ||
 | page  ||
 | / ||
  ---  <-/--- svm->vm->addr + svm->vm_size

<1> If the paddr == paddr1, then continue;
<2> If the paddr == paddr2~paddr4 and paddr_end > phys_addr_end,
then continue;
<3> if the paddr >= paddr5 then continue;

Signed-off-by: Richard Lee superlibj8...@gmail.com
---


Change in V2:
 - PAGE_SIZE --> PAGE_MASK
 - remove the 'size' page size align operation.






 arch/arm/mm/ioremap.c | 40 ++--
 1 file changed, 38 insertions(+), 2 deletions(-)

diff --git a/arch/arm/mm/ioremap.c b/arch/arm/mm/ioremap.c
index be69333..f235ab7 100644
--- a/arch/arm/mm/ioremap.c
+++ b/arch/arm/mm/ioremap.c
@@ -47,16 +47,52 @@ static struct static_vm *find_static_vm_paddr(phys_addr_t 
paddr,
 {
struct static_vm *svm;
struct vm_struct *vm;
+   size_t offset;
+
+   offset = paddr & ~PAGE_MASK;
 
    list_for_each_entry(svm, &static_vmlist, list) {
+   phys_addr_t paddr_end, phys_addr_end;
+   size_t vmoff;
+
    vm = svm->vm;
    if (!(vm->flags & VM_ARM_STATIC_MAPPING))
    continue;
    if ((vm->flags & VM_ARM_MTYPE_MASK) != VM_ARM_MTYPE(mtype))
    continue;
 
-   if (vm->phys_addr > paddr ||
-   paddr + size - 1 > vm->phys_addr + vm->size - 1)
+   /* Static vm area boundary check:
+    *
+    *   paddr1 --->|   |
+    *  |   |
+    *  |---| <-\--- svm->vm->addr(page aligned)
+    *   paddr2 --->|   ||
+    *  | --| <--|-- svm->vm->phys_addr
+    *  |   ||
+    *   paddr3 --->|   ||
+    *  |   ||
+    *  |---| <--|-- next page boundary
+    *  |   ||
+    *   paddr4 --->|   || <- svm->vm->size,
+    *  |   || including guard page
+    *  |   ||
+    * max paddr_end -->|---| <--|-- svm->vm's phys_addr_end
+    *  | / ||
+    *   paddr5 --->| guard ||
+    *  | page  ||
+    *  | / ||
+    *   ---  <-/-- svm->vm->addr + svm->vm_size
+    *
+    * <1> If paddr == paddr1, then continue;
+    * <2> If paddr == paddr2~paddr4 and paddr_end > phys_addr_end,
+    * then continue;
+    * <3> if paddr >= paddr5 then continue;
+    */
+   vmoff = vm->phys_addr & ~PAGE_MASK;
+   phys_addr_end = vm->phys_addr + vm->size - PAGE_SIZE - vmoff;
+   paddr_end = paddr + size - offset;
+   if (__phys_to_pfn(vm->phys_addr) > __phys_to_pfn(paddr) ||
+       paddr_end - 1 > phys_addr_end - 1)
        continue;
 
    return svm;
-- 
1.8.4



Re: [PATCHv2] ARM: ioremap: Fix static vm area boundary checking.

2014-05-15 Thread Richard Lee
On Fri, May 16, 2014 at 1:28 AM, Nicolas Pitre nicolas.pi...@linaro.org wrote:
 On Thu, 15 May 2014, Richard Lee wrote:

 Static vm area boundary check:

   paddr1 --->|   |
  |   |
  |---| <-\--- svm->vm->addr(is page aligned)
   paddr2 --->|   ||
  | --| <--|-- svm->vm->phys_addr
  |   ||
   paddr3 --->|   ||
  |   ||
  |---| <--|-- next page boundary
  |   ||
   paddr4 --->|   || <- svm->vm->size(including guard page)
  |   ||
  |   ||
 max paddr_end -->|---| <--|-- svm->vm's phys_addr_end
  | / ||
   paddr5 --->| guard ||
  | page  ||
  | / ||
   ---  <-/--- svm->vm->addr + svm->vm_size

 <1> If the paddr == paddr1, then continue;
 <2> If the paddr == paddr2~paddr4 and paddr_end > phys_addr_end,
 then continue;
 <3> if the paddr >= paddr5 then continue;

 Signed-off-by: Richard Lee superlibj8...@gmail.com

 instead of doing this repeatedly, why not simply ensure the recorded
 static vm information is already page aligned in the first place?


Sorry, I'm not very sure what vm information you meant here.

And the 'svm->vm->addr' and 'svm->vm->size' are all page aligned
already, but we cannot make sure that the 'paddr' and
'svm->vm->phys_addr' are page aligned.


Thanks,

BRs
Richard Lee



 ---


 Change in V2:
  - PAGE_SIZE --> PAGE_MASK
  - remove the 'size' page size align operation.






  arch/arm/mm/ioremap.c | 40 ++--
  1 file changed, 38 insertions(+), 2 deletions(-)

 diff --git a/arch/arm/mm/ioremap.c b/arch/arm/mm/ioremap.c
 index be69333..f235ab7 100644
 --- a/arch/arm/mm/ioremap.c
 +++ b/arch/arm/mm/ioremap.c
 @@ -47,16 +47,52 @@ static struct static_vm
 *find_static_vm_paddr(phys_addr_t paddr,
  {
   struct static_vm *svm;
   struct vm_struct *vm;
 + size_t offset;
 +
 + offset = paddr & ~PAGE_MASK;

   list_for_each_entry(svm, &static_vmlist, list) {
 + phys_addr_t paddr_end, phys_addr_end;
 + size_t vmoff;
 +
   vm = svm->vm;
   if (!(vm->flags & VM_ARM_STATIC_MAPPING))
   continue;
   if ((vm->flags & VM_ARM_MTYPE_MASK) != VM_ARM_MTYPE(mtype))
   continue;

 - if (vm->phys_addr > paddr ||
 - paddr + size - 1 > vm->phys_addr + vm->size - 1)
 + /* Static vm area boundary check:
 +  *
 +  *   paddr1 --->|   |
 +  *  |   |
 +  *  |---| <-\--- svm->vm->addr(page aligned)
 +  *   paddr2 --->|   ||
 +  *  | --| <--|-- svm->vm->phys_addr
 +  *  |   ||
 +  *   paddr3 --->|   ||
 +  *  |   ||
 +  *  |---| <--|-- next page boundary
 +  *  |   ||
 +  *   paddr4 --->|   || <- svm->vm->size,
 +  *  |   || including guard page
 +  *  |   ||
 +  * max paddr_end -->|---| <--|-- svm->vm's phys_addr_end
 +  *  | / ||
 +  *   paddr5 --->| guard ||
 +  *  | page  ||
 +  *  | / ||
 +  *   ---  <-/-- svm->vm->addr + svm->vm_size
 +  *
 +  * <1> If paddr == paddr1, then continue;
 +  * <2> If paddr == paddr2~paddr4 and paddr_end > phys_addr_end,
 +  * then continue;
 +  * <3> if paddr >= paddr5 then continue;
 +  */
 + vmoff = vm->phys_addr & ~PAGE_MASK;
 + phys_addr_end = vm->phys_addr + vm->size - PAGE_SIZE - vmoff;
 + paddr_end = paddr + size - offset;
 + if (__phys_to_pfn(vm->phys_addr) > __phys_to_pfn(paddr) ||
 + paddr_end - 1 > phys_addr_end - 1)
   continue;

   return svm;
 --
 1.8.4



Re: [PATCHv2 1/2] mm/vmalloc: Add IO mapping space reused interface support.

2014-05-14 Thread Richard Lee
On Thu, May 15, 2014 at 12:06 AM, Rob Herring  wrote:
> Adding Nico...
>
> On Wed, May 14, 2014 at 3:18 AM, Richard Lee  wrote:
>> For the IO mapping, the same physical address space maybe
>> mapped more than one time, for example, in some SoCs:
>>   - 0x20001000 ~ 0x20001400 --> 1KB for Dev1
>>   - 0x20001400 ~ 0x20001800 --> 1KB for Dev2
>>   and the page size is 4KB.
>>
>> Then both Dev1 and Dev2 will do ioremap operations, and the IO
>> vmalloc area's virtual address will be aligned down to 4KB, and
>> the size will be aligned up to 4KB. That's to say, only one
>> 4KB size's vmalloc area could contain Dev1 and Dev2 IO mapping area
>> at the same time.
>>
>> For this case, we can ioremap only one time, and the later ioremap
>> operation will just return the exist vmalloc area.
>
> We already have similar reuse tracking for the static mappings on ARM.
> I'm wondering if either that can be reused for ioremap as well or this
> can replace the static mapping tracking. It seems silly to have 2
> lists to search for finding an existing mapping.
>

Yes, it is.

Maybe there is one way to reuse the ARM's static mapping tracking, or one
new global way to replace it.



> But there is a fundamental problem with your approach. You do not and
> cannot check the memory type of the mapping. If you look at the static
> mapping code, it only reuses mappings if the memory type match. While
> having aliases with different memory types is bad on ARMv6+, I assume
> there are some cases that do that given the static mapping code
> handles that case. We would at least want to detect and warn if we
> didn't want to allow different attributes rather than just return a
> mapping with whatever attributes the first mapping used.
>

You are right; maybe an alternative is defining one callback for each
individual arch, or some other way.


Thanks very much for your comments; they are very important to me.

BRs,
Richard Lee


> Rob


[PATCHv2 2/2] ARM: ioremap: Add IO mapping space reused support.

2014-05-14 Thread Richard Lee
For the IO mapping, the same physical address space may be
mapped more than once, for example, in some SoCs:
  - 0x20001000 ~ 0x20001400 --> 1KB for Dev1
  - 0x20001400 ~ 0x20001800 --> 1KB for Dev2
  and the page size is 4KB.

Then both Dev1 and Dev2 will do ioremap operations, the IO
vmalloc area's virtual address will be aligned down to 4KB, and
the size will be aligned up to 4KB. That is to say, a single
4KB vmalloc area could contain both the Dev1 and Dev2 IO mapping
areas at the same time.

In this case, we can ioremap only once, and any later ioremap
operation will just return the existing vmalloc area.

This patch adds IO mapping space reuse support.

Signed-off-by: Richard Lee 
---
 arch/arm/mm/ioremap.c | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/arch/arm/mm/ioremap.c b/arch/arm/mm/ioremap.c
index f9c32ba..be69333 100644
--- a/arch/arm/mm/ioremap.c
+++ b/arch/arm/mm/ioremap.c
@@ -301,6 +301,12 @@ void __iomem * __arm_ioremap_pfn_caller(unsigned long pfn,
if (WARN_ON(pfn_valid(pfn)))
return NULL;
 
+   area = find_vm_area_paddr(paddr, size, &offset, VM_IOREMAP);
+   if (area) {
+   addr = (unsigned long)area->addr;
+   return (void __iomem *)(offset + addr);
+   }
+
area = get_vm_area_caller(size, VM_IOREMAP, caller);
if (!area)
return NULL;
-- 
1.8.4



[PATCHv2 1/2] mm/vmalloc: Add IO mapping space reused interface support.

2014-05-14 Thread Richard Lee
For the IO mapping, the same physical address space may be
mapped more than once, for example, in some SoCs:
  - 0x20001000 ~ 0x20001400 --> 1KB for Dev1
  - 0x20001400 ~ 0x20001800 --> 1KB for Dev2
  and the page size is 4KB.

Then both Dev1 and Dev2 will do ioremap operations, the IO
vmalloc area's virtual address will be aligned down to 4KB, and
the size will be aligned up to 4KB. That is to say, a single
4KB vmalloc area could contain both the Dev1 and Dev2 IO mapping
areas at the same time.

In this case, we can ioremap only once, and any later ioremap
operation will just return the existing vmalloc area.

Signed-off-by: Richard Lee 
---
 include/linux/vmalloc.h |  5 +++
 mm/vmalloc.c| 82 +
 2 files changed, 87 insertions(+)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 4b8a891..a53b70f 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -1,6 +1,7 @@
 #ifndef _LINUX_VMALLOC_H
 #define _LINUX_VMALLOC_H
 
+#include <linux/atomic.h>
 #include <linux/spinlock.h>
 #include <linux/init.h>
 #include <linux/list.h>
@@ -34,6 +35,7 @@ struct vm_struct {
struct page **pages;
unsigned intnr_pages;
phys_addr_t phys_addr;
+   atomic_tused;
const void  *caller;
 };
 
@@ -100,6 +102,9 @@ static inline size_t get_vm_area_size(const struct 
vm_struct *area)
return area->size - PAGE_SIZE;
 }
 
+extern struct vm_struct *find_vm_area_paddr(phys_addr_t paddr, size_t size,
+unsigned long *offset,
+unsigned long flags);
 extern struct vm_struct *get_vm_area(unsigned long size, unsigned long flags);
 extern struct vm_struct *get_vm_area_caller(unsigned long size,
unsigned long flags, const void 
*caller);
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index bf233b2..cf0093c 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1293,6 +1293,7 @@ static void setup_vmalloc_vm(struct vm_struct *vm, struct 
vmap_area *va,
vm->addr = (void *)va->va_start;
vm->size = va->va_end - va->va_start;
vm->caller = caller;
+   atomic_set(&vm->used, 1);
va->vm = vm;
va->flags |= VM_VM_AREA;
spin_unlock(_area_lock);
@@ -1383,6 +1384,84 @@ struct vm_struct *get_vm_area_caller(unsigned long size, 
unsigned long flags,
  NUMA_NO_NODE, GFP_KERNEL, caller);
 }
 
+static int vm_area_used_inc(struct vm_struct *area)
+{
+   if (!(area->flags & VM_IOREMAP))
+   return -EINVAL;
+
+   atomic_add(1, &area->used);
+
+   return atomic_read(&va->vm->used);
+}
+
+static int vm_area_used_dec(const void *addr)
+{
+   struct vmap_area *va;
+
+   va = find_vmap_area((unsigned long)addr);
+   if (!va || !(va->flags & VM_VM_AREA))
+   return 0;
+
+   if (!(va->vm->flags & VM_IOREMAP))
+   return 0;
+
+   atomic_sub(1, &va->vm->used);
+
+   return atomic_read(&va->vm->used);
+}
+
+/**
+ * find_vm_area_paddr  -  find a continuous kernel virtual area using the
+ * physical address.
+ * @paddr: base physical address
+ * @size:  size of the physical area range
+ * @offset:the start offset of the vm area
+ * @flags: %VM_IOREMAP for I/O mappings
+ *
+ * Search for the kernel VM area whose physical address starts at
+ * @paddr; if the existing VM area's size is large enough, return
+ * it with the 'used' counter increased, otherwise return NULL.
+ */
+struct vm_struct *find_vm_area_paddr(phys_addr_t paddr, size_t size,
+unsigned long *offset,
+unsigned long flags)
+{
+   struct vmap_area *va;
+   int off;
+
+   if (!(flags & VM_IOREMAP))
+   return NULL;
+
+   size = PAGE_ALIGN((paddr & ~PAGE_MASK) + size);
+
+   rcu_read_lock();
+   list_for_each_entry_rcu(va, &vmap_area_list, list) {
+   phys_addr_t phys_addr;
+
+   if (!va || !(va->flags & VM_VM_AREA) || !va->vm)
+   continue;
+
+   if (!(va->vm->flags & VM_IOREMAP))
+   continue;
+
+   phys_addr = va->vm->phys_addr;
+
+   off = (paddr & PAGE_MASK) - (phys_addr & PAGE_MASK);
+   if (off < 0)
+   continue;
+
+   if (off + size <= va->vm->size - PAGE_SIZE) {
+   *offset = off + (paddr & ~PAGE_MASK);
+   vm_area_used_inc(va->vm);
+   rcu_read_unlock();
+   return va->vm;
+   }
+   }
+   rcu_read_unlock();
+
+   return NULL;
+}
+
 /**
  * find_vm_area  -  find a continuous kernel virtual area
  * @addr:  base address
 @@ -1443,6 +1522,9

[PATCHv2 0/2] Add IO mapping space reused support

2014-05-14 Thread Richard Lee
Changes in V2:
 - Use atomic_t instead.
 - Revise some comment messages.


Richard Lee (2):
  mm/vmalloc: Add IO mapping space reused interface support.
  ARM: ioremap: Add IO mapping space reused support.

 arch/arm/mm/ioremap.c   |  6 
 include/linux/vmalloc.h |  5 +++
 mm/vmalloc.c| 82 +
 3 files changed, 93 insertions(+)

-- 
1.8.4




Re: [RFC][PATCH 2/2] ARM: ioremap: Add IO mapping space reused support.

2014-05-13 Thread Richard Lee
On Tue, May 13, 2014 at 4:43 PM, Arnd Bergmann  wrote:
> On Tuesday 13 May 2014 09:45:08 Richard Lee wrote:
>> > On Mon, May 12, 2014 at 3:51 PM, Arnd Bergmann  wrote:
>> > On Monday 12 May 2014 10:19:55 Richard Lee wrote:
>> >> For the IO mapping, for the same physical address space maybe
>> >> mapped more than one time, for example, in some SoCs:
>> >> 0x2000 ~ 0x20001000: are global control IO physical map,
>> >> and this range space will be used by many drivers.
>> >> And then if each driver will do the same ioremap operation, we
>> >> will waste to much malloc virtual spaces.
>> >>
>> >> This patch add IO mapping space reused support.
>> >>
>> >> Signed-off-by: Richard Lee 
>> >
>> > What happens if the first driver then unmaps the area?
>> >
>>
>> If the first driver will unmap the area, it shouldn't do any thing
>> except decreasing the 'used' counter.
>
> Ah, for some reason I didn't see your first patch that introduces
> that counter.
>

It's "[PATCH 1/2] mm/vmalloc: Add IO mapping space reused".

Thanks,

BRs
Richard



> Arnd


Re: [RFC][PATCH 1/2] mm/vmalloc: Add IO mapping space reused interface.

2014-05-13 Thread Richard Lee
On Tue, May 13, 2014 at 11:13 AM, Rob Herring  wrote:
> On Sun, May 11, 2014 at 9:19 PM, Richard Lee  wrote:
>> For the IO mapping, for the same physical address space maybe
>> mapped more than one time, for example, in some SoCs:
>> 0x2000 ~ 0x20001000: are global control IO physical map,
>> and this range space will be used by many drivers.
>
> What address or who the user is isn't really relevant.
>
>> And then if each driver will do the same ioremap operation, we
>> will waste to much malloc virtual spaces.
>
> s/malloc/vmalloc/
>
>>
>> This patch add the IO mapping space reusing interface:
>> - find_vm_area_paddr: used to find the exsit vmalloc area using
>
> s/exsit/exist/
>

Yes, see the next version.

[...]
>> +{
>> +   struct vmap_area *va;
>> +
>> +   va = find_vmap_area((unsigned long)addr);
>> +   if (!va || !(va->flags & VM_VM_AREA) || !va->vm)
>> +   return 1;
>> +
>> +   if (va->vm->used <= 1)
>> +   return 1;
>> +
>> +   --va->vm->used;
>
> What lock protects this? You should use atomic ops here.
>

Yes, it is.


[...]
>> +   if (!(flags & VM_IOREMAP))
>> +   return NULL;
>> +
>> +   rcu_read_lock();
>> +   list_for_each_entry_rcu(va, _area_list, list) {
>> +   phys_addr_t phys_addr;
>> +
>> +   if (!va || !(va->flags & VM_VM_AREA) || !va->vm)
>> +   continue;
>> +
>> +   phys_addr = va->vm->phys_addr;
>> +
>> +   if (paddr < phys_addr || paddr + size > phys_addr + 
>> va->vm->size)
>> +   continue;
>> +
>> +   *offset = paddr - phys_addr;
>> +
>> +   if (va->vm->flags & VM_IOREMAP && va->vm->size >= size) {
>> +   va->vm->used++;
>
> What lock protects this? It looks like you are modifying this with
> only a rcu reader lock.

I'll try to use the proper lock ops for this later.



Thanks very much,

Richard


>
>> +   rcu_read_unlock();
>> +   return va->vm;
>> +   }
>> +   }
>> +   rcu_read_unlock();
>> +
>> +   return NULL;
>> +}
>> +
>>  /**
>>   * find_vm_area  -  find a continuous kernel virtual area
>>   * @addr:  base address
>> --
>> 1.8.4
>>
>>



Re: [RFC][PATCH 2/2] ARM: ioremap: Add IO mapping space reused support.

2014-05-13 Thread Richard Lee
On Tue, May 13, 2014 at 4:43 PM, Arnd Bergmann <a...@arndb.de> wrote:
> On Tuesday 13 May 2014 09:45:08 Richard Lee wrote:
>> On Mon, May 12, 2014 at 3:51 PM, Arnd Bergmann <a...@arndb.de> wrote:
>>> On Monday 12 May 2014 10:19:55 Richard Lee wrote:
>>>> For IO mappings, the same physical address space may be
>>>> mapped more than once; for example, in some SoCs,
>>>> 0x2000 ~ 0x20001000 is a global control IO region
>>>> used by many drivers.
>>>> If every driver then does its own ioremap() of it, we
>>>> waste too much vmalloc virtual address space.
>>>>
>>>> This patch adds IO mapping space reuse support.
>>>>
>>>> Signed-off-by: Richard Lee <superl...@gmail.com>
>>>
>>> What happens if the first driver then unmaps the area?
>>>
>>
>> If the first driver unmaps the area, it shouldn't do anything
>> except decrease the 'used' counter.
>
> Ah, for some reason I didn't see your first patch that introduces
> that counter.


It's "[PATCH 1/2] mm/vmalloc: Add IO mapping space reused interface."

Thanks,

BRs
Richard



> Arnd


Re: [RFC][PATCH 2/2] ARM: ioremap: Add IO mapping space reused support.

2014-05-12 Thread Richard Lee
> On Mon, May 12, 2014 at 3:51 PM, Arnd Bergmann  wrote:
> On Monday 12 May 2014 10:19:55 Richard Lee wrote:
>> For the IO mapping, for the same physical address space maybe
>> mapped more than one time, for example, in some SoCs:
>> 0x2000 ~ 0x20001000: are global control IO physical map,
>> and this range space will be used by many drivers.
>> And then if each driver will do the same ioremap operation, we
>> will waste to much malloc virtual spaces.
>>
>> This patch add IO mapping space reused support.
>>
>> Signed-off-by: Richard Lee 
>
> What happens if the first driver then unmaps the area?
>

If the first driver unmaps the area, it shouldn't do anything
except decrease the 'used' counter.

Thanks,

BRs
Richard Lee


> Arnd




[RFC][PATCH 2/2] ARM: ioremap: Add IO mapping space reused support.

2014-05-11 Thread Richard Lee
For IO mappings, the same physical address space may be
mapped more than once; for example, in some SoCs,
0x2000 ~ 0x20001000 is a global control IO region
used by many drivers.
If every driver then does its own ioremap() of it, we
waste too much vmalloc virtual address space.

This patch adds IO mapping space reuse support.

Signed-off-by: Richard Lee 
---
 arch/arm/mm/ioremap.c | 11 ++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/arch/arm/mm/ioremap.c b/arch/arm/mm/ioremap.c
index f9c32ba..26a3744 100644
--- a/arch/arm/mm/ioremap.c
+++ b/arch/arm/mm/ioremap.c
@@ -260,7 +260,7 @@ void __iomem * __arm_ioremap_pfn_caller(unsigned long pfn,
 {
const struct mem_type *type;
int err;
-   unsigned long addr;
+   unsigned long addr, off;
struct vm_struct *area;
phys_addr_t paddr = __pfn_to_phys(pfn);
 
@@ -301,6 +301,12 @@ void __iomem * __arm_ioremap_pfn_caller(unsigned long pfn,
if (WARN_ON(pfn_valid(pfn)))
return NULL;
 
+   area = find_vm_area_paddr(paddr, size, &off, VM_IOREMAP);
+   if (area) {
+   addr = (unsigned long)area->addr;
+   return (void __iomem *)(offset + off + addr);
+   }
+
area = get_vm_area_caller(size, VM_IOREMAP, caller);
if (!area)
return NULL;
@@ -410,6 +416,9 @@ void __iounmap(volatile void __iomem *io_addr)
if (svm)
return;
 
+   if (!vm_area_is_aready_to_free((unsigned long)addr))
+   return;
+
 #if !defined(CONFIG_SMP) && !defined(CONFIG_ARM_LPAE)
{
struct vm_struct *vm;
-- 
1.8.4



[RFC][PATCH 0/2] Add IO mapping space reused support

2014-05-11 Thread Richard Lee

Richard Lee (2):
  mm/vmalloc: Add IO mapping space reused interface.
  ARM: ioremap: Add IO mapping space reused support.

 arch/arm/mm/ioremap.c   | 11 -
 include/linux/vmalloc.h |  5 
 mm/vmalloc.c| 63 +
 3 files changed, 78 insertions(+), 1 deletion(-)

-- 
1.8.4



[RFC][PATCH 1/2] mm/vmalloc: Add IO mapping space reused interface.

2014-05-11 Thread Richard Lee
For IO mappings, the same physical address space may be
mapped more than once; for example, in some SoCs,
0x2000 ~ 0x20001000 is a global control IO region
used by many drivers.
If every driver then does its own ioremap() of it, we
waste too much vmalloc virtual address space.

This patch adds the IO mapping space reuse interface:
- find_vm_area_paddr: find an existing vmalloc area by
  its IO physical address.
- vm_area_is_aready_to_free: called before vfreeing an
  IO-mapped area, to check whether the area is still in
  use by more than one consumer.

Signed-off-by: Richard Lee 
---
 include/linux/vmalloc.h |  5 
 mm/vmalloc.c| 63 +
 2 files changed, 68 insertions(+)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 4b8a891..2b811f6 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -34,6 +34,7 @@ struct vm_struct {
struct page **pages;
unsigned intnr_pages;
phys_addr_t phys_addr;
+   unsigned intused;
const void  *caller;
 };
 
@@ -100,6 +101,10 @@ static inline size_t get_vm_area_size(const struct vm_struct *area)
return area->size - PAGE_SIZE;
 }
 
+extern int vm_area_is_aready_to_free(phys_addr_t addr);
+struct vm_struct *find_vm_area_paddr(phys_addr_t paddr, size_t size,
+unsigned long *offset,
+unsigned long flags);
 extern struct vm_struct *get_vm_area(unsigned long size, unsigned long flags);
 extern struct vm_struct *get_vm_area_caller(unsigned long size,
					unsigned long flags, const void *caller);
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index bf233b2..f75b7b3 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1293,6 +1293,7 @@ static void setup_vmalloc_vm(struct vm_struct *vm, struct vmap_area *va,
vm->addr = (void *)va->va_start;
vm->size = va->va_end - va->va_start;
vm->caller = caller;
+   vm->used = 1;
va->vm = vm;
va->flags |= VM_VM_AREA;
	spin_unlock(&vmap_area_lock);
@@ -1383,6 +1384,68 @@ struct vm_struct *get_vm_area_caller(unsigned long size, unsigned long flags,
  NUMA_NO_NODE, GFP_KERNEL, caller);
 }
 
+int vm_area_is_aready_to_free(phys_addr_t addr)
+{
+   struct vmap_area *va;
+
+   va = find_vmap_area((unsigned long)addr);
+   if (!va || !(va->flags & VM_VM_AREA) || !va->vm)
+   return 1;
+
+   if (va->vm->used <= 1)
+   return 1;
+
+   --va->vm->used;
+
+   return 0;
+}
+
+/**
+ * find_vm_area_paddr  -  find a continuous kernel virtual area by its
+ * physical address.
+ * @paddr: base physical address
+ * @size:  size of the physical area range
+ * @offset: the start offset within the vm area
+ * @flags: %VM_IOREMAP for I/O mappings
+ *
+ * Search for the kernel VM area whose physical address starts at @paddr;
+ * if the existing VM area's size is large enough, return it, otherwise
+ * return NULL.
+ */
+struct vm_struct *find_vm_area_paddr(phys_addr_t paddr, size_t size,
+unsigned long *offset,
+unsigned long flags)
+{
+   struct vmap_area *va;
+
+   if (!(flags & VM_IOREMAP))
+   return NULL;
+
+   rcu_read_lock();
+   list_for_each_entry_rcu(va, &vmap_area_list, list) {
+   phys_addr_t phys_addr;
+
+   if (!va || !(va->flags & VM_VM_AREA) || !va->vm)
+   continue;
+
+   phys_addr = va->vm->phys_addr;
+
+   if (paddr < phys_addr || paddr + size > phys_addr + va->vm->size)
+   continue;
+
+   *offset = paddr - phys_addr;
+
+   if (va->vm->flags & VM_IOREMAP && va->vm->size >= size) {
+   va->vm->used++;
+   rcu_read_unlock();
+   return va->vm;
+   }
+   }
+   rcu_read_unlock();
+
+   return NULL;
+}
+
 /**
  * find_vm_area  -  find a continuous kernel virtual area
  * @addr:  base address
-- 
1.8.4






