Re: [PATCH v3 07/33] nds32: MMU initialization

2017-12-18 Thread Greentime Hu
Hi, Guo Ren:

2017-12-18 20:22 GMT+08:00 Guo Ren :
> On Mon, Dec 18, 2017 at 07:21:30PM +0800, Greentime Hu wrote:
>> Hi, Guo Ren:
>>
>> 2017-12-18 17:08 GMT+08:00 Guo Ren :
>> > Hi Greentime,
>> >
>> > On Fri, Dec 08, 2017 at 05:11:50PM +0800, Greentime Hu wrote:
>> > [...]
>> >>
>> >> diff --git a/arch/nds32/mm/highmem.c b/arch/nds32/mm/highmem.c
>> > [...]
>> >> +void *kmap(struct page *page)
>> >> +{
>> >> + unsigned long vaddr;
>> >> + might_sleep();
>> >> + if (!PageHighMem(page))
>> >> + return page_address(page);
>> >> + vaddr = (unsigned long)kmap_high(page);
>> > The CPU MMU TLB entry should be invalidated here, or invalidated in
>> > set_pte().
>> >
>> > e.g.:
>> > vaddr0 = kmap(page0)
>> > *vaddr0 = val0   // TLB miss, the hardware refills the MMU TLB for vaddr0
>> > kunmap(page0)
>> > vaddr1 = kmap(page1)   // most likely vaddr1 == vaddr0
>> > val = *vaddr1;   // no TLB miss, so this reads page0's value, not page1's,
>> > because the stale vaddr0 entry is still left in the CPU MMU TLB.
>> >
>>
>> Thanks.
>> I will add __nds32__tlbop_inv(vaddr); to invalidate this mapping
>> before returning vaddr.
>
> Sorry, perhaps I'm wrong. See
> kmap->kmap_high->map_new_virtual->get_next_pkmap_nr(color).
>
> It seems pkmap keeps handing out increasing vaddrs until
> no_more_pkmaps(), and only then calls flush_all_zero_pkmaps().
> Only kmap_atomic needs the invalidation, and you have already done that.

Thanks for double checking this case. :)
As you said, the TLB is flushed in the generic code flow.
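
To make that concrete, here is a toy userspace model (plain C, not kernel
code; the pkmap window size and names are only illustrative) of why kmap()
can rely on the generic flush while kmap_atomic() must invalidate on every
call:

/* Toy model: kmap() hands out pkmap slots in increasing order and the
 * generic code only flushes the TLB when the window wraps
 * (flush_all_zero_pkmaps() in mm/highmem.c), while kmap_atomic() reuses
 * the same fixmap slot immediately and therefore has to invalidate the
 * stale translation itself.
 */
#include <stdio.h>

#define LAST_PKMAP 512			/* pkmap window size, illustrative only */

static unsigned int last_pkmap_nr;

/* kmap(): a fresh slot each call; reuse only happens after a wrap + flush */
static unsigned int model_kmap(void)
{
	last_pkmap_nr = (last_pkmap_nr + 1) & (LAST_PKMAP - 1);
	if (last_pkmap_nr == 0)
		printf("window wrapped: flush the whole pkmap TLB range\n");
	return last_pkmap_nr;
}

/* kmap_atomic(): the same fixmap slot for a given type/CPU every time */
static unsigned int model_kmap_atomic(void)
{
	printf("fixmap slot reused: invalidate the stale TLB entry now\n");
	return 0;
}

int main(void)
{
	for (int i = 0; i < 3 * LAST_PKMAP; i++)
		model_kmap();
	model_kmap_atomic();
	return 0;
}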


Re: [PATCH v3 07/33] nds32: MMU initialization

2017-12-18 Thread Guo Ren
On Mon, Dec 18, 2017 at 07:21:30PM +0800, Greentime Hu wrote:
> Hi, Guo Ren:
> 
> 2017-12-18 17:08 GMT+08:00 Guo Ren :
> > Hi Greentime,
> >
> > On Fri, Dec 08, 2017 at 05:11:50PM +0800, Greentime Hu wrote:
> > [...]
> >>
> >> diff --git a/arch/nds32/mm/highmem.c b/arch/nds32/mm/highmem.c
> > [...]
> >> +void *kmap(struct page *page)
> >> +{
> >> + unsigned long vaddr;
> >> + might_sleep();
> >> + if (!PageHighMem(page))
> >> + return page_address(page);
> >> + vaddr = (unsigned long)kmap_high(page);
> > The CPU MMU TLB entry should be invalidated here, or invalidated in
> > set_pte().
> >
> > e.g.:
> > vaddr0 = kmap(page0)
> > *vaddr0 = val0   // TLB miss, the hardware refills the MMU TLB for vaddr0
> > kunmap(page0)
> > vaddr1 = kmap(page1)   // most likely vaddr1 == vaddr0
> > val = *vaddr1;   // no TLB miss, so this reads page0's value, not page1's,
> > because the stale vaddr0 entry is still left in the CPU MMU TLB.
> >
> 
> Thanks.
> I will add __nds32__tlbop_inv(vaddr); to invalidate this mapping
> before returning vaddr.

Sorry, perhaps I'm wrong. See
kmap->kmap_high->map_new_virtual->get_next_pkmap_nr(color).

It seems pkmap keeps handing out increasing vaddrs until
no_more_pkmaps(), and only then calls flush_all_zero_pkmaps().
Only kmap_atomic needs the invalidation, and you have already done that.

But I don't know why mips needs flush_tlb_one() in
arch/mips/mm/highmem.c:kmap(). VIPT aliasing? But the generic kmap code
already provides get_pkmap_color() to handle aliasing.
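
For reference, the mips version is roughly this (quoted from memory, may
not be exact):

/* arch/mips/mm/highmem.c, roughly (from memory, may not be exact) */
void *kmap(struct page *page)
{
	void *addr;

	might_sleep();
	if (!PageHighMem(page))
		return page_address(page);
	addr = kmap_high(page);
	flush_tlb_one((unsigned long)addr);	/* this is the call I do not understand */

	return addr;
}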

Best Regards
 Guo Ren


Re: [PATCH v3 07/33] nds32: MMU initialization

2017-12-18 Thread Greentime Hu
Hi, Guo Ren:

2017-12-18 17:08 GMT+08:00 Guo Ren :
> Hi Greentime,
>
> On Fri, Dec 08, 2017 at 05:11:50PM +0800, Greentime Hu wrote:
> [...]
>>
>> diff --git a/arch/nds32/mm/highmem.c b/arch/nds32/mm/highmem.c
> [...]
>> +void *kmap(struct page *page)
>> +{
>> + unsigned long vaddr;
>> + might_sleep();
>> + if (!PageHighMem(page))
>> + return page_address(page);
>> + vaddr = (unsigned long)kmap_high(page);
> The CPU MMU TLB entry should be invalidated here, or invalidated in
> set_pte().
>
> e.g.:
> vaddr0 = kmap(page0)
> *vaddr0 = val0   // TLB miss, the hardware refills the MMU TLB for vaddr0
> kunmap(page0)
> vaddr1 = kmap(page1)   // most likely vaddr1 == vaddr0
> val = *vaddr1;   // no TLB miss, so this reads page0's value, not page1's,
> because the stale vaddr0 entry is still left in the CPU MMU TLB.
>

Thanks.
I will add __nds32__tlbop_inv(vaddr); to invalidate this mapping
before returning vaddr.
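
Something like this (untested sketch of the change I have in mind):

/* untested sketch, arch/nds32/mm/highmem.c */
void *kmap(struct page *page)
{
	unsigned long vaddr;

	might_sleep();
	if (!PageHighMem(page))
		return page_address(page);
	vaddr = (unsigned long)kmap_high(page);
	__nds32__tlbop_inv(vaddr);	/* drop any stale entry left for this vaddr */
	return (void *)vaddr;
}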


Re: [PATCH v3 07/33] nds32: MMU initialization

2017-12-18 Thread Guo Ren
Hi Greentime,

On Fri, Dec 08, 2017 at 05:11:50PM +0800, Greentime Hu wrote:
[...]
> 
> diff --git a/arch/nds32/mm/highmem.c b/arch/nds32/mm/highmem.c
[...]
> +void *kmap(struct page *page)
> +{
> + unsigned long vaddr;
> + might_sleep();
> + if (!PageHighMem(page))
> + return page_address(page);
> + vaddr = (unsigned long)kmap_high(page);
The CPU MMU TLB entry should be invalidated here, or invalidated in
set_pte().

e.g.:
vaddr0 = kmap(page0)
*vaddr0 = val0   // TLB miss, the hardware refills the MMU TLB for vaddr0
kunmap(page0)
vaddr1 = kmap(page1)   // most likely vaddr1 == vaddr0
val = *vaddr1;   // no TLB miss, so this reads page0's value, not page1's,
because the stale vaddr0 entry is still left in the CPU MMU TLB.
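
To illustrate, the same scenario as a small self-contained model (plain
userspace C, not kernel code; a single cached pointer stands in for the
CPU MMU TLB entry):

/* Toy model of the stale-TLB read described above. */
#include <stdio.h>

static int page0 = 0x11, page1 = 0x22;

/* the "TLB": remembers which page the shared vaddr last translated to */
static int *tlb_entry;

static int *kmap_model(int *page)		/* returns the shared vaddr */
{
	if (!tlb_entry)				/* TLB miss: hardware refill */
		tlb_entry = page;
	return tlb_entry;			/* TLB hit: may be stale */
}

static void tlb_invalidate(void)
{
	tlb_entry = NULL;
}

int main(void)
{
	int *vaddr0 = kmap_model(&page0);
	printf("read via vaddr0: 0x%x\n", *vaddr0);	/* 0x11, correct */

	/* kunmap(page0); kmap(page1) reuses the same vaddr ... */
	int *vaddr1 = kmap_model(&page1);
	printf("read via vaddr1: 0x%x\n", *vaddr1);	/* still 0x11: stale */

	tlb_invalidate();				/* the missing invalidate */
	int *vaddr2 = kmap_model(&page1);
	printf("after invalidate: 0x%x\n", *vaddr2);	/* 0x22, correct */
	return 0;
}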

Best Regards
 Guo Ren



[PATCH v3 07/33] nds32: MMU initialization

2017-12-08 Thread Greentime Hu
From: Greentime Hu 

This patch adds memory initialization and highmem support.

Signed-off-by: Vincent Chen 
Signed-off-by: Greentime Hu 
---
 arch/nds32/mm/highmem.c  |   92 +++
 arch/nds32/mm/init.c |  290 ++
 arch/nds32/mm/mm-nds32.c |  103 
 3 files changed, 485 insertions(+)
 create mode 100644 arch/nds32/mm/highmem.c
 create mode 100644 arch/nds32/mm/init.c
 create mode 100644 arch/nds32/mm/mm-nds32.c
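
(Not part of the patch: a minimal, hypothetical example of how a caller
would use the kmap_atomic()/kunmap_atomic() pair implemented below, e.g.
to zero a possibly-highmem page.)

/* Hypothetical usage example, not part of this patch. */
#include <linux/highmem.h>
#include <linux/mm.h>
#include <linux/string.h>

static void zero_highmem_page_example(struct page *page)
{
	void *vaddr = kmap_atomic(page);	/* temporary, non-sleeping mapping */

	memset(vaddr, 0, PAGE_SIZE);		/* the page is now addressable */
	kunmap_atomic(vaddr);			/* unmap; __kunmap_atomic() below does the work */
}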

diff --git a/arch/nds32/mm/highmem.c b/arch/nds32/mm/highmem.c
new file mode 100644
index 000..d5101bd
--- /dev/null
+++ b/arch/nds32/mm/highmem.c
@@ -0,0 +1,92 @@
+/*
+ * Copyright (C) 2005-2017 Andes Technology Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+void *kmap(struct page *page)
+{
+   unsigned long vaddr;
+   might_sleep();
+   if (!PageHighMem(page))
+   return page_address(page);
+   vaddr = (unsigned long)kmap_high(page);
+   return (void *)vaddr;
+}
+
+EXPORT_SYMBOL(kmap);
+
+void kunmap(struct page *page)
+{
+   BUG_ON(in_interrupt());
+   if (!PageHighMem(page))
+   return;
+   kunmap_high(page);
+}
+
+EXPORT_SYMBOL(kunmap);
+
+void *kmap_atomic(struct page *page)
+{
+   unsigned int idx;
+   unsigned long vaddr, pte;
+   int type;
+   pte_t *ptep;
+
+   preempt_disable();
+   pagefault_disable();
+   if (!PageHighMem(page))
+   return page_address(page);
+
+   type = kmap_atomic_idx_push();
+
+   idx = type + KM_TYPE_NR * smp_processor_id();
+   vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
+   pte = (page_to_pfn(page) << PAGE_SHIFT) | (PAGE_KERNEL);
+   ptep = pte_offset_kernel(pmd_off_k(vaddr), vaddr);
+   set_pte(ptep, pte);
+
+   __nds32__tlbop_inv(vaddr);
+   __nds32__mtsr_dsb(vaddr, NDS32_SR_TLB_VPN);
+   __nds32__tlbop_rwr(pte);
+   __nds32__isb();
+   return (void *)vaddr;
+}
+
+EXPORT_SYMBOL(kmap_atomic);
+
+void __kunmap_atomic(void *kvaddr)
+{
+   if (kvaddr >= (void *)FIXADDR_START) {
+   unsigned long vaddr = (unsigned long)kvaddr;
+   pte_t *ptep;
+   kmap_atomic_idx_pop();
+   __nds32__tlbop_inv(vaddr);
+   __nds32__isb();
+   ptep = pte_offset_kernel(pmd_off_k(vaddr), vaddr);
+   set_pte(ptep, 0);
+   }
+   pagefault_enable();
+   preempt_enable();
+}
+
+EXPORT_SYMBOL(__kunmap_atomic);
diff --git a/arch/nds32/mm/init.c b/arch/nds32/mm/init.c
new file mode 100644
index 000..05e072a
--- /dev/null
+++ b/arch/nds32/mm/init.c
@@ -0,0 +1,290 @@
+/*
+ * Copyright (C) 1995-2005 Russell King
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2013-2017 Andes Technology Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+
+DEFINE_PER_CPU(struct mmu_gather, mmu_gathers);
+DEFINE_SPINLOCK(anon_alias_lock);
+extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
+extern unsigned long phys_initrd_start;
+extern unsigned long phys_initrd_size;
+
+/*
+ * empty_zero_page is a special page that is used for
+ * zero-initialized data and COW.
+ */
+struct page *empty_zero_page;
+
+static void __init zone_sizes_init(void)
+{
+   unsigned long zones_size[MAX_NR_ZONES];
+
+   /* Clear the zone sizes */
+   memset(zones_size, 0, sizeof(zones_size));
+
+   zones_size[ZONE_NORMAL] = max_low_pfn;
+#ifdef CONFIG_HIGHMEM
+   zones_size[ZONE_HIGHMEM] = max_pfn;
+#endif
+   free_area_init(zones_size);
+
+}
+
+/*
+ * Map all physical memory under high_memory into kernel's address space.
+ *
+ * This is explicitly