Re: [PATCH 04/10] 8xx: Always pin kernel instruction TLB

2009-11-14 Thread Roland Dreier

 > > How to make better use of the remaining ITLB slots is tricky.
 > > Somehow one would want to map at least one to modules, but I cannot see how.
 > 
 > No. If you use modules, you pay the price. Sane embedded solutions
 > running in "tight" environments don't use modules :-) No point pinning
 > TLB entries on the vmalloc space, really.

Long ago (2.4 days I think) when using modules on ppc 4xx we hacked the
module_alloc function (or whatever it was called back then) to allocate
space in the kernel pinned TLB instead of using vmalloc.  This gave something
like a 2x speedup for module code, since the 4xx TLB is so small and the
miss handling is so expensive.  I assume it should still be possible to
do a similar hack with current kernels.
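A minimal sketch of that kind of module_alloc() override, assuming a region already covered by a pinned kernel TLB entry (the names pinned_pool and module_alloc_pinned are made up for illustration; a real hook would replace the arch's module_alloc() and fall back to vmalloc() when the pool runs out):

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical pool carved out of space covered by a pinned kernel
 * TLB entry; a real implementation would reserve this at boot. */
#define PINNED_POOL_SIZE (256 * 1024)

static char pinned_pool[PINNED_POOL_SIZE];
static size_t pinned_used;

/* Bump allocator standing in for module_alloc(): hand out 16-byte
 * aligned chunks of the pinned region, NULL when exhausted (the
 * caller would then fall back to plain vmalloc()). */
void *module_alloc_pinned(size_t size)
{
	void *p;

	size = (size + 15) & ~(size_t)15;
	if (pinned_used + size > PINNED_POOL_SIZE)
		return NULL;
	p = pinned_pool + pinned_used;
	pinned_used += size;
	return p;
}
```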

 - R.
___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev


Re: [PATCH 04/10] 8xx: Always pin kernel instruction TLB

2009-11-14 Thread Benjamin Herrenschmidt
On Sat, 2009-11-14 at 20:08 +0100, Joakim Tjernlund wrote:
> Dan Malek  wrote on 14/11/2009 19:08:43:
> > On Nov 14, 2009, at 2:42 AM, Joakim Tjernlund wrote:
> >
> > > . Avoid this by always pinning
> > > kernel instruction TLB space.
> >
> > You may as well map the data space, too, since you have
> > reserved the entries.  Take advantage of that performance.
> > Also, some processor variants have very few TLB entries,
> > and may only reserve two entries (although the flag says
> > reserve 4).  Ensure there are sufficient resources to do
> > what you want.  This is the reason the option is configurable.
> 
> Scott had some concerns about pinning the data space too. That
> is why I left the data TLB pinning behind the config option.
> 
> How to make better use of the remaining ITLB slots is tricky.
> Somehow one would want to map at least one to modules, but I cannot see how.

No. If you use modules, you pay the price. Sane embedded solutions
running in "tight" environments don't use modules :-) No point pinning
TLB entries on the vmalloc space, really.

What -might- be more useful is to look at Grant's work on re-doing the
early ioremap, providing a way to do what the old
io_block_mapping() did, but with dynamically chosen virtual addresses,
to have a pinned entry covering most common IOs.

Cheers,
Ben.




Re: [PATCH 04/10] 8xx: Always pin kernel instruction TLB

2009-11-14 Thread Joakim Tjernlund
Dan Malek  wrote on 14/11/2009 19:08:43:
> On Nov 14, 2009, at 2:42 AM, Joakim Tjernlund wrote:
>
> > . Avoid this by always pinning
> > kernel instruction TLB space.
>
> You may as well map the data space, too, since you have
> reserved the entries.  Take advantage of that performance.
> Also, some processor variants have very few TLB entries,
> and may only reserve two entries (although the flag says
> reserve 4).  Ensure there are sufficient resources to do
> what you want.  This is the reason the option is configurable.

Scott had some concerns about pinning the data space too. That
is why I left the data TLB pinning behind the config option.

How to make better use of the remaining ITLB slots is tricky.
Somehow one would want to map at least one to modules, but I cannot see how.

 Jocke



Re: [PATCH 04/10] 8xx: Always pin kernel instruction TLB

2009-11-14 Thread Dan Malek


On Nov 14, 2009, at 2:42 AM, Joakim Tjernlund wrote:


. Avoid this by always pinning
kernel instruction TLB space.


You may as well map the data space, too, since you have
reserved the entries.  Take advantage of that performance.
Also, some processor variants have very few TLB entries,
and may only reserve two entries (although the flag says
reserve 4).  Ensure there are sufficient resources to do
what you want.  This is the reason the option is configurable.

Thanks.

-- Dan



Re: [PATCH 0/8] 8xx: Misc fixes for buggy insn

2009-11-14 Thread Joakim Tjernlund
Scott Wood  wrote on 13/11/2009 20:25:48:
>
> Joakim Tjernlund wrote:
> > Anyhow, lets start simple and just do the pinned ITLB so the
> > new TLB code can be applied. Can you confirm this works for you?
>
> It works (after changing #ifdef 1 to #if 1).

OK, new series sent.

BTW, one can probably avoid the extra space taken by the workaround if the first
part (up to and including the insn check) is included in the DTLB Error handler
and the second half is put just before the . = 0x2000; you would have to
use the self-modifying variant, though.
Not something I am going to play with.

 Jocke



[PATCH] powerpc: kill unused swiotlb variable

2009-11-14 Thread FUJITA Tomonori
We can kill the unused swiotlb variable.

Signed-off-by: FUJITA Tomonori 
---
 arch/powerpc/kernel/dma-swiotlb.c |1 -
 1 files changed, 0 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/kernel/dma-swiotlb.c 
b/arch/powerpc/kernel/dma-swiotlb.c
index e96cbbd..59c9285 100644
--- a/arch/powerpc/kernel/dma-swiotlb.c
+++ b/arch/powerpc/kernel/dma-swiotlb.c
@@ -21,7 +21,6 @@
 #include 
 #include 
 
-int swiotlb __read_mostly;
 unsigned int ppc_swiotlb_enable;
 
 /*
-- 
1.5.6.5



[PATCH 09/10] 8xx: Remove DIRTY pte handling in DTLB Error.

2009-11-14 Thread Joakim Tjernlund
There is no need to set the DIRTY bit directly in DTLB Error.
Trap to do_page_fault() and let the generic MM code do the work.

Signed-off-by: Joakim Tjernlund 
---
 arch/powerpc/kernel/head_8xx.S |   96 
 1 files changed, 0 insertions(+), 96 deletions(-)

diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
index 08cd2a3..c754ea6 100644
--- a/arch/powerpc/kernel/head_8xx.S
+++ b/arch/powerpc/kernel/head_8xx.S
@@ -500,102 +500,6 @@ DataTLBError:
cmpwi   cr0, r10, 0x00f0
beq-FixupDAR/* must be a buggy dcbX, icbi insn. */
 DARFixed:/* Return from dcbx instruction bug workaround, r10 holds value of 
DAR */
-   mfspr   r11, SPRN_DSISR
-   /* As the DAR fixup may clear store we may have all 3 states zero.
-* Make sure only 0x0200(store) falls down into DIRTY handling
-*/
-   andis.  r11, r11, 0x4a00/* !translation, protection or store */
-   srwir11, r11, 16
-   cmpwi   cr0, r11, 0x0200/* just store ? */
-   bne 2f
-   /* Only Change bit left now, do it here as it is faster
-* than trapping to the C fault handler.
-   */
-
-   /* The EA of a data TLB miss is automatically stored in the MD_EPN
-* register.  The EA of a data TLB error is automatically stored in
-* the DAR, but not the MD_EPN register.  We must copy the 20 most
-* significant bits of the EA from the DAR to MD_EPN before we
-* start walking the page tables.  We also need to copy the CASID
-* value from the M_CASID register.
-* Addendum:  The EA of a data TLB error is _supposed_ to be stored
-* in DAR, but it seems that this doesn't happen in some cases, such
-* as when the error is due to a dcbi instruction to a page with a
-* TLB that doesn't have the changed bit set.  In such cases, there
-* does not appear to be any way  to recover the EA of the error
-* since it is neither in DAR nor MD_EPN.  As a workaround, the
-* _PAGE_HWWRITE bit is set for all kernel data pages when the PTEs
-* are initialized in mapin_ram().  This will avoid the problem,
-* assuming we only use the dcbi instruction on kernel addresses.
-*/
-
-   /* DAR is in r10 already */
-   rlwinm  r11, r10, 0, 0, 19
-   ori r11, r11, MD_EVALID
-   mfspr   r10, SPRN_M_CASID
-   rlwimi  r11, r10, 0, 28, 31
-   DO_8xx_CPU6(0x3780, r3)
-   mtspr   SPRN_MD_EPN, r11
-
-   mfspr   r10, SPRN_M_TWB /* Get level 1 table entry address */
-
-   /* If we are faulting a kernel address, we have to use the
-* kernel page tables.
-*/
-   andi.   r11, r10, 0x0800
-   beq 3f
-   lis r11, swapper_pg_dir@h
-   ori r11, r11, swapper_pg_dir@l
-   rlwimi  r10, r11, 0, 2, 19
-3:
-   lwz r11, 0(r10) /* Get the level 1 entry */
-   rlwinm. r10, r11,0,0,19 /* Extract page descriptor page address */
-   beq 2f  /* If zero, bail */
-
-   /* We have a pte table, so fetch the pte from the table.
-*/
-   ori r11, r11, 1 /* Set valid bit in physical L2 page */
-   DO_8xx_CPU6(0x3b80, r3)
-   mtspr   SPRN_MD_TWC, r11/* Load pte table base address */
-   mfspr   r10, SPRN_MD_TWC/* and get the pte address */
-   lwz r10, 0(r10) /* Get the pte */
-   /* Insert the Guarded flag into the TWC from the Linux PTE.
-* It is bit 27 of both the Linux PTE and the TWC
-*/
-   rlwimi  r11, r10, 0, 27, 27
-   /* Insert the WriteThru flag into the TWC from the Linux PTE.
-* It is bit 25 in the Linux PTE and bit 30 in the TWC
-*/
-   rlwimi  r11, r10, 32-5, 30, 30
-   DO_8xx_CPU6(0x3b80, r3)
-   mtspr   SPRN_MD_TWC, r11
-   mfspr   r11, SPRN_MD_TWC/* get the pte address again */
-
-   ori r10, r10, _PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_HWWRITE
-   stw r10, 0(r11) /* and update pte in table */
-   xorir10, r10, _PAGE_RW  /* RW bit is inverted */
-
-   /* The Linux PTE won't go exactly into the MMU TLB.
-* Software indicator bits 22 and 28 must be clear.
-* Software indicator bits 24, 25, 26, and 27 must be
-* set.  All other Linux PTE bits control the behavior
-* of the MMU.
-*/
-   li  r11, 0x00f0
-   mtspr   SPRN_DAR,r11/* Tag DAR */
-   rlwimi  r10, r11, 0, 24, 28 /* Set 24-27, clear 28 */
-   DO_8xx_CPU6(0x3d80, r3)
-   mtspr   SPRN_MD_RPN, r10/* Update TLB entry */
-
-   mfspr   r10, SPRN_M_TW  /* Restore registers */
-   lwz r11, 0(r0)
-   mtcrr11
-   lwz r11, 4(r0)
-#ifdef CONFIG_8xx_CPU6
-   lwz r3, 8(r0)
-#endif
-   rfi
-2:
mfspr   r10, SPRN_M_TW  /* Restore registers */
lwz r11, 0(r0

[PATCH 08/10] 8xx: start using dcbX instructions in various copy routines

2009-11-14 Thread Joakim Tjernlund
Now that 8xx can fix up dcbX instructions, start using them
where possible, like every other PowerPC arch does.

Signed-off-by: Joakim Tjernlund 
---
 arch/powerpc/kernel/misc_32.S |   18 --
 arch/powerpc/lib/copy_32.S|   24 
 2 files changed, 0 insertions(+), 42 deletions(-)

diff --git a/arch/powerpc/kernel/misc_32.S b/arch/powerpc/kernel/misc_32.S
index 15f28e0..b92095e 100644
--- a/arch/powerpc/kernel/misc_32.S
+++ b/arch/powerpc/kernel/misc_32.S
@@ -495,15 +495,7 @@ _GLOBAL(clear_pages)
li  r0,PAGE_SIZE/L1_CACHE_BYTES
slw r0,r0,r4
mtctr   r0
-#ifdef CONFIG_8xx
-   li  r4, 0
-1: stw r4, 0(r3)
-   stw r4, 4(r3)
-   stw r4, 8(r3)
-   stw r4, 12(r3)
-#else
 1: dcbz0,r3
-#endif
addir3,r3,L1_CACHE_BYTES
bdnz1b
blr
@@ -528,15 +520,6 @@ _GLOBAL(copy_page)
addir3,r3,-4
addir4,r4,-4
 
-#ifdef CONFIG_8xx
-   /* don't use prefetch on 8xx */
-   li  r0,4096/L1_CACHE_BYTES
-   mtctr   r0
-1: COPY_16_BYTES
-   bdnz1b
-   blr
-
-#else  /* not 8xx, we can prefetch */
li  r5,4
 
 #if MAX_COPY_PREFETCH > 1
@@ -577,7 +560,6 @@ _GLOBAL(copy_page)
li  r0,MAX_COPY_PREFETCH
li  r11,4
b   2b
-#endif /* CONFIG_8xx */
 
 /*
  * void atomic_clear_mask(atomic_t mask, atomic_t *addr)
diff --git a/arch/powerpc/lib/copy_32.S b/arch/powerpc/lib/copy_32.S
index c657de5..74a7f41 100644
--- a/arch/powerpc/lib/copy_32.S
+++ b/arch/powerpc/lib/copy_32.S
@@ -98,20 +98,7 @@ _GLOBAL(cacheable_memzero)
bdnz4b
 3: mtctr   r9
li  r7,4
-#if !defined(CONFIG_8xx)
 10:dcbzr7,r6
-#else
-10:stw r4, 4(r6)
-   stw r4, 8(r6)
-   stw r4, 12(r6)
-   stw r4, 16(r6)
-#if CACHE_LINE_SIZE >= 32
-   stw r4, 20(r6)
-   stw r4, 24(r6)
-   stw r4, 28(r6)
-   stw r4, 32(r6)
-#endif /* CACHE_LINE_SIZE */
-#endif
addir6,r6,CACHELINE_BYTES
bdnz10b
clrlwi  r5,r8,32-LG_CACHELINE_BYTES
@@ -200,9 +187,7 @@ _GLOBAL(cacheable_memcpy)
mtctr   r0
beq 63f
 53:
-#if !defined(CONFIG_8xx)
dcbzr11,r6
-#endif
COPY_16_BYTES
 #if L1_CACHE_BYTES >= 32
COPY_16_BYTES
@@ -356,14 +341,6 @@ _GLOBAL(__copy_tofrom_user)
li  r11,4
beq 63f
 
-#ifdef CONFIG_8xx
-   /* Don't use prefetch on 8xx */
-   mtctr   r0
-   li  r0,0
-53:COPY_16_BYTES_WITHEX(0)
-   bdnz53b
-
-#else /* not CONFIG_8xx */
/* Here we decide how far ahead to prefetch the source */
li  r3,4
cmpwi   r0,1
@@ -416,7 +393,6 @@ _GLOBAL(__copy_tofrom_user)
li  r3,4
li  r7,0
bne 114b
-#endif /* CONFIG_8xx */
 
 63:srwi.   r0,r5,2
mtctr   r0
-- 
1.6.4.4



[PATCH 05/10] 8xx: Fixup DAR from buggy dcbX instructions.

2009-11-14 Thread Joakim Tjernlund
This is an assembler version of the fixup for DAR not being set
by dcbX and icbi instructions. There are two versions: one
uses self-modifying code, the other uses a
jump table but is much bigger (the default).

Signed-off-by: Joakim Tjernlund 
---
 arch/powerpc/kernel/head_8xx.S |  147 ++-
 1 files changed, 143 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
index 0c2bf00..a4a94ad 100644
--- a/arch/powerpc/kernel/head_8xx.S
+++ b/arch/powerpc/kernel/head_8xx.S
@@ -494,11 +494,16 @@ DataTLBError:
 
mfspr   r10, SPRN_DAR
cmpwi   cr0, r10, 0x00f0
-   beq-2f  /* must be a buggy dcbX, icbi insn. */
-
+   beq-FixupDAR/* must be a buggy dcbX, icbi insn. */
+DARFixed:/* Return from dcbx instruction bug workaround, r10 holds value of 
DAR */
mfspr   r11, SPRN_DSISR
-   andis.  r11, r11, 0x4800/* !translation or protection */
-   bne 2f  /* branch if either is set */
+   /* As the DAR fixup may clear store we may have all 3 states zero.
+* Make sure only 0x0200(store) falls down into DIRTY handling
+*/
+   andis.  r11, r11, 0x4a00/* !translation, protection or store */
+   srwir11, r11, 16
+   cmpwi   cr0, r11, 0x0200/* just store ? */
+   bne 2f
/* Only Change bit left now, do it here as it is faster
 * than trapping to the C fault handler.
*/
@@ -604,6 +609,140 @@ DataTLBError:
 
. = 0x2000
 
+/* This is the procedure to calculate the data EA for buggy dcbx,dcbi 
instructions
+ * by decoding the registers used by the dcbx instruction and adding them.
+ * DAR is set to the calculated address and r10 also holds the EA on exit.
+ */
+ /* define if you don't want to use self modifying code */
+#define NO_SELF_MODIFYING_CODE
+FixupDAR:/* Entry point for dcbx workaround. */
+   /* fetch instruction from memory. */
+   mfspr   r10, SPRN_SRR0
+   DO_8xx_CPU6(0x3780, r3)
+   mtspr   SPRN_MD_EPN, r10
+   mfspr   r11, SPRN_M_TWB /* Get level 1 table entry address */
+   cmplwi  cr0, r11, 0x0800
+   blt-3f  /* Branch if user space */
+   lis r11, (swapper_pg_dir-PAGE_OFFSET)@h
+   ori r11, r11, (swapper_pg_dir-PAGE_OFFSET)@l
+   rlwimi  r11, r10, 22, 0xffc
+3: lwz r11, 0(r11) /* Get the level 1 entry */
+   DO_8xx_CPU6(0x3b80, r3)
+   mtspr   SPRN_MD_TWC, r11/* Load pte table base address */
+   mfspr   r11, SPRN_MD_TWC/* and get the pte address */
+   lwz r11, 0(r11) /* Get the pte */
+   /* concat physical page address(r11) and page offset(r10) */
+   rlwimi  r11, r10, 0, 20, 31
+   lwz r11,0(r11)
+/* Check if it really is a dcbx instruction. */
+/* dcbt and dcbtst do not generate DTLB Misses/Errors,
+ * no need to include them here */
+   srwir10, r11, 26/* check if major OP code is 31 */
+   cmpwi   cr0, r10, 31
+   bne-141f
+   rlwinm  r10, r11, 0, 21, 30
+   cmpwi   cr0, r10, 2028  /* Is dcbz? */
+   beq+142f
+   cmpwi   cr0, r10, 940   /* Is dcbi? */
+   beq+142f
+   cmpwi   cr0, r10, 108   /* Is dcbst? */
+   beq+144f/* Fix up store bit! */
+   cmpwi   cr0, r10, 172   /* Is dcbf? */
+   beq+142f
+   cmpwi   cr0, r10, 1964  /* Is icbi? */
+   beq+142f
+141:   mfspr   r10, SPRN_DAR   /* r10 must hold DAR at exit */
+   b   DARFixed/* Nope, go back to normal TLB processing */
+
+144:   mfspr   r10, SPRN_DSISR
+   rlwinm  r10, r10,0,7,5  /* Clear store bit for buggy dcbst insn */
+   mtspr   SPRN_DSISR, r10
+142:   /* continue, it was a dcbx, dcbi instruction. */
+#ifdef CONFIG_8xx_CPU6
+   lwz r3, 8(r0)   /* restore r3 from memory */
+#endif
+#ifndef NO_SELF_MODIFYING_CODE
+   andis.  r10,r11,0x1f/* test if reg RA is r0 */
+   li  r10,modified_instr@l
+   dcbtst  r0,r10  /* touch for store */
+   rlwinm  r11,r11,0,0,20  /* Zero lower 10 bits */
+   orisr11,r11,640 /* Transform instr. to a "add r10,RA,RB" */
+   ori r11,r11,532
+   stw r11,0(r10)  /* store add/and instruction */
+   dcbf0,r10   /* flush new instr. to memory. */
+   icbi0,r10   /* invalidate instr. cache line */
+   lwz r11, 4(r0)  /* restore r11 from memory */
+   mfspr   r10, SPRN_M_TW  /* restore r10 from M_TW */
+   isync   /* Wait until new instr is loaded from memory */
+modified_instr:
+   .space  4   /* this is where the add/and instr. is stored */
+   bne+143f
+   subfr10,r0,r10  /* r10=r10-r0, only if reg RA is r0 */
+143:   mtdar   r10 /* store faulting EA in DAR */
+   b   DARFixed/* Go back to normal TLB handling */
+#else
+   mfctr   r
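The decode that FixupDAR performs can be modelled in C. This is only a sketch of the logic (the function names are mine), using the same extended-opcode constants as the cmpwi instructions above (the XO field shifted left by one: dcbz=2028, dcbi=940, dcbst=108, dcbf=172, icbi=1964):

```c
#include <stdint.h>

/* Does this look like one of the buggy cache ops (major opcode 31,
 * extended opcode matching dcbz/dcbi/dcbst/dcbf/icbi)? */
static int is_dcbx_or_icbi(uint32_t insn)
{
	uint32_t xo2;

	if ((insn >> 26) != 31)		/* major opcode must be 31 */
		return 0;
	xo2 = insn & 0x7fe;		/* bits 21..30, i.e. XO << 1 */
	return xo2 == 2028 || xo2 == 940 || xo2 == 108 ||
	       xo2 == 172 || xo2 == 1964;
}

/* X-form effective address: (RA|0) + (RB).  RA == 0 means a literal
 * zero, not the contents of r0 -- which is why the asm needs the
 * subf/bne dance (or the jump table) for the RA == 0 case. */
static uint32_t dcbx_ea(uint32_t insn, const uint32_t gpr[32])
{
	uint32_t ra = (insn >> 16) & 0x1f;
	uint32_t rb = (insn >> 11) & 0x1f;

	return (ra ? gpr[ra] : 0) + gpr[rb];
}
```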

[PATCH 07/10] 8xx: Restore _PAGE_WRITETHRU

2009-11-14 Thread Joakim Tjernlund
8xx has not had WRITETHRU due to a lack of bits in the pte.
After the recent rewrite of the 8xx TLB code, there are
two bits left. Use one of them for WRITETHRU.

Perhaps use the last SW bit for PAGE_SPECIAL or PAGE_FILE?

Signed-off-by: Joakim Tjernlund 
---
 arch/powerpc/include/asm/pte-8xx.h |5 +++--
 arch/powerpc/kernel/head_8xx.S |8 
 2 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/pte-8xx.h 
b/arch/powerpc/include/asm/pte-8xx.h
index f23cd15..9349d83 100644
--- a/arch/powerpc/include/asm/pte-8xx.h
+++ b/arch/powerpc/include/asm/pte-8xx.h
@@ -34,12 +34,13 @@
 #define _PAGE_SHARED   0x0004  /* No ASID (context) compare */
 #define _PAGE_DIRTY0x0100  /* C: page changed */
 
-/* These 3 software bits must be masked out when the entry is loaded
- * into the TLB, 2 SW bits left.
+/* These 4 software bits must be masked out when the entry is loaded
+ * into the TLB, 1 SW bit left(0x0080).
  */
 #define _PAGE_EXEC 0x0008  /* software: i-cache coherency required */
 #define _PAGE_GUARDED  0x0010  /* software: guarded access */
 #define _PAGE_ACCESSED 0x0020  /* software: page referenced */
+#define _PAGE_WRITETHRU0x0040  /* software: caching is write through */
 
 /* Setting any bits in the nibble with the follow two controls will
  * require a TLB exception handler change.  It is assumed unused bits
diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
index c7851d1..08cd2a3 100644
--- a/arch/powerpc/kernel/head_8xx.S
+++ b/arch/powerpc/kernel/head_8xx.S
@@ -422,6 +422,10 @@ DataStoreTLBMiss:
 * above.
 */
rlwimi  r11, r10, 0, 27, 27
+   /* Insert the WriteThru flag into the TWC from the Linux PTE.
+* It is bit 25 in the Linux PTE and bit 30 in the TWC
+*/
+   rlwimi  r11, r10, 32-5, 30, 30
DO_8xx_CPU6(0x3b80, r3)
mtspr   SPRN_MD_TWC, r11
 
@@ -559,6 +563,10 @@ DARFixed:/* Return from dcbx instruction bug workaround, 
r10 holds value of DAR
 * It is bit 27 of both the Linux PTE and the TWC
 */
rlwimi  r11, r10, 0, 27, 27
+   /* Insert the WriteThru flag into the TWC from the Linux PTE.
+* It is bit 25 in the Linux PTE and bit 30 in the TWC
+*/
+   rlwimi  r11, r10, 32-5, 30, 30
DO_8xx_CPU6(0x3b80, r3)
mtspr   SPRN_MD_TWC, r11
mfspr   r11, SPRN_MD_TWC/* get the pte address again */
-- 
1.6.4.4



[PATCH 10/10] 8xx: DTLB Miss cleanup

2009-11-14 Thread Joakim Tjernlund
Use symbolic constant for PRESENT and avoid branching.

Signed-off-by: Joakim Tjernlund 
---
 arch/powerpc/kernel/head_8xx.S |   17 +++--
 1 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
index c754ea6..f49d4d4 100644
--- a/arch/powerpc/kernel/head_8xx.S
+++ b/arch/powerpc/kernel/head_8xx.S
@@ -438,15 +438,20 @@ DataStoreTLBMiss:
 * r11 = ((r10 & PRESENT) & ((r10 & ACCESSED) >> 5));
 * r10 = (r10 & ~PRESENT) | r11;
 */
-   rlwinm  r11, r10, 32-5, 31, 31
+   rlwinm  r11, r10, 32-5, _PAGE_PRESENT
and r11, r11, r10
-   rlwimi  r10, r11, 0, 31, 31
+   rlwimi  r10, r11, 0, _PAGE_PRESENT
 
/* Honour kernel RO, User NA */
-   andi.   r11, r10, _PAGE_USER | _PAGE_RW
-   bne-cr0, 5f
-   ori r10,r10, 0x200 /* Extended encoding, bit 22 */
-5: xorir10, r10, _PAGE_RW  /* invert RW bit */
+   /* 0x200 == Extended encoding, bit 22 */
+   /* r11 =  (r10 & _PAGE_USER) >> 2 */
+   rlwinm  r11, r10, 32-2, 0x200
+   or  r10, r11, r10
+   /* r11 =  (r10 & _PAGE_RW) >> 1 */
+   rlwinm  r11, r10, 32-1, 0x200
+   or  r10, r11, r10
+   /* invert RW and 0x200 bits */
+   xorir10, r10, _PAGE_RW | 0x200
 
/* The Linux PTE won't go exactly into the MMU TLB.
 * Software indicator bits 22 and 28 must be clear.
-- 
1.6.4.4
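The commented pseudo-code in the hunk above (r11 = ((r10 & PRESENT) & ((r10 & ACCESSED) >> 5)); r10 = (r10 & ~PRESENT) | r11) can be checked standalone. A sketch using the bit values from pte-8xx.h after this series (macro names shortened here so they don't clash with kernel headers):

```c
#include <stdint.h>

#define PTE_PRESENT  0x0001	/* _PAGE_PRESENT */
#define PTE_ACCESSED 0x0020	/* _PAGE_ACCESSED after this series */

/* Branchless fold: the entry only stays "present" for the TLB load
 * when it is both present and already accessed; otherwise the miss
 * handler falls through so accounting can happen in do_page_fault(). */
static uint32_t fold_accessed_into_present(uint32_t pte)
{
	uint32_t r11 = (pte & PTE_PRESENT) & ((pte & PTE_ACCESSED) >> 5);

	return (pte & ~(uint32_t)PTE_PRESENT) | r11;
}
```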



[PATCH 06/10] 8xx: Add missing Guarded setting in DTLB Error.

2009-11-14 Thread Joakim Tjernlund
Only DTLB Miss set this bit; DTLB Error needs to as well, otherwise
the setting is lost when the page becomes dirty.

Signed-off-by: Joakim Tjernlund 
---
 arch/powerpc/kernel/head_8xx.S |   13 ++---
 1 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
index a4a94ad..c7851d1 100644
--- a/arch/powerpc/kernel/head_8xx.S
+++ b/arch/powerpc/kernel/head_8xx.S
@@ -552,9 +552,16 @@ DARFixed:/* Return from dcbx instruction bug workaround, 
r10 holds value of DAR
 */
ori r11, r11, 1 /* Set valid bit in physical L2 page */
DO_8xx_CPU6(0x3b80, r3)
-   mtspr   SPRN_MD_TWC, r11/* Load pte table base address 
*/
-   mfspr   r11, SPRN_MD_TWC/* and get the pte address 
*/
-   lwz r10, 0(r11) /* Get the pte */
+   mtspr   SPRN_MD_TWC, r11/* Load pte table base address */
+   mfspr   r10, SPRN_MD_TWC/* and get the pte address */
+   lwz r10, 0(r10) /* Get the pte */
+   /* Insert the Guarded flag into the TWC from the Linux PTE.
+* It is bit 27 of both the Linux PTE and the TWC
+*/
+   rlwimi  r11, r10, 0, 27, 27
+   DO_8xx_CPU6(0x3b80, r3)
+   mtspr   SPRN_MD_TWC, r11
+   mfspr   r11, SPRN_MD_TWC/* get the pte address again */
 
ori r10, r10, _PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_HWWRITE
stw r10, 0(r11) /* and update pte in table */
-- 
1.6.4.4



[PATCH 02/10] 8xx: Update TLB asm so it behaves as linux mm expects.

2009-11-14 Thread Joakim Tjernlund
Update the TLB asm to make proper use of _PAGE_DIRTY and _PAGE_ACCESSED.
Get rid of _PAGE_HWWRITE too.
Pros:
 - I/D TLB Miss never needs to write to the linux pte.
 - _PAGE_ACCESSED is only set on TLB Error fixing accounting
 - _PAGE_DIRTY is mapped to 0x100, the changed bit, and is set directly
when a page has been made dirty.
 - Proper RO/RW mapping of user space.
 - Free up 2 SW TLB bits in the linux pte (add back _PAGE_WRITETHRU?)
 - kernel RO/user NA support.
Cons:
 - A few more instructions in the TLB Miss routines.

Signed-off-by: Joakim Tjernlund 
---
 arch/powerpc/include/asm/pte-8xx.h |   13 ++---
 arch/powerpc/kernel/head_8xx.S |   99 ++-
 2 files changed, 57 insertions(+), 55 deletions(-)

diff --git a/arch/powerpc/include/asm/pte-8xx.h 
b/arch/powerpc/include/asm/pte-8xx.h
index 8c6e312..f23cd15 100644
--- a/arch/powerpc/include/asm/pte-8xx.h
+++ b/arch/powerpc/include/asm/pte-8xx.h
@@ -32,22 +32,21 @@
 #define _PAGE_FILE 0x0002  /* when !present: nonlinear file mapping */
 #define _PAGE_NO_CACHE 0x0002  /* I: cache inhibit */
 #define _PAGE_SHARED   0x0004  /* No ASID (context) compare */
+#define _PAGE_DIRTY0x0100  /* C: page changed */
 
-/* These five software bits must be masked out when the entry is loaded
- * into the TLB.
+/* These 3 software bits must be masked out when the entry is loaded
+ * into the TLB, 2 SW bits left.
  */
 #define _PAGE_EXEC 0x0008  /* software: i-cache coherency required */
 #define _PAGE_GUARDED  0x0010  /* software: guarded access */
-#define _PAGE_DIRTY0x0020  /* software: page changed */
-#define _PAGE_RW   0x0040  /* software: user write access allowed */
-#define _PAGE_ACCESSED 0x0080  /* software: page referenced */
+#define _PAGE_ACCESSED 0x0020  /* software: page referenced */
 
 /* Setting any bits in the nibble with the follow two controls will
  * require a TLB exception handler change.  It is assumed unused bits
  * are always zero.
  */
-#define _PAGE_HWWRITE  0x0100  /* h/w write enable: never set in Linux PTE */
-#define _PAGE_USER 0x0800  /* One of the PP bits, the other is USER&~RW */
+#define _PAGE_RW   0x0400  /* lsb PP bits, inverted in HW */
+#define _PAGE_USER 0x0800  /* msb PP bits */
 
 #define _PMD_PRESENT   0x0001
 #define _PMD_BAD   0x0ff0
diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
index 52ff8c5..2011230 100644
--- a/arch/powerpc/kernel/head_8xx.S
+++ b/arch/powerpc/kernel/head_8xx.S
@@ -333,26 +333,20 @@ InstructionTLBMiss:
mfspr   r11, SPRN_MD_TWC/* and get the pte address */
lwz r10, 0(r11) /* Get the pte */
 
-#ifdef CONFIG_SWAP
-   /* do not set the _PAGE_ACCESSED bit of a non-present page */
-   andi.   r11, r10, _PAGE_PRESENT
-   beq 4f
-   ori r10, r10, _PAGE_ACCESSED
-   mfspr   r11, SPRN_MD_TWC/* get the pte address again */
-   stw r10, 0(r11)
-4:
-#else
-   ori r10, r10, _PAGE_ACCESSED
-   stw r10, 0(r11)
-#endif
+   andi.   r11, r10, _PAGE_ACCESSED | _PAGE_PRESENT
+   cmpwi   cr0, r11, _PAGE_ACCESSED | _PAGE_PRESENT
+   bne-cr0, 2f
+
+   /* Clear PP lsb, 0x400 */
+   rlwinm  r10, r10, 0, 22, 20
 
/* The Linux PTE won't go exactly into the MMU TLB.
-* Software indicator bits 21, 22 and 28 must be clear.
+* Software indicator bits 22 and 28 must be clear.
 * Software indicator bits 24, 25, 26, and 27 must be
 * set.  All other Linux PTE bits control the behavior
 * of the MMU.
 */
-2: li  r11, 0x00f0
+   li  r11, 0x00f0
rlwimi  r10, r11, 0, 24, 28 /* Set 24-27, clear 28 */
DO_8xx_CPU6(0x2d80, r3)
mtspr   SPRN_MI_RPN, r10/* Update TLB entry */
@@ -365,6 +359,22 @@ InstructionTLBMiss:
lwz r3, 8(r0)
 #endif
rfi
+2:
+   mfspr   r11, SPRN_SRR1
+   /* clear all error bits as TLB Miss
+* sets a few unconditionally
+   */
+   rlwinm  r11, r11, 0, 0x
+   mtspr   SPRN_SRR1, r11
+
+   mfspr   r10, SPRN_M_TW  /* Restore registers */
+   lwz r11, 0(r0)
+   mtcrr11
+   lwz r11, 4(r0)
+#ifdef CONFIG_8xx_CPU6
+   lwz r3, 8(r0)
+#endif
+   b   InstructionAccess
 
. = 0x1200
 DataStoreTLBMiss:
@@ -409,21 +419,27 @@ DataStoreTLBMiss:
DO_8xx_CPU6(0x3b80, r3)
mtspr   SPRN_MD_TWC, r11
 
-#ifdef CONFIG_SWAP
-   /* do not set the _PAGE_ACCESSED bit of a non-present page */
-   andi.   r11, r10, _PAGE_PRESENT
-   beq 4f
-   ori r10, r10, _PAGE_ACCESSED
-4:
-   /* and update pte in table */
-#else
-   ori r10, r10, _PAGE_ACCESSED
-#endif
-   mfspr   r11, SPRN_MD_TWC/* get the pte address again */
-   stw r10, 0(r11)
+   /* Both _PAGE_ACCESSED and _PAGE_PRESENT has to be set.
+* We also need to know if the insn is a load/store, so:
+ 

[PATCH 03/10] 8xx: Tag DAR with 0x00f0 to catch buggy instructions.

2009-11-14 Thread Joakim Tjernlund
dcbz, dcbf, dcbi, dcbst and icbi do not set DAR when they
cause a DTLB Error. Detect this by tagging DAR with 0x00f0
at every exception exit that modifies DAR.
Test for DAR == 0x00f0 in DataTLBError and bail
to handle_page_fault().

Signed-off-by: Joakim Tjernlund 
---
 arch/powerpc/kernel/head_8xx.S |   15 ++-
 1 files changed, 14 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
index 2011230..bca22fa 100644
--- a/arch/powerpc/kernel/head_8xx.S
+++ b/arch/powerpc/kernel/head_8xx.S
@@ -206,6 +206,8 @@ MachineCheck:
EXCEPTION_PROLOG
mfspr r4,SPRN_DAR
stw r4,_DAR(r11)
+   li r5,0x00f0
+   mtspr SPRN_DAR,r5   /* Tag DAR, to be used in DTLB Error */
mfspr r5,SPRN_DSISR
stw r5,_DSISR(r11)
addi r3,r1,STACK_FRAME_OVERHEAD
@@ -222,6 +224,8 @@ DataAccess:
stw r10,_DSISR(r11)
mr  r5,r10
mfspr   r4,SPRN_DAR
+   li  r10,0x00f0
+   mtspr   SPRN_DAR,r10/* Tag DAR, to be used in DTLB Error */
EXC_XFER_EE_LITE(0x300, handle_page_fault)
 
 /* Instruction access exception.
@@ -244,6 +248,8 @@ Alignment:
EXCEPTION_PROLOG
mfspr   r4,SPRN_DAR
stw r4,_DAR(r11)
+   li  r5,0x00f0
+   mtspr   SPRN_DAR,r5 /* Tag DAR, to be used in DTLB Error */
mfspr   r5,SPRN_DSISR
stw r5,_DSISR(r11)
addir3,r1,STACK_FRAME_OVERHEAD
@@ -445,6 +451,7 @@ DataStoreTLBMiss:
 * of the MMU.
 */
 2: li  r11, 0x00f0
+   mtspr   SPRN_DAR,r11/* Tag DAR */
rlwimi  r10, r11, 0, 24, 28 /* Set 24-27, clear 28 */
DO_8xx_CPU6(0x3d80, r3)
mtspr   SPRN_MD_RPN, r10/* Update TLB entry */
@@ -485,6 +492,10 @@ DataTLBError:
stw r10, 0(r0)
stw r11, 4(r0)
 
+   mfspr   r10, SPRN_DAR
+   cmpwi   cr0, r10, 0x00f0
+   beq-2f  /* must be a buggy dcbX, icbi insn. */
+
mfspr   r11, SPRN_DSISR
andis.  r11, r11, 0x4800/* !translation or protection */
bne 2f  /* branch if either is set */
@@ -508,7 +519,8 @@ DataTLBError:
 * are initialized in mapin_ram().  This will avoid the problem,
 * assuming we only use the dcbi instruction on kernel addresses.
 */
-   mfspr   r10, SPRN_DAR
+
+   /* DAR is in r10 already */
rlwinm  r11, r10, 0, 0, 19
ori r11, r11, MD_EVALID
mfspr   r10, SPRN_M_CASID
@@ -550,6 +562,7 @@ DataTLBError:
 * of the MMU.
 */
li  r11, 0x00f0
+   mtspr   SPRN_DAR,r11/* Tag DAR */
rlwimi  r10, r11, 0, 24, 28 /* Set 24-27, clear 28 */
DO_8xx_CPU6(0x3d80, r3)
mtspr   SPRN_MD_RPN, r10/* Update TLB entry */
-- 
1.6.4.4



[PATCH 00/10] Fix 8xx MMU/TLB

2009-11-14 Thread Joakim Tjernlund
This is hopefully the last iteration of the series.
Rex & Scott, please test and sign off.
Changes since last version:
 - Added mandatory pinning of iTLB
 - Added "DTLB Miss cleanup"

Joakim Tjernlund (10):
  8xx: invalidate non present TLBs
  8xx: Update TLB asm so it behaves as linux mm expects.
  8xx: Tag DAR with 0x00f0 to catch buggy instructions.
  8xx: Always pin kernel instruction TLB
  8xx: Fixup DAR from buggy dcbX instructions.
  8xx: Add missing Guarded setting in DTLB Error.
  8xx: Restore _PAGE_WRITETHRU
  8xx: start using dcbX instructions in various copy routines
  8xx: Remove DIRTY pte handling in DTLB Error.
  8xx: DTLB Miss cleanup

 arch/powerpc/include/asm/pte-8xx.h |   14 +-
 arch/powerpc/kernel/head_8xx.S |  315 ++--
 arch/powerpc/kernel/misc_32.S  |   18 --
 arch/powerpc/lib/copy_32.S |   24 ---
 arch/powerpc/mm/fault.c|8 +-
 5 files changed, 211 insertions(+), 168 deletions(-)



[PATCH 01/10] 8xx: invalidate non present TLBs

2009-11-14 Thread Joakim Tjernlund
8xx sometimes needs to load an invalid/non-present TLB entry in
its DTLB asm handler.
These must be invalidated separately, as the Linux mm code doesn't.

Signed-off-by: Joakim Tjernlund 
---
 arch/powerpc/mm/fault.c |8 +++-
 1 files changed, 7 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 7699394..071e0ca 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -39,7 +39,7 @@
 #include 
 #include 
 #include 
-
+#include 
 
 #ifdef CONFIG_KPROBES
 static inline int notify_page_fault(struct pt_regs *regs)
@@ -243,6 +243,12 @@ good_area:
goto bad_area;
 #endif /* CONFIG_6xx */
 #if defined(CONFIG_8xx)
+   /* 8xx sometimes needs to load an invalid/non-present TLB entry.
+    * These must be invalidated separately, as the Linux mm code doesn't.
+    */
+   if (error_code & 0x4000) /* no translation? */
+   _tlbil_va(address, 0, 0, 0);
+
 /* The MPC8xx seems to always set 0x8000, which is
  * "undefined".  Of those that can be set, this is the only
  * one which seems bad.
-- 
1.6.4.4



[PATCH 04/10] 8xx: Always pin kernel instruction TLB

2009-11-14 Thread Joakim Tjernlund
Various kernel asm modifies SRR0/SRR1 just before executing
an rfi. If such code crosses a page boundary, you risk a TLB miss
that will clobber SRR0/SRR1. Avoid this by always pinning
kernel instruction TLB space.

Signed-off-by: Joakim Tjernlund 
---
 arch/powerpc/kernel/head_8xx.S |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
index bca22fa..0c2bf00 100644
--- a/arch/powerpc/kernel/head_8xx.S
+++ b/arch/powerpc/kernel/head_8xx.S
@@ -704,7 +704,7 @@ start_here:
  */
 initial_mmu:
tlbia   /* Invalidate all TLB entries */
-#ifdef CONFIG_PIN_TLB
+#if 1 /* CONFIG_PIN_TLB */
    lis r8, MI_RSV4I@h
ori r8, r8, 0x1c00
 #else
-- 
1.6.4.4
