Re: [PATCH v2] mmc: Add mmc_vddrange_to_ocrmask() helper function

2008-12-14 Thread Pierre Ossman
On Mon, 1 Dec 2008 14:53:20 +0300
Anton Vorontsov avoront...@ru.mvista.com wrote:

 
 Though, the $subject patch could be merged anytime as it doesn't
 depend on anything else. So, if you merge it earlier, that will
 make things a bit easier: -1 patch to resend. ;-)
 

Queued up. Will be sent once the merge window opens up.

Rgds
-- 
 -- Pierre Ossman

  Linux kernel, MMC maintainer            http://www.kernel.org
  rdesktop, core developer                http://www.rdesktop.org

  WARNING: This correspondence is being monitored by the
  Swedish government. Make sure your server uses encryption
  for SMTP traffic and consider using PGP for end-to-end
  encryption.


Re: [PATCH] Fix corruption error in rh_alloc_fixed()

2008-12-14 Thread Guillaume Knispel
On Tue, 09 Dec 2008 09:16:50 -0600
Timur Tabi ti...@freescale.com wrote:

 Guillaume Knispel wrote:
 
  blk = NULL; at the end of the loop is what is done in the more used
  rh_alloc_align(), so for consistency either we change both or we use
  the same construction here.
  I also think that testing for info->free_list is harder to understand
  because you must have the linked list implementation in your head
  (which a kernel developer should anyway so this is not so important)
 
 Fair enough.
 
 Acked-by: Timur Tabi ti...@freescale.com
 

Kumar, can this go into your tree?
(copying the patch below so you have it at hand)

There is an error in rh_alloc_fixed() of the Remote Heap code:
If there is at least one free block blk won't be NULL at the end of the
search loop, so -ENOMEM won't be returned and the else branch of
if (bs == s || be == e) will be taken, corrupting the management
structures.

Signed-off-by: Guillaume Knispel gknis...@proformatique.com
---
Fix an error in rh_alloc_fixed() that made allocations succeed when
they should fail, and corrupted management structures.

diff --git a/arch/powerpc/lib/rheap.c b/arch/powerpc/lib/rheap.c
index 29b2941..45907c1 100644
--- a/arch/powerpc/lib/rheap.c
+++ b/arch/powerpc/lib/rheap.c
@@ -556,6 +556,7 @@ unsigned long rh_alloc_fixed(rh_info_t * info, unsigned long start, int size, co
 		be = blk->start + blk->size;
 		if (s >= bs && e <= be)
 			break;
+		blk = NULL;
 	}
 
 	if (blk == NULL)
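
In context, the patched search loop reduces to the following shape (a
sketch based on the hunk above and on rh_alloc_align(); not the verbatim
file contents):

	/* Find a free block that fully covers [s, e) */
	list_for_each(l, &info->free_list) {
		blk = list_entry(l, rh_block_t, list);
		bs = blk->start;
		be = blk->start + blk->size;
		if (s >= bs && e <= be)
			break;
		blk = NULL;	/* no match: don't let a stale pointer escape */
	}

	if (blk == NULL)
		return (unsigned long) -ENOMEM;

Without the added reset, blk keeps pointing at the last free block
examined, so the -ENOMEM test after the loop can never trigger.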

-- 
Guillaume KNISPEL


Re: [PATCH] Fix corruption error in rh_alloc_fixed()

2008-12-14 Thread Paul Mackerras
Guillaume Knispel writes:

 On Tue, 09 Dec 2008 09:16:50 -0600
 Timur Tabi ti...@freescale.com wrote:
 
  Guillaume Knispel wrote:
  
   blk = NULL; at the end of the loop is what is done in the more used
   rh_alloc_align(), so for consistency either we change both or we use
   the same construction here.
   I also think that testing for info->free_list is harder to understand
   because you must have the linked list implementation in your head
   (which a kernel developer should anyway so this is not so important)
  
  Fair enough.
  
  Acked-by: Timur Tabi ti...@freescale.com
  
 
 Kumar, can this go into your tree?
 (copying the patch below so you have it at hand)
 
 There is an error in rh_alloc_fixed() of the Remote Heap code:
 If there is at least one free block blk won't be NULL at the end of the
 search loop, so -ENOMEM won't be returned and the else branch of
 if (bs == s || be == e) will be taken, corrupting the management
 structures.
 
 Signed-off-by: Guillaume Knispel gknis...@proformatique.com
 ---
 Fix an error in rh_alloc_fixed() that made allocations succeed when
 they should fail, and corrupted management structures.

What's the impact of this?  Can it cause an oops?

Is it a regression from 2.6.27?  Should we be putting it in 2.6.28?

Paul.


Re: [PATCH 1/3] add of_find_next_cache_node()

2008-12-14 Thread Benjamin Herrenschmidt
On Wed, 2008-12-10 at 18:46 -0600, Nathan Lynch wrote:
 +	/* OF on pmac has nodes instead of properties named "l2-cache"
 +	 * beneath CPU nodes.
 +	 */
 +	if (!strcmp(np->type, "cpu"))
 +		for_each_child_of_node(np, child)
 +			if (!strcmp(child->type, "cache"))
 +				return child;
 +

pmac has both actually. And the property points to the node. It's a
problem for /proc/device-tree so we rename them iirc, but only in /proc,
i.e., they should still be intact in the tree, I think.
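
For reference, the property-based lookup that this child scan backs up
looks roughly like this (a sketch using the 2.6.28-era OF API; see
Nathan's patch for the actual function body):

	const phandle *handle;

	handle = of_get_property(np, "l2-cache", NULL);
	if (handle)
		return of_find_node_by_phandle(*handle);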

Cheers,
Ben.



Re: [PATCH 1/3] add of_find_next_cache_node()

2008-12-14 Thread Nathan Lynch
Benjamin Herrenschmidt wrote:
 On Wed, 2008-12-10 at 18:46 -0600, Nathan Lynch wrote:
  +	/* OF on pmac has nodes instead of properties named "l2-cache"
  +	 * beneath CPU nodes.
  +	 */
  +	if (!strcmp(np->type, "cpu"))
  +		for_each_child_of_node(np, child)
  +			if (!strcmp(child->type, "cache"))
  +				return child;
  +
 
 pmac has both actually. And the property points to the node. It's a
 problem for /proc/device-tree so we rename them iirc, but only in /proc,
 ie, they should still be intact in the tree I think.

Okay, I'll check on this.


Re: [RFC] Dummy GPIO driver for use with SPI

2008-12-14 Thread David Gibson
On Fri, Dec 12, 2008 at 09:22:02AM -0500, Steven A. Falco wrote:
 This patch adds a dummy GPIO driver, which is useful for SPI devices
 that do not have a physical chip select.
 
 Signed-off-by: Steven A. Falco sfa...@harris.com
 ---
 The SPI subsystem requires a chip-select for each connected slave
 device.  I have a custom board with an Atmel co-processor.  This part
 is reprogrammed via SPI, so it needs a chip select to satisfy the SPI
 subsystem, but my hardware does not have a physical CS connected.
 
 I could waste a no-connect GPIO pin, but I'd rather not.  So, I've
 written a dummy GPIO driver, which behaves exactly like a real GPIO
 device, but with no underlying hardware.  This could also be useful
 as a template for real GPIO drivers.
 
 I use the following dts entry:
 
 	GPIO3: du...@ef50 {
 		compatible = "linux,dummy-gpio";
 		reg = <ef50 1>;
 		gpio-controller;
 		#gpio-cells = <2>;
 	};

This is not sane.  I can see reasons it might be useful to have a
dummy gpio driver within the kernel, but since this doesn't represent
any real hardware, it should not appear in the device tree.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson


Re: [PATCH] Fix corruption error in rh_alloc_fixed()

2008-12-14 Thread Guillaume Knispel
On Mon, 15 Dec 2008 08:21:05 +1100
Paul Mackerras pau...@samba.org wrote:

 Guillaume Knispel writes:
 
  On Tue, 09 Dec 2008 09:16:50 -0600
  Timur Tabi ti...@freescale.com wrote:
  
   Guillaume Knispel wrote:
   
blk = NULL; at the end of the loop is what is done in the more used
rh_alloc_align(), so for consistency either we change both or we use
the same construction here.
I also think that testing for info->free_list is harder to understand
because you must have the linked list implementation in your head
(which a kernel developer should anyway so this is not so important)
   
   Fair enough.
   
   Acked-by: Timur Tabi ti...@freescale.com
   
  
  Kumar, can this go into your tree?
  (copying the patch below so you have it at hand)
  
  There is an error in rh_alloc_fixed() of the Remote Heap code:
  If there is at least one free block blk won't be NULL at the end of the
  search loop, so -ENOMEM won't be returned and the else branch of
  if (bs == s || be == e) will be taken, corrupting the management
  structures.
  
  Signed-off-by: Guillaume Knispel gknis...@proformatique.com
  ---
  Fix an error in rh_alloc_fixed() that made allocations succeed when
  they should fail, and corrupted management structures.
 
 What's the impact of this?  Can it cause an oops?
 
 Is it a regression from 2.6.27?  Should we be putting it in 2.6.28?
 
 Paul.

The problem obviously only affects people that make use of
rh_alloc_fixed(), which is the case when you program an MCC or a QMC
controller of the CPM. Without the patch, cpm_muram_alloc_fixed()
succeeds when it should not, for example when trying to allocate
out-of-range areas or already allocated areas, so it is possible that
buffer descriptors or other control structures used by other
controllers get corrupted.

Digging into old kernels (like 2.6.9; I haven't checked further back),
the problem seems to have always been present.

Without this patch I experienced oopses (sometimes panics, sometimes
not) in various unrelated parts (probably an indirect result of either
corruption of the rheap management structures or corruption caused by
the CPM using crazy overwritten data), and also initialization of
multi-channel control structures putting other communication
controllers out of order.

The only risk I can think of is that it could break some out-of-tree
kernel code which worked because of luck and a double error - for
example when doing a single DPRam allocation from offset 0 while
leaving an area reserved at the base of the DPRam. So I think it
should be put in 2.6.28.

Guillaume Knispel


Re: powerpc/cell/axon-msi: fix MSI after kexec

2008-12-14 Thread Michael Ellerman
On Fri, 2008-12-12 at 20:19 +0100, Arnd Bergmann wrote:
 Commit d015fe995 'powerpc/cell/axon-msi: Retry on missing interrupt'
 has turned a rare failure to kexec on QS22 into a reproducible
 error, which we have now analysed.
 
 The problem is that after a kexec, the MSIC hardware still points
 into the middle of the old ring buffer. We set up the ring buffer
 during reboot, but not the offset into it. On older kernels, this
 would cause a storm of thousands of spurious interrupts after a
 kexec, which would most of the time get dropped silently.
 
 With the new code, we time out on each interrupt, waiting for
 it to become valid. If more interrupts come in that we time
 out on, this goes on indefinitely, which eventually leads to
 a hard crash.
 
 The solution in this patch is to read the current offset from
 the MSIC when reinitializing it. This now works correctly, as
 expected.
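
The fix amounts to reading the hardware's current write pointer rather
than assuming it starts at zero (a condensed sketch of the idea;
dcr_read() and the MSIC_* names are those used in the patch below):

	/* After kexec the MSIC keeps writing where the old kernel left
	 * off, so start consuming from the hardware's offset, not 0.
	 */
	msic->read_offset = dcr_read(msic->dcr_host, MSIC_WRITE_OFFSET_REG)
				& MSIC_FIFO_SIZE_MASK;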
 
 Reported-by: Dirk Herrendoerfer d.herrendoer...@de.ibm.com
 Signed-off-by: Arnd Bergmann a...@arndb.de
 ---
 
 Please apply when Dirk and Michael have given their Ack.
 Should we have it in 2.6.28? Not sure if going from 'works sometimes'
 to 'works never' counts as a regression. Most users won't be impacted,
 because they don't use kexec on QS22.

I think it does count, it's a pretty small fix.

 diff --git a/arch/powerpc/platforms/cell/axon_msi.c b/arch/powerpc/platforms/cell/axon_msi.c
 index 442cf36..548fa4e 100644
 --- a/arch/powerpc/platforms/cell/axon_msi.c
 +++ b/arch/powerpc/platforms/cell/axon_msi.c
 @@ -413,6 +422,9 @@ static int axon_msi_probe(struct of_device *device,
 		  MSIC_CTRL_IRQ_ENABLE | MSIC_CTRL_ENABLE |
 		  MSIC_CTRL_FIFO_SIZE);
 
 +	msic->read_offset = dcr_read(msic->dcr_host, MSIC_WRITE_OFFSET_REG)
 +				& MSIC_FIFO_SIZE_MASK;
 +

Acked-by: Michael Ellerman mich...@ellerman.id.au

cheers

-- 
Michael Ellerman
OzLabs, IBM Australia Development Lab

wwweb: http://michael.ellerman.id.au
phone: +61 2 6212 1183 (tie line 70 21183)

We do not inherit the earth from our ancestors,
we borrow it from our children. - S.M.A.R.T Person



[PATCH 0/16] powerpc: Preliminary work to enable SMP BookE (v2)

2008-12-14 Thread Benjamin Herrenschmidt
This series of patches is aimed at supporting SMP on non-hash
based processors. It consists of a rework of the MMU context
management and TLB management, clearly splitting hash32, hash64
and nohash in both cases, adding SMP safe context handling and
some basic SMP TLB management.

There is room for improvement, such as implementing lazy TLB
flushing on processors without HW invalidate-by-PID support,
a better IPI mechanism, support for variable-size PIDs, a
lockless fast path in the MMU context switch, etc.,
but it should basically work.

There are some seemingly unrelated patches in the pile, as they
are dependencies of the main ones, so I'm including them.
Some of these may already have been applied in Kumar's or jwb's
tree.



[PATCH 1/16] powerpc: Fix bogus cache flushing on all 40x and BookE processors v2

2008-12-14 Thread Benjamin Herrenschmidt
We were missing the CPU_FTR_NOEXECUTE bit in our cputable for all
these processors. The result is that update_mmu_cache() would flush
the cache for all pages mapped to userspace which is totally
unnecessary on those processors since we already handle flushing
on execute in the page fault path.

This should provide a nice speed up ;-)

Signed-off-by: Benjamin Herrenschmidt b...@kernel.crashing.org
---

This one fixes the E500 definition and uses a bit that works
for 32-bit processors.

 arch/powerpc/include/asm/cputable.h |   15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)
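
The effect on the flush path looks like this (a condensed sketch, not
the verbatim update_mmu_cache(); the new feature test is the point):

	/* Handle i-cache coherency: CPUs that flush on execute in the
	 * page fault path can skip the flush here entirely.
	 */
	if (!cpu_has_feature(CPU_FTR_COHERENT_ICACHE) &&
	    !cpu_has_feature(CPU_FTR_NOEXECUTE) &&
	    pfn_valid(pfn))
		flush_dcache_icache_page(pfn_to_page(pfn));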

--- linux-work.orig/arch/powerpc/include/asm/cputable.h	2008-12-03 13:32:53.000000000 +1100
+++ linux-work/arch/powerpc/include/asm/cputable.h	2008-12-08 15:42:13.000000000 +1100
@@ -163,6 +163,7 @@ extern const char *powerpc_base_platform
 #define CPU_FTR_SPE			ASM_CONST(0x0200)
 #define CPU_FTR_NEED_PAIRED_STWCX	ASM_CONST(0x0400)
 #define CPU_FTR_LWSYNC			ASM_CONST(0x0800)
+#define CPU_FTR_NOEXECUTE		ASM_CONST(0x1000)
 
 /*
  * Add the 64-bit processor unique features in the top half of the word;
@@ -177,7 +178,6 @@ extern const char *powerpc_base_platform
 #define CPU_FTR_SLB			LONG_ASM_CONST(0x0001)
 #define CPU_FTR_16M_PAGE		LONG_ASM_CONST(0x0002)
 #define CPU_FTR_TLBIEL			LONG_ASM_CONST(0x0004)
-#define CPU_FTR_NOEXECUTE		LONG_ASM_CONST(0x0008)
 #define CPU_FTR_IABR			LONG_ASM_CONST(0x0020)
 #define CPU_FTR_MMCRA			LONG_ASM_CONST(0x0040)
 #define CPU_FTR_CTRL			LONG_ASM_CONST(0x0080)
@@ -367,19 +367,20 @@ extern const char *powerpc_base_platform
 #define CPU_FTRS_CLASSIC32	(CPU_FTR_COMMON | \
 	    CPU_FTR_USE_TB | CPU_FTR_HPTE_TABLE)
 #define CPU_FTRS_8XX	(CPU_FTR_USE_TB)
-#define CPU_FTRS_40X	(CPU_FTR_USE_TB | CPU_FTR_NODSISRALIGN)
-#define CPU_FTRS_44X	(CPU_FTR_USE_TB | CPU_FTR_NODSISRALIGN)
+#define CPU_FTRS_40X	(CPU_FTR_USE_TB | CPU_FTR_NODSISRALIGN | CPU_FTR_NOEXECUTE)
+#define CPU_FTRS_44X	(CPU_FTR_USE_TB | CPU_FTR_NODSISRALIGN | CPU_FTR_NOEXECUTE)
 #define CPU_FTRS_E200	(CPU_FTR_USE_TB | CPU_FTR_SPE_COMP | \
 	    CPU_FTR_NODSISRALIGN | CPU_FTR_COHERENT_ICACHE | \
-	    CPU_FTR_UNIFIED_ID_CACHE)
+	    CPU_FTR_UNIFIED_ID_CACHE | CPU_FTR_NOEXECUTE)
 #define CPU_FTRS_E500	(CPU_FTR_MAYBE_CAN_DOZE | CPU_FTR_USE_TB | \
-	    CPU_FTR_SPE_COMP | CPU_FTR_MAYBE_CAN_NAP | CPU_FTR_NODSISRALIGN)
+	    CPU_FTR_SPE_COMP | CPU_FTR_MAYBE_CAN_NAP | CPU_FTR_NODSISRALIGN | \
+	    CPU_FTR_NOEXECUTE)
 #define CPU_FTRS_E500_2	(CPU_FTR_MAYBE_CAN_DOZE | CPU_FTR_USE_TB | \
 	    CPU_FTR_SPE_COMP | CPU_FTR_MAYBE_CAN_NAP | CPU_FTR_BIG_PHYS | \
-	    CPU_FTR_NODSISRALIGN)
+	    CPU_FTR_NODSISRALIGN | CPU_FTR_NOEXECUTE)
 #define CPU_FTRS_E500MC	(CPU_FTR_MAYBE_CAN_DOZE | CPU_FTR_USE_TB | \
 	    CPU_FTR_MAYBE_CAN_NAP | CPU_FTR_BIG_PHYS | CPU_FTR_NODSISRALIGN | \
-	    CPU_FTR_L2CSR | CPU_FTR_LWSYNC)
+	    CPU_FTR_L2CSR | CPU_FTR_LWSYNC | CPU_FTR_NOEXECUTE)
 #define CPU_FTRS_GENERIC_32	(CPU_FTR_COMMON | CPU_FTR_NODSISRALIGN)
 
 /* 64-bit CPUs */


[PATCH 2/16] powerpc: Fix asm EMIT_BUG_ENTRY with !CONFIG_BUG

2008-12-14 Thread Benjamin Herrenschmidt
Instead of not defining it at all, this defines the macro as
being empty, thus avoiding ifdef's in call sites when CONFIG_BUG
is not set.

Also removes an extra space in the existing definition.
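
A hypothetical assembly call site shows the benefit: with the empty
!CONFIG_BUG definition, the entry can be emitted unconditionally
instead of being wrapped in its own #ifdef:

1:	trap
	EMIT_BUG_ENTRY 1b,__FILE__,__LINE__,0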

Signed-off-by: Benjamin Herrenschmidt b...@kernel.crashing.org
---

 arch/powerpc/include/asm/bug.h |   11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

--- linux-work.orig/arch/powerpc/include/asm/bug.h	2008-12-08 14:37:16.000000000 +1100
+++ linux-work/arch/powerpc/include/asm/bug.h	2008-12-08 15:14:21.000000000 +1100
@@ -3,6 +3,7 @@
 #ifdef __KERNEL__
 
 #include <asm/asm-compat.h>
+
 /*
  * Define an illegal instr to trap on the bug.
  * We don't use 0 because that marks the end of a function
@@ -14,6 +15,7 @@
 #ifdef CONFIG_BUG
 
 #ifdef __ASSEMBLY__
+#include <asm/asm-offsets.h>
 #ifdef CONFIG_DEBUG_BUGVERBOSE
 .macro EMIT_BUG_ENTRY addr,file,line,flags
 .section __bug_table,"a"
@@ -26,7 +28,7 @@
 .previous
 .endm
 #else
- .macro EMIT_BUG_ENTRY addr,file,line,flags
+.macro EMIT_BUG_ENTRY addr,file,line,flags
 .section __bug_table,"a"
 5001:   PPC_LONG \addr
 .short \flags
@@ -113,6 +115,13 @@
 #define HAVE_ARCH_BUG_ON
 #define HAVE_ARCH_WARN_ON
 #endif /* __ASSEMBLY __ */
+#else
+#ifdef __ASSEMBLY__
+.macro EMIT_BUG_ENTRY addr,file,line,flags
+.endm
+#else /* !__ASSEMBLY__ */
+#define _EMIT_BUG_ENTRY
+#endif
 #endif /* CONFIG_BUG */
 
 #include <asm-generic/bug.h>


[PATCH 3/16] powerpc/4xx: Extended DCR support v2

2008-12-14 Thread Benjamin Herrenschmidt
This adds support for extended DCR addressing via the indirect
mfdcrx/mtdcrx instructions supported by some 4xx cores (440H6 and
later).

For now, I enabled the feature only on AMCC 460 chips.

Signed-off-by: Benjamin Herrenschmidt b...@kernel.crashing.org
---

This variant uses 440x6 instead of 440H6. I made no other changes to
the code, as I think the codegen is the least bad I've had so far, and
I rely on Josh's further work on cleaning up the 440 core selection at
Kconfig time so the features are properly reflected in the POSSIBLE
and ALWAYS masks based on the core selection. That way, if only one
core type is selected, the feature test should resolve at compile
time.
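
The hand-coded opcodes decode as follows (illustrative C, assuming the
standard PowerPC instruction encoding; build_mfdcrx() is a made-up
helper, not part of the patch):

	/* mfdcrx is primary opcode 31, extended opcode 259:
	 *   (31 << 26) | (259 << 1) == 0x7c000206
	 * mtdcrx uses extended opcode 387: (387 << 1) == 0x306.
	 * The GPR numbers are OR'd into the RT (shift 21) and RA
	 * (shift 16) fields, which is what the asm below does.
	 */
	static inline unsigned int build_mfdcrx(unsigned int rt, unsigned int ra)
	{
		return 0x7c000206 | (rt << 21) | (ra << 16);
	}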


 arch/powerpc/include/asm/cputable.h   |7 ++-
 arch/powerpc/include/asm/dcr-native.h |   63 +++---
 arch/powerpc/kernel/cputable.c|4 +-
 arch/powerpc/sysdev/dcr-low.S |8 +++-
 4 files changed, 65 insertions(+), 17 deletions(-)

--- linux-work.orig/arch/powerpc/include/asm/cputable.h	2008-12-08 15:56:42.000000000 +1100
+++ linux-work/arch/powerpc/include/asm/cputable.h	2008-12-09 14:04:44.000000000 +1100
@@ -164,6 +164,7 @@ extern const char *powerpc_base_platform
 #define CPU_FTR_NEED_PAIRED_STWCX	ASM_CONST(0x0400)
 #define CPU_FTR_LWSYNC			ASM_CONST(0x0800)
 #define CPU_FTR_NOEXECUTE		ASM_CONST(0x1000)
+#define CPU_FTR_INDEXED_DCR		ASM_CONST(0x2000)
 
 /*
  * Add the 64-bit processor unique features in the top half of the word;
@@ -369,6 +370,8 @@ extern const char *powerpc_base_platform
 #define CPU_FTRS_8XX	(CPU_FTR_USE_TB)
 #define CPU_FTRS_40X	(CPU_FTR_USE_TB | CPU_FTR_NODSISRALIGN | CPU_FTR_NOEXECUTE)
 #define CPU_FTRS_44X	(CPU_FTR_USE_TB | CPU_FTR_NODSISRALIGN | CPU_FTR_NOEXECUTE)
+#define CPU_FTRS_440x6	(CPU_FTR_USE_TB | CPU_FTR_NODSISRALIGN | CPU_FTR_NOEXECUTE | \
+	    CPU_FTR_INDEXED_DCR)
 #define CPU_FTRS_E200	(CPU_FTR_USE_TB | CPU_FTR_SPE_COMP | \
 	    CPU_FTR_NODSISRALIGN | CPU_FTR_COHERENT_ICACHE | \
 	    CPU_FTR_UNIFIED_ID_CACHE | CPU_FTR_NOEXECUTE)
@@ -455,7 +458,7 @@ enum {
 	    CPU_FTRS_40X |
 #endif
 #ifdef CONFIG_44x
-	    CPU_FTRS_44X |
+	    CPU_FTRS_44X | CPU_FTRS_440x6 |
 #endif
 #ifdef CONFIG_E200
 	    CPU_FTRS_E200 |
@@ -495,7 +498,7 @@ enum {
 	    CPU_FTRS_40X &
 #endif
 #ifdef CONFIG_44x
-	    CPU_FTRS_44X &
+	    CPU_FTRS_44X & CPU_FTRS_440x6 &
 #endif
 #ifdef CONFIG_E200
 	    CPU_FTRS_E200 &
Index: linux-work/arch/powerpc/include/asm/dcr-native.h
===================================================================
--- linux-work.orig/arch/powerpc/include/asm/dcr-native.h	2008-09-29 14:21:37.000000000 +1000
+++ linux-work/arch/powerpc/include/asm/dcr-native.h	2008-12-08 15:56:43.000000000 +1100
@@ -23,6 +23,7 @@
 #ifndef __ASSEMBLY__
 
 #include <linux/spinlock.h>
+#include <asm/cputable.h>
 
 typedef struct {
unsigned int base;
@@ -39,23 +40,45 @@ static inline bool dcr_map_ok_native(dcr
 #define dcr_read_native(host, dcr_n)   mfdcr(dcr_n + host.base)
 #define dcr_write_native(host, dcr_n, value)   mtdcr(dcr_n + host.base, value)
 
-/* Device Control Registers */
-void __mtdcr(int reg, unsigned int val);
-unsigned int __mfdcr(int reg);
+/* Table based DCR accessors */
+extern void __mtdcr(unsigned int reg, unsigned int val);
+extern unsigned int __mfdcr(unsigned int reg);
+
+/* mfdcrx/mtdcrx instruction based accessors. We hand code
+ * the opcodes in order not to depend on newer binutils
+ */
+static inline unsigned int mfdcrx(unsigned int reg)
+{
+	unsigned int ret;
+	asm volatile(".long 0x7c000206 | (%0 << 21) | (%1 << 16)"
+		     : "=r" (ret) : "r" (reg));
+	return ret;
+}
+
+static inline void mtdcrx(unsigned int reg, unsigned int val)
+{
+	asm volatile(".long 0x7c000306 | (%0 << 21) | (%1 << 16)"
+		     : : "r" (val), "r" (reg));
+}
+
 #define mfdcr(rn)						\
 	({unsigned int rval;					\
-	if (__builtin_constant_p(rn))				\
+	if (__builtin_constant_p(rn) && rn < 1024)		\
 		asm volatile("mfdcr %0," __stringify(rn)	\
 			     : "=r" (rval));			\
+	else if (likely(cpu_has_feature(CPU_FTR_INDEXED_DCR)))	\
+		rval = mfdcrx(rn);				\
 	else							\
 		rval = __mfdcr(rn);				\
 	rval;})
 
 #define mtdcr(rn, v)						\
 do {								\
-	if (__builtin_constant_p(rn))				\
+	if (__builtin_constant_p(rn) && rn < 1024)		\
 		asm volatile("mtdcr " __stringify(rn) ",%0"	\

[PATCH 4/16] powerpc/fsl-booke: Fix problem with _tlbil_va

2008-12-14 Thread Benjamin Herrenschmidt
From: Kumar Gala ga...@kernel.crashing.org

An example calling sequence which we did see:

copy_user_highpage -> kmap_atomic -> flush_tlb_page -> _tlbil_va

We got interrupted after setting up the MAS registers before the
tlbwe and the interrupt handler that caused the interrupt also did
a kmap_atomic (ide code) and thus on returning from the interrupt
the MAS registers no longer contained the proper values.

Since we don't save/restore MAS registers for normal interrupts, we
need to disable interrupts in _tlbil_va to ensure atomicity.
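
In C terms, the fix wraps the MAS setup and the tlbwe in the usual
interrupt-exclusion pattern (a conceptual sketch only; _tlbil_va itself
stays in assembly):

	unsigned long flags;

	local_irq_save(flags);		/* like the added mfmsr/wrteei 0 */
	/* program MAS6, tlbsx, then tlbwe ... */
	local_irq_restore(flags);	/* like the added wrtee */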

Signed-off-by: Kumar Gala ga...@kernel.crashing.org
---

 arch/powerpc/kernel/misc_32.S |3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/powerpc/kernel/misc_32.S b/arch/powerpc/kernel/misc_32.S
index bdc8b0e..d108715 100644
--- a/arch/powerpc/kernel/misc_32.S
+++ b/arch/powerpc/kernel/misc_32.S
@@ -479,6 +479,8 @@ _GLOBAL(_tlbil_pid)
  * (no broadcast)
  */
 _GLOBAL(_tlbil_va)
+   mfmsr   r10
+   wrteei  0
 	slwi	r4,r4,16
 	mtspr	SPRN_MAS6,r4	/* assume AS=0 for now */
tlbsx   0,r3
@@ -490,6 +492,7 @@ _GLOBAL(_tlbil_va)
tlbwe
msync
isync
+   wrtee   r10
blr
 #endif /* CONFIG_FSL_BOOKE */
 
-- 
1.5.6.5



[PATCH 5/16] powerpc/mm: Add local_flush_tlb_mm() to SW loaded TLB implementations

2008-12-14 Thread Benjamin Herrenschmidt
This adds a local_flush_tlb_mm() call as a pre-requisite for some
SMP work for BookE processors

Signed-off-by: Benjamin Herrenschmidt b...@kernel.crashing.org
---

 arch/powerpc/include/asm/tlbflush.h |    5 +++++
 1 file changed, 5 insertions(+)

--- linux-work.orig/arch/powerpc/include/asm/tlbflush.h	2008-12-03 14:33:02.000000000 +1100
+++ linux-work/arch/powerpc/include/asm/tlbflush.h	2008-12-03 14:33:22.000000000 +1100
@@ -40,6 +40,11 @@ extern void _tlbil_va(unsigned long addr
 extern void _tlbia(void);
 #endif
 
+static inline void local_flush_tlb_mm(struct mm_struct *mm)
+{
+	_tlbil_pid(mm->context.id);
+}
+
 static inline void flush_tlb_mm(struct mm_struct *mm)
 {
 	_tlbil_pid(mm->context.id);


[PATCH 6/16] powerpc/mm: Split mmu_context handling v3

2008-12-14 Thread Benjamin Herrenschmidt
This splits the mmu_context handling between 32-bit hash based processors,
64-bit hash based processors and everybody else. This is preliminary work
for adding SMP support for BookE processors.

Signed-off-by: Benjamin Herrenschmidt b...@kernel.crashing.org
---
v2. Address various comments from Josh and Stephen
v3. Properly remove the old mmu_context_32.c and mmu_context_64.c

 arch/powerpc/include/asm/mmu_context.h   |  260 +++
 arch/powerpc/kernel/asm-offsets.c|1 
 arch/powerpc/kernel/head_32.S|   12 +
 arch/powerpc/kernel/ppc_ksyms.c  |3 
 arch/powerpc/kernel/swsusp.c |2 
 arch/powerpc/mm/Makefile |7 
 arch/powerpc/mm/mmu_context_32.c |   84 
 arch/powerpc/mm/mmu_context_64.c |   70 ---
 arch/powerpc/mm/mmu_context_hash32.c |  103 ++
 arch/powerpc/mm/mmu_context_hash64.c |   78 
 arch/powerpc/mm/mmu_context_nohash.c |  162 
 arch/powerpc/platforms/Kconfig.cputype   |   10 -
 arch/powerpc/platforms/powermac/cpufreq_32.c |2 
 drivers/macintosh/via-pmu.c  |4 
 14 files changed, 407 insertions(+), 391 deletions(-)

--- linux-work.orig/arch/powerpc/include/asm/mmu_context.h	2008-12-09 16:30:57.000000000 +1100
+++ linux-work/arch/powerpc/include/asm/mmu_context.h	2008-12-09 16:31:02.000000000 +1100
@@ -2,240 +2,26 @@
 #define __ASM_POWERPC_MMU_CONTEXT_H
 #ifdef __KERNEL__
 
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/sched.h>
+#include <linux/spinlock.h>
 #include <asm/mmu.h>
 #include <asm/cputable.h>
 #include <asm-generic/mm_hooks.h>
-
-#ifndef CONFIG_PPC64
-#include <asm/atomic.h>
-#include <linux/bitops.h>
-
-/*
- * On 32-bit PowerPC 6xx/7xx/7xxx CPUs, we use a set of 16 VSIDs
- * (virtual segment identifiers) for each context.  Although the
- * hardware supports 24-bit VSIDs, and thus 1 million contexts,
- * we only use 32,768 of them.  That is ample, since there can be
- * at most around 30,000 tasks in the system anyway, and it means
- * that we can use a bitmap to indicate which contexts are in use.
- * Using a bitmap means that we entirely avoid all of the problems
- * that we used to have when the context number overflowed,
- * particularly on SMP systems.
- *  -- paulus.
- */
-
-/*
- * This function defines the mapping from contexts to VSIDs (virtual
- * segment IDs).  We use a skew on both the context and the high 4 bits
- * of the 32-bit virtual address (the effective segment ID) in order
- * to spread out the entries in the MMU hash table.  Note, if this
- * function is changed then arch/ppc/mm/hashtable.S will have to be
- * changed to correspond.
- */
-#define CTX_TO_VSID(ctx, va)	(((ctx) * (897 * 16) + ((va) >> 28) * 0x111) \
-				 & 0xffffff)
-
-/*
-   The MPC8xx has only 16 contexts.  We rotate through them on each
-   task switch.  A better way would be to keep track of tasks that
-   own contexts, and implement an LRU usage.  That way very active
-   tasks don't always have to pay the TLB reload overhead.  The
-   kernel pages are mapped shared, so the kernel can run on behalf
-   of any task that makes a kernel entry.  Shared does not mean they
-   are not protected, just that the ASID comparison is not performed.
--- Dan
-
-   The IBM4xx has 256 contexts, so we can just rotate through these
-   as a way of switching contexts.  If the TID of the TLB is zero,
-   the PID/TID comparison is disabled, so we can use a TID of zero
-   to represent all kernel pages as shared among all contexts.
-   -- Dan
- */
-
-static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
-{
-}
-
-#ifdef CONFIG_8xx
-#define NO_CONTEXT 16
-#define LAST_CONTEXT   15
-#define FIRST_CONTEXT  0
-
-#elif defined(CONFIG_4xx)
-#define NO_CONTEXT 256
-#define LAST_CONTEXT   255
-#define FIRST_CONTEXT  1
-
-#elif defined(CONFIG_E200) || defined(CONFIG_E500)
-#define NO_CONTEXT 256
-#define LAST_CONTEXT   255
-#define FIRST_CONTEXT  1
-
-#else
-
-/* PPC 6xx, 7xx CPUs */
-#define NO_CONTEXT ((unsigned long) -1)
-#define LAST_CONTEXT   32767
-#define FIRST_CONTEXT  1
-#endif
-
-/*
- * Set the current MMU context.
- * On 32-bit PowerPCs (other than the 8xx embedded chips), this is done by
- * loading up the segment registers for the user part of the address space.
- *
- * Since the PGD is immediately available, it is much faster to simply
- * pass this along as a second parameter, which is required for 8xx and
- * can be used for debugging on all processors (if you happen to have
- * an Abatron).
- */
-extern void set_context(unsigned long contextid, pgd_t *pgd);
-
-/*
- * Bitmap of contexts in use.
- * The size of this bitmap is LAST_CONTEXT + 1 bits.
- */
-extern unsigned long context_map[];
-
-/*
- 

[PATCH 7/16] powerpc/mm: Rework context management for CPUs with no hash table v2

2008-12-14 Thread Benjamin Herrenschmidt
This reworks the context management code used by 4xx,8xx and
freescale BookE. It adds support for SMP by implementing a
concept of stale context map to lazily flush the TLB on
processors where a context may have been invalidated. This
also contains the ground work for generalizing such lazy TLB
flushing by just picking up a new PID and marking the old one
stale. This will be implemented later.

This is a first implementation that uses a global spinlock.

Ideally, we should try to get at least the fast path (context ID
already assigned) lockless or limited to a per context lock,
but for now this will do.

I tried to keep the UP case reasonably simple to avoid adding
too much overhead to 8xx which does a lot of context stealing
since it effectively has only 16 PIDs available.
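
The core of the lazy flushing is a per-CPU "stale" bitmap consulted at
context switch time (an illustrative sketch of the scheme, not the
patch's exact code):

	/* If this context was invalidated while we weren't running on
	 * this CPU, flush the local TLB for it before using it again.
	 */
	if (test_bit(id, stale_map[cpu])) {
		local_flush_tlb_mm(next);
		__clear_bit(id, stale_map[cpu]);
	}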

Signed-off-by: Benjamin Herrenschmidt b...@kernel.crashing.org
---
v2. Fix some bugs with active tracking on SMP

 arch/powerpc/include/asm/mmu-40x.h   |5 
 arch/powerpc/include/asm/mmu-44x.h   |5 
 arch/powerpc/include/asm/mmu-8xx.h   |3 
 arch/powerpc/include/asm/mmu-fsl-booke.h |5 
 arch/powerpc/include/asm/tlbflush.h  |2 
 arch/powerpc/mm/mmu_context_nohash.c |  262 +--
 6 files changed, 232 insertions(+), 50 deletions(-)

--- linux-work.orig/arch/powerpc/include/asm/mmu-40x.h	2008-09-29 14:21:37.000000000 +1000
+++ linux-work/arch/powerpc/include/asm/mmu-40x.h	2008-12-09 16:42:03.000000000 +1100
@@ -54,8 +54,9 @@
 #ifndef __ASSEMBLY__
 
 typedef struct {
-   unsigned long id;
-   unsigned long vdso_base;
+	unsigned int	id;
+	unsigned int	active;
+	unsigned long	vdso_base;
 } mm_context_t;
 
 #endif /* !__ASSEMBLY__ */
Index: linux-work/arch/powerpc/include/asm/mmu-44x.h
===================================================================
--- linux-work.orig/arch/powerpc/include/asm/mmu-44x.h	2008-09-29 14:21:37.000000000 +1000
+++ linux-work/arch/powerpc/include/asm/mmu-44x.h	2008-12-15 10:12:39.000000000 +1100
@@ -56,8 +56,9 @@
 extern unsigned int tlb_44x_hwater;
 
 typedef struct {
-   unsigned long id;
-   unsigned long vdso_base;
+	unsigned int	id;
+	unsigned int	active;
+	unsigned long	vdso_base;
 } mm_context_t;
 
 #endif /* !__ASSEMBLY__ */
Index: linux-work/arch/powerpc/include/asm/mmu-fsl-booke.h
===================================================================
--- linux-work.orig/arch/powerpc/include/asm/mmu-fsl-booke.h	2008-12-08 15:40:33.000000000 +1100
+++ linux-work/arch/powerpc/include/asm/mmu-fsl-booke.h	2008-12-09 16:42:03.000000000 +1100
@@ -76,8 +76,9 @@
 #ifndef __ASSEMBLY__
 
 typedef struct {
-   unsigned long id;
-   unsigned long vdso_base;
+	unsigned int	id;
+	unsigned int	active;
+	unsigned long	vdso_base;
 } mm_context_t;
 #endif /* !__ASSEMBLY__ */
 
Index: linux-work/arch/powerpc/mm/mmu_context_nohash.c
===================================================================
--- linux-work.orig/arch/powerpc/mm/mmu_context_nohash.c	2008-12-09 16:42:03.000000000 +1100
+++ linux-work/arch/powerpc/mm/mmu_context_nohash.c	2008-12-15 10:13:05.000000000 +1100
@@ -14,13 +14,28 @@
  *  as published by the Free Software Foundation; either version
  *  2 of the License, or (at your option) any later version.
  *
+ * TODO:
+ *
+ *   - The global context lock will not scale very well
+ *   - The maps should be dynamically allocated to allow for processors
+ *     that support more PID bits at runtime
+ *   - Implement flush_tlb_mm() by making the context stale and picking
+ *     a new one
+ *   - More aggressively clear stale map bits and maybe find some way to
+ *     also clear mm->cpu_vm_mask bits when processes are migrated
  */
 
+#undef DEBUG
+#define DEBUG_STEAL_ONLY
+#undef DEBUG_MAP_CONSISTENCY
+
+#include <linux/kernel.h>
 #include <linux/mm.h>
 #include <linux/init.h>
 
 #include <asm/mmu_context.h>
 #include <asm/tlbflush.h>
+#include <linux/spinlock.h>
 
 /*
  *   The MPC8xx has only 16 contexts.  We rotate through them on each
@@ -40,17 +55,14 @@
  */
 
 #ifdef CONFIG_8xx
-#define NO_CONTEXT 16
 #define LAST_CONTEXT   15
 #define FIRST_CONTEXT  0
 
 #elif defined(CONFIG_4xx)
-#define NO_CONTEXT 256
 #define LAST_CONTEXT   255
 #define FIRST_CONTEXT  1
 
 #elif defined(CONFIG_E200) || defined(CONFIG_E500)
-#define NO_CONTEXT 256
 #define LAST_CONTEXT   255
 #define FIRST_CONTEXT  1
 
@@ -58,11 +70,11 @@
 #error Unsupported processor type
 #endif
 
-static unsigned long next_mmu_context;
+static unsigned int next_context, nr_free_contexts;
 static unsigned long context_map[LAST_CONTEXT / BITS_PER_LONG + 1];
-static atomic_t nr_free_contexts;
+static unsigned long stale_map[NR_CPUS][LAST_CONTEXT / BITS_PER_LONG + 1];
 static struct mm_struct *context_mm[LAST_CONTEXT+1];
-static void steal_context(void);
+static spinlock_t 

[PATCH 8/16] powerpc/mm: Rename tlb_32.c and tlb_64.c to tlb_hash32.c and tlb_hash64.c

2008-12-14 Thread Benjamin Herrenschmidt
This renames the files to clarify the fact that they are used by
the hash-based family of CPUs (the 603 being an exception in that
family, though it is still handled by that code).

This paves the way for the new tlb_nohash.c coming via a subsequent
patch.

Signed-off-by: Benjamin Herrenschmidt b...@kernel.crashing.org
---

 arch/powerpc/mm/Makefile |2 
 arch/powerpc/mm/tlb_32.c |  190 --
 arch/powerpc/mm/tlb_64.c |  211 ---
 arch/powerpc/mm/tlb_hash32.c |  190 ++
 arch/powerpc/mm/tlb_hash64.c |  211 +++
 5 files changed, 402 insertions(+), 402 deletions(-)

--- linux-work.orig/arch/powerpc/mm/tlb_32.c	2008-12-09 16:30:49.000000000 +1100
+++ /dev/null	1970-01-01 00:00:00.000000000 +0000
@@ -1,190 +0,0 @@
-/*
- * This file contains the routines for TLB flushing.
- * On machines where the MMU uses a hash table to store virtual to
- * physical translations, these routines flush entries from the
- * hash table also.
- *  -- paulus
- *
- *  Derived from arch/ppc/mm/init.c:
- *Copyright (C) 1995-1996 Gary Thomas (g...@linuxppc.org)
- *
- *  Modifications by Paul Mackerras (PowerMac) (pau...@cs.anu.edu.au)
- *  and Cort Dougan (PReP) (c...@cs.nmt.edu)
- *Copyright (C) 1996 Paul Mackerras
- *
- *  Derived from arch/i386/mm/init.c
- *Copyright (C) 1991, 1992, 1993, 1994  Linus Torvalds
- *
- *  This program is free software; you can redistribute it and/or
- *  modify it under the terms of the GNU General Public License
- *  as published by the Free Software Foundation; either version
- *  2 of the License, or (at your option) any later version.
- *
- */
-
-#include linux/kernel.h
-#include linux/mm.h
-#include linux/init.h
-#include linux/highmem.h
-#include linux/pagemap.h
-
-#include asm/tlbflush.h
-#include asm/tlb.h
-
-#include mmu_decl.h
-
-/*
- * Called when unmapping pages to flush entries from the TLB/hash table.
- */
-void flush_hash_entry(struct mm_struct *mm, pte_t *ptep, unsigned long addr)
-{
-   unsigned long ptephys;
-
-	if (Hash != 0) {
-		ptephys = __pa(ptep) & PAGE_MASK;
-		flush_hash_pages(mm->context.id, addr, ptephys, 1);
-	}
-}
-EXPORT_SYMBOL(flush_hash_entry);
-
-/*
- * Called by ptep_set_access_flags, must flush on CPUs for which the
- * DSI handler can't just fixup the TLB on a write fault
- */
-void flush_tlb_page_nohash(struct vm_area_struct *vma, unsigned long addr)
-{
-	if (Hash != 0)
-		return;
-	_tlbie(addr);
-}
-
-/*
- * Called at the end of a mmu_gather operation to make sure the
- * TLB flush is completely done.
- */
-void tlb_flush(struct mmu_gather *tlb)
-{
-	if (Hash == 0) {
-		/*
-		 * 603 needs to flush the whole TLB here since
-		 * it doesn't use a hash table.
-		 */
-		_tlbia();
-	}
-}
-
-/*
- * TLB flushing:
- *
- *  - flush_tlb_mm(mm) flushes the specified mm context TLB's
- *  - flush_tlb_page(vma, vmaddr) flushes one page
- *  - flush_tlb_range(vma, start, end) flushes a range of pages
- *  - flush_tlb_kernel_range(start, end) flushes kernel pages
- *
- * since the hardware hash table functions as an extension of the
- * tlb as far as the linux tables are concerned, flush it too.
- *-- Cort
- */
-
-/*
- * 750 SMP is a Bad Idea because the 750 doesn't broadcast all
- * the cache operations on the bus.  Hence we need to use an IPI
- * to get the other CPU(s) to invalidate their TLBs.
- */
-#ifdef CONFIG_SMP_750
-#define FINISH_FLUSH   smp_send_tlb_invalidate(0)
-#else
-#define FINISH_FLUSH   do { } while (0)
-#endif
-
-static void flush_range(struct mm_struct *mm, unsigned long start,
-   unsigned long end)
-{
-   pmd_t *pmd;
-   unsigned long pmd_end;
-   int count;
-	unsigned int ctx = mm->context.id;
-
-	if (Hash == 0) {
-		_tlbia();
-		return;
-	}
-	start &= PAGE_MASK;
-	if (start >= end)
-		return;
-	end = (end - 1) | ~PAGE_MASK;
-	pmd = pmd_offset(pud_offset(pgd_offset(mm, start), start), start);
-	for (;;) {
-		pmd_end = ((start + PGDIR_SIZE) & PGDIR_MASK) - 1;
-		if (pmd_end > end)
-			pmd_end = end;
-		if (!pmd_none(*pmd)) {
-			count = ((pmd_end - start) >> PAGE_SHIFT) + 1;
-			flush_hash_pages(ctx, start, pmd_val(*pmd), count);
-		}
-		if (pmd_end == end)
-			break;
-		start = pmd_end + 1;
-		++pmd;
-	}
-}
-
-/*
- * Flush kernel TLB entries in the given range
- */
-void flush_tlb_kernel_range(unsigned long start, unsigned long end)
-{
-   flush_range(init_mm, start, end);
-   FINISH_FLUSH;
-}
-
-/*
- * Flush all the (user) entries for the address 

[PATCH 10/16] powerpc/mm: Remove flush_HPTE()

2008-12-14 Thread Benjamin Herrenschmidt
The function flush_HPTE() is used in only one place, the implementation
of DEBUG_PAGEALLOC on ppc32.

It's actually a dup of flush_tlb_page() though it's -slightly- more
efficient on hash based processors. We remove it and replace it by
a direct call to the hash flush code on those processors and to
flush_tlb_page() for everybody else.

Signed-off-by: Benjamin Herrenschmidt b...@kernel.crashing.org
---

 arch/powerpc/mm/mmu_decl.h   |   17 -
 arch/powerpc/mm/pgtable_32.c |6 +-
 2 files changed, 5 insertions(+), 18 deletions(-)

--- linux-work.orig/arch/powerpc/mm/mmu_decl.h	2008-12-10 17:01:18.000000000 +1100
+++ linux-work/arch/powerpc/mm/mmu_decl.h	2008-12-10 17:01:35.000000000 +1100
@@ -58,17 +58,14 @@ extern phys_addr_t lowmem_end_addr;
  * architectures.  -- Dan
  */
 #if defined(CONFIG_8xx)
-#define flush_HPTE(X, va, pg)  _tlbie(va, 0 /* 8xx doesn't care about PID */)
 #define MMU_init_hw()  do { } while(0)
 #define mmu_mapin_ram()(0UL)
 
 #elif defined(CONFIG_4xx)
-#define flush_HPTE(pid, va, pg)_tlbie(va, pid)
 extern void MMU_init_hw(void);
 extern unsigned long mmu_mapin_ram(void);
 
 #elif defined(CONFIG_FSL_BOOKE)
-#define flush_HPTE(pid, va, pg)_tlbie(va, pid)
 extern void MMU_init_hw(void);
 extern unsigned long mmu_mapin_ram(void);
 extern void adjust_total_lowmem(void);
@@ -77,18 +74,4 @@ extern void adjust_total_lowmem(void);
 /* anything 32-bit except 4xx or 8xx */
 extern void MMU_init_hw(void);
 extern unsigned long mmu_mapin_ram(void);
-
-/* Be careful... this needs to be updated if we ever encounter 603 SMPs,
- * which includes all new 82xx processors.  We need tlbie/tlbsync here
- * in that case (I think). -- Dan.
- */
-static inline void flush_HPTE(unsigned context, unsigned long va,
-			      unsigned long pdval)
-{
-	if ((Hash != 0) &&
-	    mmu_has_feature(MMU_FTR_HPTE_TABLE))
-		flush_hash_pages(0, va, pdval, 1);
-	else
-		_tlbie(va);
-}
 #endif
Index: linux-work/arch/powerpc/mm/pgtable_32.c
===================================================================
--- linux-work.orig/arch/powerpc/mm/pgtable_32.c	2008-12-10 17:01:49.000000000 +1100
+++ linux-work/arch/powerpc/mm/pgtable_32.c	2008-12-10 17:04:36.000000000 +1100
@@ -342,7 +342,11 @@ static int __change_page_attr(struct pag
return -EINVAL;
set_pte_at(init_mm, address, kpte, mk_pte(page, prot));
wmb();
-   flush_HPTE(0, address, pmd_val(*kpmd));
+#ifdef CONFIG_PPC_STD_MMU
+   flush_hash_pages(0, address, pmd_val(*kpmd), 1);
+#else
+   flush_tlb_page(NULL, address);
+#endif
pte_unmap(kpte);
 
return 0;


[PATCH 11/16] powerpc/mm: Add SMP support to no-hash TLB handling v3

2008-12-14 Thread Benjamin Herrenschmidt
This patch moves the whole no-hash TLB handling out of line into a
new tlb_nohash.c file, and implements some basic SMP support using
IPIs and/or broadcast tlbivax instructions.

Note that I'm using local invalidations for D->I cache coherency.

At worst, if another processor is trying to execute the same and
has the old entry in its TLB, it will just take a fault and re-do
the TLB flush locally (it won't re-do the cache flush in any case).
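
The SMP path boils down to "invalidate locally, IPI everybody else"
(a simplified sketch; the real code only targets CPUs that have used
the context, and uses tlbivax where broadcast invalidation exists):

	static void do_flush_tlb_mm_ipi(void *arg)
	{
		struct mm_struct *mm = arg;

		_tlbil_pid(mm ? mm->context.id : 0);
	}

	void flush_tlb_mm(struct mm_struct *mm)
	{
		preempt_disable();
		smp_call_function(do_flush_tlb_mm_ipi, mm, 1);
		_tlbil_pid(mm->context.id);
		preempt_enable();
	}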

Signed-off-by: Benjamin Herrenschmidt b...@kernel.crashing.org
---

v2. This variant fixes usage of linux/spinlock.h instead of asm/spinlock.h
v3. Inadvertently un-EXPORT_SYMBOL'ed some cache flush calls on ppc64
v4. Fix differences in local_* flush variants between CPU types and
corresponding clash with highmem code. Remove remaining _tlbie calls
from nohash code.

 arch/powerpc/include/asm/highmem.h  |4 
 arch/powerpc/include/asm/mmu.h  |3 
 arch/powerpc/include/asm/tlbflush.h |   84 ++
 arch/powerpc/kernel/misc_32.S   |9 +
 arch/powerpc/kernel/ppc_ksyms.c |6 -
 arch/powerpc/mm/Makefile|2 
 arch/powerpc/mm/fault.c |2 
 arch/powerpc/mm/mem.c   |2 
 arch/powerpc/mm/tlb_hash32.c|4 
 arch/powerpc/mm/tlb_nohash.c|  209 
 10 files changed, 268 insertions(+), 57 deletions(-)

--- linux-work.orig/arch/powerpc/include/asm/tlbflush.h	2008-12-15 14:36:20.000000000 +1100
+++ linux-work/arch/powerpc/include/asm/tlbflush.h	2008-12-15 14:36:38.000000000 +1100
@@ -6,7 +6,9 @@
  *
  *  - flush_tlb_mm(mm) flushes the specified mm context TLB's
  *  - flush_tlb_page(vma, vmaddr) flushes one page
- *  - local_flush_tlb_page(vmaddr) flushes one page on the local processor
+ *  - local_flush_tlb_mm(mm) flushes the specified mm context on
+ *   the local processor
+ *  - local_flush_tlb_page(vma, vmaddr) flushes one page on the local processor
  *  - flush_tlb_page_nohash(vma, vmaddr) flushes one page if SW loaded TLB
  *  - flush_tlb_range(vma, start, end) flushes a range of pages
  *  - flush_tlb_kernel_range(start, end) flushes a range of kernel pages
@@ -18,7 +20,7 @@
  */
 #ifdef __KERNEL__
 
-#if defined(CONFIG_4xx) || defined(CONFIG_8xx) || defined(CONFIG_FSL_BOOKE)
+#ifdef CONFIG_PPC_MMU_NOHASH
 /*
  * TLB flushing for software loaded TLB chips
  *
@@ -31,10 +33,10 @@
 
 #define MMU_NO_CONTEXT ((unsigned int)-1)
 
-extern void _tlbie(unsigned long address, unsigned int pid);
 extern void _tlbil_all(void);
 extern void _tlbil_pid(unsigned int pid);
 extern void _tlbil_va(unsigned long address, unsigned int pid);
+extern void _tlbivax_bcast(unsigned long address, unsigned int pid);
 
 #if defined(CONFIG_40x) || defined(CONFIG_8xx)
 #define _tlbia()	asm volatile ("tlbia; sync" : : : "memory")
@@ -42,48 +44,26 @@ extern void _tlbil_va(unsigned long addr
 extern void _tlbia(void);
 #endif
 
-static inline void local_flush_tlb_mm(struct mm_struct *mm)
-{
-	_tlbil_pid(mm->context.id);
-}
-
-static inline void flush_tlb_mm(struct mm_struct *mm)
-{
-	_tlbil_pid(mm->context.id);
-}
-
-static inline void local_flush_tlb_page(unsigned long vmaddr)
-{
-	_tlbil_va(vmaddr, 0);
-}
-
-static inline void flush_tlb_page(struct vm_area_struct *vma,
-				  unsigned long vmaddr)
-{
-	_tlbil_va(vmaddr, vma ? vma->vm_mm->context.id : 0);
-}
+extern void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
+   unsigned long end);
+extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);
 
-static inline void flush_tlb_page_nohash(struct vm_area_struct *vma,
-					 unsigned long vmaddr)
-{
-	flush_tlb_page(vma, vmaddr);
-}
+extern void local_flush_tlb_mm(struct mm_struct *mm);
+extern void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr);
 
-static inline void flush_tlb_range(struct vm_area_struct *vma,
-				   unsigned long start, unsigned long end)
-{
-	_tlbil_pid(vma->vm_mm->context.id);
-}
+#ifdef CONFIG_SMP
+extern void flush_tlb_mm(struct mm_struct *mm);
+extern void flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr);
+#else
+#define flush_tlb_mm(mm)		local_flush_tlb_mm(mm)
+#define flush_tlb_page(vma,addr)	local_flush_tlb_page(vma,addr)
+#endif
+#define flush_tlb_page_nohash(vma,addr)	flush_tlb_page(vma,addr)
 
-static inline void flush_tlb_kernel_range(unsigned long start,
-					  unsigned long end)
-{
-	_tlbil_pid(0);
-}
+#elif defined(CONFIG_PPC_STD_MMU_32)
 
-#elif defined(CONFIG_PPC32)
 /*
- * TLB flushing for classic hash-MMMU 32-bit CPUs, 6xx, 7xx, 7xxx
+ * TLB flushing for classic hash-MMU 32-bit CPUs, 6xx, 7xx, 7xxx
  */
 extern void _tlbie(unsigned long address);
 extern void _tlbia(void);
@@ -94,14 +74,20 @@ extern void 

[PATCH 12/16] powerpc/mm: Split low level tlb invalidate for nohash processors

2008-12-14 Thread Benjamin Herrenschmidt
Currently, the various forms of low level TLB invalidations are all
implemented in misc_32.S for 32-bit processors, in a fairly scary
mess of #ifdef's and with interesting duplication such as a whole
bunch of code for FSL _tlbie and _tlbia which are no longer used.

This moves things around such that _tlbie is now defined in
hash_low_32.S and is only used by the 32-bit hash code, and all
nohash CPUs use the various _tlbil_* forms that are now moved to
a new file, tlb_nohash_low.S.

I moved all the definitions for that stuff, as they are really
internal mm bits, out of include/asm/tlbflush.h and into mm/mmu_decl.h.

The code should have no functional changes. I kept some variants
inline for trivial forms on things like 40x and 8xx. 

Signed-off-by: Benjamin Herrenschmidt b...@kernel.crashing.org
---

 arch/powerpc/include/asm/tlbflush.h |   14 --
 arch/powerpc/kernel/misc_32.S   |  233 
 arch/powerpc/kvm/powerpc.c  |2 
 arch/powerpc/mm/Makefile|3 
 arch/powerpc/mm/hash_low_32.S   |   76 +++
 arch/powerpc/mm/mmu_decl.h  |   48 +++
 arch/powerpc/mm/tlb_nohash_low.S|  165 +
 7 files changed, 292 insertions(+), 249 deletions(-)

--- linux-work.orig/arch/powerpc/include/asm/tlbflush.h	2008-12-15 15:46:23.000000000 +1100
+++ linux-work/arch/powerpc/include/asm/tlbflush.h	2008-12-15 15:46:56.000000000 +1100
@@ -33,17 +33,6 @@
 
 #define MMU_NO_CONTEXT ((unsigned int)-1)
 
-extern void _tlbil_all(void);
-extern void _tlbil_pid(unsigned int pid);
-extern void _tlbil_va(unsigned long address, unsigned int pid);
-extern void _tlbivax_bcast(unsigned long address, unsigned int pid);
-
-#if defined(CONFIG_40x) || defined(CONFIG_8xx)
-#define _tlbia()	asm volatile ("tlbia; sync" : : : "memory")
-#else /* CONFIG_44x || CONFIG_FSL_BOOKE */
-extern void _tlbia(void);
-#endif
-
 extern void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
unsigned long end);
 extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);
@@ -65,9 +54,6 @@ extern void flush_tlb_page(struct vm_are
 /*
  * TLB flushing for classic hash-MMU 32-bit CPUs, 6xx, 7xx, 7xxx
  */
-extern void _tlbie(unsigned long address);
-extern void _tlbia(void);
-
 extern void flush_tlb_mm(struct mm_struct *mm);
 extern void flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr);
 extern void flush_tlb_page_nohash(struct vm_area_struct *vma, unsigned long addr);
Index: linux-work/arch/powerpc/mm/mmu_decl.h
===================================================================
--- linux-work.orig/arch/powerpc/mm/mmu_decl.h	2008-12-15 15:46:23.000000000 +1100
+++ linux-work/arch/powerpc/mm/mmu_decl.h	2008-12-15 15:46:56.000000000 +1100
@@ -22,10 +22,58 @@
 #include <asm/tlbflush.h>
 #include <asm/mmu.h>
 
+#ifdef CONFIG_PPC_MMU_NOHASH
+
+/*
+ * On 40x and 8xx, we directly inline tlbia and tlbivax
+ */
+#if defined(CONFIG_40x) || defined(CONFIG_8xx)
+static inline void _tlbil_all(void)
+{
+	asm volatile ("sync; tlbia; isync" : : : "memory");
+}
+static inline void _tlbil_pid(unsigned int pid)
+{
+	asm volatile ("sync; tlbia; isync" : : : "memory");
+}
+#else /* CONFIG_40x || CONFIG_8xx */
+extern void _tlbil_all(void);
+extern void _tlbil_pid(unsigned int pid);
+#endif /* !(CONFIG_40x || CONFIG_8xx) */
+
+/*
+ * On 8xx, we directly inline tlbie, on others, it's extern
+ */
+#ifdef CONFIG_8xx
+static inline void _tlbil_va(unsigned long address, unsigned int pid)
+{
+	asm volatile ("tlbie %0; sync" : : "r" (address) : "memory");
+}
+#else /* CONFIG_8xx */
+extern void _tlbil_va(unsigned long address, unsigned int pid);
+#endif /* CONFIG_8xx */
+
+/*
+ * As of today, we don't support tlbivax broadcast on any
+ * implementation. When that becomes the case, this will be
+ * an extern.
+ */
+static inline void _tlbivax_bcast(unsigned long address, unsigned int pid)
+{
+   BUG();
+}
+
+#else /* CONFIG_PPC_MMU_NOHASH */
+
 extern void hash_preload(struct mm_struct *mm, unsigned long ea,
 unsigned long access, unsigned long trap);
 
 
+extern void _tlbie(unsigned long address);
+extern void _tlbia(void);
+
+#endif /* CONFIG_PPC_MMU_NOHASH */
+
 #ifdef CONFIG_PPC32
 extern void mapin_ram(void);
 extern int map_page(unsigned long va, phys_addr_t pa, int flags);
Index: linux-work/arch/powerpc/mm/tlb_nohash_low.S
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ linux-work/arch/powerpc/mm/tlb_nohash_low.S	2008-12-15 15:47:57.000000000 +1100
@@ -0,0 +1,165 @@
+/*
+ * This file contains low-level functions for performing various
+ * types of TLB invalidations on various processors with no hash
+ * table.
+ *
+ * This file implements the following functions for all no-hash
+ * processors. Some aren't implemented for some variants. Some
+ * are inline in tlbflush.h
+ *
+ * - 

[PATCH 13/16] powerpc/44x: No need to mask MSR:CE, ME or DE in _tlbil_va on 440

2008-12-14 Thread Benjamin Herrenschmidt
The handlers for Critical, Machine Check or Debug interrupts
will save and restore MMUCR nowadays, thus we only need to
disable normal interrupts when invalidating TLB entries.

Signed-off-by: Benjamin Herrenschmidt b...@kernel.crashing.org
---

 arch/powerpc/mm/tlb_nohash_low.S |   19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

--- linux-work.orig/arch/powerpc/mm/tlb_nohash_low.S	2008-12-15 13:34:57.000000000 +1100
+++ linux-work/arch/powerpc/mm/tlb_nohash_low.S	2008-12-15 13:35:07.000000000 +1100
@@ -75,18 +75,19 @@ _GLOBAL(_tlbil_va)
mfspr   r5,SPRN_MMUCR
rlwimi  r5,r4,0,24,31   /* Set TID */
 
-   /* We have to run the search with interrupts disabled, even critical
-* and debug interrupts (in fact the only critical exceptions we have
-* are debug and machine check).  Otherwise  an interrupt which causes
-* a TLB miss can clobber the MMUCR between the mtspr and the tlbsx. */
+   /* We have to run the search with interrupts disabled, otherwise
+* an interrupt which causes a TLB miss can clobber the MMUCR
+* between the mtspr and the tlbsx.
+*
+* Critical and Machine Check interrupts take care of saving
+* and restoring MMUCR, so only normal interrupts have to be
+* taken care of.
+*/
mfmsr   r4
-	lis	r6,(MSR_EE|MSR_CE|MSR_ME|MSR_DE)@ha
-	addi	r6,r6,(MSR_EE|MSR_CE|MSR_ME|MSR_DE)@l
-	andc	r6,r4,r6
-	mtmsr	r6
+   wrteei  0
mtspr   SPRN_MMUCR,r5
tlbsx.  r3, 0, r3
-   mtmsr   r4
+   wrtee   r4
bne 1f
sync
 	/* There are only 64 TLB entries, so r3 < 64,


[PATCH 15/16] powerpc/mm: Rework usage of _PAGE_COHERENT/NO_CACHE/GUARDED

2008-12-14 Thread Benjamin Herrenschmidt
Currently, we never set _PAGE_COHERENT in the PTEs, we just OR it in
in the hash code based on some CPU feature bit. We also manipulate
_PAGE_NO_CACHE and _PAGE_GUARDED by hand in all sorts of places.

This changes the logic so that instead, the PTE now contains
_PAGE_COHERENT for all normal RAM pages that have I = 0 on platforms
that need it. The hash code clears it if the feature bit is not set.

It also adds some clean accessors to setup various valid combinations
of access flags and changes various bits of code to use them instead.

This should help having the PTE actually containing the bit
combinations that we really want.

I also removed _PAGE_GUARDED from _PAGE_BASE on 44x and instead
set it explicitly from the TLB miss. I will ultimately remove it
completely, as it appears that it might not be needed after all,
but in the meantime, having it in the TLB miss makes things a
lot easier.
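
One of the "clean accessors" mentioned above looks roughly like this
(a reconstructed sketch; the exact names and the bit set live in the
asm/pgtable*.h hunks):

	/* All cache-control bits a PTE can carry */
	#define _PAGE_CACHE_CTL	(_PAGE_COHERENT | _PAGE_GUARDED | \
				 _PAGE_NO_CACHE | _PAGE_WRITETHRU)

	/* Build a non-cacheable, guarded variant of a protection value */
	#define pgprot_noncached(prot)					\
		__pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL) |	\
			 _PAGE_NO_CACHE | _PAGE_GUARDED)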

Signed-off-by: Benjamin Herrenschmidt b...@kernel.crashing.org
---

 arch/powerpc/include/asm/pgtable-ppc32.h |   42 ---
 arch/powerpc/include/asm/pgtable-ppc64.h |   13 -
 arch/powerpc/include/asm/pgtable.h   |   26 +++
 arch/powerpc/kernel/head_44x.S   |1 
 arch/powerpc/kernel/pci-common.c |   24 ++---
 arch/powerpc/mm/hash_low_32.S|4 +-
 arch/powerpc/mm/mem.c|4 +-
 arch/powerpc/platforms/cell/spufs/file.c |   27 ++-
 drivers/video/controlfb.c|4 +-
 9 files changed, 68 insertions(+), 77 deletions(-)

--- linux-work.orig/arch/powerpc/include/asm/pgtable-ppc32.h	2008-11-24 14:48:55.000000000 +1100
+++ linux-work/arch/powerpc/include/asm/pgtable-ppc32.h	2008-12-15 15:34:16.000000000 +1100
@@ -228,9 +228,10 @@ extern int icache_44x_need_flush;
  *   - FILE *must* be in the bottom three bits because swap cache
 *     entries use the top 29 bits for TLB2.
  *
- *   - CACHE COHERENT bit (M) has no effect on PPC440 core, because it
- *     doesn't support SMP. So we can use this as software bit, like
- *     DIRTY.
+ *   - CACHE COHERENT bit (M) has no effect on original PPC440 cores,
+ *     because it doesn't support SMP. However, some later 460 variants
+ *     have -some- form of SMP support and so I keep the bit there for
+ *     future use
  *
  * With the PPC 44x Linux implementation, the 0-11th LSBs of the PTE are used
  * for memory protection related functions (see PTE structure in
@@ -436,20 +437,23 @@ extern int icache_44x_need_flush;
 _PAGE_USER | _PAGE_ACCESSED | \
 _PAGE_RW | _PAGE_HWWRITE | _PAGE_DIRTY | \
 _PAGE_EXEC | _PAGE_HWEXEC)
+
 /*
- * Note: the _PAGE_COHERENT bit automatically gets set in the hardware
- * PTE if CONFIG_SMP is defined (hash_page does this); there is no need
- * to have it in the Linux PTE, and in fact the bit could be reused for
- * another purpose.  -- paulus.
+ * We define 2 sets of base prot bits, one for basic pages (ie,
+ * cacheable kernel and user pages) and one for non cacheable
+ * pages. We always set _PAGE_COHERENT when SMP is enabled or
+ * the processor might need it for DMA coherency.
  */
-
-#ifdef CONFIG_44x
-#define _PAGE_BASE (_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_GUARDED)
+#if defined(CONFIG_SMP) || defined(CONFIG_PPC_STD_MMU)
+#define _PAGE_BASE (_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_COHERENT)
 #else
 #define _PAGE_BASE (_PAGE_PRESENT | _PAGE_ACCESSED)
 #endif
+#define _PAGE_BASE_NC  (_PAGE_PRESENT | _PAGE_ACCESSED)
+
 #define _PAGE_WRENABLE (_PAGE_RW | _PAGE_DIRTY | _PAGE_HWWRITE)
 #define _PAGE_KERNEL   (_PAGE_BASE | _PAGE_SHARED | _PAGE_WRENABLE)
+#define _PAGE_KERNEL_NC	(_PAGE_BASE_NC | _PAGE_SHARED | _PAGE_WRENABLE | _PAGE_NO_CACHE)
 
 #ifdef CONFIG_PPC_STD_MMU
 /* On standard PPC MMU, no user access implies kernel read/write access,
@@ -459,7 +463,7 @@ extern int icache_44x_need_flush;
 #define _PAGE_KERNEL_RO(_PAGE_BASE | _PAGE_SHARED)
 #endif
 
-#define _PAGE_IO   (_PAGE_KERNEL | _PAGE_NO_CACHE | _PAGE_GUARDED)
+#define _PAGE_IO   (_PAGE_KERNEL_NC | _PAGE_GUARDED)
 #define _PAGE_RAM  (_PAGE_KERNEL | _PAGE_HWEXEC)
 
 #if defined(CONFIG_KGDB) || defined(CONFIG_XMON) || defined(CONFIG_BDI_SWITCH) ||\
@@ -552,9 +556,6 @@ static inline int pte_young(pte_t pte)
 static inline int pte_file(pte_t pte)		{ return pte_val(pte) & _PAGE_FILE; }
 static inline int pte_special(pte_t pte)	{ return pte_val(pte) & _PAGE_SPECIAL; }
 
-static inline void pte_uncache(pte_t pte)	{ pte_val(pte) |= _PAGE_NO_CACHE; }
-static inline void pte_cache(pte_t pte)	{ pte_val(pte) &= ~_PAGE_NO_CACHE; }
-
 static inline pte_t pte_wrprotect(pte_t pte) {
 	pte_val(pte) &= ~(_PAGE_RW | _PAGE_HWWRITE); return pte; }
 static inline pte_t pte_mkclean(pte_t pte) {
@@ -693,10 +694,11 @@ static inline void __set_pte_at(struct m
 #endif
 }
 
+
 static inline void 

[PATCH 16/16] powerpc/44x: 44x TLB doesn't need Guarded set for all pages

2008-12-14 Thread Benjamin Herrenschmidt
After discussing with chip designers, it appears that it's not
necessary to set G everywhere on 440 cores. The various core
errata related to prefetch should be sorted out by firmware by
disabling icache prefetching in CCR0. We add the workaround to
the kernel, however, just in case old firmwares don't do it.

This is valid for -all- 4xx core variants. Later ones hard wire
the absence of prefetch but it doesn't harm to clear the bits
in CCR0 (they should already be cleared anyway).

We still leave G=1 on the linear mapping for now, we need to
stop over-mapping RAM to be able to remove it.
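
Modeled in C, the CCR0 fixup does the following (a sketch; IBM bit
numbering, so rlwinm r3,r3,0,0,27 keeps bits 0-27 and clears the four
low-order bits, which hold the icache speculative prefetch controls):

	unsigned int ccr0 = mfspr(SPRN_CCR0);

	ccr0 &= 0xfffffff0;	/* clear the prefetch control fields */
	mtspr(SPRN_CCR0, ccr0);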

Signed-off-by: Benjamin Herrenschmidt b...@kernel.crashing.org
---

 arch/powerpc/kernel/head_44x.S |   12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

--- linux-work.orig/arch/powerpc/kernel/head_44x.S	2008-12-10 16:11:35.000000000 +1100
+++ linux-work/arch/powerpc/kernel/head_44x.S	2008-12-10 16:29:08.000000000 +1100
@@ -69,6 +69,17 @@ _ENTRY(_start);
li  r24,0   /* CPU number */
 
 /*
+ * In case the firmware didn't do it, we apply some workarounds
+ * that are good for all 440 core variants here
+ */
+   mfspr   r3,SPRN_CCR0
+	rlwinm	r3,r3,0,0,27	/* disable icache prefetch */
+   isync
+   mtspr   SPRN_CCR0,r3
+   isync
+   sync
+
+/*
  * Set up the initial MMU state
  *
  * We are still executing code at the virtual address
@@ -570,7 +581,6 @@ finish_tlb_load:
 	rlwimi	r10,r12,29,30,30	/* DIRTY -> SW position */
and r11,r12,r10 /* Mask PTE bits to keep */
andi.   r10,r12,_PAGE_USER  /* User page ? */
-   ori r11,r11,_PAGE_GUARDED   /* 440 errata, needs G set */
beq 1f  /* nope, leave U bits empty */
rlwimi  r11,r11,3,26,28 /* yes, copy S bits to U */
 1: tlbwe   r11,r13,PPC44x_TLB_ATTRIB   /* Write ATTRIB */


Re: [PATCH 4/16] powerpc/fsl-booke: Fix problem with _tlbil_va

2008-12-14 Thread Stephen Rothwell
Hi Ben,

On Mon, 15 Dec 2008 16:44:21 +1100 Benjamin Herrenschmidt b...@kernel.crashing.org wrote:

 From: Kumar Gala ga...@kernel.crashing.org
 
 An example calling sequence which we did see:

This one is already in Linus' tree as of today.
-- 
Cheers,
Stephen Rothwell			s...@canb.auug.org.au
http://www.canb.auug.org.au/~sfr/



Re: [PATCH 4/16] powerpc/fsl-booke: Fix problem with _tlbil_va

2008-12-14 Thread Benjamin Herrenschmidt
On Mon, 2008-12-15 at 17:59 +1100, Stephen Rothwell wrote:
 Hi Ben,
 
 On Mon, 15 Dec 2008 16:44:21 +1100 Benjamin Herrenschmidt b...@kernel.crashing.org wrote:
 
  From: Kumar Gala ga...@kernel.crashing.org
  
  An example calling sequence which we did see:
 
 This one is already in Linus' tree as of today.

Ah indeed, it wasn't in powerpc yet, which is why I left it in there,
since that's what my series is based on.

I expect a few of those near the top of the pile to also go separate
ways via Kumar or Josh...

I grouped them all to make the dependency chain clear and because that
way it actually builds on top of today's powerpc master :-)

Once we are past reviews etc., we can always sort out the details of
how to merge the various bits. Hopefully soon, since it's now getting
some fairly good testing by Kumar and me, and some of them already
have -some- amount of review.

Cheers,
Ben.




Re: [RESEND] [PATCH] powerpc: remove dead BIO_VMERGE_BOUNDARY definition

2008-12-14 Thread Jens Axboe
On Sun, Dec 14 2008, FUJITA Tomonori wrote:
 This is a resend of:
 
 http://marc.info/?l=linux-kernelm=122482703616607w=2
 
 =
 From: FUJITA Tomonori fujita.tomon...@lab.ntt.co.jp
 Subject: [PATCH] powerpc: remove dead BIO_VMERGE_BOUNDARY definition
 
 The block layer dropped the virtual merge feature
 (b8b3e16cfe6435d961f6aaebcfd52a1ff2a988c5). The BIO_VMERGE_BOUNDARY
 definition is meaningless now (for POWER, BIO_VMERGE_BOUNDARY has been
 meaningless for a long time, since POWER disables the virtual merge
 feature).
 
 Signed-off-by: FUJITA Tomonori fujita.tomon...@lab.ntt.co.jp

Acked-by: Jens Axboe jens.ax...@oracle.com

 ---
  arch/powerpc/include/asm/io.h |7 ---
  1 files changed, 0 insertions(+), 7 deletions(-)
 
 diff --git a/arch/powerpc/include/asm/io.h b/arch/powerpc/include/asm/io.h
 index 08266d2..494cd8b 100644
 --- a/arch/powerpc/include/asm/io.h
 +++ b/arch/powerpc/include/asm/io.h
 @@ -713,13 +713,6 @@ static inline void * phys_to_virt(unsigned long address)
   */
  #define page_to_phys(page)	((phys_addr_t)page_to_pfn(page) << PAGE_SHIFT)
  
 -/* We do NOT want virtual merging, it would put too much pressure on
 - * our iommu allocator. Instead, we want drivers to be smart enough
 - * to coalesce sglists that happen to have been mapped in a contiguous
 - * way by the iommu
 - */
 -#define BIO_VMERGE_BOUNDARY  0
 -
  /*
   * 32 bits still uses virt_to_bus() for it's implementation of DMA
   * mappings se we have to keep it defined here. We also have some old
 -- 
 1.5.5.GIT
 

-- 
Jens Axboe
