Our SMP implementation uses the MESI protocol.  Grouping together data
which is mostly only read means that we avoid unnecessary cache line
bouncing when that data would otherwise share a cache line with
frequently written data.

In other words, cache lines associated with read-mostly data are
expected to spend most of their time in shared state.
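
For illustration only (not part of this patch), a variable that is
written once at initialisation and then only read on hot paths could
be annotated as below; the variable name is made up:

	/* Hypothetical usage sketch: placing a read-mostly variable in
	 * .data..read_mostly keeps it away from frequently written data,
	 * so its cache line can stay in the MESI shared state on all CPUs.
	 */
	#include <linux/cache.h>

	static unsigned int my_poll_interval __read_mostly = 100;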

Signed-off-by: Russell King <rmk+ker...@arm.linux.org.uk>
---
 arch/arm/include/asm/cache.h  |    2 ++
 arch/arm/kernel/vmlinux.lds.S |    1 +
 2 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/arch/arm/include/asm/cache.h b/arch/arm/include/asm/cache.h
index 9d61220..75fe66b 100644
--- a/arch/arm/include/asm/cache.h
+++ b/arch/arm/include/asm/cache.h
@@ -23,4 +23,6 @@
 #define ARCH_SLAB_MINALIGN 8
 #endif
 
+#define __read_mostly __attribute__((__section__(".data..read_mostly")))
+
 #endif
diff --git a/arch/arm/kernel/vmlinux.lds.S b/arch/arm/kernel/vmlinux.lds.S
index cead889..1581f6d 100644
--- a/arch/arm/kernel/vmlinux.lds.S
+++ b/arch/arm/kernel/vmlinux.lds.S
@@ -167,6 +167,7 @@ SECTIONS
 
                NOSAVE_DATA
                CACHELINE_ALIGNED_DATA(32)
+               READ_MOSTLY_DATA(32)
 
                /*
                 * The exception fixup table (might need resorting at runtime)
-- 
1.6.2.5
