Re: [PATCH v2 15/31] arm64: SMP support

2012-08-17 Thread Tony Lindgren
* Catalin Marinas  [120817 02:33]:
> On Fri, Aug 17, 2012 at 10:21:33AM +0100, Tony Lindgren wrote:
> > * Catalin Marinas  [120814 11:05]:
> > > This patch adds SMP initialisation and spinlocks implementation for
> > > AArch64. The spinlock support uses the new load-acquire/store-release
> > > instructions to avoid explicit barriers. The architecture also specifies
> > > that an event is automatically generated when clearing the exclusive
> > > monitor state to wake up processors in WFE, so there is no need for an
> > > explicit DSB/SEV instruction sequence. The SEVL instruction is used to
> > > set the exclusive monitor locally as there is no conditional WFE and a
> > > branch is more expensive.
> > 
> > Do we always have SMP hardware on arm64? Or are we going to need to
> > again add smp_on_up support later on?
> 
> There isn't anything in the architecture specs that mandates multiple
> cores but given the current trend it's very likely that we'll always
> have MP.
> 
> An improvement in AArch64 is that we can use the SMP cache/TLB ops (the
> inner shareable variants) even on a UP system so there is no need for
> run-time code patching for correct execution.

That's good to hear!

Tony


Re: [PATCH v2 15/31] arm64: SMP support

2012-08-17 Thread Catalin Marinas
On Fri, Aug 17, 2012 at 10:21:33AM +0100, Tony Lindgren wrote:
> * Catalin Marinas  [120814 11:05]:
> > This patch adds SMP initialisation and spinlocks implementation for
> > AArch64. The spinlock support uses the new load-acquire/store-release
> > instructions to avoid explicit barriers. The architecture also specifies
> > that an event is automatically generated when clearing the exclusive
> > monitor state to wake up processors in WFE, so there is no need for an
> > explicit DSB/SEV instruction sequence. The SEVL instruction is used to
> > set the exclusive monitor locally as there is no conditional WFE and a
> > branch is more expensive.
> 
> Do we always have SMP hardware on arm64? Or are we going to need to
> again add smp_on_up support later on?

There isn't anything in the architecture specs that mandates multiple
cores but given the current trend it's very likely that we'll always
have MP.

An improvement in AArch64 is that we can use the SMP cache/TLB ops (the
inner shareable variants) even on a UP system so there is no need for
run-time code patching for correct execution.
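
For comparison, 32-bit ARM has to patch between UP and SMP variants at boot
time (the ALT_SMP()/ALT_UP() macros behind smp_on_up). On AArch64 a single
code path using the inner shareable operations is architecturally correct
on both, along the lines of this sketch (illustrative only, not the exact
code in the patch):

static inline void sketch_flush_tlb_all(void)
{
	asm volatile(
	"	dsb	sy\n"		/* complete earlier page table updates */
	"	tlbi	vmalle1is\n"	/* invalidate all EL1 TLB entries, inner shareable */
	"	dsb	sy\n"		/* wait for the invalidation to finish */
	"	isb\n"			/* resynchronise instruction fetch */
	::: "memory");
}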

-- 
Catalin


Re: [PATCH v2 15/31] arm64: SMP support

2012-08-17 Thread Tony Lindgren
* Catalin Marinas  [120814 11:05]:
> This patch adds SMP initialisation and spinlocks implementation for
> AArch64. The spinlock support uses the new load-acquire/store-release
> instructions to avoid explicit barriers. The architecture also specifies
> that an event is automatically generated when clearing the exclusive
> monitor state to wake up processors in WFE, so there is no need for an
> explicit DSB/SEV instruction sequence. The SEVL instruction is used to
> set the exclusive monitor locally as there is no conditional WFE and a
> branch is more expensive.

Do we always have SMP hardware on arm64? Or are we going to need to
again add smp_on_up support later on?

Other than that:

Acked-by: Tony Lindgren 


Re: [PATCH v2 15/31] arm64: SMP support

2012-08-15 Thread Arnd Bergmann
On Tuesday 14 August 2012, Catalin Marinas wrote:
> This patch adds SMP initialisation and spinlocks implementation for
> AArch64. The spinlock support uses the new load-acquire/store-release
> instructions to avoid explicit barriers. The architecture also specifies
> that an event is automatically generated when clearing the exclusive
> monitor state to wake up processors in WFE, so there is no need for an
> explicit DSB/SEV instruction sequence. The SEVL instruction is used to
> set the exclusive monitor locally as there is no conditional WFE and a
> branch is more expensive.
> 
> For the SMP booting protocol, see Documentation/arm64/booting.txt.
> 
> Signed-off-by: Will Deacon 
> Signed-off-by: Marc Zyngier 
> Signed-off-by: Catalin Marinas 

Acked-by: Arnd Bergmann 


Re: [PATCH v2 15/31] arm64: SMP support

2012-08-14 Thread Olof Johansson
Hi,

On Tue, Aug 14, 2012 at 06:52:16PM +0100, Catalin Marinas wrote:
> This patch adds SMP initialisation and spinlocks implementation for
> AArch64. The spinlock support uses the new load-acquire/store-release
> instructions to avoid explicit barriers. The architecture also specifies
> that an event is automatically generated when clearing the exclusive
> monitor state to wake up processors in WFE, so there is no need for an
> explicit DSB/SEV instruction sequence. The SEVL instruction is used to
> set the exclusive monitor locally as there is no conditional WFE and a
> branch is more expensive.
> 
> For the SMP booting protocol, see Documentation/arm64/booting.txt.
> 
> Signed-off-by: Will Deacon 
> Signed-off-by: Marc Zyngier 
> Signed-off-by: Catalin Marinas 
> ---

> diff --git a/arch/arm64/include/asm/spinlock.h b/arch/arm64/include/asm/spinlock.h
> new file mode 100644
> index 000..34a37fb
> --- /dev/null
> +++ b/arch/arm64/include/asm/spinlock.h
> @@ -0,0 +1,199 @@
> +/*
> + * Copyright (C) 2012 ARM Ltd.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
> + */
> +#ifndef __ASM_SPINLOCK_H
> +#define __ASM_SPINLOCK_H
> +
> +#include <asm/spinlock_types.h>
> +#include <asm/processor.h>
> +
> +/*
> + * AArch64 Spin-locking.
> + *
> + * We exclusively read the old value.  If it is zero, we may have
> + * won the lock, so we try exclusively storing it.  A memory barrier
> + * is required after we get a lock, and before we release it, because
> + * V6 CPUs are assumed to have weakly ordered memory.

This comment should be updated, to mention the implicit locking and remove the
reference to V6?

Also, ignore previous questions on another reply about need for barriers,
obviously not needed given the load-acquire/store-release semantics.
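
The same point in portable terms: with acquire/release operations the
ordering is attached to the lock accesses themselves. On AArch64, C11
acquire loads and release stores map directly to LDAR/LDAXR and STLR, with
no separate barrier instruction. A sketch with hypothetical helper names:

#include <stdatomic.h>

static inline void lock_acquire_sketch(atomic_uint *lock)
{
	unsigned int expected;

	do {
		expected = 0;	/* only a 0 -> 1 transition takes the lock */
	} while (!atomic_compare_exchange_weak_explicit(lock, &expected, 1,
							memory_order_acquire,
							memory_order_relaxed));
}

static inline void lock_release_sketch(atomic_uint *lock)
{
	/* Compiles to a single STLR on AArch64; no explicit DMB/DSB. */
	atomic_store_explicit(lock, 0, memory_order_release);
}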



-Olof


[PATCH v2 15/31] arm64: SMP support

2012-08-14 Thread Catalin Marinas
This patch adds SMP initialisation and spinlocks implementation for
AArch64. The spinlock support uses the new load-acquire/store-release
instructions to avoid explicit barriers. The architecture also specifies
that an event is automatically generated when clearing the exclusive
monitor state to wake up processors in WFE, so there is no need for an
explicit DSB/SEV instruction sequence. The SEVL instruction is used to
set the exclusive monitor locally as there is no conditional WFE and a
branch is more expensive.
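
To make this concrete, here is a minimal sketch of the lock/unlock pattern
described above. It is illustrative only (not necessarily the exact code in
the diff below) and assumes a simple word-sized lock where zero means
unlocked:

typedef struct {
	volatile unsigned int lock;	/* 0 == unlocked, 1 == locked */
} sketch_spinlock_t;

static inline void sketch_spin_lock(sketch_spinlock_t *l)
{
	unsigned int tmp;

	asm volatile(
	"	sevl\n"			/* prime the local event register... */
	"1:	wfe\n"			/* ...so the first WFE falls straight through */
	"2:	ldaxr	%w0, %1\n"	/* load-acquire exclusive: no explicit barrier */
	"	cbnz	%w0, 1b\n"	/* lock held: wait for an event */
	"	stxr	%w0, %w2, %1\n"	/* try to claim the lock */
	"	cbnz	%w0, 2b\n"	/* lost the exclusive race: try again */
	: "=&r" (tmp), "+Q" (l->lock)
	: "r" (1)
	: "memory");
}

static inline void sketch_spin_unlock(sketch_spinlock_t *l)
{
	/*
	 * The store-release orders the critical section before the store,
	 * and clearing the exclusive monitor generates the wake-up event
	 * for CPUs waiting in WFE, so no DSB/SEV sequence is needed.
	 */
	asm volatile("stlr	%w1, %0" : "=Q" (l->lock) : "r" (0) : "memory");
}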

For the SMP booting protocol, see Documentation/arm64/booting.txt.

Signed-off-by: Will Deacon 
Signed-off-by: Marc Zyngier 
Signed-off-by: Catalin Marinas 
---
 arch/arm64/include/asm/hardirq.h|5 +
 arch/arm64/include/asm/smp.h|   69 +
 arch/arm64/include/asm/spinlock.h   |  199 +
 arch/arm64/include/asm/spinlock_types.h |   38 +++
 arch/arm64/kernel/smp.c |  469 +++
 5 files changed, 780 insertions(+), 0 deletions(-)
 create mode 100644 arch/arm64/include/asm/smp.h
 create mode 100644 arch/arm64/include/asm/spinlock.h
 create mode 100644 arch/arm64/include/asm/spinlock_types.h
 create mode 100644 arch/arm64/kernel/smp.c

diff --git a/arch/arm64/include/asm/hardirq.h b/arch/arm64/include/asm/hardirq.h
index c6c9514..5075463 100644
--- a/arch/arm64/include/asm/hardirq.h
+++ b/arch/arm64/include/asm/hardirq.h
@@ -20,8 +20,13 @@
 #include <linux/threads.h>
 #include <asm/irq.h>
 
+#define NR_IPI 4
+
 typedef struct {
unsigned int __softirq_pending;
+#ifdef CONFIG_SMP
+   unsigned int ipi_irqs[NR_IPI];
+#endif
 } cacheline_aligned irq_cpustat_t;
 
 #include <linux/irq_cpustat.h> /* Standard mappings for irq_cpustat_t above */
diff --git a/arch/arm64/include/asm/smp.h b/arch/arm64/include/asm/smp.h
new file mode 100644
index 000..7e34295
--- /dev/null
+++ b/arch/arm64/include/asm/smp.h
@@ -0,0 +1,69 @@
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef __ASM_SMP_H
+#define __ASM_SMP_H
+
+#include <linux/threads.h>
+#include <linux/cpumask.h>
+#include <linux/thread_info.h>
+
+#ifndef CONFIG_SMP
+# error "<asm/smp.h> included in non-SMP build"
+#endif
+
+#define raw_smp_processor_id() (current_thread_info()->cpu)
+
+struct seq_file;
+
+/*
+ * generate IPI list text
+ */
+extern void show_ipi_list(struct seq_file *p, int prec);
+
+/*
+ * Called from C code, this handles an IPI.
+ */
+extern void handle_IPI(int ipinr, struct pt_regs *regs);
+
+/*
+ * Setup the set of possible CPUs (via set_cpu_possible)
+ */
+extern void smp_init_cpus(void);
+
+/*
+ * Provide a function to raise an IPI cross call on CPUs in callmap.
+ */
+extern void set_smp_cross_call(void (*)(const struct cpumask *, unsigned int));
+
+/*
+ * Called from the secondary holding pen, this is the secondary CPU entry point.
+ */
+asmlinkage void secondary_start_kernel(void);
+
+/*
+ * Initial data for bringing up a secondary CPU.
+ */
+struct secondary_data {
+   void *stack;
+};
+extern struct secondary_data secondary_data;
+extern void secondary_holding_pen(void);
+extern volatile unsigned long secondary_holding_pen_release;
+
+extern void arch_send_call_function_single_ipi(int cpu);
+extern void arch_send_call_function_ipi_mask(const struct cpumask *mask);
+
+#endif /* ifndef __ASM_SMP_H */
diff --git a/arch/arm64/include/asm/spinlock.h b/arch/arm64/include/asm/spinlock.h
new file mode 100644
index 000..34a37fb
--- /dev/null
+++ b/arch/arm64/include/asm/spinlock.h
@@ -0,0 +1,199 @@
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef __ASM_SPINLOCK_H
+#define __ASM_SPINLOCK_H
+
+#include <asm/spinlock_types.h>
+#include <asm/processor.h>
+
+/*
+ * AArch64 Spin-locking.
+ *
+ * We exclusively read the old value.  If it is zero, we may have
+ * won the lock, so we try exclusively storing it.  A memory barrier
+ * is required after we get a lock, and before we release it, because
+ * V6 CPUs are assumed to have weakly ordered memory.
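
Beyond the point where the archived message breaks off, the
secondary_holding_pen* declarations in asm/smp.h above already imply the
shape of the boot protocol: each secondary CPU parks in WFE until the boot
CPU writes that CPU's number into secondary_holding_pen_release. A rough C
rendering (the real pen is in assembly, and the exact release value used
here is an assumption):

/* Secondary side: spin in the pen until released, then enter the kernel. */
static void secondary_pen_sketch(unsigned long cpu)
{
	while (secondary_holding_pen_release != cpu)
		asm volatile("wfe" ::: "memory");	/* sleep until SEV/monitor event */

	secondary_start_kernel();
}

/* Boot CPU side: release one secondary and wake anyone waiting in WFE. */
static void release_secondary_sketch(unsigned long cpu)
{
	secondary_holding_pen_release = cpu;
	asm volatile(
	"	dsb	sy\n"	/* make the write visible... */
	"	sev\n"		/* ...then generate the wake-up event */
	::: "memory");
}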
