Re: [PATCH v5 4/4] ARM: Add support for Hisilicon Kunpeng L3 cache controller

2021-01-30 Thread Leizhen (ThunderTown)



On 2021/1/29 21:54, Leizhen (ThunderTown) wrote:
> 
> 
> On 2021/1/29 18:26, Arnd Bergmann wrote:
>> On Fri, Jan 29, 2021 at 9:16 AM Arnd Bergmann  wrote:
>>> On Fri, Jan 29, 2021 at 8:23 AM Leizhen (ThunderTown) wrote:
 On 2021/1/28 22:24, Arnd Bergmann wrote:
> On Sat, Jan 16, 2021 at 4:27 AM Zhen Lei wrote:
>> diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile
>> +
>> +static void l3cache_maint_common(u32 range, u32 op_type)
>> +{
>> +   u32 reg;
>> +
>> +   reg = readl(l3_ctrl_base + L3_MAINT_CTRL);
>> +   reg &= ~(L3_MAINT_RANGE_MASK | L3_MAINT_TYPE_MASK);
>> +   reg |= range | op_type;
>> +   reg |= L3_MAINT_STATUS_START;
>> +   writel(reg, l3_ctrl_base + L3_MAINT_CTRL);
>
> Are there contents of L3_MAINT_CTRL that need to be preserved
> across calls and can not be inferred? A 'readl()' is often expensive,
> so it might be more efficient if you can avoid that.

 Right, this readl() can be replaced with readl_relaxed(). Thanks.

 I'll check and correct the readl() and writel() in other places.
>>>
>>> What I meant is that if you want to replace them, you should provide
>>> performance numbers that show how much difference this makes
>>> and add comments in the source code explaining how you proved that
>>> the _relaxed() version is actually correct.
>>
>> Another clarification, as there are actually two independent
>> points here:
>>
>> * if you can completely remove the readl() above and just write a
>>   hardcoded value into the register, or perhaps read the original
>>   value once at boot time, that is probably a win because it
>>   avoids one of the barriers in the beginning. The datasheet should
>>   tell you if there are any bits in the register that have to be
>>   preserved
> 
> That would make the code very tightly coupled, though.
> 
>>
>> * Regarding the _relaxed() accessors, it's a lot harder to know
>>   whether that is safe, as you first have to show, in particular in case
>>   any of the accesses stop being guarded by the spinlock in that
>>   case, and whether there may be a case where you have to
>>   serialize the memory access against accesses that are still in the
>>   store queue or prefetched.
>>
>> Whether this matters at all depends mostly on the type of devices
>> you are driving on your SoC. If you have any high-speed network
>> interfaces that are unable to do cache coherent DMA, any extra
>> instruction here may impact the number of packets you can transfer,
>> but if all your high-speed devices are connected to a coherent
>> interconnect, I would just go with the obvious approach and use
>> the safe MMIO accessors everywhere.
> 
> In fact, this driver has been running on an earlier kernel version for several
> years and has not received any feedback about performance problems, so I didn't
> try to optimize it when I first sent these patches, and didn't reconsider it
> until you pointed it out.
> 
> How about keeping it unchanged for the moment? Retesting would take a lot of
> time and effort.

In the spirit of code excellence, it's still worth optimizing.
Yesterday my family was urging me to head home, so I wrote that reply in a hurry.
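
For reference, a rough sketch of the boot-time-read idea might look like the
following. It assumes the datasheet confirms that no bits of L3_MAINT_CTRL
outside the RANGE/TYPE/STATUS fields change at runtime; l3_maint_ctrl_save is
a hypothetical cached copy filled in once by a readl() in l3cache_init():

static u32 l3_maint_ctrl_save;	/* captured once during l3cache_init() */

static void l3cache_maint_common(u32 range, u32 op_type)
{
	/* Start from the cached value instead of a readl() on every call. */
	u32 reg = l3_maint_ctrl_save;

	reg &= ~(L3_MAINT_RANGE_MASK | L3_MAINT_TYPE_MASK);
	reg |= range | op_type | L3_MAINT_STATUS_START;
	writel(reg, l3_ctrl_base + L3_MAINT_CTRL);

	/* Wait until the hardware maintenance operation is complete. */
	do {
		cpu_relax();
		reg = readl(l3_ctrl_base + L3_MAINT_CTRL);
	} while ((reg & L3_MAINT_STATUS_MASK) != L3_MAINT_STATUS_END);
}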

> 
>>
>>Arnd
>>
> 



Re: [PATCH v5 4/4] ARM: Add support for Hisilicon Kunpeng L3 cache controller

2021-01-30 Thread Leizhen (ThunderTown)



On 2021/1/29 18:33, Russell King - ARM Linux admin wrote:
> On Fri, Jan 29, 2021 at 11:26:38AM +0100, Arnd Bergmann wrote:
>> Another clarification, as there are actually two independent
>> points here:
>>
>> * if you can completely remove the readl() above and just write a
>>   hardcoded value into the register, or perhaps read the original
>>   value once at boot time, that is probably a win because it
>>   avoids one of the barriers in the beginning. The datasheet should
>>   tell you if there are any bits in the register that have to be
>>   preserved
>>
>> * Regarding the _relaxed() accessors, it's a lot harder to know
>>   whether that is safe, as you first have to show, in particular in case
>>   any of the accesses stop being guarded by the spinlock in that
>>   case, and whether there may be a case where you have to
>>   serialize the memory access against accesses that are still in the
>>   store queue or prefetched.
>>
>> Whether this matters at all depends mostly on the type of devices
>> you are driving on your SoC. If you have any high-speed network
>> interfaces that are unable to do cache coherent DMA, any extra
>> instruction here may impact the number of packets you can transfer,
>> but if all your high-speed devices are connected to a coherent
>> interconnect, I would just go with the obvious approach and use
>> the safe MMIO accessors everywhere.
> 
> For L2 cache code, I would say the opposite, actually, because it is
> all too easy to get into a deadlock otherwise.
> 
> If you implement the sync callback, that will be called from every
> non-relaxed accessor, which means if you need to take some kind of
> lock in the sync callback and elsewhere in the L2 cache code, you will
> definitely deadlock.
> 
> It is safer to put explicit barriers where it is necessary.
> 
> Also remember that the barrier in readl() etc is _after_ the read, not
> before, and the barrier in writel() is _before_ the write, not after.
> The point is to ensure that DMA memory accesses are properly ordered
> with the IO-accessing instructions.

Yes, I know. writel() must be used for the write operations that control a
"start/stop" or "enable/disable" function, to ensure that the data from previous
write operations has reached its target. I've run into this kind of problem before.
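
A generic illustration of that ordering requirement (the descriptor and
register names below are made up, not from this driver):

	/* Fill in a descriptor in coherent DMA memory with plain stores. */
	desc->addr = buf_dma_addr;
	desc->len  = buf_len;

	/*
	 * writel() places its barrier *before* the MMIO store, so the
	 * descriptor contents above are visible to the device before the
	 * doorbell write that starts the transfer.
	 */
	writel(DOORBELL_START, base + DOORBELL_REG);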

> 
> So, using readl_relaxed() with a read-modify-write is entirely sensible
> provided you do not access DMA memory inbetween.

Actually, I don't think this register is that complicated. I copied the code
back below. None of the bits of L3_MAINT_CTRL are affected by DMA accesses.
The software sets "range | op_type" to specify the operation type and scope,
then sets the L3_MAINT_STATUS_START bit to start the operation, and waits for
the hardware to clear that bit from 1 to 0.

+   reg = readl(l3_ctrl_base + L3_MAINT_CTRL);
+   reg &= ~(L3_MAINT_RANGE_MASK | L3_MAINT_TYPE_MASK);
+   reg |= range | op_type;
+   reg |= L3_MAINT_STATUS_START;
+   writel(reg, l3_ctrl_base + L3_MAINT_CTRL);
+
+   /* Wait until the hardware maintenance operation is complete. */
+   do {
+   cpu_relax();
+   reg = readl(l3_ctrl_base + L3_MAINT_CTRL);
+   } while ((reg & L3_MAINT_STATUS_MASK) != L3_MAINT_STATUS_END);
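
If the _relaxed() conversion does go ahead, a minimal sketch could look like the
one below. It assumes, as Russell notes, that no DMA memory is accessed in
between; whether a trailing barrier is still needed before returning to callers
that then touch DMA buffers is exactly what would have to be shown:

static void l3cache_maint_common(u32 range, u32 op_type)
{
	u32 reg;

	reg = readl_relaxed(l3_ctrl_base + L3_MAINT_CTRL);
	reg &= ~(L3_MAINT_RANGE_MASK | L3_MAINT_TYPE_MASK);
	reg |= range | op_type | L3_MAINT_STATUS_START;
	writel_relaxed(reg, l3_ctrl_base + L3_MAINT_CTRL);

	/* Wait until the hardware maintenance operation is complete. */
	do {
		cpu_relax();
		reg = readl_relaxed(l3_ctrl_base + L3_MAINT_CTRL);
	} while ((reg & L3_MAINT_STATUS_MASK) != L3_MAINT_STATUS_END);

	/* A final dsb() may still be required here; see the points above. */
}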

> 



Re: [PATCH v5 4/4] ARM: Add support for Hisilicon Kunpeng L3 cache controller

2021-01-29 Thread Russell King - ARM Linux admin
On Fri, Jan 29, 2021 at 11:26:38AM +0100, Arnd Bergmann wrote:
> Another clarification, as there are actually two independent
> points here:
> 
> * if you can completely remove the readl() above and just write a
>   hardcoded value into the register, or perhaps read the original
>   value once at boot time, that is probably a win because it
>   avoids one of the barriers in the beginning. The datasheet should
>   tell you if there are any bits in the register that have to be
>   preserved
> 
> * Regarding the _relaxed() accessors, it's a lot harder to know
>   whether that is safe, as you first have to show, in particular in case
>   any of the accesses stop being guarded by the spinlock in that
>   case, and whether there may be a case where you have to
>   serialize the memory access against accesses that are still in the
>   store queue or prefetched.
> 
> Whether this matters at all depends mostly on the type of devices
> you are driving on your SoC. If you have any high-speed network
> interfaces that are unable to do cache coherent DMA, any extra
> instruction here may impact the number of packets you can transfer,
> but if all your high-speed devices are connected to a coherent
> interconnect, I would just go with the obvious approach and use
> the safe MMIO accessors everywhere.

For L2 cache code, I would say the opposite, actually, because it is
all too easy to get into a deadlock otherwise.

If you implement the sync callback, that will be called from every
non-relaxed accessor, which means if you need to take some kind of
lock in the sync callback and elsewhere in the L2 cache code, you will
definitely deadlock.

It is safer to put explicit barriers where it is necessary.

Also remember that the barrier in readl() etc is _after_ the read, not
before, and the barrier in writel() is _before_ the write, not after.
The point is to ensure that DMA memory accesses are properly ordered
with the IO-accessing instructions.

So, using readl_relaxed() with a read-modify-write is entirely sensible
provided you do not access DMA memory inbetween.
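
To make the deadlock concrete, here is a sketch of that scenario (hypothetical:
the posted driver does not appear to implement a sync callback). The barrier
inside the non-relaxed accessors is what invokes the registered outer-cache
sync callback, so taking the same lock there recurses:

static void l3cache_sync(void)
{
	unsigned long flags;

	spin_lock_irqsave(&l3cache_lock, flags);
	/* ... whatever drain/sync the hardware would need ... */
	spin_unlock_irqrestore(&l3cache_lock, flags);
}

static void l3cache_flush_all(void)
{
	unsigned long flags;

	spin_lock_irqsave(&l3cache_lock, flags);
	/*
	 * The barrier implied by writel() ends up calling the registered
	 * sync callback, i.e. l3cache_sync() tries to take l3cache_lock
	 * again while it is already held: deadlock.
	 */
	writel(L3_MAINT_STATUS_START, l3_ctrl_base + L3_MAINT_CTRL);
	spin_unlock_irqrestore(&l3cache_lock, flags);
}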

-- 
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTP is here! 40Mbps down 10Mbps up. Decent connectivity at last!


Re: [PATCH v5 4/4] ARM: Add support for Hisilicon Kunpeng L3 cache controller

2021-01-28 Thread Leizhen (ThunderTown)



On 2021/1/28 22:24, Arnd Bergmann wrote:
> On Sat, Jan 16, 2021 at 4:27 AM Zhen Lei  wrote:
>> diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile
>> +
>> +static void l3cache_maint_common(u32 range, u32 op_type)
>> +{
>> +   u32 reg;
>> +
>> +   reg = readl(l3_ctrl_base + L3_MAINT_CTRL);
>> +   reg &= ~(L3_MAINT_RANGE_MASK | L3_MAINT_TYPE_MASK);
>> +   reg |= range | op_type;
>> +   reg |= L3_MAINT_STATUS_START;
>> +   writel(reg, l3_ctrl_base + L3_MAINT_CTRL);
> 
> Are there contents of L3_MAINT_CTRL that need to be preserved
> across calls and can not be inferred? A 'readl()' is often expensive,
> so it might be more efficient if you can avoid that.

Right, this readl() can be replaced with readl_relaxed(). Thanks.

I'll check and correct the readl() and writel() in other places.

> 
>> +static inline void l3cache_flush_all_nolock(void)
>> +{
>> +   l3cache_maint_common(L3_MAINT_RANGE_ALL, L3_MAINT_TYPE_FLUSH);
>> +}
>> +
>> +static void l3cache_flush_all(void)
>> +{
>> +   unsigned long flags;
>> +
>> +   spin_lock_irqsave(&l3cache_lock, flags);
>> +   l3cache_flush_all_nolock();
>> +   spin_unlock_irqrestore(&l3cache_lock, flags);
>> +}
> 
> I see that cache-l2x0 uses raw_spin_lock_irqsave() instead of
> spin_lock_irqsave(), to avoid preemption in the middle of a cache
> operation. This is probably a good idea here as well.

I don't think there's any essential difference between the two. I don't know
whether the compiler or some tool does anything extra. I checked the git log of
the l2x0 driver, and it has used raw_spin_lock_irqsave() from the beginning;
maybe there's an explanation somewhere back in the 2.6 history. Since you
mentioned this potential risk, I'll change it to raw_spin_lock_irqsave().

include/linux/spinlock.h:
static __always_inline raw_spinlock_t *spinlock_check(spinlock_t *lock)
{
	return &lock->rlock;
}

#define spin_lock_irqsave(lock, flags)				\
do {								\
	raw_spin_lock_irqsave(spinlock_check(lock), flags);	\
} while (0)
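
On a kernel without PREEMPT_RT they do indeed end up in the same place, as the
snippet above shows; the difference appears with PREEMPT_RT, where spinlock_t
becomes a sleeping lock while raw_spinlock_t keeps spinning with interrupts
disabled. The change itself is small; a sketch against the posted driver:

static DEFINE_RAW_SPINLOCK(l3cache_lock);

static void l3cache_flush_all(void)
{
	unsigned long flags;

	raw_spin_lock_irqsave(&l3cache_lock, flags);
	l3cache_flush_all_nolock();
	raw_spin_unlock_irqrestore(&l3cache_lock, flags);
}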

> 
> I also see that l2x0 uses readl_relaxed(), to avoid a deadlock
> in l2x0_cache_sync(). This may also be beneficial for performance
> reasons, so it might be helpful to compare performance
> overhead. On the other hand, readl()/writel() are usually the
> safe choice, as those avoid the need to argue over whether
> the relaxed versions are safe in all corner cases.
> 
>> +static int __init l3cache_init(void)
>> +{
>> +   u32 reg;
>> +   struct device_node *node;
>> +
>> +   node = of_find_matching_node(NULL, l3cache_ids);
>> +   if (!node)
>> +   return -ENODEV;
> 
> I think the initcall should return '0' to indicate success when running
> a kernel with this driver built-in on a platform that does not have
> this device.

I have added "depends on ARCH_KUNPENG50X" for this driver, but it's OK to
return 0 here as well.
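
A minimal sketch of that change in l3cache_init():

	node = of_find_matching_node(NULL, l3cache_ids);
	if (!node)
		return 0;	/* no Kunpeng L3 on this platform: not an error */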

> 
>> diff --git a/arch/arm/mm/cache-kunpeng-l3.h b/arch/arm/mm/cache-kunpeng-l3.h
>> new file mode 100644
>> index 000..9ef6a53e7d4db49
>> --- /dev/null
>> +++ b/arch/arm/mm/cache-kunpeng-l3.h
>> @@ -0,0 +1,30 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +#ifndef __CACHE_KUNPENG_L3_H
>> +#define __CACHE_KUNPENG_L3_H
>> +
>> +#define L3_CACHE_LINE_SHITF	6
> 
> I would suggest moving the contents of the header file into the .c file,
> since there is only a single user of these macros.

Okay, I'll move it.

> 
>   Arnd
> 
> .
> 



Re: [PATCH v5 4/4] ARM: Add support for Hisilicon Kunpeng L3 cache controller

2021-01-28 Thread Arnd Bergmann
On Sat, Jan 16, 2021 at 4:27 AM Zhen Lei  wrote:
> diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile
> +
> +static void l3cache_maint_common(u32 range, u32 op_type)
> +{
> +   u32 reg;
> +
> +   reg = readl(l3_ctrl_base + L3_MAINT_CTRL);
> +   reg &= ~(L3_MAINT_RANGE_MASK | L3_MAINT_TYPE_MASK);
> +   reg |= range | op_type;
> +   reg |= L3_MAINT_STATUS_START;
> +   writel(reg, l3_ctrl_base + L3_MAINT_CTRL);

Are there contents of L3_MAINT_CTRL that need to be preserved
across calls and can not be inferred? A 'readl()' is often expensive,
so it might be more efficient if you can avoid that.

> +static inline void l3cache_flush_all_nolock(void)
> +{
> +   l3cache_maint_common(L3_MAINT_RANGE_ALL, L3_MAINT_TYPE_FLUSH);
> +}
> +
> +static void l3cache_flush_all(void)
> +{
> +   unsigned long flags;
> +
> +   spin_lock_irqsave(&l3cache_lock, flags);
> +   l3cache_flush_all_nolock();
> +   spin_unlock_irqrestore(&l3cache_lock, flags);
> +}

I see that cache-l2x0 uses raw_spin_lock_irqsave() instead of
spin_lock_irqsave(), to avoid preemption in the middle of a cache
operation. This is probably a good idea here as well.

I also see that l2x0 uses readl_relaxed(), to avoid a deadlock
in l2x0_cache_sync(). This may also be beneficial for performance
reasons, so it might be helpful to compare performance
overhead. On the other hand, readl()/writel() are usually the
safe choice, as those avoid the need to argue over whether
the relaxed versions are safe in all corner cases.

> +static int __init l3cache_init(void)
> +{
> +   u32 reg;
> +   struct device_node *node;
> +
> +   node = of_find_matching_node(NULL, l3cache_ids);
> +   if (!node)
> +   return -ENODEV;

I think the initcall should return '0' to indicate success when running
a kernel with this driver built-in on a platform that does not have
this device.

> diff --git a/arch/arm/mm/cache-kunpeng-l3.h b/arch/arm/mm/cache-kunpeng-l3.h
> new file mode 100644
> index 000..9ef6a53e7d4db49
> --- /dev/null
> +++ b/arch/arm/mm/cache-kunpeng-l3.h
> @@ -0,0 +1,30 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef __CACHE_KUNPENG_L3_H
> +#define __CACHE_KUNPENG_L3_H
> +
> +#define L3_CACHE_LINE_SHITF	6

I would suggest moving the contents of the header file into the .c file,
since there is only a single user of these macros.

  Arnd


[PATCH v5 4/4] ARM: Add support for Hisilicon Kunpeng L3 cache controller

2021-01-15 Thread Zhen Lei
Add support for the Hisilicon Kunpeng L3 cache controller as used with
Kunpeng506 and Kunpeng509 SoCs.

These Hisilicon SoCs support LPAE, so physical addresses are wider than
32 bits, but the actual bit width does not exceed 36 bits. When a cache
maintenance operation is performed on an address range, the upper 30 bits of
the physical address are written to the L3_MAINT_START and L3_MAINT_END
registers, and the lower 6 bits (the cacheline offset) are ignored.
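
As a worked example of the address handling (the value is illustrative only):

	/* A 36-bit physical address such as 0xF_FFFF_FFC0 ... */
	phys_addr_t addr = 0xFFFFFFFC0ULL;

	/* ... shifted right by the 6-bit cacheline offset gives 0x3FFFFFFF, */
	/* i.e. its upper 30 bits, which fit in the 32-bit maintenance registers. */
	u32 reg_val = addr >> 6;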

Signed-off-by: Zhen Lei 
---
 arch/arm/mm/Kconfig|  10 +++
 arch/arm/mm/Makefile   |   1 +
 arch/arm/mm/cache-kunpeng-l3.c | 153 +
 arch/arm/mm/cache-kunpeng-l3.h |  30 +++
 4 files changed, 194 insertions(+)
 create mode 100644 arch/arm/mm/cache-kunpeng-l3.c
 create mode 100644 arch/arm/mm/cache-kunpeng-l3.h

diff --git a/arch/arm/mm/Kconfig b/arch/arm/mm/Kconfig
index 02692fbe2db5c59..d2082503de053d2 100644
--- a/arch/arm/mm/Kconfig
+++ b/arch/arm/mm/Kconfig
@@ -1070,6 +1070,16 @@ config CACHE_XSC3L2
help
  This option enables the L2 cache on XScale3.
 
+config CACHE_KUNPENG_L3
+   bool "Enable the Hisilicon Kunpeng L3 cache controller"
+   depends on ARCH_KUNPENG50X && OF
+   default y
+   select OUTER_CACHE
+   help
+ This option enables the Kunpeng L3 cache controller on Hisilicon
+ Kunpeng506 and Kunpeng509 SoCs. It supports a maximum of 36-bit
+ physical addresses.
+
 config ARM_L1_CACHE_SHIFT_6
bool
default y if CPU_V7
diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile
index 3510503bc5e688b..ececc5489e353eb 100644
--- a/arch/arm/mm/Makefile
+++ b/arch/arm/mm/Makefile
@@ -112,6 +112,7 @@ obj-$(CONFIG_CACHE_L2X0_PMU)	+= cache-l2x0-pmu.o
 obj-$(CONFIG_CACHE_XSC3L2) += cache-xsc3l2.o
 obj-$(CONFIG_CACHE_TAUROS2)+= cache-tauros2.o
 obj-$(CONFIG_CACHE_UNIPHIER)   += cache-uniphier.o
+obj-$(CONFIG_CACHE_KUNPENG_L3) += cache-kunpeng-l3.o
 
 KASAN_SANITIZE_kasan_init.o	:= n
 obj-$(CONFIG_KASAN)		+= kasan_init.o
diff --git a/arch/arm/mm/cache-kunpeng-l3.c b/arch/arm/mm/cache-kunpeng-l3.c
new file mode 100644
index 000..cb81f15d26a0cf2
--- /dev/null
+++ b/arch/arm/mm/cache-kunpeng-l3.c
@@ -0,0 +1,153 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2021 Hisilicon Limited.
+ */
+
+#include 
+#include 
+#include 
+#include 
+
+#include 
+
+#include "cache-kunpeng-l3.h"
+
+static DEFINE_SPINLOCK(l3cache_lock);
+static void __iomem *l3_ctrl_base;
+
+
+static void l3cache_maint_common(u32 range, u32 op_type)
+{
+   u32 reg;
+
+   reg = readl(l3_ctrl_base + L3_MAINT_CTRL);
+   reg &= ~(L3_MAINT_RANGE_MASK | L3_MAINT_TYPE_MASK);
+   reg |= range | op_type;
+   reg |= L3_MAINT_STATUS_START;
+   writel(reg, l3_ctrl_base + L3_MAINT_CTRL);
+
+   /* Wait until the hardware maintenance operation is complete. */
+   do {
+   cpu_relax();
+   reg = readl(l3_ctrl_base + L3_MAINT_CTRL);
+   } while ((reg & L3_MAINT_STATUS_MASK) != L3_MAINT_STATUS_END);
+}
+
+static void l3cache_maint_range(phys_addr_t start, phys_addr_t end, u32 op_type)
+{
+   start = start >> L3_CACHE_LINE_SHITF;
+   end = ((end - 1) >> L3_CACHE_LINE_SHITF) + 1;
+
+   writel(start, l3_ctrl_base + L3_MAINT_START);
+   writel(end, l3_ctrl_base + L3_MAINT_END);
+
+   l3cache_maint_common(L3_MAINT_RANGE_ADDR, op_type);
+}
+
+static inline void l3cache_flush_all_nolock(void)
+{
+   l3cache_maint_common(L3_MAINT_RANGE_ALL, L3_MAINT_TYPE_FLUSH);
+}
+
+static void l3cache_flush_all(void)
+{
+   unsigned long flags;
+
+   spin_lock_irqsave(&l3cache_lock, flags);
+   l3cache_flush_all_nolock();
+   spin_unlock_irqrestore(&l3cache_lock, flags);
+}
+
+static void l3cache_inv_range(phys_addr_t start, phys_addr_t end)
+{
+   unsigned long flags;
+
+   spin_lock_irqsave(&l3cache_lock, flags);
+   l3cache_maint_range(start, end, L3_MAINT_TYPE_INV);
+   spin_unlock_irqrestore(&l3cache_lock, flags);
+}
+
+static void l3cache_clean_range(phys_addr_t start, phys_addr_t end)
+{
+   unsigned long flags;
+
+   spin_lock_irqsave(&l3cache_lock, flags);
+   l3cache_maint_range(start, end, L3_MAINT_TYPE_CLEAN);
+   spin_unlock_irqrestore(&l3cache_lock, flags);
+}
+
+static void l3cache_flush_range(phys_addr_t start, phys_addr_t end)
+{
+   unsigned long flags;
+
+   spin_lock_irqsave(&l3cache_lock, flags);
+   l3cache_maint_range(start, end, L3_MAINT_TYPE_FLUSH);
+   spin_unlock_irqrestore(&l3cache_lock, flags);
+}
+
+static void l3cache_disable(void)
+{
+   unsigned long flags;
+
+   spin_lock_irqsave(&l3cache_lock, flags);
+   l3cache_flush_all_nolock();
+   writel(L3_CTRL_DISABLE, l3_ctrl_base + L3_CTRL);
+   spin_unlock_irqrestore(&l3cache_lock, flags);
+}
+
+static const struct of_device_id l3cache_ids[] __initconst = {
+   {.compatible = "hisilicon,kunpeng-l3cache", .data = NULL},
+   {}
+};
+
+static int __init l3cache_init(void)
+{
+   u32