Re: [PATCH v2 2/4] iommu: Add Allwinner H6 IOMMU driver

2020-04-20 Thread Robin Murphy

On 2020-04-20 3:39 pm, Maxime Ripard wrote:

> Hi,
>
> On Wed, Apr 08, 2020 at 04:06:49PM +0200, Joerg Roedel wrote:
> > On Wed, Apr 01, 2020 at 01:47:10PM +0200, Maxime Ripard wrote:
> > > As far as I understand it, the page table can be accessed concurrently,
> > > since the framework doesn't seem to provide any serialization /
> > > locking. Shouldn't we have some locks to prevent concurrent access?
> >
> > The dma-iommu code makes sure that there are no concurrent accesses to
> > the same address-range of the page-table, but there can (and will) be
> > concurrent accesses to the same page-table, just for different parts of
> > the address space.
> >
> > Making this lock-less usually involves updating non-leaf page-table
> > entries using atomic compare-exchange instructions.
>
> That makes sense, thanks!
>
> I'm not sure what I should compare against, though. Do you mean comparing
> with 0 to check whether a page table is already assigned to that DTE? If
> so, we should also allocate the candidate page table beforehand so that we
> have something to swap in, and free it if one was already installed?


Indeed, for an example see arm_v7s_install_table() and how 
__arm_v7s_map() calls it. The LPAE version in io-pgtable-arm.c does the 
same too, but with some extra software-bit handshaking to track the 
cache maintenance state as an optimisation, which you can probably do 
without trying to make sense of ;)
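
To make that concrete, a minimal sketch of the compare-exchange install
pattern those functions implement, transposed onto this patch's helpers.
sun50i_install_table(), sun50i_alloc_page_table(), sun50i_free_page_table()
and sun50i_mk_dte() are hypothetical names, not actual driver API:

/*
 * Sketch only: install a freshly allocated page table into an empty
 * DTE with an atomic compare-exchange, so concurrent mappers racing
 * on the same DTE don't need a lock.
 */
static u32 *sun50i_install_table(struct sun50i_iommu_domain *domain,
				 u32 *dte_addr, gfp_t gfp)
{
	u32 *page_table;
	u32 old, dte;

	/* Speculatively allocate a table so there is something to swap in. */
	page_table = sun50i_alloc_page_table(domain, gfp);
	if (!page_table)
		return ERR_PTR(-ENOMEM);

	dte = sun50i_mk_dte(virt_to_phys(page_table));

	/* Install it only if the DTE is still empty (0). */
	old = cmpxchg(dte_addr, 0, dte);
	if (old) {
		/* Lost the race: free ours and reuse the winner's table. */
		sun50i_free_page_table(domain, page_table);
		page_table = phys_to_virt(sun50i_dte_get_pt_address(old));
	}

	return page_table;
}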


Robin.


Re: [PATCH v2 2/4] iommu: Add Allwinner H6 IOMMU driver

2020-04-20 Thread Maxime Ripard
Hi,

On Wed, Apr 08, 2020 at 04:06:49PM +0200, Joerg Roedel wrote:
> On Wed, Apr 01, 2020 at 01:47:10PM +0200, Maxime Ripard wrote:
> > As far as I understand it, the page table can be accessed concurrently,
> > since the framework doesn't seem to provide any serialization /
> > locking. Shouldn't we have some locks to prevent concurrent access?
> 
> The dma-iommu code makes sure that there are no concurrent accesses to
> the same address-range of the page-table, but there can (and will) be
> concurrent accesses to the same page-table, just for different parts of
> the address space.
> 
> Making this lock-less usually involves updating non-leaf page-table
> entries using atomic compare-exchange instructions.

That makes sense, thanks!

I'm not sure what I should compare against, though. Do you mean comparing
with 0 to check whether a page table is already assigned to that DTE? If
so, we should also allocate the candidate page table beforehand so that we
have something to swap in, and free it if one was already installed?

Maxime



Re: [PATCH v2 2/4] iommu: Add Allwinner H6 IOMMU driver

2020-04-08 Thread Joerg Roedel
Hi Maxime,

On Wed, Apr 01, 2020 at 01:47:10PM +0200, Maxime Ripard wrote:
> As far as I understand it, the page table can be accessed concurrently,
> since the framework doesn't seem to provide any serialization /
> locking. Shouldn't we have some locks to prevent concurrent access?

The dma-iommu code makes sure that there are no concurrent accesses to
the same address-range of the page-table, but there can (and will) be
concurrent accesses to the same page-table, just for different parts of
the address space.

Making this lock-less usually involves updating non-leaf page-table
entries using atomic compare-exchange instructions.

> > > + *pte_addr = sun50i_mk_pte(paddr, prot);
> > > + sun50i_table_flush(sun50i_domain, pte_addr, 1);
> >
> > This maps only one page, right? But the function needs to map up to
> > 'size' as given in the parameter list.
> 
> It does, but pgsize_bitmap is set to 4k only (since the hardware only
> supports that), so we would have multiple calls to map, each time with
> a single page judging from:
> https://elixir.bootlin.com/linux/latest/source/drivers/iommu/iommu.c#L1948
> 
> Right?

Okay, you are right here. Just note that when this function is called
for every 4k page, it had better be fast and avoid slow things like
TLB flushes.
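
Concretely, a sketch of a map tail that follows this advice, reusing the
helpers quoted from the patch; dropping the per-page invalidation assumes
the flushing moves into the iotlb callbacks discussed further down:

/*
 * Fast per-page map: write the PTE and clean the cache line so the
 * hardware walker sees it, but don't touch the TLB here.
 */
*pte_addr = sun50i_mk_pte(paddr, prot);
sun50i_table_flush(sun50i_domain, pte_addr, 1);
/* No sun50i_iommu_tlb_invalidate() on this path. */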

> The vendor driver was doing something along those lines, and I wanted to
> be conservative with the cache management unless we ran into performance
> issues, but I'll convert to the iotlb callbacks then.

Yeah, that definitely helps performance.

Regards,

Joerg


Re: [PATCH v2 2/4] iommu: Add Allwinner H6 IOMMU driver

2020-04-01 Thread Maxime Ripard
Hi Jörg,

Thanks for your review. I'll fix the issues you pointed out, including the
ones I've left out of this reply.

On Mon, Mar 02, 2020 at 04:36:06PM +0100, Joerg Roedel wrote:
> On Thu, Feb 20, 2020 at 07:15:14PM +0100, Maxime Ripard wrote:
> > +struct sun50i_iommu_domain {
> > +   struct iommu_domain domain;
> > +
> > +   /* Number of devices attached to the domain */
> > +   refcount_t refcnt;
> > +
> > +   /* Lock to modify the Directory Table */
> > +   spinlock_t dt_lock;
>
> I suggest you make page-table updates lock-less. Otherwise this lock
> will become a bottleneck when using the IOMMU through DMA-API.

As far as I understand it, the page table can be accessed concurrently,
since the framework doesn't seem to provide any serialization /
locking. Shouldn't we have some locks to prevent concurrent access?

> > +
> > +static int sun50i_iommu_map(struct iommu_domain *domain, unsigned long iova,
> > +   phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
> > +{
> > +   struct sun50i_iommu_domain *sun50i_domain = to_sun50i_domain(domain);
> > +   struct sun50i_iommu *iommu = sun50i_domain->iommu;
> > +   u32 pte_index;
> > +   u32 *page_table, *pte_addr;
> > +   unsigned long flags;
> > +   int ret = 0;
> > +
> > +   spin_lock_irqsave(&sun50i_domain->dt_lock, flags);
> > +   page_table = sun50i_dte_get_page_table(sun50i_domain, iova, gfp);
> > +   if (IS_ERR(page_table)) {
> > +   ret = PTR_ERR(page_table);
> > +   goto out;
> > +   }
> > +
> > +   pte_index = sun50i_iova_get_pte_index(iova);
> > +   pte_addr = &page_table[pte_index];
> > +   if (sun50i_pte_is_page_valid(*pte_addr)) {
>
> You can use unlikely() here.
>
> > +   phys_addr_t page_phys = sun50i_pte_get_page_address(*pte_addr);
> > +   dev_err(iommu->dev,
> > +   "iova %pad already mapped to %pa cannot remap to %pa 
> > prot: %#x\n",
> > +   , _phys, , prot);
> > +   ret = -EBUSY;
> > +   goto out;
> > +   }
> > +
> > +   *pte_addr = sun50i_mk_pte(paddr, prot);
> > +   sun50i_table_flush(sun50i_domain, pte_addr, 1);
>
> This maps only one page, right? But the function needs to map up to
> 'size' as given in the parameter list.

It does, but pgsize_bitmap is set to 4k only (since the hardware only
supports that), so we would have multiple calls to map, each time with
a single page judging from:
https://elixir.bootlin.com/linux/latest/source/drivers/iommu/iommu.c#L1948

Right?
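
For context, a sketch of the declaration driving that behaviour;
pgsize_bitmap is a real field of struct iommu_ops, and SZ_4K is what the
discussion implies this driver advertises:

/*
 * Sketch: with only 4k pages advertised, the core's iommu_map()
 * splits every request and calls ->map() once per 4k page.
 */
static const struct iommu_ops sun50i_iommu_ops = {
	.pgsize_bitmap	= SZ_4K,
	/* ... other callbacks ... */
};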

> > +
> > +   spin_lock_irqsave(&iommu->iommu_lock, flags);
> > +   sun50i_iommu_tlb_invalidate(iommu, iova);
> > +   spin_unlock_irqrestore(&iommu->iommu_lock, flags);
>
> Why is there a need to flush the TLB here? The IOMMU-API provides
> call-backs so that the user of the API can decide when it wants
> to flush the IO/TLB. Such flushes are usually expensive and doing them
> on every map and unmap will cost significant performance.

The vendor driver was doing something along those lines, and I wanted to
be conservative with the cache management unless we ran into performance
issues, but I'll convert to the iotlb callbacks then.
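
For reference, a sketch of the shape such a conversion usually takes with
the iommu_ops of this kernel era; the callback bodies and
sun50i_iommu_flush_all_tlb() are assumptions, not code from this patch:

/*
 * Sketch: let the IOMMU core decide when to flush. The core calls
 * these callbacks instead of the driver invalidating on every
 * map/unmap.
 */
static void sun50i_iommu_flush_iotlb_all(struct iommu_domain *domain)
{
	struct sun50i_iommu_domain *sun50i_domain = to_sun50i_domain(domain);
	struct sun50i_iommu *iommu = sun50i_domain->iommu;
	unsigned long flags;

	spin_lock_irqsave(&iommu->iommu_lock, flags);
	sun50i_iommu_flush_all_tlb(iommu);	/* hypothetical whole-TLB flush */
	spin_unlock_irqrestore(&iommu->iommu_lock, flags);
}

static void sun50i_iommu_iotlb_sync(struct iommu_domain *domain,
				    struct iommu_iotlb_gather *gather)
{
	/* Simplest correct behaviour: flush everything on sync. */
	sun50i_iommu_flush_iotlb_all(domain);
}

static const struct iommu_ops sun50i_iommu_ops = {
	.flush_iotlb_all	= sun50i_iommu_flush_iotlb_all,
	.iotlb_sync		= sun50i_iommu_iotlb_sync,
	/* ... */
};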

Thanks!
Maxime



Re: [PATCH v2 2/4] iommu: Add Allwinner H6 IOMMU driver

2020-03-02 Thread Joerg Roedel
Hi Maxime,

On Thu, Feb 20, 2020 at 07:15:14PM +0100, Maxime Ripard wrote:
> +struct sun50i_iommu_domain {
> + struct iommu_domain domain;
> +
> + /* Number of devices attached to the domain */
> + refcount_t refcnt;
> +
> + /* Lock to modify the Directory Table */
> + spinlock_t dt_lock;

I suggest you make page-table updates lock-less. Otherwise this lock
will become a bottleneck when using the IOMMU through DMA-API.

> +
> +static int sun50i_iommu_map(struct iommu_domain *domain, unsigned long iova,
> + phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
> +{
> + struct sun50i_iommu_domain *sun50i_domain = to_sun50i_domain(domain);
> + struct sun50i_iommu *iommu = sun50i_domain->iommu;
> + u32 pte_index;
> + u32 *page_table, *pte_addr;
> + unsigned long flags;
> + int ret = 0;
> +
> + spin_lock_irqsave(&sun50i_domain->dt_lock, flags);
> + page_table = sun50i_dte_get_page_table(sun50i_domain, iova, gfp);
> + if (IS_ERR(page_table)) {
> + ret = PTR_ERR(page_table);
> + goto out;
> + }
> +
> + pte_index = sun50i_iova_get_pte_index(iova);
> + pte_addr = &page_table[pte_index];
> + if (sun50i_pte_is_page_valid(*pte_addr)) {

You can use unlikely() here.

> + phys_addr_t page_phys = sun50i_pte_get_page_address(*pte_addr);
> + dev_err(iommu->dev,
> + "iova %pad already mapped to %pa cannot remap to %pa 
> prot: %#x\n",
> + , _phys, , prot);
> + ret = -EBUSY;
> + goto out;
> + }
> +
> + *pte_addr = sun50i_mk_pte(paddr, prot);
> + sun50i_table_flush(sun50i_domain, pte_addr, 1);

This maps only one page, right? But the function needs to map up to
'size' as given in the parameter list.

> +
> + spin_lock_irqsave(&iommu->iommu_lock, flags);
> + sun50i_iommu_tlb_invalidate(iommu, iova);
> + spin_unlock_irqrestore(&iommu->iommu_lock, flags);

Why is there a need to flush the TLB here? The IOMMU-API provides
call-backs so that the user of the API can decide when it wants
to flush the IO/TLB. Such flushes are usually expensive and doing them
on every map and unmap will cost significant performance.

> +static size_t sun50i_iommu_unmap(struct iommu_domain *domain, unsigned long iova,
> +  size_t size, struct iommu_iotlb_gather *gather)
> +{
> + struct sun50i_iommu_domain *sun50i_domain = to_sun50i_domain(domain);
> + struct sun50i_iommu *iommu = sun50i_domain->iommu;
> + unsigned long flags;
> + phys_addr_t pt_phys;
> + dma_addr_t pte_dma;
> + u32 *pte_addr;
> + u32 dte;
> +
> + spin_lock_irqsave(&sun50i_domain->dt_lock, flags);
> +
> + dte = sun50i_domain->dt[sun50i_iova_get_dte_index(iova)];
> + if (!sun50i_dte_is_pt_valid(dte)) {
> + spin_unlock_irqrestore(&sun50i_domain->dt_lock, flags);
> + return 0;
> + }
> +
> + pt_phys = sun50i_dte_get_pt_address(dte);
> + pte_addr = (u32 *)phys_to_virt(pt_phys) + sun50i_iova_get_pte_index(iova);
> + pte_dma = pt_phys + sun50i_iova_get_pte_index(iova) * PT_ENTRY_SIZE;
> +
> + if (!sun50i_pte_is_page_valid(*pte_addr)) {
> + spin_unlock_irqrestore(&sun50i_domain->dt_lock, flags);
> + return 0;
> + }
> +
> + memset(pte_addr, 0, sizeof(*pte_addr));
> + sun50i_table_flush(sun50i_domain, pte_addr, 1);
> +
> + spin_lock(&iommu->iommu_lock);
> + sun50i_iommu_tlb_invalidate(iommu, iova);
> + sun50i_iommu_ptw_invalidate(iommu, iova);
> + spin_unlock(&iommu->iommu_lock);

Same objections as in the map function. This only unmaps one page, and
is the IO/TLB flush really needed here?
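
A sketch of the deferred variant for unmap; iommu_iotlb_gather_add_page()
is the core helper meant for this, and the surrounding lines reuse the
patch's helpers:

/*
 * Sketch: tail of unmap() with the flush deferred. Record the range
 * in the gather structure; the core then flushes once through
 * ->iotlb_sync() instead of once per page.
 */
memset(pte_addr, 0, sizeof(*pte_addr));
sun50i_table_flush(sun50i_domain, pte_addr, 1);

iommu_iotlb_gather_add_page(domain, gather, iova, size);

return SZ_4K;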

> +static struct iommu_domain *sun50i_iommu_domain_alloc(unsigned type)
> +{
> + struct sun50i_iommu_domain *sun50i_domain;
> +
> + if (type != IOMMU_DOMAIN_DMA && type != IOMMU_DOMAIN_UNMANAGED)
> + return NULL;

I think you should at least also support identity domains here. The
iommu-core code might allocate those for default domains.
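
For illustration, a minimal sketch of the extended type check; how an
identity domain would actually be realised (e.g. programming the hardware
bypass at attach time) is left open here:

/* Sketch: also accept identity domains, which the core may request
 * as default domains for devices that should stay in bypass. */
if (type != IOMMU_DOMAIN_DMA &&
    type != IOMMU_DOMAIN_IDENTITY &&
    type != IOMMU_DOMAIN_UNMANAGED)
	return NULL;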


Regards,

Joerg


[PATCH v2 2/4] iommu: Add Allwinner H6 IOMMU driver

2020-02-20 Thread Maxime Ripard
The Allwinner H6 has introduced an IOMMU for a few DMA controllers, mostly
video related: the display engine, the video decoders / encoders, the
camera capture controller, etc.

The design is pretty simple compared to other IOMMUs found in SoCs: there's
a single instance, controlling all the masters, with a single address
space.

It also features a performance monitoring unit that allows retrieving
various information (per-master and global TLB accesses, hits and misses,
access latency, etc.), which isn't supported by this driver at the moment.

Signed-off-by: Maxime Ripard 
---
 drivers/iommu/Kconfig        |    9 +-
 drivers/iommu/Makefile       |    1 +-
 drivers/iommu/sun50i-iommu.c | 1072 +++-
 3 files changed, 1082 insertions(+)
 create mode 100644 drivers/iommu/sun50i-iommu.c

diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index d2fade984999..87677ea98427 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -302,6 +302,15 @@ config ROCKCHIP_IOMMU
  Say Y here if you are using a Rockchip SoC that includes an IOMMU
  device.
 
+config SUN50I_IOMMU
+   bool "Allwinner H6 IOMMU Support"
+   depends on ARCH_SUNXI || COMPILE_TEST
+   select ARM_DMA_USE_IOMMU
+   select IOMMU_API
+   select IOMMU_DMA
+   help
+ Support for the IOMMU introduced in the Allwinner H6 SoCs.
+
 config TEGRA_IOMMU_GART
bool "Tegra GART IOMMU Support"
depends on ARCH_TEGRA_2x_SOC
diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
index 2104fb8afc06..dd1ff336b9b9 100644
--- a/drivers/iommu/Makefile
+++ b/drivers/iommu/Makefile
@@ -29,6 +29,7 @@ obj-$(CONFIG_MTK_IOMMU_V1) += mtk_iommu_v1.o
 obj-$(CONFIG_OMAP_IOMMU) += omap-iommu.o
 obj-$(CONFIG_OMAP_IOMMU_DEBUG) += omap-iommu-debug.o
 obj-$(CONFIG_ROCKCHIP_IOMMU) += rockchip-iommu.o
+obj-$(CONFIG_SUN50I_IOMMU) += sun50i-iommu.o
 obj-$(CONFIG_TEGRA_IOMMU_GART) += tegra-gart.o
 obj-$(CONFIG_TEGRA_IOMMU_SMMU) += tegra-smmu.o
 obj-$(CONFIG_EXYNOS_IOMMU) += exynos-iommu.o
diff --git a/drivers/iommu/sun50i-iommu.c b/drivers/iommu/sun50i-iommu.c
new file mode 100644
index 000000000000..81ba5f562bd2
--- /dev/null
+++ b/drivers/iommu/sun50i-iommu.c
@@ -0,0 +1,1072 @@
+// SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+// Copyright (C) 2016-2018, Allwinner Technology CO., LTD.
+// Copyright (C) 2019-2020, Cerno
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#define IOMMU_RESET_REG    0x010
+#define IOMMU_ENABLE_REG   0x020
+#define IOMMU_ENABLE_ENABLE    BIT(0)
+
+#define IOMMU_BYPASS_REG   0x030
+#define IOMMU_AUTO_GATING_REG  0x040
+#define IOMMU_AUTO_GATING_ENABLE   BIT(0)
+
+#define IOMMU_WBUF_CTRL_REG    0x044
+#define IOMMU_OOO_CTRL_REG 0x048
+#define IOMMU_4KB_BDY_PRT_CTRL_REG 0x04c
+#define IOMMU_TTB_REG  0x050
+#define IOMMU_TLB_ENABLE_REG   0x060
+#define IOMMU_TLB_PREFETCH_REG 0x070
+#define IOMMU_TLB_PREFETCH_MASTER_ENABLE(m)    BIT(m)
+
+#define IOMMU_TLB_FLUSH_REG    0x080
+#define IOMMU_TLB_FLUSH_PTW_CACHE  BIT(17)
+#define IOMMU_TLB_FLUSH_MACRO_TLB  BIT(16)
+#define IOMMU_TLB_FLUSH_MICRO_TLB(i)   (BIT(i) & GENMASK(5, 0))
+
+#define IOMMU_TLB_IVLD_ADDR_REG    0x090
+#define IOMMU_TLB_IVLD_ADDR_MASK_REG   0x094
+#define IOMMU_TLB_IVLD_ENABLE_REG  0x098
+#define IOMMU_TLB_IVLD_ENABLE_ENABLE   BIT(0)
+
+#define IOMMU_PC_IVLD_ADDR_REG 0x0a0
+#define IOMMU_PC_IVLD_ENABLE_REG   0x0a8
+#define IOMMU_PC_IVLD_ENABLE_ENABLE    BIT(0)
+
+#define IOMMU_DM_AUT_CTRL_REG(d)   (0x0b0 + ((d) / 2) * 4)
+#define IOMMU_DM_AUT_CTRL_RD_UNAVAIL(d, m) (1 << (((d & 1) * 16) + ((m) * 2)))
+#define IOMMU_DM_AUT_CTRL_WR_UNAVAIL(d, m) (1 << (((d & 1) * 16) + ((m) * 2) + 1))
+
+#define IOMMU_DM_AUT_OVWT_REG  0x0d0
+#define IOMMU_INT_ENABLE_REG   0x100
+#define IOMMU_INT_CLR_REG  0x104
+#define IOMMU_INT_STA_REG  0x108
+#define IOMMU_INT_ERR_ADDR_REG(i)  (0x110 + (i) * 4)
+#define IOMMU_INT_ERR_ADDR_L1_REG  0x130
+#define IOMMU_INT_ERR_ADDR_L2_REG  0x134
+#define IOMMU_INT_ERR_DATA_REG(i)  (0x150 + (i) * 4)
+#define IOMMU_L1PG_INT_REG 0x0180
+#define IOMMU_L2PG_INT_REG 0x0184
+
+#define IOMMU_INT_INVALID_L2PG BIT(17)
+#define IOMMU_INT_INVALID_L1PG BIT(16)
+#define IOMMU_INT_MASTER_PERMISSION(m) BIT(m)
+#define IOMMU_INT_MASTER_MASK  (IOMMU_INT_MASTER_PERMISSION(0) | \
+                                IOMMU_INT_MASTER_PERMISSION(1) | \
+