Re: [PATCH 21/29] mm: remove the pgprot argument to __vmalloc

2020-04-30 Thread John Dorminy
Greetings;

I recently noticed this change via the linux-next tree.

It may be too late to edit the change description at this point, but it
refers to PROT_KERNEL, a symbol that does not appear to exist; presumably
PAGE_KERNEL was meant. The mismatch briefly confused me and a couple of
other folks until we concluded it was supposed to be PAGE_KERNEL; if it's
not too late, clarifying the description would be nice.

Many thanks.

John Dorminy



On Tue, Apr 14, 2020 at 11:15 AM Wei Liu  wrote:

> On Tue, Apr 14, 2020 at 03:13:40PM +0200, Christoph Hellwig wrote:
> > The pgprot argument to __vmalloc is always PROT_KERNEL now, so remove
> > it.
> >
> > Signed-off-by: Christoph Hellwig 
> > Reviewed-by: Michael Kelley  [hyperv]
> > Acked-by: Gao Xiang  [erofs]
> > Acked-by: Peter Zijlstra (Intel) 
> > ---
> >  arch/x86/hyperv/hv_init.c  |  3 +--
> [...]
> >
> > diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c
> > index 5a4b363ba67b..a3d689dfc745 100644
> > --- a/arch/x86/hyperv/hv_init.c
> > +++ b/arch/x86/hyperv/hv_init.c
> > @@ -95,8 +95,7 @@ static int hv_cpu_init(unsigned int cpu)
> >* not be stopped in the case of CPU offlining and the VM will hang.
> >*/
> >   if (!*hvp) {
> > - *hvp = __vmalloc(PAGE_SIZE, GFP_KERNEL | __GFP_ZERO,
> > -  PAGE_KERNEL);
> > + *hvp = __vmalloc(PAGE_SIZE, GFP_KERNEL | __GFP_ZERO);
> >   }
>
> Acked-by: Wei Liu 
>
>

Re: [PATCH v6 00/25] iommu: Shared Virtual Addressing for SMMUv3

2020-04-30 Thread Jacob Pan
On Thu, 30 Apr 2020 16:33:59 +0200
Jean-Philippe Brucker  wrote:

> Shared Virtual Addressing (SVA) allows sharing process page tables
> with devices using the IOMMU, PASIDs and I/O page faults. Add SVA
> support to the Arm SMMUv3 driver.
> 
> Since v5 [1]:
> 
> * Added patches 1-3. Patch 1 adds a PASID field to mm_struct as
>   discussed in [1] and [2]. This is also needed for Intel ENQCMD.
> Patch 2 adds refcounts to IOASID and patch 3 adds a couple of helpers
> to allocate the PASID.
> 
> * Dropped most of iommu-sva.c. After getting rid of io_mm following
>   review of v5, there wasn't enough generic code left to justify the
>   indirect branch overhead of io_mm_ops in the MMU notifiers. I ended
> up with more glue than useful code, and couldn't find an easy way to
> deal with domains in the SMMU driver (we keep PASID tables per domain,
>   while x86 keeps them per device). The direct approach in patch 17 is
>   nicer and a little easier to read. The SMMU driver only gained 160
>   lines, while iommu-sva lost 470 lines.
> 
>   As a result I dropped the MMU notifier patch.
> 
>   Jacob, one upside of this rework is that we now free ioasids in
>   blocking context, which might help with your addition of notifiers
> to ioasid.c
> 
Thanks for the note. It does make the notifier much easier, and the
refcount can alleviate the ordering constraint.

I guess we don't share MMU notifier code for now :)

> * Simplified io-pgfault a bit, since flush() isn't called from mm exit
>   path anymore.
> 
> * Fixed a bug in patch 17 (don't clear the stall bit when stall is
>   forced).
> 
> You can find the latest version on https://jpbrucker.net/git/linux
> branch sva/current, and sva/zip-devel for the Hisilicon zip
> accelerator.
> 
> [1]
> https://lore.kernel.org/linux-iommu/20200414170252.714402-1-jean-phili...@linaro.org/
> [2]
> https://lore.kernel.org/linux-iommu/1585596788-193989-6-git-send-email-fenghua...@intel.com/
> 
> Jean-Philippe Brucker (25):
>   mm: Add a PASID field to mm_struct
>   iommu/ioasid: Add ioasid references
>   iommu/sva: Add PASID helpers
>   iommu: Add a page fault handler
>   iommu/iopf: Handle mm faults
>   arm64: mm: Add asid_gen_match() helper
>   arm64: mm: Pin down ASIDs for sharing mm with devices
>   iommu/io-pgtable-arm: Move some definitions to a header
>   iommu/arm-smmu-v3: Manage ASIDs with xarray
>   arm64: cpufeature: Export symbol read_sanitised_ftr_reg()
>   iommu/arm-smmu-v3: Share process page tables
>   iommu/arm-smmu-v3: Seize private ASID
>   iommu/arm-smmu-v3: Add support for VHE
>   iommu/arm-smmu-v3: Enable broadcast TLB maintenance
>   iommu/arm-smmu-v3: Add SVA feature checking
>   iommu/arm-smmu-v3: Add SVA device feature
>   iommu/arm-smmu-v3: Implement iommu_sva_bind/unbind()
>   iommu/arm-smmu-v3: Hook up ATC invalidation to mm ops
>   iommu/arm-smmu-v3: Add support for Hardware Translation Table Update
>   iommu/arm-smmu-v3: Maintain a SID->device structure
>   dt-bindings: document stall property for IOMMU masters
>   iommu/arm-smmu-v3: Add stall support for platform devices
>   PCI/ATS: Add PRI stubs
>   PCI/ATS: Export PRI functions
>   iommu/arm-smmu-v3: Add support for PRI
> 
>  drivers/iommu/Kconfig |   11 +
>  drivers/iommu/Makefile|2 +
>  .../devicetree/bindings/iommu/iommu.txt   |   18 +
>  arch/arm64/include/asm/mmu.h  |1 +
>  arch/arm64/include/asm/mmu_context.h  |   11 +-
>  drivers/iommu/io-pgtable-arm.h|   30 +
>  drivers/iommu/iommu-sva.h |   15 +
>  include/linux/ioasid.h|   10 +-
>  include/linux/iommu.h |   53 +
>  include/linux/mm_types.h  |4 +
>  include/linux/pci-ats.h   |8 +
>  arch/arm64/kernel/cpufeature.c|1 +
>  arch/arm64/mm/context.c   |  103 +-
>  drivers/iommu/arm-smmu-v3.c   | 1554 +++--
>  drivers/iommu/io-pgfault.c|  458 +
>  drivers/iommu/io-pgtable-arm.c|   27 +-
>  drivers/iommu/ioasid.c|   30 +-
>  drivers/iommu/iommu-sva.c |   85 +
>  drivers/iommu/of_iommu.c  |5 +-
>  drivers/pci/ats.c |4 +
>  MAINTAINERS   |3 +-
>  21 files changed, 2275 insertions(+), 158 deletions(-)
>  create mode 100644 drivers/iommu/io-pgtable-arm.h
>  create mode 100644 drivers/iommu/iommu-sva.h
>  create mode 100644 drivers/iommu/io-pgfault.c
>  create mode 100644 drivers/iommu/iommu-sva.c
> 

[Jacob Pan]


Re: [PATCH v6 17/25] iommu/arm-smmu-v3: Implement iommu_sva_bind/unbind()

2020-04-30 Thread Jacob Pan
Hi Jean,

A couple of questions on how the SMMU handles CD.V and translation disable.

On Thu, 30 Apr 2020 16:34:16 +0200
Jean-Philippe Brucker  wrote:

> The sva_bind() function allows devices to access process address
> spaces using a PASID (aka SSID).
> 
> (1) bind() allocates or gets an existing MMU notifier tied to the
> (domain, mm) pair. Each mm gets one PASID.
> 
> (2) Any change to the address space calls invalidate_range() which
> sends ATC invalidations (in a subsequent patch).
> 
> (3) When the process address space dies, the release() notifier
> disables the CD to allow reclaiming the page tables. Since release()
> has to be light we do not instruct device drivers to stop DMA here,
> we just ignore incoming page faults.
> 
> To avoid any event 0x0a print (C_BAD_CD) we disable translation
> without clearing CD.V. PCIe Translation Requests and Page Requests
> are silently denied. Don't clear the R bit because the S bit can't
> be cleared when STALL_MODEL==0b10 (forced), and clearing R without
> clearing S is useless. Faulting transactions will stall and will
> be aborted by the IOPF handler.
> 
> (4) After stopping DMA, the device driver releases the bond by calling
> unbind(). We release the MMU notifier, free the PASID and the
> bond.
> 
> Three structures keep track of bonds:
> * arm_smmu_bond: one per (device, mm) pair, the handle returned to the
>   device driver for a bind() request.
> * arm_smmu_mmu_notifier: one per (domain, mm) pair, deals with ATS/TLB
>   invalidations and clearing the context descriptor on mm exit.
> * arm_smmu_ctx_desc: one per mm, holds the pinned ASID and pgd.
> 
> Signed-off-by: Jean-Philippe Brucker 
> ---
> v5->v6:
> * Implement bind() directly instead of going through io_mm_ops
> * Don't clear S and R bits in step (3), it doesn't work with
>   STALL_FORCE.
> ---
>  drivers/iommu/Kconfig   |   1 +
>  drivers/iommu/arm-smmu-v3.c | 256
> +++- 2 files changed, 253
> insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
> index 1e64ee6592e16..f863c4562feeb 100644
> --- a/drivers/iommu/Kconfig
> +++ b/drivers/iommu/Kconfig
> @@ -432,6 +432,7 @@ config ARM_SMMU_V3
>   tristate "ARM Ltd. System MMU Version 3 (SMMUv3) Support"
>   depends on ARM64
>   select IOMMU_API
> + select IOMMU_SVA
>   select IOMMU_IO_PGTABLE_LPAE
>   select GENERIC_MSI_IRQ_DOMAIN
>   help
> diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
> index c7942d0540599..00e5b69bb81a5 100644
> --- a/drivers/iommu/arm-smmu-v3.c
> +++ b/drivers/iommu/arm-smmu-v3.c
> @@ -24,6 +24,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  #include 
>  #include 
>  #include 
> @@ -36,6 +37,7 @@
>  #include 
>  
>  #include "io-pgtable-arm.h"
> +#include "iommu-sva.h"
>  
>  /* MMIO registers */
>  #define ARM_SMMU_IDR0  0x0
> @@ -731,8 +733,31 @@ struct arm_smmu_domain {
>  
>   struct list_headdevices;
>   spinlock_t  devices_lock;
> +
> + struct mmu_notifier_ops mn_ops;
>  };
>  
> +struct arm_smmu_mmu_notifier {
> + struct mmu_notifier mn;
> + struct arm_smmu_ctx_desc*cd;
> + boolcleared;
> + refcount_t  refs;
> + struct arm_smmu_domain  *domain;
> +};
> +
> +#define mn_to_smmu(mn) container_of(mn, struct arm_smmu_mmu_notifier, mn)
> +
> +struct arm_smmu_bond {
> + struct iommu_svasva;
> + struct mm_struct*mm;
> + struct arm_smmu_mmu_notifier*smmu_mn;
> + struct list_headlist;
> + refcount_t  refs;
> +};
> +
> +#define sva_to_bond(handle) \
> + container_of(handle, struct arm_smmu_bond, sva)
> +
>  struct arm_smmu_option_prop {
>   u32 opt;
>   const char *prop;
> @@ -742,6 +767,13 @@ static DEFINE_XARRAY_ALLOC1(asid_xa);
>  static DEFINE_SPINLOCK(contexts_lock);
>  static DEFINE_MUTEX(arm_smmu_sva_lock);
>  
> +/*
> + * When a process dies, DMA is still running but we need to clear the pgd.
> + * If we simply cleared the valid bit from the context descriptor, we'd get
> + * event 0x0a, which is not recoverable.
> + */
> +static struct arm_smmu_ctx_desc invalid_cd = { 0 };
> +
>  static struct arm_smmu_option_prop arm_smmu_options[] = {
>   { ARM_SMMU_OPT_SKIP_PREFETCH, "hisilicon,broken-prefetch-cmd" },
>   { ARM_SMMU_OPT_PAGE0_REGS_ONLY, "cavium,cn9900-broken-page1-regspace"},
> @@ -1652,7 +1684,9 @@ static int __arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain,
>* (2) Install a secondary CD, for SID+SSID traffic.
>* (3) Update ASID of a CD. Atomically write the first 64 bits of the
>* CD, then invalidate the old entry and mappings.
> -  * (4) Remove a secondary CD.
> +  * (4) Quiesce the context without clearing the valid bit.
> 

Re: [PATCH v6 02/25] iommu/ioasid: Add ioasid references

2020-04-30 Thread Jacob Pan
On Thu, 30 Apr 2020 11:39:31 -0700
Jacob Pan  wrote:

> > -void ioasid_free(ioasid_t ioasid)
> > +bool ioasid_free(ioasid_t ioasid)
> >  {
Sorry, I missed this in the last reply.

I think free needs to be unconditional, since there is no good way to
fail it.

Also, can we have more symmetric APIs? It seems we don't have
ioasid_put() in this patchset. How about:

ioasid_alloc();
ioasid_free();  // drop reference and mark inactive; not reclaimed
                // while the refcount is non-zero
ioasid_get();   // returns an error if the ioasid was marked inactive
                // by ioasid_free()
ioasid_put();   // drop reference; reclaim when the refcount hits 0

It is similar to get/put/alloc/free for pids.
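
To illustrate the proposed semantics (hypothetical API following the
names above; this is not what the patchset currently implements):

	/* Allocation takes the initial reference. */
	ioasid_t id = ioasid_alloc(set, min, max, private);

	/* A consumer such as KVM takes an extra reference. */
	if (ioasid_get(id))
		return -EINVAL;	/* already freed/inactive */

	/* The producer frees unconditionally: the IOASID goes inactive
	 * and further ioasid_get() calls fail, but the entry is only
	 * reclaimed once the last reference is dropped. */
	ioasid_free(id);

	/* Drop the last reference: the IOASID is reclaimed here. */
	ioasid_put(id);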




Re: [PATCH v6 02/25] iommu/ioasid: Add ioasid references

2020-04-30 Thread Jacob Pan
On Thu, 30 Apr 2020 16:34:01 +0200
Jean-Philippe Brucker  wrote:

> Let IOASID users take references to existing ioasids with
> ioasid_get(). ioasid_free() drops a reference and only frees the
> ioasid when its reference number is zero. It returns whether the
> ioasid was freed.
> 
Looks good to me, I was planning to do the same for VT-d use. Just a
couple of points for potential extension. I can rebase on top of this.


> Signed-off-by: Jean-Philippe Brucker 
> ---
>  include/linux/ioasid.h | 10 --
>  drivers/iommu/ioasid.c | 30 +-
>  2 files changed, 37 insertions(+), 3 deletions(-)
> 
> diff --git a/include/linux/ioasid.h b/include/linux/ioasid.h
> index 6f000d7a0ddcd..609ba6f15b9e3 100644
> --- a/include/linux/ioasid.h
> +++ b/include/linux/ioasid.h
> @@ -34,7 +34,8 @@ struct ioasid_allocator_ops {
>  #if IS_ENABLED(CONFIG_IOASID)
>  ioasid_t ioasid_alloc(struct ioasid_set *set, ioasid_t min, ioasid_t
> max, void *private);
> -void ioasid_free(ioasid_t ioasid);
> +void ioasid_get(ioasid_t ioasid);
> +bool ioasid_free(ioasid_t ioasid);
>  void *ioasid_find(struct ioasid_set *set, ioasid_t ioasid,
> bool (*getter)(void *));
>  int ioasid_register_allocator(struct ioasid_allocator_ops *allocator);
> @@ -48,10 +49,15 @@ static inline ioasid_t ioasid_alloc(struct ioasid_set *set, ioasid_t min,
>   return INVALID_IOASID;
>  }
>  
> -static inline void ioasid_free(ioasid_t ioasid)
> +static inline void ioasid_get(ioasid_t ioasid)
>  {
>  }
>  
> +static inline bool ioasid_free(ioasid_t ioasid)
> +{
> + return false;
> +}
> +
>  static inline void *ioasid_find(struct ioasid_set *set, ioasid_t
> ioasid, bool (*getter)(void *))
>  {
> diff --git a/drivers/iommu/ioasid.c b/drivers/iommu/ioasid.c
> index 0f8dd377aada3..46511ac53e0c8 100644
> --- a/drivers/iommu/ioasid.c
> +++ b/drivers/iommu/ioasid.c
> @@ -15,6 +15,7 @@ struct ioasid_data {
>   struct ioasid_set *set;
>   void *private;
>   struct rcu_head rcu;
> + refcount_t refs;
>  };
>  
>  /*
> @@ -314,6 +315,7 @@ ioasid_t ioasid_alloc(struct ioasid_set *set, ioasid_t min, ioasid_t max,
>   data->set = set;
>   data->private = private;
> + refcount_set(&data->refs, 1);
>  
>   /*
>* Custom allocator needs allocator data to perform platform specific
> @@ -345,12 +347,33 @@ ioasid_t ioasid_alloc(struct ioasid_set *set, ioasid_t min, ioasid_t max,
>  }
>  EXPORT_SYMBOL_GPL(ioasid_alloc);
>  
> +/**
> + * ioasid_get - obtain a reference to the IOASID
> + */
> +void ioasid_get(ioasid_t ioasid)
Why void? What if the ioasid is not valid?

> +{
> + struct ioasid_data *ioasid_data;
> +
> + spin_lock(&ioasid_allocator_lock);
> + ioasid_data = xa_load(&active_allocator->xa, ioasid);
> + if (ioasid_data)
> + refcount_inc(&ioasid_data->refs);
> + spin_unlock(&ioasid_allocator_lock);
> +}
> +EXPORT_SYMBOL_GPL(ioasid_get);
> +
>  /**
>   * ioasid_free - Free an IOASID
>   * @ioasid: the ID to remove
> + *
> + * Put a reference to the IOASID, free it when the number of
> references drops to
> + * zero.
> + *
> + * Return: %true if the IOASID was freed, %false otherwise.
>   */
> -void ioasid_free(ioasid_t ioasid)
> +bool ioasid_free(ioasid_t ioasid)
>  {
> + bool free = false;
>   struct ioasid_data *ioasid_data;
>  
>   spin_lock(&ioasid_allocator_lock);
> @@ -360,6 +383,10 @@ void ioasid_free(ioasid_t ioasid)
>   goto exit_unlock;
>   }
>  
> + free = refcount_dec_and_test(&ioasid_data->refs);
> + if (!free)
> + goto exit_unlock;
> +
Just FYI, we may need to add states for the IOASID, e.g. mark the IOASID
inactive after free, and prohibit ioasid_get() once it has been freed.
For VT-d, this is useful when KVM queries the IOASID.

>   active_allocator->ops->free(ioasid, active_allocator->ops->pdata);
>   /* Custom allocator needs additional steps to free the xa element */
>   if (active_allocator->flags & IOASID_ALLOCATOR_CUSTOM) {
> @@ -369,6 +396,7 @@ void ioasid_free(ioasid_t ioasid)
>  exit_unlock:
>   spin_unlock(&ioasid_allocator_lock);
> + return free;
>  }
>  EXPORT_SYMBOL_GPL(ioasid_free);
>  

[Jacob Pan]


Re: [PATCH v6 11/25] iommu/arm-smmu-v3: Share process page tables

2020-04-30 Thread Suzuki K Poulose

On 04/30/2020 03:34 PM, Jean-Philippe Brucker wrote:

With Shared Virtual Addressing (SVA), we need to mirror CPU TTBR, TCR,
MAIR and ASIDs in SMMU contexts. Each SMMU has a single ASID space split
into two sets, shared and private. Shared ASIDs correspond to those
obtained from the arch ASID allocator, and private ASIDs are used for
"classic" map/unmap DMA.

Cc: Suzuki K Poulose 
Signed-off-by: Jean-Philippe Brucker 
---



+
+   tcr = FIELD_PREP(CTXDESC_CD_0_TCR_T0SZ, 64ULL - VA_BITS) |
+ FIELD_PREP(CTXDESC_CD_0_TCR_IRGN0, ARM_LPAE_TCR_RGN_WBWA) |
+ FIELD_PREP(CTXDESC_CD_0_TCR_ORGN0, ARM_LPAE_TCR_RGN_WBWA) |
+ FIELD_PREP(CTXDESC_CD_0_TCR_SH0, ARM_LPAE_TCR_SH_IS) |
+ CTXDESC_CD_0_TCR_EPD1 | CTXDESC_CD_0_AA64;
+
+   switch (PAGE_SIZE) {
+   case SZ_4K:
+   tcr |= FIELD_PREP(CTXDESC_CD_0_TCR_TG0, ARM_LPAE_TCR_TG0_4K);
+   break;
+   case SZ_16K:
+   tcr |= FIELD_PREP(CTXDESC_CD_0_TCR_TG0, ARM_LPAE_TCR_TG0_16K);
+   break;
+   case SZ_64K:
+   tcr |= FIELD_PREP(CTXDESC_CD_0_TCR_TG0, ARM_LPAE_TCR_TG0_64K);
+   break;
+   default:
+   WARN_ON(1);
+   ret = -EINVAL;
+   goto err_free_asid;
+   }
+
+   reg = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
+   par = cpuid_feature_extract_unsigned_field(reg, 
ID_AA64MMFR0_PARANGE_SHIFT);
+   tcr |= FIELD_PREP(CTXDESC_CD_0_TCR_IPS, par);
+
+   cd->ttbr = virt_to_phys(mm->pgd);


Does the TTBR follow the same layout as TTBR_ELx for 52-bit IPA? i.e.,
TTBR[5:2] = BADDR[51:48]? Are you covered for that?
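
For reference, the packing I mean is what the arch code's phys_to_ttbr()
handles when 52-bit physical addresses are in use -- a sketch, not the
patch's code:

	/* With 52-bit output addresses, BADDR[51:48] must be folded
	 * into TTBR bits [5:2]; a plain physical address won't do. */
	static inline u64 sketch_phys_to_ttbr(phys_addr_t pa)
	{
		return (pa | (pa >> 46)) & GENMASK_ULL(47, 2);
	}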


Suzuki


Re: [PATCH 5/5] virtio: Add bounce DMA ops

2020-04-30 Thread Konrad Rzeszutek Wilk
On Wed, Apr 29, 2020 at 06:20:48AM -0400, Michael S. Tsirkin wrote:
> On Wed, Apr 29, 2020 at 03:39:53PM +0530, Srivatsa Vaddagiri wrote:
> > That would still not work I think where swiotlb is used for pass-thr devices
> > (when private memory is fine) as well as virtio devices (when shared memory 
> > is
> > required).
> 
> So that is a separate question. When there are multiple untrusted
> devices, at the moment it looks like a single bounce buffer is used.
> 
> Which to me seems like a security problem, I think we should protect
> untrusted devices from each other.

There are two DMA pools code in Linux already - the TTM one for graphics
and the mm/dmapool.c - could those be used instead? Or augmented at least?


Re: [PATCH v6 10/25] arm64: cpufeature: Export symbol read_sanitised_ftr_reg()

2020-04-30 Thread Suzuki K Poulose

On 04/30/2020 03:34 PM, Jean-Philippe Brucker wrote:

The SMMUv3 driver would like to read the MMFR0 PARANGE field in order to
share CPU page tables with devices. Allow the driver to be built as a
module by exporting the read_sanitised_ftr_reg() cpufeature symbol.

Cc: Suzuki K Poulose 
Signed-off-by: Jean-Philippe Brucker 


Acked-by: Suzuki K Poulose 


---
  arch/arm64/kernel/cpufeature.c | 1 +
  1 file changed, 1 insertion(+)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 9fac745aa7bb2..5f6adbf4ae893 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -841,6 +841,7 @@ u64 read_sanitised_ftr_reg(u32 id)
BUG_ON(!regp);
return regp->sys_val;
  }
+EXPORT_SYMBOL_GPL(read_sanitised_ftr_reg);
  
  #define read_sysreg_case(r)	\

case r: return read_sysreg_s(r)





[PATCH v6 08/25] iommu/io-pgtable-arm: Move some definitions to a header

2020-04-30 Thread Jean-Philippe Brucker
Extract some of the most generic TCR defines, so they can be reused by
the page table sharing code.

Signed-off-by: Jean-Philippe Brucker 
---
v5->v6: Update MAINTAINERS
---
 drivers/iommu/io-pgtable-arm.h | 30 ++
 drivers/iommu/io-pgtable-arm.c | 27 ++-
 MAINTAINERS|  3 +--
 3 files changed, 33 insertions(+), 27 deletions(-)
 create mode 100644 drivers/iommu/io-pgtable-arm.h

diff --git a/drivers/iommu/io-pgtable-arm.h b/drivers/iommu/io-pgtable-arm.h
new file mode 100644
index 0..ba7cfdf7afa03
--- /dev/null
+++ b/drivers/iommu/io-pgtable-arm.h
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef IO_PGTABLE_ARM_H_
+#define IO_PGTABLE_ARM_H_
+
+#define ARM_LPAE_TCR_TG0_4K    0
+#define ARM_LPAE_TCR_TG0_64K   1
+#define ARM_LPAE_TCR_TG0_16K   2
+
+#define ARM_LPAE_TCR_TG1_16K   1
+#define ARM_LPAE_TCR_TG1_4K    2
+#define ARM_LPAE_TCR_TG1_64K   3
+
+#define ARM_LPAE_TCR_SH_NS     0
+#define ARM_LPAE_TCR_SH_OS     2
+#define ARM_LPAE_TCR_SH_IS     3
+
+#define ARM_LPAE_TCR_RGN_NC    0
+#define ARM_LPAE_TCR_RGN_WBWA  1
+#define ARM_LPAE_TCR_RGN_WT    2
+#define ARM_LPAE_TCR_RGN_WB    3
+
+#define ARM_LPAE_TCR_PS_32_BIT 0x0ULL
+#define ARM_LPAE_TCR_PS_36_BIT 0x1ULL
+#define ARM_LPAE_TCR_PS_40_BIT 0x2ULL
+#define ARM_LPAE_TCR_PS_42_BIT 0x3ULL
+#define ARM_LPAE_TCR_PS_44_BIT 0x4ULL
+#define ARM_LPAE_TCR_PS_48_BIT 0x5ULL
+#define ARM_LPAE_TCR_PS_52_BIT 0x6ULL
+
+#endif /* IO_PGTABLE_ARM_H_ */
diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
index 04fbd4bf0ff9f..f71a2eade04ab 100644
--- a/drivers/iommu/io-pgtable-arm.c
+++ b/drivers/iommu/io-pgtable-arm.c
@@ -20,6 +20,8 @@
 
 #include 
 
+#include "io-pgtable-arm.h"
+
 #define ARM_LPAE_MAX_ADDR_BITS 52
 #define ARM_LPAE_S2_MAX_CONCAT_PAGES   16
 #define ARM_LPAE_MAX_LEVELS    4
@@ -100,23 +102,6 @@
 #define ARM_LPAE_PTE_MEMATTR_DEV   (((arm_lpae_iopte)0x1) << 2)
 
 /* Register bits */
-#define ARM_LPAE_TCR_TG0_4K    0
-#define ARM_LPAE_TCR_TG0_64K   1
-#define ARM_LPAE_TCR_TG0_16K   2
-
-#define ARM_LPAE_TCR_TG1_16K   1
-#define ARM_LPAE_TCR_TG1_4K    2
-#define ARM_LPAE_TCR_TG1_64K   3
-
-#define ARM_LPAE_TCR_SH_NS     0
-#define ARM_LPAE_TCR_SH_OS     2
-#define ARM_LPAE_TCR_SH_IS     3
-
-#define ARM_LPAE_TCR_RGN_NC    0
-#define ARM_LPAE_TCR_RGN_WBWA  1
-#define ARM_LPAE_TCR_RGN_WT    2
-#define ARM_LPAE_TCR_RGN_WB    3
-
 #define ARM_LPAE_VTCR_SL0_MASK 0x3
 
 #define ARM_LPAE_TCR_T0SZ_SHIFT    0
@@ -124,14 +109,6 @@
 #define ARM_LPAE_VTCR_PS_SHIFT 16
 #define ARM_LPAE_VTCR_PS_MASK  0x7
 
-#define ARM_LPAE_TCR_PS_32_BIT 0x0ULL
-#define ARM_LPAE_TCR_PS_36_BIT 0x1ULL
-#define ARM_LPAE_TCR_PS_40_BIT 0x2ULL
-#define ARM_LPAE_TCR_PS_42_BIT 0x3ULL
-#define ARM_LPAE_TCR_PS_44_BIT 0x4ULL
-#define ARM_LPAE_TCR_PS_48_BIT 0x5ULL
-#define ARM_LPAE_TCR_PS_52_BIT 0x6ULL
-
 #define ARM_LPAE_MAIR_ATTR_SHIFT(n)    ((n) << 3)
 #define ARM_LPAE_MAIR_ATTR_MASK0xff
 #define ARM_LPAE_MAIR_ATTR_DEVICE  0x04
diff --git a/MAINTAINERS b/MAINTAINERS
index 26f281d9f32a4..c637d38764594 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1463,8 +1463,7 @@ L:linux-arm-ker...@lists.infradead.org (moderated 
for non-subscribers)
 S: Maintained
 F: Documentation/devicetree/bindings/iommu/arm,smmu*
 F: drivers/iommu/arm-smmu*
-F: drivers/iommu/io-pgtable-arm-v7s.c
-F: drivers/iommu/io-pgtable-arm.c
+F: drivers/iommu/io-pgtable-arm*
 
 ARM SUB-ARCHITECTURES
 L: linux-arm-ker...@lists.infradead.org (moderated for non-subscribers)
-- 
2.26.2



[PATCH v6 05/25] iommu/iopf: Handle mm faults

2020-04-30 Thread Jean-Philippe Brucker
When a recoverable page fault is handled by the fault workqueue, find the
associated mm and call handle_mm_fault.

Signed-off-by: Jean-Philippe Brucker 
---
v5->v6: select CONFIG_IOMMU_SVA
---
 drivers/iommu/Kconfig  |  1 +
 drivers/iommu/io-pgfault.c | 79 +-
 2 files changed, 78 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index 4f33e489f0726..1e64ee6592e16 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -109,6 +109,7 @@ config IOMMU_SVA
 
 config IOMMU_PAGE_FAULT
bool
+   select IOMMU_SVA
 
 config FSL_PAMU
bool "Freescale IOMMU support"
diff --git a/drivers/iommu/io-pgfault.c b/drivers/iommu/io-pgfault.c
index 38732e97faac1..09a71dc4de20a 100644
--- a/drivers/iommu/io-pgfault.c
+++ b/drivers/iommu/io-pgfault.c
@@ -7,9 +7,12 @@
 
 #include 
 #include 
+#include 
 #include 
 #include 
 
+#include "iommu-sva.h"
+
 /**
  * struct iopf_queue - IO Page Fault queue
  * @wq: the fault workqueue
@@ -68,8 +71,57 @@ static int iopf_complete_group(struct device *dev, struct 
iopf_fault *iopf,
 static enum iommu_page_response_code
 iopf_handle_single(struct iopf_fault *iopf)
 {
-   /* TODO */
-   return -ENODEV;
+   vm_fault_t ret;
+   struct mm_struct *mm;
+   struct vm_area_struct *vma;
+   unsigned int access_flags = 0;
+   unsigned int fault_flags = FAULT_FLAG_REMOTE;
+   struct iommu_fault_page_request *prm = &iopf->fault.prm;
+   enum iommu_page_response_code status = IOMMU_PAGE_RESP_INVALID;
+
+   if (!(prm->flags & IOMMU_FAULT_PAGE_REQUEST_PASID_VALID))
+   return status;
+
+   mm = iommu_sva_find(prm->pasid);
+   if (IS_ERR_OR_NULL(mm))
+   return status;
+
+   down_read(&mm->mmap_sem);
+
+   vma = find_extend_vma(mm, prm->addr);
+   if (!vma)
+   /* Unmapped area */
+   goto out_put_mm;
+
+   if (prm->perm & IOMMU_FAULT_PERM_READ)
+   access_flags |= VM_READ;
+
+   if (prm->perm & IOMMU_FAULT_PERM_WRITE) {
+   access_flags |= VM_WRITE;
+   fault_flags |= FAULT_FLAG_WRITE;
+   }
+
+   if (prm->perm & IOMMU_FAULT_PERM_EXEC) {
+   access_flags |= VM_EXEC;
+   fault_flags |= FAULT_FLAG_INSTRUCTION;
+   }
+
+   if (!(prm->perm & IOMMU_FAULT_PERM_PRIV))
+   fault_flags |= FAULT_FLAG_USER;
+
+   if (access_flags & ~vma->vm_flags)
+   /* Access fault */
+   goto out_put_mm;
+
+   ret = handle_mm_fault(vma, prm->addr, fault_flags);
+   status = ret & VM_FAULT_ERROR ? IOMMU_PAGE_RESP_INVALID :
+   IOMMU_PAGE_RESP_SUCCESS;
+
+out_put_mm:
+   up_read(&mm->mmap_sem);
+   mmput(mm);
+
+   return status;
 }
 
 static void iopf_handle_group(struct work_struct *work)
@@ -104,6 +156,29 @@ static void iopf_handle_group(struct work_struct *work)
  *
  * Add a fault to the device workqueue, to be handled by mm.
  *
+ * This module doesn't handle PCI PASID Stop Marker; IOMMU drivers must discard
+ * them before reporting faults. A PASID Stop Marker (LRW = 0b100) doesn't
+ * expect a response. It may be generated when disabling a PASID (issuing a
+ * PASID stop request) by some PCI devices.
+ *
+ * The PASID stop request is issued by the device driver before unbind(). Once
+ * it completes, no page request is generated for this PASID anymore and
+ * outstanding ones have been pushed to the IOMMU (as per PCIe 4.0r1.0 - 6.20.1
+ * and 10.4.1.2 - Managing PASID TLP Prefix Usage). Some PCI devices will wait
+ * for all outstanding page requests to come back with a response before
+ * completing the PASID stop request. Others do not wait for page responses,
+ * and instead issue this Stop Marker that tells us when the PASID can be
+ * reallocated.
+ *
+ * It is safe to discard the Stop Marker because it is an optimization.
+ * a. Page requests, which are posted requests, have been flushed to the IOMMU
+ *when the stop request completes.
+ * b. We flush all fault queues on unbind() before freeing the PASID.
+ *
+ * So even though the Stop Marker might be issued by the device *after* the stop
+ * request completes, outstanding faults will have been dealt with by the time
+ * we free the PASID.
+ *
  * Return: 0 on success and <0 on error.
  */
 int iommu_queue_iopf(struct iommu_fault *fault, void *cookie)
-- 
2.26.2



[PATCH v6 07/25] arm64: mm: Pin down ASIDs for sharing mm with devices

2020-04-30 Thread Jean-Philippe Brucker
To enable address space sharing with the IOMMU, introduce mm_context_get()
and mm_context_put(), that pin down a context and ensure that it will keep
its ASID after a rollover. Export the symbols to let the modular SMMUv3
driver use them.

Pinning is necessary because a device constantly needs a valid ASID,
unlike tasks that only require one when running. Without pinning, we would
need to notify the IOMMU when we're about to use a new ASID for a task,
and it would get complicated when a new task is assigned a shared ASID.
Consider the following scenario with no ASID pinned:

1. Task t1 is running on CPUx with shared ASID (gen=1, asid=1)
2. Task t2 is scheduled on CPUx, gets ASID (1, 2)
3. Task tn is scheduled on CPUy, a rollover occurs, tn gets ASID (2, 1)
   We would now have to immediately generate a new ASID for t1, notify
   the IOMMU, and finally enable task tn. We are holding the lock during
   all that time, since we can't afford having another CPU trigger a
   rollover. The IOMMU issues invalidation commands that can take tens of
   milliseconds.

It gets needlessly complicated. All we wanted to do was schedule task tn,
that has no business with the IOMMU. By letting the IOMMU pin tasks when
needed, we avoid stalling the slow path, and let the pinning fail when
we're out of shareable ASIDs.

After a rollover, the allocator expects at least one ASID to be available
in addition to the reserved ones (one per CPU). So (NR_ASIDS - NR_CPUS -
1) is the maximum number of ASIDs that can be shared with the IOMMU.
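
For illustration, the intended use by the SMMU driver looks roughly like
this (a sketch only; it assumes mm_context_get() returns 0 when no ASID
can be pinned):

	unsigned long asid;

	/* Pin the ASID for as long as the device may use this mm */
	asid = mm_context_get(mm);
	if (!asid)
		return -ENOSPC;	/* out of shareable ASIDs */

	/* ... install the ASID in the SMMU context descriptor ... */

	/* On unbind, release the pin; the ASID may be recycled at the
	 * next rollover. */
	mm_context_put(mm);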

Signed-off-by: Jean-Philippe Brucker 
---
 arch/arm64/include/asm/mmu.h |  1 +
 arch/arm64/include/asm/mmu_context.h | 11 +++-
 arch/arm64/mm/context.c  | 95 +++-
 3 files changed, 104 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 68140fdd89d6b..bbdd291e31d59 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -19,6 +19,7 @@
 
 typedef struct {
atomic64_t  id;
+   unsigned long   pinned;
	void *vdso;
unsigned long   flags;
 } mm_context_t;
diff --git a/arch/arm64/include/asm/mmu_context.h 
b/arch/arm64/include/asm/mmu_context.h
index ab46187c63001..69599a64945b0 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -177,7 +177,13 @@ static inline void cpu_replace_ttbr1(pgd_t *pgdp)
 #define destroy_context(mm)do { } while(0)
 void check_and_switch_context(struct mm_struct *mm, unsigned int cpu);
 
-#define init_new_context(tsk,mm)   ({ atomic64_set(&(mm)->context.id, 0); 
0; })
+static inline int
+init_new_context(struct task_struct *tsk, struct mm_struct *mm)
+{
+   atomic64_set(&mm->context.id, 0);
+   mm->context.pinned = 0;
+   return 0;
+}
 
 #ifdef CONFIG_ARM64_SW_TTBR0_PAN
 static inline void update_saved_ttbr0(struct task_struct *tsk,
@@ -250,6 +256,9 @@ switch_mm(struct mm_struct *prev, struct mm_struct *next,
 void verify_cpu_asid_bits(void);
 void post_ttbr_update_workaround(void);
 
+unsigned long mm_context_get(struct mm_struct *mm);
+void mm_context_put(struct mm_struct *mm);
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* !__ASM_MMU_CONTEXT_H */
diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index d702d60e64dab..d0ddd413f5645 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -27,6 +27,10 @@ static DEFINE_PER_CPU(atomic64_t, active_asids);
 static DEFINE_PER_CPU(u64, reserved_asids);
 static cpumask_t tlb_flush_pending;
 
+static unsigned long max_pinned_asids;
+static unsigned long nr_pinned_asids;
+static unsigned long *pinned_asid_map;
+
 #define ASID_MASK  (~GENMASK(asid_bits - 1, 0))
 #define ASID_FIRST_VERSION (1UL << asid_bits)
 
@@ -74,6 +78,9 @@ void verify_cpu_asid_bits(void)
 
 static void set_kpti_asid_bits(void)
 {
+   unsigned int k;
+   u8 *dst = (u8 *)asid_map;
+   u8 *src = (u8 *)pinned_asid_map;
	unsigned int len = BITS_TO_LONGS(NUM_USER_ASIDS) * sizeof(unsigned long);
/*
 * In case of KPTI kernel/user ASIDs are allocated in
@@ -81,7 +88,8 @@ static void set_kpti_asid_bits(void)
 * is set, then the ASID will map only userspace. Thus
 * mark even as reserved for kernel.
 */
-   memset(asid_map, 0xaa, len);
+   for (k = 0; k < len; k++)
+   dst[k] = src[k] | 0xaa;
 }
 
 static void set_reserved_asid_bits(void)
@@ -89,7 +97,7 @@ static void set_reserved_asid_bits(void)
if (arm64_kernel_unmapped_at_el0())
set_kpti_asid_bits();
else
-   bitmap_clear(asid_map, 0, NUM_USER_ASIDS);
+   bitmap_copy(asid_map, pinned_asid_map, NUM_USER_ASIDS);
 }
 
 #define asid_gen_match(asid) \
@@ -165,6 +173,14 @@ static u64 new_context(struct mm_struct *mm)
if (check_update_reserved_asid(asid, newasid))
return newasid;
 
+  

[PATCH v6 04/25] iommu: Add a page fault handler

2020-04-30 Thread Jean-Philippe Brucker
Some systems allow devices to handle I/O page faults in the core mm, for
example systems implementing the PCIe PRI extension or the Arm SMMU stall
model. Infrastructure for reporting these recoverable page faults was
added to the IOMMU core by commit 0c830e6b3282 ("iommu: Introduce device
fault report API"). Add a page fault handler for host SVA.

IOMMU drivers can now instantiate several fault workqueues and link them
to IOPF-capable devices. Drivers can choose between a single global
workqueue, one per IOMMU device, one per low-level fault queue, one per
domain, etc.

When it receives a fault event, typically in an IRQ handler, the IOMMU
driver reports the fault using iommu_report_device_fault(), which calls
the registered handler. The page fault handler then calls the mm fault
handler, and reports either success or failure with iommu_page_response().
When the handler succeeded, the IOMMU retries the access.

The iopf_param pointer could be embedded into iommu_fault_param. But
putting iopf_param into the iommu_param structure allows us not to care
about ordering between calls to iopf_queue_add_device() and
iommu_register_device_fault_handler().
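
For illustration, the flow in an IOMMU driver looks roughly like this
(sketch only; event decoding and error handling elided):

	static irqreturn_t sketch_evtq_thread(int irq, void *cookie)
	{
		struct device *dev = cookie;	/* faulting endpoint */
		struct iommu_fault_event evt = {
			.fault.type = IOMMU_FAULT_PAGE_REQ,
			/* ... fill in .fault.prm from the event ... */
		};

		/* Invokes the registered handler, iommu_queue_iopf(),
		 * which hands the fault to the workqueue; the handler
		 * later replies with iommu_page_response(). */
		iommu_report_device_fault(dev, &evt);
		return IRQ_HANDLED;
	}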

Signed-off-by: Jean-Philippe Brucker 
---
v5->v6: Simplify flush. As we're not flushing in the mm exit path
  anymore, we can mandate that IOMMU drivers flush their low-level queue
  themselves before calling iopf_queue_flush_dev(). No need to register
  a flush callback anymore.
---
 drivers/iommu/Kconfig  |   3 +
 drivers/iommu/Makefile |   1 +
 include/linux/iommu.h  |  51 +
 drivers/iommu/io-pgfault.c | 383 +
 4 files changed, 438 insertions(+)
 create mode 100644 drivers/iommu/io-pgfault.c

diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index 5327ec663dea1..4f33e489f0726 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -107,6 +107,9 @@ config IOMMU_SVA
bool
select IOASID
 
+config IOMMU_PAGE_FAULT
+   bool
+
 config FSL_PAMU
bool "Freescale IOMMU support"
depends on PCI
diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
index 40c800dd4e3ef..bf5cb4ee84093 100644
--- a/drivers/iommu/Makefile
+++ b/drivers/iommu/Makefile
@@ -4,6 +4,7 @@ obj-$(CONFIG_IOMMU_API) += iommu-traces.o
 obj-$(CONFIG_IOMMU_API) += iommu-sysfs.o
 obj-$(CONFIG_IOMMU_DEBUGFS) += iommu-debugfs.o
 obj-$(CONFIG_IOMMU_DMA) += dma-iommu.o
+obj-$(CONFIG_IOMMU_PAGE_FAULT) += io-pgfault.o
 obj-$(CONFIG_IOMMU_IO_PGTABLE) += io-pgtable.o
 obj-$(CONFIG_IOMMU_IO_PGTABLE_ARMV7S) += io-pgtable-arm-v7s.o
 obj-$(CONFIG_IOMMU_IO_PGTABLE_LPAE) += io-pgtable-arm.o
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index b62525747bd91..a1201c94f6ace 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -46,6 +46,7 @@ struct iommu_domain;
 struct notifier_block;
 struct iommu_sva;
 struct iommu_fault_event;
+struct iopf_queue;
 
 /* iommu fault flags */
 #define IOMMU_FAULT_READ   0x0
@@ -347,6 +348,7 @@ struct iommu_fault_param {
  * struct dev_iommu - Collection of per-device IOMMU data
  *
  * @fault_param: IOMMU detected device fault reporting data
+ * @iopf_param: I/O Page Fault queue and data
  * @fwspec: IOMMU fwspec data
  * @priv:   IOMMU Driver private data
  *
@@ -356,6 +358,7 @@ struct iommu_fault_param {
 struct dev_iommu {
struct mutex lock;
	struct iommu_fault_param *fault_param;
+	struct iopf_device_param *iopf_param;
	struct iommu_fwspec *fwspec;
	void *priv;
 };
@@ -1067,4 +1070,52 @@ void iommu_debugfs_setup(void);
 static inline void iommu_debugfs_setup(void) {}
 #endif
 
+#ifdef CONFIG_IOMMU_PAGE_FAULT
+extern int iommu_queue_iopf(struct iommu_fault *fault, void *cookie);
+
+extern int iopf_queue_add_device(struct iopf_queue *queue, struct device *dev);
+extern int iopf_queue_remove_device(struct iopf_queue *queue,
+   struct device *dev);
+extern int iopf_queue_flush_dev(struct device *dev, int pasid);
+extern struct iopf_queue *iopf_queue_alloc(const char *name);
+extern void iopf_queue_free(struct iopf_queue *queue);
+extern int iopf_queue_discard_partial(struct iopf_queue *queue);
+#else /* CONFIG_IOMMU_PAGE_FAULT */
+static inline int iommu_queue_iopf(struct iommu_fault *fault, void *cookie)
+{
+   return -ENODEV;
+}
+
+static inline int iopf_queue_add_device(struct iopf_queue *queue,
+   struct device *dev)
+{
+   return -ENODEV;
+}
+
+static inline int iopf_queue_remove_device(struct iopf_queue *queue,
+  struct device *dev)
+{
+   return -ENODEV;
+}
+
+static inline int iopf_queue_flush_dev(struct device *dev, int pasid)
+{
+   return -ENODEV;
+}
+
+static inline struct iopf_queue *iopf_queue_alloc(const char *name)
+{
+   return NULL;
+}
+
+static inline void iopf_queue_free(struct iopf_queue *queue)
+{
+}
+

[PATCH v6 16/25] iommu/arm-smmu-v3: Add SVA device feature

2020-04-30 Thread Jean-Philippe Brucker
Implement the IOMMU device feature callbacks to support the SVA feature.
At the moment dev_has_feat() returns false, since I/O Page Fault support
isn't implemented yet.

Signed-off-by: Jean-Philippe Brucker 
---
 drivers/iommu/arm-smmu-v3.c | 125 
 1 file changed, 125 insertions(+)

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index 9b90cc57a609b..c7942d0540599 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -700,6 +700,8 @@ struct arm_smmu_master {
	u32 *sids;
	unsigned int num_sids;
	bool ats_enabled;
+	bool sva_enabled;
+	struct list_head bonds;
	unsigned int ssid_bits;
 };
 
@@ -738,6 +740,7 @@ struct arm_smmu_option_prop {
 
 static DEFINE_XARRAY_ALLOC1(asid_xa);
 static DEFINE_SPINLOCK(contexts_lock);
+static DEFINE_MUTEX(arm_smmu_sva_lock);
 
 static struct arm_smmu_option_prop arm_smmu_options[] = {
{ ARM_SMMU_OPT_SKIP_PREFETCH, "hisilicon,broken-prefetch-cmd" },
@@ -3003,6 +3006,19 @@ static int arm_smmu_attach_dev(struct iommu_domain 
*domain, struct device *dev)
master = dev_iommu_priv_get(dev);
smmu = master->smmu;
 
+   /*
+* Checking that SVA is disabled ensures that this device isn't bound to
+* any mm, and can be safely detached from its old domain. Bonds cannot
+* be removed concurrently since we're holding the group mutex.
+*/
+   mutex_lock(&arm_smmu_sva_lock);
+   if (master->sva_enabled) {
+   mutex_unlock(&arm_smmu_sva_lock);
+   dev_err(dev, "cannot attach - SVA enabled\n");
+   return -EBUSY;
+   }
+   mutex_unlock(&arm_smmu_sva_lock);
+
arm_smmu_detach_dev(master);
 
mutex_lock(_domain->init_mutex);
@@ -3151,6 +3167,7 @@ static int arm_smmu_add_device(struct device *dev)
master->smmu = smmu;
master->sids = fwspec->ids;
master->num_sids = fwspec->num_ids;
+   INIT_LIST_HEAD(&master->bonds);
dev_iommu_priv_set(dev, master);
 
/* Check the SIDs are in range of the SMMU and our stream table */
@@ -3220,6 +3237,7 @@ static void arm_smmu_remove_device(struct device *dev)
 
master = dev_iommu_priv_get(dev);
smmu = master->smmu;
+   WARN_ON(master->sva_enabled);
arm_smmu_detach_dev(master);
iommu_group_remove_device(dev);
iommu_device_unlink(>iommu, dev);
@@ -3339,6 +3357,109 @@ static void arm_smmu_get_resv_regions(struct device 
*dev,
iommu_dma_get_resv_regions(dev, head);
 }
 
+static bool arm_smmu_iopf_supported(struct arm_smmu_master *master)
+{
+   return false;
+}
+
+static bool arm_smmu_dev_has_feature(struct device *dev,
+enum iommu_dev_features feat)
+{
+   struct arm_smmu_master *master = dev_iommu_priv_get(dev);
+
+   if (!master)
+   return false;
+
+   switch (feat) {
+   case IOMMU_DEV_FEAT_SVA:
+   if (!(master->smmu->features & ARM_SMMU_FEAT_SVA))
+   return false;
+
+   /* SSID and IOPF support are mandatory for the moment */
+   return master->ssid_bits && arm_smmu_iopf_supported(master);
+   default:
+   return false;
+   }
+}
+
+static bool arm_smmu_dev_feature_enabled(struct device *dev,
+enum iommu_dev_features feat)
+{
+   bool enabled = false;
+   struct arm_smmu_master *master = dev_iommu_priv_get(dev);
+
+   if (!master)
+   return false;
+
+   switch (feat) {
+   case IOMMU_DEV_FEAT_SVA:
+   mutex_lock(&arm_smmu_sva_lock);
+   enabled = master->sva_enabled;
+   mutex_unlock(&arm_smmu_sva_lock);
+   return enabled;
+   default:
+   return false;
+   }
+}
+
+static int arm_smmu_dev_enable_sva(struct device *dev)
+{
+   struct arm_smmu_master *master = dev_iommu_priv_get(dev);
+
+   mutex_lock(&arm_smmu_sva_lock);
+   master->sva_enabled = true;
+   mutex_unlock(&arm_smmu_sva_lock);
+
+   return 0;
+}
+
+static int arm_smmu_dev_disable_sva(struct device *dev)
+{
+   struct arm_smmu_master *master = dev_iommu_priv_get(dev);
+
+   mutex_lock(&arm_smmu_sva_lock);
+   if (!list_empty(&master->bonds)) {
+   dev_err(dev, "cannot disable SVA, device is bound\n");
+   mutex_unlock(&arm_smmu_sva_lock);
+   return -EBUSY;
+   }
+   master->sva_enabled = false;
+   mutex_unlock(&arm_smmu_sva_lock);
+
+   return 0;
+}
+
+static int arm_smmu_dev_enable_feature(struct device *dev,
+  enum iommu_dev_features feat)
+{
+   if (!arm_smmu_dev_has_feature(dev, feat))
+   return -ENODEV;
+
+   if (arm_smmu_dev_feature_enabled(dev, feat))
+

[PATCH v6 15/25] iommu/arm-smmu-v3: Add SVA feature checking

2020-04-30 Thread Jean-Philippe Brucker
Aggregate all sanity-checks for sharing CPU page tables with the SMMU
under a single ARM_SMMU_FEAT_SVA bit. For PCIe SVA, users also need to
check FEAT_ATS and FEAT_PRI. For platform SVA, they will most likely have
to check FEAT_STALLS.
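
As an illustration, a transport-specific check could then be layered on
top of FEAT_SVA like this (a sketch only, not part of this patch):

	static bool sketch_master_can_sva(struct arm_smmu_master *master)
	{
		u32 features = master->smmu->features;

		if (!(features & ARM_SMMU_FEAT_SVA))
			return false;
		if (dev_is_pci(master->dev))	/* PCIe SVA */
			return features & ARM_SMMU_FEAT_ATS;	/* + PRI for faults */
		return features & ARM_SMMU_FEAT_STALLS;	/* platform SVA */
	}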

Cc: Suzuki K Poulose 
Signed-off-by: Jean-Philippe Brucker 
---
 drivers/iommu/arm-smmu-v3.c | 72 +
 1 file changed, 72 insertions(+)

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index a562c4b243292..9b90cc57a609b 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -657,6 +657,7 @@ struct arm_smmu_device {
 #define ARM_SMMU_FEAT_RANGE_INV    (1 << 15)
 #define ARM_SMMU_FEAT_E2H  (1 << 16)
 #define ARM_SMMU_FEAT_BTM  (1 << 17)
+#define ARM_SMMU_FEAT_SVA  (1 << 18)
u32 features;
 
 #define ARM_SMMU_OPT_SKIP_PREFETCH (1 << 0)
@@ -3925,6 +3926,74 @@ static int arm_smmu_device_reset(struct arm_smmu_device 
*smmu, bool bypass)
return 0;
 }
 
+static bool arm_smmu_supports_sva(struct arm_smmu_device *smmu)
+{
+   unsigned long reg, fld;
+   unsigned long oas;
+   unsigned long asid_bits;
+
+   u32 feat_mask = ARM_SMMU_FEAT_BTM | ARM_SMMU_FEAT_COHERENCY;
+
+   if ((smmu->features & feat_mask) != feat_mask)
+   return false;
+
+   if (!(smmu->pgsize_bitmap & PAGE_SIZE))
+   return false;
+
+   /*
+* Get the smallest PA size of all CPUs (sanitized by cpufeature). We're
+* not even pretending to support AArch32 here.
+*/
+   reg = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
+   fld = cpuid_feature_extract_unsigned_field(reg, 
ID_AA64MMFR0_PARANGE_SHIFT);
+   switch (fld) {
+   case 0x0:
+   oas = 32;
+   break;
+   case 0x1:
+   oas = 36;
+   break;
+   case 0x2:
+   oas = 40;
+   break;
+   case 0x3:
+   oas = 42;
+   break;
+   case 0x4:
+   oas = 44;
+   break;
+   case 0x5:
+   oas = 48;
+   break;
+   case 0x6:
+   oas = 52;
+   break;
+   default:
+   return false;
+   }
+
+   /* abort if MMU outputs addresses greater than what we support. */
+   if (smmu->oas < oas)
+   return false;
+
+   /* We can support bigger ASIDs than the CPU, but not smaller */
+   fld = cpuid_feature_extract_unsigned_field(reg, 
ID_AA64MMFR0_ASID_SHIFT);
+   asid_bits = fld ? 16 : 8;
+   if (smmu->asid_bits < asid_bits)
+   return false;
+
+   /*
+* See max_pinned_asids in arch/arm64/mm/context.c. The following is
+* generally the maximum number of bindable processes.
+*/
+   if (IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0))
+   asid_bits--;
+   dev_dbg(smmu->dev, "%d shared contexts\n", (1 << asid_bits) -
+   num_possible_cpus() - 2);
+
+   return true;
+}
+
 static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
 {
u32 reg;
@@ -4137,6 +4206,9 @@ static int arm_smmu_device_hw_probe(struct 
arm_smmu_device *smmu)
 
smmu->ias = max(smmu->ias, smmu->oas);
 
+   if (arm_smmu_supports_sva(smmu))
+   smmu->features |= ARM_SMMU_FEAT_SVA;
+
dev_info(smmu->dev, "ias %lu-bit, oas %lu-bit (features 0x%08x)\n",
 smmu->ias, smmu->oas, smmu->features);
return 0;
-- 
2.26.2



[PATCH v6 23/25] PCI/ATS: Add PRI stubs

2020-04-30 Thread Jean-Philippe Brucker
The SMMUv3 driver, which can be built without CONFIG_PCI, will soon gain
support for PRI.  Partially revert commit c6e9aefbf9db ("PCI/ATS: Remove
unused PRI and PASID stubs") to re-introduce the PRI stubs, and avoid
adding more #ifdefs to the SMMU driver.

Acked-by: Bjorn Helgaas 
Reviewed-by: Kuppuswamy Sathyanarayanan 

Signed-off-by: Jean-Philippe Brucker 
---
 include/linux/pci-ats.h | 8 
 1 file changed, 8 insertions(+)

diff --git a/include/linux/pci-ats.h b/include/linux/pci-ats.h
index f75c307f346de..e9e266df9b37c 100644
--- a/include/linux/pci-ats.h
+++ b/include/linux/pci-ats.h
@@ -28,6 +28,14 @@ int pci_enable_pri(struct pci_dev *pdev, u32 reqs);
 void pci_disable_pri(struct pci_dev *pdev);
 int pci_reset_pri(struct pci_dev *pdev);
 int pci_prg_resp_pasid_required(struct pci_dev *pdev);
+#else /* CONFIG_PCI_PRI */
+static inline int pci_enable_pri(struct pci_dev *pdev, u32 reqs)
+{ return -ENODEV; }
+static inline void pci_disable_pri(struct pci_dev *pdev) { }
+static inline int pci_reset_pri(struct pci_dev *pdev)
+{ return -ENODEV; }
+static inline int pci_prg_resp_pasid_required(struct pci_dev *pdev)
+{ return 0; }
 #endif /* CONFIG_PCI_PRI */
 
 #ifdef CONFIG_PCI_PASID
-- 
2.26.2



[PATCH v6 13/25] iommu/arm-smmu-v3: Add support for VHE

2020-04-30 Thread Jean-Philippe Brucker
ARMv8.1 extensions added Virtualization Host Extensions (VHE), which allow
running a host kernel at EL2. When using normal DMA, Device and CPU address
spaces are dissociated, and do not need to implement the same
capabilities, so VHE hasn't been used in the SMMU until now.

With shared address spaces however, ASIDs are shared between MMU and SMMU,
and broadcast TLB invalidations issued by a CPU are taken into account by
the SMMU. TLB entries on both sides need to have identical exception level
in order to be cleared with a single invalidation.

When the CPU is using VHE, enable VHE in the SMMU for all STEs. Normal DMA
mappings will need to use TLBI_EL2 commands instead of TLBI_NH, but
shouldn't be otherwise affected by this change.

Signed-off-by: Jean-Philippe Brucker 
---
 drivers/iommu/arm-smmu-v3.c | 31 ++-
 1 file changed, 26 insertions(+), 5 deletions(-)

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index aad49d565c592..3a70d032d4e71 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -13,6 +13,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -480,6 +481,8 @@ struct arm_smmu_cmdq_ent {
	#define CMDQ_OP_TLBI_NH_ASID    0x11
	#define CMDQ_OP_TLBI_NH_VA      0x12
	#define CMDQ_OP_TLBI_EL2_ALL    0x20
+	#define CMDQ_OP_TLBI_EL2_ASID   0x21
+	#define CMDQ_OP_TLBI_EL2_VA     0x22
	#define CMDQ_OP_TLBI_S12_VMALL  0x28
	#define CMDQ_OP_TLBI_S2_IPA     0x2a
	#define CMDQ_OP_TLBI_NSNH_ALL   0x30
@@ -651,6 +654,7 @@ struct arm_smmu_device {
 #define ARM_SMMU_FEAT_STALL_FORCE  (1 << 13)
 #define ARM_SMMU_FEAT_VAX  (1 << 14)
 #define ARM_SMMU_FEAT_RANGE_INV    (1 << 15)
+#define ARM_SMMU_FEAT_E2H  (1 << 16)
u32 features;
 
 #define ARM_SMMU_OPT_SKIP_PREFETCH (1 << 0)
@@ -924,6 +928,8 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct 
arm_smmu_cmdq_ent *ent)
cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_NUM, ent->tlbi.num);
cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_SCALE, ent->tlbi.scale);
cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_VMID, ent->tlbi.vmid);
+   /* Fallthrough */
+   case CMDQ_OP_TLBI_EL2_VA:
cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_ASID, ent->tlbi.asid);
cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_LEAF, ent->tlbi.leaf);
cmd[1] |= FIELD_PREP(CMDQ_TLBI_1_TTL, ent->tlbi.ttl);
@@ -945,6 +951,9 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct 
arm_smmu_cmdq_ent *ent)
case CMDQ_OP_TLBI_S12_VMALL:
cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_VMID, ent->tlbi.vmid);
break;
+   case CMDQ_OP_TLBI_EL2_ASID:
+   cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_ASID, ent->tlbi.asid);
+   break;
case CMDQ_OP_ATC_INV:
cmd[0] |= FIELD_PREP(CMDQ_0_SSV, ent->substream_valid);
cmd[0] |= FIELD_PREP(CMDQ_ATC_0_GLOBAL, ent->atc.global);
@@ -1538,7 +1547,8 @@ static int arm_smmu_cmdq_batch_submit(struct 
arm_smmu_device *smmu,
 static void arm_smmu_tlb_inv_asid(struct arm_smmu_device *smmu, u16 asid)
 {
struct arm_smmu_cmdq_ent cmd = {
-   .opcode = CMDQ_OP_TLBI_NH_ASID,
+   .opcode = smmu->features & ARM_SMMU_FEAT_E2H ?
+   CMDQ_OP_TLBI_EL2_ASID : CMDQ_OP_TLBI_NH_ASID,
.tlbi.asid = asid,
};
 
@@ -2088,13 +2098,16 @@ static void arm_smmu_write_strtab_ent(struct 
arm_smmu_master *master, u32 sid,
}
 
if (s1_cfg) {
+   int strw = smmu->features & ARM_SMMU_FEAT_E2H ?
+   STRTAB_STE_1_STRW_EL2 : STRTAB_STE_1_STRW_NSEL1;
+
BUG_ON(ste_live);
dst[1] = cpu_to_le64(
 FIELD_PREP(STRTAB_STE_1_S1DSS, STRTAB_STE_1_S1DSS_SSID0) |
 FIELD_PREP(STRTAB_STE_1_S1CIR, STRTAB_STE_1_S1C_CACHE_WBRA) |
 FIELD_PREP(STRTAB_STE_1_S1COR, STRTAB_STE_1_S1C_CACHE_WBRA) |
 FIELD_PREP(STRTAB_STE_1_S1CSH, ARM_SMMU_SH_ISH) |
-FIELD_PREP(STRTAB_STE_1_STRW, STRTAB_STE_1_STRW_NSEL1));
+FIELD_PREP(STRTAB_STE_1_STRW, strw));
 
if (smmu->features & ARM_SMMU_FEAT_STALLS &&
   !(smmu->features & ARM_SMMU_FEAT_STALL_FORCE))
@@ -2490,7 +2503,8 @@ static void arm_smmu_tlb_inv_range(unsigned long iova, 
size_t size,
return;
 
if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
-   cmd.opcode  = CMDQ_OP_TLBI_NH_VA;
+   cmd.opcode  = smmu->features & ARM_SMMU_FEAT_E2H ?
+ CMDQ_OP_TLBI_EL2_VA : CMDQ_OP_TLBI_NH_VA;
cmd.tlbi.asid   = smmu_domain->s1_cfg.cd.asid;
} else {
 

[PATCH v6 22/25] iommu/arm-smmu-v3: Add stall support for platform devices

2020-04-30 Thread Jean-Philippe Brucker
The SMMU provides a Stall model for handling page faults in platform
devices. It is similar to PCI PRI, but doesn't require devices to have
their own translation cache. Instead, faulting transactions are parked
and the OS is given a chance to fix the page tables and retry the
transaction.

Enable stall for devices that support it (opt-in by firmware). When an
event corresponds to a translation error, call the IOMMU fault handler.
If the fault is recoverable, it will call us back to terminate or
continue the stall.
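
For instance, completing a stalled transaction boils down to a RESUME
command (a sketch; the opcode and fields are introduced in the hunks
below):

	struct arm_smmu_cmdq_ent cmd = {
		.opcode		= CMDQ_OP_RESUME,
		.resume.sid	= sid,	/* stream ID from the event */
		.resume.stag	= stag,	/* stall tag of the fault */
		.resume.resp	= CMDQ_RESUME_0_RESP_RETRY, /* or TERM/ABORT */
	};

	arm_smmu_cmdq_issue_cmd(smmu, &cmd);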

Signed-off-by: Jean-Philippe Brucker 
---
 drivers/iommu/Kconfig   |   1 +
 include/linux/iommu.h   |   2 +
 drivers/iommu/arm-smmu-v3.c | 286 ++--
 drivers/iommu/of_iommu.c|   5 +-
 4 files changed, 283 insertions(+), 11 deletions(-)

diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index f863c4562feeb..f9307d543d3b5 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -433,6 +433,7 @@ config ARM_SMMU_V3
depends on ARM64
select IOMMU_API
select IOMMU_SVA
+   select IOMMU_PAGE_FAULT
select IOMMU_IO_PGTABLE_LPAE
select GENERIC_MSI_IRQ_DOMAIN
help
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index a1201c94f6ace..fbea2e80dd7d3 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -567,6 +567,7 @@ struct iommu_group *fsl_mc_device_group(struct device *dev);
  * @iommu_fwnode: firmware handle for this device's IOMMU
  * @iommu_priv: IOMMU driver private data for this device
  * @num_pasid_bits: number of PASID bits supported by this device
+ * @can_stall: the device is allowed to stall
  * @num_ids: number of associated device IDs
  * @ids: IDs which this device may present to the IOMMU
  */
@@ -574,6 +575,7 @@ struct iommu_fwspec {
const struct iommu_ops  *ops;
	struct fwnode_handle *iommu_fwnode;
	u32 num_pasid_bits;
+	bool can_stall;
	unsigned int num_ids;
u32 ids[];
 };
diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index fda62ea35dc23..eb32a7cb5e920 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -383,6 +383,13 @@
 #define CMDQ_PRI_1_GRPID   GENMASK_ULL(8, 0)
 #define CMDQ_PRI_1_RESP    GENMASK_ULL(13, 12)
 
+#define CMDQ_RESUME_0_SID  GENMASK_ULL(63, 32)
+#define CMDQ_RESUME_0_RESP_TERM    0UL
+#define CMDQ_RESUME_0_RESP_RETRY   1UL
+#define CMDQ_RESUME_0_RESP_ABORT   2UL
+#define CMDQ_RESUME_0_RESP GENMASK_ULL(13, 12)
+#define CMDQ_RESUME_1_STAG GENMASK_ULL(15, 0)
+
 #define CMDQ_SYNC_0_CS GENMASK_ULL(13, 12)
 #define CMDQ_SYNC_0_CS_NONE    0
 #define CMDQ_SYNC_0_CS_IRQ 1
@@ -399,6 +406,25 @@
 
 #define EVTQ_0_ID  GENMASK_ULL(7, 0)
 
+#define EVT_ID_TRANSLATION_FAULT   0x10
+#define EVT_ID_ADDR_SIZE_FAULT 0x11
+#define EVT_ID_ACCESS_FAULT        0x12
+#define EVT_ID_PERMISSION_FAULT    0x13
+
+#define EVTQ_0_SSV (1UL << 11)
+#define EVTQ_0_SSID    GENMASK_ULL(31, 12)
+#define EVTQ_0_SID     GENMASK_ULL(63, 32)
+#define EVTQ_1_STAG    GENMASK_ULL(15, 0)
+#define EVTQ_1_STALL   (1UL << 31)
+#define EVTQ_1_PRIV    (1UL << 33)
+#define EVTQ_1_EXEC    (1UL << 34)
+#define EVTQ_1_READ    (1UL << 35)
+#define EVTQ_1_S2      (1UL << 39)
+#define EVTQ_1_CLASS   GENMASK_ULL(41, 40)
+#define EVTQ_1_TT_READ (1UL << 44)
+#define EVTQ_2_ADDR    GENMASK_ULL(63, 0)
+#define EVTQ_3_IPA GENMASK_ULL(51, 12)
+
 /* PRI queue */
 #define PRIQ_ENT_SZ_SHIFT  4
 #define PRIQ_ENT_DWORDS((1 << PRIQ_ENT_SZ_SHIFT) >> 3)
@@ -523,6 +549,13 @@ struct arm_smmu_cmdq_ent {
enum pri_resp   resp;
} pri;
 
+   #define CMDQ_OP_RESUME  0x44
+   struct {
+   u32 sid;
+   u16 stag;
+   u8  resp;
+   } resume;
+
	#define CMDQ_OP_CMD_SYNC    0x46
struct {
u64 msiaddr;
@@ -558,6 +591,10 @@ struct arm_smmu_queue {
 
u32 __iomem *prod_reg;
u32 __iomem *cons_reg;
+
+   /* Event and PRI */
+   u64 batch;
+   wait_queue_head_t   wq;
 };
 
 struct arm_smmu_queue_poll {
@@ -581,6 +618,7 @@ struct arm_smmu_cmdq_batch {
 
 struct arm_smmu_evtq {
struct arm_smmu_queue   q;
+   struct iopf_queue   

[PATCH v6 12/25] iommu/arm-smmu-v3: Seize private ASID

2020-04-30 Thread Jean-Philippe Brucker
The SMMU has a single ASID space, the union of shared and private ASID
sets. This means that the SMMU driver competes with the arch allocator
for ASIDs. Shared ASIDs are those of Linux processes, allocated by the
arch, and participate in broadcast TLB maintenance. Private ASIDs are
allocated by the SMMU driver and used for "classic" map/unmap DMA. They
require command-queue TLB invalidations.

When we pin down an mm_context and get an ASID that is already in use by
the SMMU, it belongs to a private context. We used to simply abort the
bind, but this is unfair to users that would be unable to bind a few
seemingly random processes. Try to allocate a new private ASID for the
context, and make the old ASID shared.

Introduce a new lock to prevent races when rewriting context
descriptors. Unfortunately it has to be a spinlock since we take it
while holding the asid lock, which will be held in non-sleepable context
(freeing ASIDs from an RCU callback).

Signed-off-by: Jean-Philippe Brucker 
---
 drivers/iommu/arm-smmu-v3.c | 83 +
 1 file changed, 66 insertions(+), 17 deletions(-)

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index fb3116045df0f..aad49d565c592 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -730,6 +730,7 @@ struct arm_smmu_option_prop {
 };
 
 static DEFINE_XARRAY_ALLOC1(asid_xa);
+static DEFINE_SPINLOCK(contexts_lock);
 
 static struct arm_smmu_option_prop arm_smmu_options[] = {
{ ARM_SMMU_OPT_SKIP_PREFETCH, "hisilicon,broken-prefetch-cmd" },
@@ -1534,6 +1535,17 @@ static int arm_smmu_cmdq_batch_submit(struct 
arm_smmu_device *smmu,
 }
 
 /* Context descriptor manipulation functions */
+static void arm_smmu_tlb_inv_asid(struct arm_smmu_device *smmu, u16 asid)
+{
+   struct arm_smmu_cmdq_ent cmd = {
+   .opcode = CMDQ_OP_TLBI_NH_ASID,
+   .tlbi.asid = asid,
+   };
+
+   arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+   arm_smmu_cmdq_issue_sync(smmu);
+}
+
 static void arm_smmu_sync_cd(struct arm_smmu_domain *smmu_domain,
 int ssid, bool leaf)
 {
@@ -1568,7 +1580,7 @@ static int arm_smmu_alloc_cd_leaf_table(struct arm_smmu_device *smmu,
size_t size = CTXDESC_L2_ENTRIES * (CTXDESC_CD_DWORDS << 3);
 
l1_desc->l2ptr = dmam_alloc_coherent(smmu->dev, size,
-					     &l1_desc->l2ptr_dma, GFP_KERNEL);
+					     &l1_desc->l2ptr_dma, GFP_ATOMIC);
if (!l1_desc->l2ptr) {
dev_warn(smmu->dev,
 "failed to allocate context descriptor table\n");
@@ -1614,8 +1626,8 @@ static __le64 *arm_smmu_get_cd_ptr(struct arm_smmu_domain *smmu_domain,
return l1_desc->l2ptr + idx * CTXDESC_CD_DWORDS;
 }
 
-static int arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain,
-  int ssid, struct arm_smmu_ctx_desc *cd)
+static int __arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain,
+int ssid, struct arm_smmu_ctx_desc *cd)
 {
/*
 * This function handles the following cases:
@@ -1691,6 +1703,17 @@ static int arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain,
return 0;
 }
 
+static int arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain,
+  int ssid, struct arm_smmu_ctx_desc *cd)
+{
+   int ret;
+
+	spin_lock(&contexts_lock);
+	ret = __arm_smmu_write_ctx_desc(smmu_domain, ssid, cd);
+	spin_unlock(&contexts_lock);
+   return ret;
+}
+
 static int arm_smmu_alloc_cd_tables(struct arm_smmu_domain *smmu_domain)
 {
int ret;
@@ -1794,9 +1817,18 @@ static bool arm_smmu_free_asid(struct arm_smmu_ctx_desc *cd)
return free;
 }
 
+/*
+ * Try to reserve this ASID in the SMMU. If it is in use, try to steal it from
+ * the private entry. Careful here, we may be modifying the context tables of
+ * another SMMU!
+ */
 static struct arm_smmu_ctx_desc *arm_smmu_share_asid(u16 asid)
 {
+   int ret;
+   u32 new_asid;
struct arm_smmu_ctx_desc *cd;
+   struct arm_smmu_device *smmu;
+   struct arm_smmu_domain *smmu_domain;
 
	cd = xa_load(&asid_xa, asid);
if (!cd)
@@ -1808,11 +1840,31 @@ static struct arm_smmu_ctx_desc *arm_smmu_share_asid(u16 asid)
return cd;
}
 
+   smmu_domain = container_of(cd, struct arm_smmu_domain, s1_cfg.cd);
+   smmu = smmu_domain->smmu;
+
+   /*
+* Race with unmap: TLB invalidations will start targeting the new ASID,
+* which isn't assigned yet. We'll do an invalidate-all on the old ASID
+* later, so it doesn't matter.
+*/
+	ret = __xa_alloc(&asid_xa, &new_asid, cd,
+			 XA_LIMIT(1, 1 << smmu->asid_bits), GFP_ATOMIC);
+   if (ret)
+   return ERR_PTR(-ENOSPC);
+   cd->asid = new_asid;
+
/*
-	 * Ouch, ASID is already in use for a private cd.

[PATCH v6 20/25] iommu/arm-smmu-v3: Maintain a SID->device structure

2020-04-30 Thread Jean-Philippe Brucker
When handling faults from the event or PRI queue, we need to find the
struct device associated with a SID. Add an rb_tree to keep track of SIDs.

Signed-off-by: Jean-Philippe Brucker 
---
 drivers/iommu/arm-smmu-v3.c | 175 +---
 1 file changed, 145 insertions(+), 30 deletions(-)

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index 240cd0bc00e62..fda62ea35dc23 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -698,6 +698,15 @@ struct arm_smmu_device {
 
/* IOMMU core code handle */
struct iommu_device iommu;
+
+   struct rb_root  streams;
+   struct mutexstreams_mutex;
+};
+
+struct arm_smmu_stream {
+   u32 id;
+   struct arm_smmu_master  *master;
+   struct rb_node  node;
 };
 
 /* SMMU private data for each master */
@@ -706,8 +715,8 @@ struct arm_smmu_master {
struct device   *dev;
struct arm_smmu_domain  *domain;
struct list_headdomain_head;
-   u32 *sids;
-   unsigned intnum_sids;
+   struct arm_smmu_stream  *streams;
+   unsigned intnum_streams;
boolats_enabled;
boolsva_enabled;
struct list_headbonds;
@@ -1619,8 +1628,8 @@ static void arm_smmu_sync_cd(struct arm_smmu_domain *smmu_domain,
 
	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
	list_for_each_entry(master, &smmu_domain->devices, domain_head) {
-   for (i = 0; i < master->num_sids; i++) {
-   cmd.cfgi.sid = master->sids[i];
+   for (i = 0; i < master->num_streams; i++) {
+   cmd.cfgi.sid = master->streams[i].id;
			arm_smmu_cmdq_batch_add(smmu, &cmds, &cmd);
}
}
@@ -2243,6 +2252,32 @@ static int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid)
return 0;
 }
 
+__maybe_unused
+static struct arm_smmu_master *
+arm_smmu_find_master(struct arm_smmu_device *smmu, u32 sid)
+{
+   struct rb_node *node;
+   struct arm_smmu_stream *stream;
+   struct arm_smmu_master *master = NULL;
+
+	mutex_lock(&smmu->streams_mutex);
+   node = smmu->streams.rb_node;
+   while (node) {
+   stream = rb_entry(node, struct arm_smmu_stream, node);
+   if (stream->id < sid) {
+   node = node->rb_right;
+   } else if (stream->id > sid) {
+   node = node->rb_left;
+   } else {
+   master = stream->master;
+   break;
+   }
+   }
+	mutex_unlock(&smmu->streams_mutex);
+
+   return master;
+}
+
 /* IRQ and event handlers */
 static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev)
 {
@@ -2476,8 +2511,8 @@ static int arm_smmu_atc_inv_master(struct arm_smmu_master *master, int ssid)
 
	arm_smmu_atc_inv_to_cmd(ssid, 0, 0, &cmd);
 
-   for (i = 0; i < master->num_sids; i++) {
-   cmd.atc.sid = master->sids[i];
+   for (i = 0; i < master->num_streams; i++) {
+   cmd.atc.sid = master->streams[i].id;
		arm_smmu_cmdq_issue_cmd(master->smmu, &cmd);
}
 
@@ -2520,8 +2555,8 @@ static int arm_smmu_atc_inv_domain(struct arm_smmu_domain *smmu_domain,
if (!master->ats_enabled)
continue;
 
-   for (i = 0; i < master->num_sids; i++) {
-   cmd.atc.sid = master->sids[i];
+   for (i = 0; i < master->num_streams; i++) {
+   cmd.atc.sid = master->streams[i].id;
			arm_smmu_cmdq_batch_add(smmu_domain->smmu, &cmds, &cmd);
}
}
@@ -2930,13 +2965,13 @@ static void arm_smmu_install_ste_for_dev(struct arm_smmu_master *master)
int i, j;
struct arm_smmu_device *smmu = master->smmu;
 
-   for (i = 0; i < master->num_sids; ++i) {
-   u32 sid = master->sids[i];
+   for (i = 0; i < master->num_streams; ++i) {
+   u32 sid = master->streams[i].id;
__le64 *step = arm_smmu_get_step_for_sid(smmu, sid);
 
/* Bridged PCI devices may end up with duplicated IDs */
for (j = 0; j < i; j++)
-   if (master->sids[j] == sid)
+   if (master->streams[j].id == sid)
break;
if (j < i)
continue;
@@ -3430,11 +3465,101 @@ static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
return sid < limit;
 }
 
+static int arm_smmu_insert_master(struct arm_smmu_device *smmu,
+ struct arm_smmu_master *master)
+{
+   int i;
+

[PATCH v6 11/25] iommu/arm-smmu-v3: Share process page tables

2020-04-30 Thread Jean-Philippe Brucker
With Shared Virtual Addressing (SVA), we need to mirror CPU TTBR, TCR,
MAIR and ASIDs in SMMU contexts. Each SMMU has a single ASID space split
into two sets, shared and private. Shared ASIDs correspond to those
obtained from the arch ASID allocator, and private ASIDs are used for
"classic" map/unmap DMA.

Cc: Suzuki K Poulose 
Signed-off-by: Jean-Philippe Brucker 
---
 drivers/iommu/arm-smmu-v3.c | 156 +++-
 1 file changed, 152 insertions(+), 4 deletions(-)

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index 96ee60002e85e..fb3116045df0f 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -22,6 +22,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -33,6 +34,8 @@
 
 #include 
 
+#include "io-pgtable-arm.h"
+
 /* MMIO registers */
 #define ARM_SMMU_IDR0  0x0
 #define IDR0_ST_LVL			GENMASK(28, 27)
@@ -587,6 +590,9 @@ struct arm_smmu_ctx_desc {
u64 ttbr;
u64 tcr;
u64 mair;
+
+   refcount_t  refs;
+   struct mm_struct*mm;
 };
 
 struct arm_smmu_l1_ctx_desc {
@@ -1660,7 +1666,8 @@ static int arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain,
 #ifdef __BIG_ENDIAN
CTXDESC_CD_0_ENDI |
 #endif
-   CTXDESC_CD_0_R | CTXDESC_CD_0_A | CTXDESC_CD_0_ASET |
+   CTXDESC_CD_0_R | CTXDESC_CD_0_A |
+   (cd->mm ? 0 : CTXDESC_CD_0_ASET) |
CTXDESC_CD_0_AA64 |
FIELD_PREP(CTXDESC_CD_0_ASID, cd->asid) |
CTXDESC_CD_0_V;
@@ -1764,12 +1771,151 @@ static void arm_smmu_free_cd_tables(struct arm_smmu_domain *smmu_domain)
cdcfg->cdtab = NULL;
 }
 
-static void arm_smmu_free_asid(struct arm_smmu_ctx_desc *cd)
+static void arm_smmu_init_cd(struct arm_smmu_ctx_desc *cd)
 {
+	refcount_set(&cd->refs, 1);
+}
+
+static bool arm_smmu_free_asid(struct arm_smmu_ctx_desc *cd)
+{
+   bool free;
+   struct arm_smmu_ctx_desc *old_cd;
+
if (!cd->asid)
-   return;
+   return false;
+
+	xa_lock(&asid_xa);
+	free = refcount_dec_and_test(&cd->refs);
+	if (free) {
+		old_cd = __xa_erase(&asid_xa, cd->asid);
+		WARN_ON(old_cd != cd);
+	}
+	xa_unlock(&asid_xa);
+   return free;
+}
+
+static struct arm_smmu_ctx_desc *arm_smmu_share_asid(u16 asid)
+{
+   struct arm_smmu_ctx_desc *cd;
 
-	xa_erase(&asid_xa, cd->asid);
+	cd = xa_load(&asid_xa, asid);
+   if (!cd)
+   return NULL;
+
+   if (cd->mm) {
+   /* All devices bound to this mm use the same cd struct. */
+		refcount_inc(&cd->refs);
+   return cd;
+   }
+
+   /*
+* Ouch, ASID is already in use for a private cd.
+* TODO: seize it.
+*/
+   return ERR_PTR(-EEXIST);
+}
+
+__maybe_unused
+static struct arm_smmu_ctx_desc *arm_smmu_alloc_shared_cd(struct mm_struct *mm)
+{
+   u16 asid;
+   int ret = 0;
+   u64 tcr, par, reg;
+   struct arm_smmu_ctx_desc *cd;
+   struct arm_smmu_ctx_desc *old_cd = NULL;
+
+   asid = mm_context_get(mm);
+   if (!asid)
+   return ERR_PTR(-ESRCH);
+
+   cd = kzalloc(sizeof(*cd), GFP_KERNEL);
+   if (!cd) {
+   ret = -ENOMEM;
+   goto err_put_context;
+   }
+
+   arm_smmu_init_cd(cd);
+
+	xa_lock(&asid_xa);
+   old_cd = arm_smmu_share_asid(asid);
+   if (!old_cd) {
+		old_cd = __xa_store(&asid_xa, asid, cd, GFP_ATOMIC);
+   /*
+* Keep error, clear valid pointers. If there was an old entry
+* it has been moved already by arm_smmu_share_asid().
+*/
+   old_cd = ERR_PTR(xa_err(old_cd));
+   cd->asid = asid;
+   }
+	xa_unlock(&asid_xa);
+
+   if (IS_ERR(old_cd)) {
+   ret = PTR_ERR(old_cd);
+   goto err_free_cd;
+   } else if (old_cd) {
+   if (WARN_ON(old_cd->mm != mm)) {
+   ret = -EINVAL;
+   goto err_free_cd;
+   }
+   kfree(cd);
+   mm_context_put(mm);
+   return old_cd;
+   }
+
+   tcr = FIELD_PREP(CTXDESC_CD_0_TCR_T0SZ, 64ULL - VA_BITS) |
+ FIELD_PREP(CTXDESC_CD_0_TCR_IRGN0, ARM_LPAE_TCR_RGN_WBWA) |
+ FIELD_PREP(CTXDESC_CD_0_TCR_ORGN0, ARM_LPAE_TCR_RGN_WBWA) |
+ FIELD_PREP(CTXDESC_CD_0_TCR_SH0, ARM_LPAE_TCR_SH_IS) |
+ CTXDESC_CD_0_TCR_EPD1 | CTXDESC_CD_0_AA64;
+
+   switch (PAGE_SIZE) {
+   case SZ_4K:
+   tcr |= FIELD_PREP(CTXDESC_CD_0_TCR_TG0, ARM_LPAE_TCR_TG0_4K);
+   break;
+   case SZ_16K:
+		tcr |= FIELD_PREP(CTXDESC_CD_0_TCR_TG0, ARM_LPAE_TCR_TG0_16K);

[PATCH v6 02/25] iommu/ioasid: Add ioasid references

2020-04-30 Thread Jean-Philippe Brucker
Let IOASID users take references to existing ioasids with ioasid_get().
ioasid_free() drops a reference and only frees the ioasid when its
reference count drops to zero. It returns whether the ioasid was freed.

Signed-off-by: Jean-Philippe Brucker 
---
 include/linux/ioasid.h | 10 --
 drivers/iommu/ioasid.c | 30 +-
 2 files changed, 37 insertions(+), 3 deletions(-)
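
As a minimal usage sketch (set, min, max and priv are placeholders, not
code from this patch):

	ioasid_t pasid = ioasid_alloc(set, min, max, priv);

	ioasid_get(pasid);		/* a second user takes a reference */

	WARN_ON(ioasid_free(pasid));	/* 2 -> 1: not freed yet */
	if (ioasid_free(pasid))		/* 1 -> 0: actually released */
		priv = NULL;		/* now safe to drop the private data */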

diff --git a/include/linux/ioasid.h b/include/linux/ioasid.h
index 6f000d7a0ddcd..609ba6f15b9e3 100644
--- a/include/linux/ioasid.h
+++ b/include/linux/ioasid.h
@@ -34,7 +34,8 @@ struct ioasid_allocator_ops {
 #if IS_ENABLED(CONFIG_IOASID)
 ioasid_t ioasid_alloc(struct ioasid_set *set, ioasid_t min, ioasid_t max,
  void *private);
-void ioasid_free(ioasid_t ioasid);
+void ioasid_get(ioasid_t ioasid);
+bool ioasid_free(ioasid_t ioasid);
 void *ioasid_find(struct ioasid_set *set, ioasid_t ioasid,
  bool (*getter)(void *));
 int ioasid_register_allocator(struct ioasid_allocator_ops *allocator);
@@ -48,10 +49,15 @@ static inline ioasid_t ioasid_alloc(struct ioasid_set *set, ioasid_t min,
return INVALID_IOASID;
 }
 
-static inline void ioasid_free(ioasid_t ioasid)
+static inline void ioasid_get(ioasid_t ioasid)
 {
 }
 
+static inline bool ioasid_free(ioasid_t ioasid)
+{
+   return false;
+}
+
 static inline void *ioasid_find(struct ioasid_set *set, ioasid_t ioasid,
bool (*getter)(void *))
 {
diff --git a/drivers/iommu/ioasid.c b/drivers/iommu/ioasid.c
index 0f8dd377aada3..46511ac53e0c8 100644
--- a/drivers/iommu/ioasid.c
+++ b/drivers/iommu/ioasid.c
@@ -15,6 +15,7 @@ struct ioasid_data {
struct ioasid_set *set;
void *private;
struct rcu_head rcu;
+   refcount_t refs;
 };
 
 /*
@@ -314,6 +315,7 @@ ioasid_t ioasid_alloc(struct ioasid_set *set, ioasid_t min, ioasid_t max,
 
data->set = set;
data->private = private;
+	refcount_set(&data->refs, 1);
 
/*
 * Custom allocator needs allocator data to perform platform specific
@@ -345,12 +347,33 @@ ioasid_t ioasid_alloc(struct ioasid_set *set, ioasid_t min, ioasid_t max,
 }
 EXPORT_SYMBOL_GPL(ioasid_alloc);
 
+/**
+ * ioasid_get - obtain a reference to the IOASID
+ */
+void ioasid_get(ioasid_t ioasid)
+{
+   struct ioasid_data *ioasid_data;
+
+	spin_lock(&ioasid_allocator_lock);
+	ioasid_data = xa_load(&active_allocator->xa, ioasid);
+	if (ioasid_data)
+		refcount_inc(&ioasid_data->refs);
+	spin_unlock(&ioasid_allocator_lock);
+}
+EXPORT_SYMBOL_GPL(ioasid_get);
+
 /**
  * ioasid_free - Free an IOASID
  * @ioasid: the ID to remove
+ *
+ * Put a reference to the IOASID, free it when the number of references drops to
+ * zero.
+ *
+ * Return: %true if the IOASID was freed, %false otherwise.
  */
-void ioasid_free(ioasid_t ioasid)
+bool ioasid_free(ioasid_t ioasid)
 {
+   bool free = false;
struct ioasid_data *ioasid_data;
 
	spin_lock(&ioasid_allocator_lock);
@@ -360,6 +383,10 @@ void ioasid_free(ioasid_t ioasid)
goto exit_unlock;
}
 
+	free = refcount_dec_and_test(&ioasid_data->refs);
+   if (!free)
+   goto exit_unlock;
+
active_allocator->ops->free(ioasid, active_allocator->ops->pdata);
/* Custom allocator needs additional steps to free the xa element */
if (active_allocator->flags & IOASID_ALLOCATOR_CUSTOM) {
@@ -369,6 +396,7 @@ void ioasid_free(ioasid_t ioasid)
 
 exit_unlock:
	spin_unlock(&ioasid_allocator_lock);
+   return free;
 }
 EXPORT_SYMBOL_GPL(ioasid_free);
 
-- 
2.26.2

___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


[PATCH v6 10/25] arm64: cpufeature: Export symbol read_sanitised_ftr_reg()

2020-04-30 Thread Jean-Philippe Brucker
The SMMUv3 driver would like to read the MMFR0 PARANGE field in order to
share CPU page tables with devices. Allow the driver to be built as a
module by exporting the read_sanitised_ftr_reg() cpufeature symbol.

Cc: Suzuki K Poulose 
Signed-off-by: Jean-Philippe Brucker 
---
 arch/arm64/kernel/cpufeature.c | 1 +
 1 file changed, 1 insertion(+)
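
For reference, the driver-side use later in this series looks roughly like
this sketch (the PARANGE extraction mirrors patch 11):

	u64 reg = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
	unsigned int par = cpuid_feature_extract_unsigned_field(reg,
					ID_AA64MMFR0_PARANGE_SHIFT);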

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 9fac745aa7bb2..5f6adbf4ae893 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -841,6 +841,7 @@ u64 read_sanitised_ftr_reg(u32 id)
BUG_ON(!regp);
return regp->sys_val;
 }
+EXPORT_SYMBOL_GPL(read_sanitised_ftr_reg);
 
 #define read_sysreg_case(r)\
case r: return read_sysreg_s(r)
-- 
2.26.2

___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


[PATCH v6 14/25] iommu/arm-smmu-v3: Enable broadcast TLB maintenance

2020-04-30 Thread Jean-Philippe Brucker
The SMMUv3 can handle invalidation targeted at TLB entries with shared
ASIDs. If the implementation supports broadcast TLB maintenance, enable it
and keep track of it in a feature bit. The SMMU will then be affected by
inner-shareable TLB invalidations from other agents.

A major side-effect of this change is that stage-2 translation contexts
are now affected by all invalidations by VMID. VMIDs are all shared and,
since the stage-2 page tables are not shared between CPU and SMMU, the
only ways to prevent over-invalidation are to either disable BTM or
allocate different VMIDs. This patch does not address the problem.

Signed-off-by: Jean-Philippe Brucker 
---
 drivers/iommu/arm-smmu-v3.c | 19 +--
 1 file changed, 17 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index 3a70d032d4e71..a562c4b243292 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -56,6 +56,7 @@
 #define IDR0_ASID16			(1 << 12)
 #define IDR0_ATS   (1 << 10)
 #define IDR0_HYP   (1 << 9)
+#define IDR0_BTM   (1 << 5)
 #define IDR0_COHACC			(1 << 4)
 #define IDR0_TTF   GENMASK(3, 2)
 #define IDR0_TTF_AARCH64   2
@@ -655,6 +656,7 @@ struct arm_smmu_device {
 #define ARM_SMMU_FEAT_VAX  (1 << 14)
 #define ARM_SMMU_FEAT_RANGE_INV(1 << 15)
 #define ARM_SMMU_FEAT_E2H  (1 << 16)
+#define ARM_SMMU_FEAT_BTM  (1 << 17)
u32 features;
 
 #define ARM_SMMU_OPT_SKIP_PREFETCH (1 << 0)
@@ -3809,11 +3811,14 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
writel_relaxed(reg, smmu->base + ARM_SMMU_CR1);
 
/* CR2 (random crap) */
-   reg = CR2_PTM | CR2_RECINVSID;
+   reg = CR2_RECINVSID;
 
if (smmu->features & ARM_SMMU_FEAT_E2H)
reg |= CR2_E2H;
 
+   if (!(smmu->features & ARM_SMMU_FEAT_BTM))
+   reg |= CR2_PTM;
+
writel_relaxed(reg, smmu->base + ARM_SMMU_CR2);
 
/* Stream table */
@@ -3924,6 +3929,7 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
 {
u32 reg;
bool coherent = smmu->features & ARM_SMMU_FEAT_COHERENCY;
+   bool vhe = cpus_have_cap(ARM64_HAS_VIRT_HOST_EXTN);
 
/* IDR0 */
reg = readl_relaxed(smmu->base + ARM_SMMU_IDR0);
@@ -3973,10 +3979,19 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
 
if (reg & IDR0_HYP) {
smmu->features |= ARM_SMMU_FEAT_HYP;
-   if (cpus_have_cap(ARM64_HAS_VIRT_HOST_EXTN))
+   if (vhe)
smmu->features |= ARM_SMMU_FEAT_E2H;
}
 
+   /*
+* If the CPU is using VHE, but the SMMU doesn't support it, the SMMU
+* will create TLB entries for NH-EL1 world and will miss the
+* broadcasted TLB invalidations that target EL2-E2H world. Don't enable
+* BTM in that case.
+*/
+   if (reg & IDR0_BTM && (!vhe || reg & IDR0_HYP))
+   smmu->features |= ARM_SMMU_FEAT_BTM;
+
/*
 * The coherency feature as set by FW is used in preference to the ID
 * register, but warn on mismatch.
-- 
2.26.2

___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


[PATCH v6 25/25] iommu/arm-smmu-v3: Add support for PRI

2020-04-30 Thread Jean-Philippe Brucker
For PCI devices that support it, enable the PRI capability and handle PRI
Page Requests with the generic fault handler. It is enabled on demand by
iommu_sva_device_init().

Signed-off-by: Jean-Philippe Brucker 
---
 drivers/iommu/arm-smmu-v3.c | 286 +---
 1 file changed, 236 insertions(+), 50 deletions(-)

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index eb32a7cb5e920..306c58ae90900 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -252,6 +252,7 @@
 #define STRTAB_STE_1_S1COR GENMASK_ULL(5, 4)
 #define STRTAB_STE_1_S1CSH GENMASK_ULL(7, 6)
 
+#define STRTAB_STE_1_PPAR  (1UL << 18)
 #define STRTAB_STE_1_S1STALLD  (1UL << 27)
 
 #define STRTAB_STE_1_EATS  GENMASK_ULL(29, 28)
@@ -382,6 +383,9 @@
 #define CMDQ_PRI_0_SID GENMASK_ULL(63, 32)
 #define CMDQ_PRI_1_GRPID   GENMASK_ULL(8, 0)
 #define CMDQ_PRI_1_RESP		GENMASK_ULL(13, 12)
+#define CMDQ_PRI_1_RESP_FAILURE	0UL
+#define CMDQ_PRI_1_RESP_INVALID	1UL
+#define CMDQ_PRI_1_RESP_SUCCESS	2UL
 
 #define CMDQ_RESUME_0_SID  GENMASK_ULL(63, 32)
 #define CMDQ_RESUME_0_RESP_TERM	0UL
@@ -454,12 +458,6 @@ module_param_named(disable_bypass, disable_bypass, bool, S_IRUGO);
 MODULE_PARM_DESC(disable_bypass,
	"Disable bypass streams such that incoming transactions from devices that are not attached to an iommu domain will report an abort back to the device and will not be allowed to pass through the SMMU.");
 
-enum pri_resp {
-   PRI_RESP_DENY = 0,
-   PRI_RESP_FAIL = 1,
-   PRI_RESP_SUCC = 2,
-};
-
 enum arm_smmu_msi_index {
EVTQ_MSI_INDEX,
GERROR_MSI_INDEX,
@@ -546,7 +544,7 @@ struct arm_smmu_cmdq_ent {
u32 sid;
u32 ssid;
u16 grpid;
-   enum pri_resp   resp;
+   u8  resp;
} pri;
 
#define CMDQ_OP_RESUME  0x44
@@ -624,6 +622,7 @@ struct arm_smmu_evtq {
 
 struct arm_smmu_priq {
struct arm_smmu_queue   q;
+   struct iopf_queue   *iopf;
 };
 
 /* High-level stream table and context descriptor structures */
@@ -757,6 +756,8 @@ struct arm_smmu_master {
unsigned intnum_streams;
boolats_enabled;
boolstall_enabled;
+   boolpri_supported;
+   boolprg_resp_needs_ssid;
boolsva_enabled;
struct list_headbonds;
unsigned intssid_bits;
@@ -1061,14 +1062,6 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
cmd[0] |= FIELD_PREP(CMDQ_PRI_0_SSID, ent->pri.ssid);
cmd[0] |= FIELD_PREP(CMDQ_PRI_0_SID, ent->pri.sid);
cmd[1] |= FIELD_PREP(CMDQ_PRI_1_GRPID, ent->pri.grpid);
-   switch (ent->pri.resp) {
-   case PRI_RESP_DENY:
-   case PRI_RESP_FAIL:
-   case PRI_RESP_SUCC:
-   break;
-   default:
-   return -EINVAL;
-   }
cmd[1] |= FIELD_PREP(CMDQ_PRI_1_RESP, ent->pri.resp);
break;
case CMDQ_OP_RESUME:
@@ -1648,6 +1641,7 @@ static int arm_smmu_page_response(struct device *dev,
 {
struct arm_smmu_cmdq_ent cmd = {0};
struct arm_smmu_master *master = dev_iommu_priv_get(dev);
+   bool pasid_valid = resp->flags & IOMMU_PAGE_RESP_PASID_VALID;
int sid = master->streams[0].id;
 
if (master->stall_enabled) {
@@ -1665,8 +1659,27 @@ static int arm_smmu_page_response(struct device *dev,
default:
return -EINVAL;
}
+   } else if (master->pri_supported) {
+   cmd.opcode  = CMDQ_OP_PRI_RESP;
+   cmd.substream_valid = pasid_valid &&
+ master->prg_resp_needs_ssid;
+   cmd.pri.sid = sid;
+   cmd.pri.ssid= resp->pasid;
+   cmd.pri.grpid   = resp->grpid;
+   switch (resp->code) {
+   case IOMMU_PAGE_RESP_FAILURE:
+   cmd.pri.resp = CMDQ_PRI_1_RESP_FAILURE;
+   break;
+   case IOMMU_PAGE_RESP_INVALID:
+   cmd.pri.resp = CMDQ_PRI_1_RESP_INVALID;
+   break;
+   case IOMMU_PAGE_RESP_SUCCESS:
+   cmd.pri.resp = CMDQ_PRI_1_RESP_SUCCESS;
+   break;
+   default:
+			return -EINVAL;

[PATCH v6 01/25] mm: Add a PASID field to mm_struct

2020-04-30 Thread Jean-Philippe Brucker
Some devices can tag their DMA requests with a 20-bit Process Address
Space ID (PASID), allowing them to access multiple address spaces. In
combination with recoverable I/O page faults (for example PCIe PRI),
PASID allows the IOMMU to share page tables with the MMU.

To make sure that a single PASID is allocated for each address space, as
required by Intel ENQCMD, store the PASID in the mm_struct. The IOMMU
driver is in charge of serializing modifications to the PASID field.

Signed-off-by: Jean-Philippe Brucker 
---
For the field's validity I'm thinking invalid PASID = 0. In ioasid.h we
define INVALID_IOASID as ~0U, but I think we can now change it to 0,
since Intel is now also reserving PASID #0 for Transactions without
PASID and AMD IOMMU uses GIoV for this too.
---
 include/linux/mm_types.h | 4 
 1 file changed, 4 insertions(+)
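
As an illustration of that contract (not part of this patch; the lock and
helper names are stand-ins for whatever the IOMMU driver uses):

	/* Illustration only: serialized allocate-or-reuse of mm->pasid */
	mutex_lock(&pasid_lock);
	if (!mm->pasid)
		mm->pasid = allocate_pasid();	/* first device bound to this mm */
	pasid = mm->pasid;			/* later bindings reuse it */
	mutex_unlock(&pasid_lock);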

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 4aba6c0c2ba80..8db6472758175 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -534,6 +534,10 @@ struct mm_struct {
atomic_long_t hugetlb_usage;
 #endif
struct work_struct async_put_work;
+#ifdef CONFIG_IOMMU_SUPPORT
+   /* Address space ID used by device DMA */
+   unsigned int pasid;
+#endif
} __randomize_layout;
 
/*
-- 
2.26.2

___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


[PATCH v6 17/25] iommu/arm-smmu-v3: Implement iommu_sva_bind/unbind()

2020-04-30 Thread Jean-Philippe Brucker
The sva_bind() function allows devices to access process address spaces
using a PASID (aka SSID).

(1) bind() allocates or gets an existing MMU notifier tied to the
(domain, mm) pair. Each mm gets one PASID.

(2) Any change to the address space calls invalidate_range() which sends
ATC invalidations (in a subsequent patch).

(3) When the process address space dies, the release() notifier disables
the CD to allow reclaiming the page tables. Since release() has to
    be light, we do not instruct device drivers to stop DMA here; we just
ignore incoming page faults.

To avoid any event 0x0a print (C_BAD_CD) we disable translation
without clearing CD.V. PCIe Translation Requests and Page Requests
are silently denied. Don't clear the R bit because the S bit can't
be cleared when STALL_MODEL==0b10 (forced), and clearing R without
clearing S is useless. Faulting transactions will stall and will be
aborted by the IOPF handler.

(4) After stopping DMA, the device driver releases the bond by calling
unbind(). We release the MMU notifier, free the PASID and the bond.

Three structures keep track of bonds:
* arm_smmu_bond: one per (device, mm) pair, the handle returned to the
  device driver for a bind() request.
* arm_smmu_mmu_notifier: one per (domain, mm) pair, deals with ATS/TLB
  invalidations and clearing the context descriptor on mm exit.
* arm_smmu_ctx_desc: one per mm, holds the pinned ASID and pgd.

Signed-off-by: Jean-Philippe Brucker 
---
v5->v6:
* Implement bind() directly instead of going through io_mm_ops
* Don't clear S and R bits in step (3), it doesn't work with
  STALL_FORCE.
---
 drivers/iommu/Kconfig   |   1 +
 drivers/iommu/arm-smmu-v3.c | 256 +++-
 2 files changed, 253 insertions(+), 4 deletions(-)
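
From a device driver's perspective, the lifecycle above maps onto the
generic SVA API roughly as follows (a sketch, error handling elided):

	struct iommu_sva *handle;

	handle = iommu_sva_bind_device(dev, current->mm, NULL);
	if (IS_ERR(handle))
		return PTR_ERR(handle);
	pasid = iommu_sva_get_pasid(handle);	/* program into the device */
	/* ... issue DMA tagged with the PASID ... */
	/* stop DMA first, then release the bond */
	iommu_sva_unbind_device(handle);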

diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index 1e64ee6592e16..f863c4562feeb 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -432,6 +432,7 @@ config ARM_SMMU_V3
tristate "ARM Ltd. System MMU Version 3 (SMMUv3) Support"
depends on ARM64
select IOMMU_API
+   select IOMMU_SVA
select IOMMU_IO_PGTABLE_LPAE
select GENERIC_MSI_IRQ_DOMAIN
help
diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index c7942d0540599..00e5b69bb81a5 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -24,6 +24,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -36,6 +37,7 @@
 #include 
 
 #include "io-pgtable-arm.h"
+#include "iommu-sva.h"
 
 /* MMIO registers */
 #define ARM_SMMU_IDR0  0x0
@@ -731,8 +733,31 @@ struct arm_smmu_domain {
 
struct list_headdevices;
spinlock_t  devices_lock;
+
+   struct mmu_notifier_ops mn_ops;
 };
 
+struct arm_smmu_mmu_notifier {
+   struct mmu_notifier mn;
+   struct arm_smmu_ctx_desc*cd;
+   boolcleared;
+   refcount_t  refs;
+   struct arm_smmu_domain  *domain;
+};
+
+#define mn_to_smmu(mn) container_of(mn, struct arm_smmu_mmu_notifier, mn)
+
+struct arm_smmu_bond {
+   struct iommu_svasva;
+   struct mm_struct*mm;
+   struct arm_smmu_mmu_notifier*smmu_mn;
+   struct list_headlist;
+   refcount_t  refs;
+};
+
+#define sva_to_bond(handle) \
+   container_of(handle, struct arm_smmu_bond, sva)
+
 struct arm_smmu_option_prop {
u32 opt;
const char *prop;
@@ -742,6 +767,13 @@ static DEFINE_XARRAY_ALLOC1(asid_xa);
 static DEFINE_SPINLOCK(contexts_lock);
 static DEFINE_MUTEX(arm_smmu_sva_lock);
 
+/*
+ * When a process dies, DMA is still running but we need to clear the pgd. If we
+ * simply cleared the valid bit from the context descriptor, we'd get event 0x0a
+ * which is not recoverable.
+ */
+static struct arm_smmu_ctx_desc invalid_cd = { 0 };
+
 static struct arm_smmu_option_prop arm_smmu_options[] = {
{ ARM_SMMU_OPT_SKIP_PREFETCH, "hisilicon,broken-prefetch-cmd" },
{ ARM_SMMU_OPT_PAGE0_REGS_ONLY, "cavium,cn9900-broken-page1-regspace"},
@@ -1652,7 +1684,9 @@ static int __arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain,
 * (2) Install a secondary CD, for SID+SSID traffic.
 * (3) Update ASID of a CD. Atomically write the first 64 bits of the
 * CD, then invalidate the old entry and mappings.
-* (4) Remove a secondary CD.
+* (4) Quiesce the context without clearing the valid bit. Disable
+* translation, and ignore any translation fault.
+* (5) Remove a secondary CD.
 */
u64 val;
bool cd_live;
@@ -1669,8 +1703,10 @@ static int __arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain,
val = le64_to_cpu(cdptr[0]);
	cd_live = !!(val & CTXDESC_CD_0_V);

[PATCH v6 21/25] dt-bindings: document stall property for IOMMU masters

2020-04-30 Thread Jean-Philippe Brucker
On ARM systems, some platform devices behind an IOMMU may support stall,
which is the ability to recover from page faults. Let the firmware tell us
when a device supports stall.

Reviewed-by: Rob Herring 
Signed-off-by: Jean-Philippe Brucker 
---
 .../devicetree/bindings/iommu/iommu.txt| 18 ++
 1 file changed, 18 insertions(+)
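
As an illustration (not part of the patch), a stall-capable master could be
described like this, with a purely hypothetical device node:

	accel@ff100000 {
		compatible = "vendor,stall-capable-accel";
		reg = <0xff100000 0x1000>;
		iommus = <&smmu 0x42>;
		pasid-num-bits = <5>;
		dma-can-stall;
	};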

diff --git a/Documentation/devicetree/bindings/iommu/iommu.txt b/Documentation/devicetree/bindings/iommu/iommu.txt
index 3c36334e4f942..26ba9e530f138 100644
--- a/Documentation/devicetree/bindings/iommu/iommu.txt
+++ b/Documentation/devicetree/bindings/iommu/iommu.txt
@@ -92,6 +92,24 @@ Optional properties:
   tagging DMA transactions with an address space identifier. By default,
   this is 0, which means that the device only has one address space.
 
+- dma-can-stall: When present, the master can wait for a transaction to
+  complete for an indefinite amount of time. Upon translation fault some
+  IOMMUs, instead of aborting the translation immediately, may first
+  notify the driver and keep the transaction in flight. This allows the OS
+  to inspect the fault and, for example, make physical pages resident
+  before updating the mappings and completing the transaction. Such an IOMMU
+  accepts a limited number of simultaneous stalled transactions before
+  having to either put back-pressure on the master, or abort new faulting
+  transactions.
+
+  Firmware has to opt in to stalling, because most buses and masters don't
+  support it. In particular it isn't compatible with PCI, where
+  transactions have to complete before a time limit. More generally it
+  won't work in systems and masters that haven't been designed for
+  stalling. For example the OS, in order to handle a stalled transaction,
+  may attempt to retrieve pages from secondary storage in a stalled
+  domain, leading to a deadlock.
+
 
 Notes:
 ==
-- 
2.26.2

___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


[PATCH v6 18/25] iommu/arm-smmu-v3: Hook up ATC invalidation to mm ops

2020-04-30 Thread Jean-Philippe Brucker
The invalidate_range() notifier is called for any change to the address
space. Perform the required ATC invalidations.

Signed-off-by: Jean-Philippe Brucker 
---
 drivers/iommu/arm-smmu-v3.c | 56 ++---
 1 file changed, 46 insertions(+), 10 deletions(-)
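
For orientation, the two callbacks touched here hang off the notifier ops
that patch 17 stores per domain (mn_ops); shown here as a static table for
brevity, with free_notifier being an assumption rather than part of this
patch:

	static const struct mmu_notifier_ops arm_smmu_mmu_notifier_ops = {
		.invalidate_range	= arm_smmu_mm_invalidate_range,
		.release		= arm_smmu_mm_release,
		.free_notifier		= arm_smmu_mmu_notifier_free,	/* assumed */
	};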

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index 00e5b69bb81a5..c65937d953b5f 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -742,7 +742,7 @@ struct arm_smmu_mmu_notifier {
struct arm_smmu_ctx_desc*cd;
boolcleared;
refcount_t  refs;
-   struct arm_smmu_domain  *domain;
+   struct arm_smmu_domain __rcu*domain;
 };
 
 #define mn_to_smmu(mn) container_of(mn, struct arm_smmu_mmu_notifier, mn)
@@ -2396,6 +2396,20 @@ arm_smmu_atc_inv_to_cmd(int ssid, unsigned long iova, size_t size,
size_t inval_grain_shift = 12;
unsigned long page_start, page_end;
 
+   /*
+* ATS and PASID:
+*
+* If substream_valid is clear, the PCIe TLP is sent without a PASID
+* prefix. In that case all ATC entries within the address range are
+* invalidated, including those that were requested with a PASID! There
+* is no way to invalidate only entries without PASID.
+*
+* When using STRTAB_STE_1_S1DSS_SSID0 (reserving CD 0 for non-PASID
+* traffic), translation requests without PASID create ATC entries
+* without PASID, which must be invalidated with substream_valid clear.
+* This has the unpleasant side-effect of invalidating all PASID-tagged
+* ATC entries within the address range.
+*/
*cmd = (struct arm_smmu_cmdq_ent) {
.opcode = CMDQ_OP_ATC_INV,
.substream_valid= !!ssid,
@@ -2439,12 +2453,12 @@ arm_smmu_atc_inv_to_cmd(int ssid, unsigned long iova, size_t size,
cmd->atc.size   = log2_span;
 }
 
-static int arm_smmu_atc_inv_master(struct arm_smmu_master *master)
+static int arm_smmu_atc_inv_master(struct arm_smmu_master *master, int ssid)
 {
int i;
struct arm_smmu_cmdq_ent cmd;
 
-	arm_smmu_atc_inv_to_cmd(0, 0, 0, &cmd);
+	arm_smmu_atc_inv_to_cmd(ssid, 0, 0, &cmd);
 
for (i = 0; i < master->num_sids; i++) {
cmd.atc.sid = master->sids[i];
@@ -2958,7 +2972,7 @@ static void arm_smmu_disable_ats(struct arm_smmu_master *master)
 * ATC invalidation via the SMMU.
 */
wmb();
-   arm_smmu_atc_inv_master(master);
+   arm_smmu_atc_inv_master(master, 0);
	atomic_dec(&smmu_domain->nr_ats_masters);
 }
 
@@ -3187,7 +3201,22 @@ static void arm_smmu_mm_invalidate_range(struct mmu_notifier *mn,
 struct mm_struct *mm,
 unsigned long start, unsigned long end)
 {
-   /* TODO: invalidate ATS */
+   struct arm_smmu_mmu_notifier *smmu_mn = mn_to_smmu(mn);
+   struct arm_smmu_domain *smmu_domain;
+
+   rcu_read_lock();
+   smmu_domain = rcu_dereference(smmu_mn->domain);
+   if (smmu_domain) {
+   /*
+* Ensure that mm->pasid is valid. Pairs with the
+* smp_store_release() from rcu_assign_pointer() in
+* __arm_smmu_sva_bind()
+*/
+   smp_rmb();
+   arm_smmu_atc_inv_domain(smmu_domain, mm->pasid, start,
+   end - start + 1);
+   }
+   rcu_read_unlock();
 }
 
 static void arm_smmu_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)
@@ -3201,7 +3230,8 @@ static void arm_smmu_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)
return;
}
 
-   smmu_domain = smmu_mn->domain;
+   smmu_domain = rcu_dereference_protected(smmu_mn->domain,
+						lockdep_is_held(&arm_smmu_sva_lock));
 
/*
 * DMA may still be running. Keep the cd valid but disable
@@ -3210,7 +3240,7 @@ static void arm_smmu_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)
	arm_smmu_write_ctx_desc(smmu_domain, mm->pasid, &invalid_cd);
 
arm_smmu_tlb_inv_asid(smmu_domain->smmu, smmu_mn->cd->asid);
-   /* TODO: invalidate ATS */
+   arm_smmu_atc_inv_domain(smmu_domain, mm->pasid, 0, 0);
 
smmu_mn->cleared = true;
	mutex_unlock(&arm_smmu_sva_lock);
@@ -3251,7 +3281,8 @@ __arm_smmu_sva_bind(struct device *dev, struct mm_struct *mm)
return ERR_CAST(mn);
 
smmu_mn = mn_to_smmu(mn);
-   if (smmu_mn->domain)
+   if (rcu_dereference_protected(smmu_mn->domain,
+				      lockdep_is_held(&arm_smmu_sva_lock)))
		refcount_inc(&smmu_mn->refs);
 
bond = kzalloc(sizeof(*bond), GFP_KERNEL);
@@ -3277,7 +3308,11 @@ __arm_smmu_sva_bind(struct device *dev, struct mm_struct *mm)
 

[PATCH v6 24/25] PCI/ATS: Export PRI functions

2020-04-30 Thread Jean-Philippe Brucker
The SMMUv3 driver uses pci_{enable,disable}_pri() and related
functions. Export those functions to allow the driver to be built as a
module.

Acked-by: Bjorn Helgaas 
Reviewed-by: Kuppuswamy Sathyanarayanan 

Signed-off-by: Jean-Philippe Brucker 
---
 drivers/pci/ats.c | 4 
 1 file changed, 4 insertions(+)

diff --git a/drivers/pci/ats.c b/drivers/pci/ats.c
index bbfd0d42b8b97..fc8fc6fc8bd55 100644
--- a/drivers/pci/ats.c
+++ b/drivers/pci/ats.c
@@ -197,6 +197,7 @@ void pci_pri_init(struct pci_dev *pdev)
if (status & PCI_PRI_STATUS_PASID)
pdev->pasid_required = 1;
 }
+EXPORT_SYMBOL_GPL(pci_pri_init);
 
 /**
  * pci_enable_pri - Enable PRI capability
@@ -243,6 +244,7 @@ int pci_enable_pri(struct pci_dev *pdev, u32 reqs)
 
return 0;
 }
+EXPORT_SYMBOL_GPL(pci_enable_pri);
 
 /**
  * pci_disable_pri - Disable PRI capability
@@ -322,6 +324,7 @@ int pci_reset_pri(struct pci_dev *pdev)
 
return 0;
 }
+EXPORT_SYMBOL_GPL(pci_reset_pri);
 
 /**
  * pci_prg_resp_pasid_required - Return PRG Response PASID Required bit
@@ -337,6 +340,7 @@ int pci_prg_resp_pasid_required(struct pci_dev *pdev)
 
return pdev->pasid_required;
 }
+EXPORT_SYMBOL_GPL(pci_prg_resp_pasid_required);
 #endif /* CONFIG_PCI_PRI */
 
 #ifdef CONFIG_PCI_PASID
-- 
2.26.2

___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


[PATCH v6 09/25] iommu/arm-smmu-v3: Manage ASIDs with xarray

2020-04-30 Thread Jean-Philippe Brucker
In preparation for sharing some ASIDs with the CPU, use a global xarray to
store ASIDs and their context. ASID#0 is now reserved, and the ASID
space is global.

Signed-off-by: Jean-Philippe Brucker 
---
 drivers/iommu/arm-smmu-v3.c | 27 ++-
 1 file changed, 18 insertions(+), 9 deletions(-)

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index 60a415e8e2b6f..96ee60002e85e 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -664,7 +664,6 @@ struct arm_smmu_device {
 
 #define ARM_SMMU_MAX_ASIDS (1 << 16)
unsigned intasid_bits;
-   DECLARE_BITMAP(asid_map, ARM_SMMU_MAX_ASIDS);
 
 #define ARM_SMMU_MAX_VMIDS (1 << 16)
unsigned intvmid_bits;
@@ -724,6 +723,8 @@ struct arm_smmu_option_prop {
const char *prop;
 };
 
+static DEFINE_XARRAY_ALLOC1(asid_xa);
+
 static struct arm_smmu_option_prop arm_smmu_options[] = {
{ ARM_SMMU_OPT_SKIP_PREFETCH, "hisilicon,broken-prefetch-cmd" },
{ ARM_SMMU_OPT_PAGE0_REGS_ONLY, "cavium,cn9900-broken-page1-regspace"},
@@ -1763,6 +1764,14 @@ static void arm_smmu_free_cd_tables(struct arm_smmu_domain *smmu_domain)
cdcfg->cdtab = NULL;
 }
 
+static void arm_smmu_free_asid(struct arm_smmu_ctx_desc *cd)
+{
+   if (!cd->asid)
+   return;
+
+	xa_erase(&asid_xa, cd->asid);
+}
+
 /* Stream table manipulation functions */
 static void
 arm_smmu_write_strtab_l1_desc(__le64 *dst, struct arm_smmu_strtab_l1_desc *desc)
@@ -2448,10 +2457,9 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
		struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg;
 
-   if (cfg->cdcfg.cdtab) {
+   if (cfg->cdcfg.cdtab)
arm_smmu_free_cd_tables(smmu_domain);
-   arm_smmu_bitmap_free(smmu->asid_map, cfg->cd.asid);
-   }
+		arm_smmu_free_asid(&cfg->cd);
} else {
		struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
if (cfg->vmid)
@@ -2466,14 +2474,15 @@ static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain,
   struct io_pgtable_cfg *pgtbl_cfg)
 {
int ret;
-   int asid;
+   u32 asid;
struct arm_smmu_device *smmu = smmu_domain->smmu;
	struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg;
	typeof(&pgtbl_cfg->arm_lpae_s1_cfg.tcr) tcr = &pgtbl_cfg->arm_lpae_s1_cfg.tcr;
 
-   asid = arm_smmu_bitmap_alloc(smmu->asid_map, smmu->asid_bits);
-   if (asid < 0)
-   return asid;
+	ret = xa_alloc(&asid_xa, &asid, &cfg->cd,
+  XA_LIMIT(1, (1 << smmu->asid_bits) - 1), GFP_KERNEL);
+   if (ret)
+   return ret;
 
cfg->s1cdmax = master->ssid_bits;
 
@@ -2506,7 +2515,7 @@ static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain,
 out_free_cd_tables:
arm_smmu_free_cd_tables(smmu_domain);
 out_free_asid:
-   arm_smmu_bitmap_free(smmu->asid_map, asid);
+	arm_smmu_free_asid(&cfg->cd);
return ret;
 }
 
-- 
2.26.2

___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


[PATCH v6 19/25] iommu/arm-smmu-v3: Add support for Hardware Translation Table Update

2020-04-30 Thread Jean-Philippe Brucker
If the SMMU supports it and the kernel was built with HTTU support, enable
hardware update of access and dirty flags. This is essential for shared
page tables, to reduce the number of access faults on the fault queue.

We can enable HTTU even if CPUs don't support it, because the kernel
always checks for the HW dirty bit and updates the PTE flags atomically.

Signed-off-by: Jean-Philippe Brucker 
---
 drivers/iommu/arm-smmu-v3.c | 24 +++-
 1 file changed, 23 insertions(+), 1 deletion(-)

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index c65937d953b5f..240cd0bc00e62 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -58,6 +58,8 @@
 #define IDR0_ASID16			(1 << 12)
 #define IDR0_ATS   (1 << 10)
 #define IDR0_HYP   (1 << 9)
+#define IDR0_HD				(1 << 7)
+#define IDR0_HA				(1 << 6)
 #define IDR0_BTM   (1 << 5)
 #define IDR0_COHACC			(1 << 4)
 #define IDR0_TTF   GENMASK(3, 2)
@@ -309,6 +311,9 @@
 #define CTXDESC_CD_0_TCR_IPS   GENMASK_ULL(34, 32)
 #define CTXDESC_CD_0_TCR_TBI0  (1ULL << 38)
 
+#define CTXDESC_CD_0_TCR_HA		(1UL << 43)
+#define CTXDESC_CD_0_TCR_HD		(1UL << 42)
+
 #define CTXDESC_CD_0_AA64  (1UL << 41)
 #define CTXDESC_CD_0_S (1UL << 44)
 #define CTXDESC_CD_0_R (1UL << 45)
@@ -660,6 +665,8 @@ struct arm_smmu_device {
 #define ARM_SMMU_FEAT_E2H  (1 << 16)
 #define ARM_SMMU_FEAT_BTM  (1 << 17)
 #define ARM_SMMU_FEAT_SVA  (1 << 18)
+#define ARM_SMMU_FEAT_HA   (1 << 19)
+#define ARM_SMMU_FEAT_HD   (1 << 20)
u32 features;
 
 #define ARM_SMMU_OPT_SKIP_PREFETCH (1 << 0)
@@ -1715,10 +1722,17 @@ static int __arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain,
 * this substream's traffic
 */
} else { /* (1) and (2) */
+   u64 tcr = cd->tcr;
+
cdptr[1] = cpu_to_le64(cd->ttbr & CTXDESC_CD_1_TTB0_MASK);
cdptr[2] = 0;
cdptr[3] = cpu_to_le64(cd->mair);
 
+   if (!(smmu->features & ARM_SMMU_FEAT_HD))
+   tcr &= ~CTXDESC_CD_0_TCR_HD;
+   if (!(smmu->features & ARM_SMMU_FEAT_HA))
+   tcr &= ~CTXDESC_CD_0_TCR_HA;
+
/*
 * STE is live, and the SMMU might read dwords of this CD in any
 * order. Ensure that it observes valid values before reading
@@ -1726,7 +1740,7 @@ static int __arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain,
 */
arm_smmu_sync_cd(smmu_domain, ssid, true);
 
-   val = cd->tcr |
+   val = tcr |
 #ifdef __BIG_ENDIAN
CTXDESC_CD_0_ENDI |
 #endif
@@ -1965,10 +1979,12 @@ static struct arm_smmu_ctx_desc *arm_smmu_alloc_shared_cd(struct mm_struct *mm)
return old_cd;
}
 
+   /* HA and HD will be filtered out later if not supported by the SMMU */
tcr = FIELD_PREP(CTXDESC_CD_0_TCR_T0SZ, 64ULL - VA_BITS) |
  FIELD_PREP(CTXDESC_CD_0_TCR_IRGN0, ARM_LPAE_TCR_RGN_WBWA) |
  FIELD_PREP(CTXDESC_CD_0_TCR_ORGN0, ARM_LPAE_TCR_RGN_WBWA) |
  FIELD_PREP(CTXDESC_CD_0_TCR_SH0, ARM_LPAE_TCR_SH_IS) |
+ CTXDESC_CD_0_TCR_HA | CTXDESC_CD_0_TCR_HD |
  CTXDESC_CD_0_TCR_EPD1 | CTXDESC_CD_0_AA64;
 
switch (PAGE_SIZE) {
@@ -4461,6 +4477,12 @@ static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
smmu->features |= ARM_SMMU_FEAT_E2H;
}
 
+   if (reg & (IDR0_HA | IDR0_HD)) {
+   smmu->features |= ARM_SMMU_FEAT_HA;
+   if (reg & IDR0_HD)
+   smmu->features |= ARM_SMMU_FEAT_HD;
+   }
+
/*
 * If the CPU is using VHE, but the SMMU doesn't support it, the SMMU
 * will create TLB entries for NH-EL1 world and will miss the
-- 
2.26.2

___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


[PATCH v6 06/25] arm64: mm: Add asid_gen_match() helper

2020-04-30 Thread Jean-Philippe Brucker
Add a macro to check if an ASID is from the current generation, since a
subsequent patch will introduce a third user for this test.

Signed-off-by: Jean-Philippe Brucker 
---
 arch/arm64/mm/context.c | 8 +---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index 9b26f9a88724f..d702d60e64dab 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -92,6 +92,9 @@ static void set_reserved_asid_bits(void)
bitmap_clear(asid_map, 0, NUM_USER_ASIDS);
 }
 
+#define asid_gen_match(asid) \
+	(!(((asid) ^ atomic64_read(&asid_generation)) >> asid_bits))
+
 static void flush_context(void)
 {
int i;
@@ -220,8 +223,7 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
 *   because atomic RmWs are totally ordered for a given location.
 */
	old_active_asid = atomic64_read(&per_cpu(active_asids, cpu));
-   if (old_active_asid &&
-	    !((asid ^ atomic64_read(&asid_generation)) >> asid_bits) &&
+   if (old_active_asid && asid_gen_match(asid) &&
	    atomic64_cmpxchg_relaxed(&per_cpu(active_asids, cpu),
 old_active_asid, asid))
goto switch_mm_fastpath;
@@ -229,7 +231,7 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
	raw_spin_lock_irqsave(&cpu_asid_lock, flags);
/* Check that our ASID belongs to the current generation. */
	asid = atomic64_read(&mm->context.id);
-	if ((asid ^ atomic64_read(&asid_generation)) >> asid_bits) {
+   if (!asid_gen_match(asid)) {
asid = new_context(mm);
		atomic64_set(&mm->context.id, asid);
}
-- 
2.26.2

___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


[PATCH v6 00/25] iommu: Shared Virtual Addressing for SMMUv3

2020-04-30 Thread Jean-Philippe Brucker
Shared Virtual Addressing (SVA) allows sharing process page tables with
devices using the IOMMU, PASIDs and I/O page faults. Add SVA support to
the Arm SMMUv3 driver.

Since v5 [1]:

* Added patches 1-3. Patch 1 adds a PASID field to mm_struct as
  discussed in [1] and [2]. This is also needed for Intel ENQCMD. Patch
  2 adds refcounts to IOASID and patch 3 adds a couple of helpers to
  allocate the PASID.

* Dropped most of iommu-sva.c. After getting rid of io_mm following
  review of v5, there wasn't enough generic code left to justify the
  indirect branch overhead of io_mm_ops in the MMU notifiers. I ended up
  with more glue than useful code, and couldn't find an easy way to deal
  with domains in the SMMU driver (we keep PASID tables per domain,
  while x86 keeps them per device). The direct approach in patch 17 is
  nicer and a little easier to read. The SMMU driver only gained 160
  lines, while iommu-sva lost 470 lines.

  As a result I dropped the MMU notifier patch.

  Jacob, one upside of this rework is that we now free ioasids in
  blocking context, which might help with your addition of notifiers to
  ioasid.c

* Simplified io-pgfault a bit, since flush() isn't called from the mm
  exit path anymore.

* Fixed a bug in patch 17 (don't clear the stall bit when stall is
  forced).

You can find the latest version on https://jpbrucker.net/git/linux
branch sva/current, and sva/zip-devel for the Hisilicon zip accelerator.

[1] https://lore.kernel.org/linux-iommu/20200414170252.714402-1-jean-phili...@linaro.org/
[2] https://lore.kernel.org/linux-iommu/1585596788-193989-6-git-send-email-fenghua...@intel.com/

Jean-Philippe Brucker (25):
  mm: Add a PASID field to mm_struct
  iommu/ioasid: Add ioasid references
  iommu/sva: Add PASID helpers
  iommu: Add a page fault handler
  iommu/iopf: Handle mm faults
  arm64: mm: Add asid_gen_match() helper
  arm64: mm: Pin down ASIDs for sharing mm with devices
  iommu/io-pgtable-arm: Move some definitions to a header
  iommu/arm-smmu-v3: Manage ASIDs with xarray
  arm64: cpufeature: Export symbol read_sanitised_ftr_reg()
  iommu/arm-smmu-v3: Share process page tables
  iommu/arm-smmu-v3: Seize private ASID
  iommu/arm-smmu-v3: Add support for VHE
  iommu/arm-smmu-v3: Enable broadcast TLB maintenance
  iommu/arm-smmu-v3: Add SVA feature checking
  iommu/arm-smmu-v3: Add SVA device feature
  iommu/arm-smmu-v3: Implement iommu_sva_bind/unbind()
  iommu/arm-smmu-v3: Hook up ATC invalidation to mm ops
  iommu/arm-smmu-v3: Add support for Hardware Translation Table Update
  iommu/arm-smmu-v3: Maintain a SID->device structure
  dt-bindings: document stall property for IOMMU masters
  iommu/arm-smmu-v3: Add stall support for platform devices
  PCI/ATS: Add PRI stubs
  PCI/ATS: Export PRI functions
  iommu/arm-smmu-v3: Add support for PRI

 drivers/iommu/Kconfig |   11 +
 drivers/iommu/Makefile|2 +
 .../devicetree/bindings/iommu/iommu.txt   |   18 +
 arch/arm64/include/asm/mmu.h  |1 +
 arch/arm64/include/asm/mmu_context.h  |   11 +-
 drivers/iommu/io-pgtable-arm.h|   30 +
 drivers/iommu/iommu-sva.h |   15 +
 include/linux/ioasid.h|   10 +-
 include/linux/iommu.h |   53 +
 include/linux/mm_types.h  |4 +
 include/linux/pci-ats.h   |8 +
 arch/arm64/kernel/cpufeature.c|1 +
 arch/arm64/mm/context.c   |  103 +-
 drivers/iommu/arm-smmu-v3.c   | 1554 +++--
 drivers/iommu/io-pgfault.c|  458 +
 drivers/iommu/io-pgtable-arm.c|   27 +-
 drivers/iommu/ioasid.c|   30 +-
 drivers/iommu/iommu-sva.c |   85 +
 drivers/iommu/of_iommu.c  |5 +-
 drivers/pci/ats.c |4 +
 MAINTAINERS   |3 +-
 21 files changed, 2275 insertions(+), 158 deletions(-)
 create mode 100644 drivers/iommu/io-pgtable-arm.h
 create mode 100644 drivers/iommu/iommu-sva.h
 create mode 100644 drivers/iommu/io-pgfault.c
 create mode 100644 drivers/iommu/iommu-sva.c

-- 
2.26.2

___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


[PATCH v6 03/25] iommu/sva: Add PASID helpers

2020-04-30 Thread Jean-Philippe Brucker
Let IOMMU drivers allocate a single PASID per mm. Store the mm in the
IOASID set to allow refcounting and searching mm by PASID, when handling
an I/O page fault.

Signed-off-by: Jean-Philippe Brucker 
---
 drivers/iommu/Kconfig |  5 +++
 drivers/iommu/Makefile|  1 +
 drivers/iommu/iommu-sva.h | 15 +++
 drivers/iommu/iommu-sva.c | 85 +++
 4 files changed, 106 insertions(+)
 create mode 100644 drivers/iommu/iommu-sva.h
 create mode 100644 drivers/iommu/iommu-sva.c
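
Patch 17 ends up using these helpers roughly as follows on bind and unbind
(condensed; master->ssid_bits is the device's PASID width):

	ret = iommu_sva_alloc_pasid(mm, 1, (1 << master->ssid_bits) - 1);
	if (ret)
		return ERR_PTR(ret);
	/* ... install mm->pasid in the context descriptor table ... */

	iommu_sva_free_pasid(mm);	/* on unbind, drops the reference */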

diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index 58b4a4dbfc78b..5327ec663dea1 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -102,6 +102,11 @@ config IOMMU_DMA
select IRQ_MSI_IOMMU
select NEED_SG_DMA_LENGTH
 
+# Shared Virtual Addressing library
+config IOMMU_SVA
+   bool
+   select IOASID
+
 config FSL_PAMU
bool "Freescale IOMMU support"
depends on PCI
diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
index 9f33fdb3bb051..40c800dd4e3ef 100644
--- a/drivers/iommu/Makefile
+++ b/drivers/iommu/Makefile
@@ -37,3 +37,4 @@ obj-$(CONFIG_S390_IOMMU) += s390-iommu.o
 obj-$(CONFIG_QCOM_IOMMU) += qcom_iommu.o
 obj-$(CONFIG_HYPERV_IOMMU) += hyperv-iommu.o
 obj-$(CONFIG_VIRTIO_IOMMU) += virtio-iommu.o
+obj-$(CONFIG_IOMMU_SVA) += iommu-sva.o
diff --git a/drivers/iommu/iommu-sva.h b/drivers/iommu/iommu-sva.h
new file mode 100644
index 0..78f806fcacbe3
--- /dev/null
+++ b/drivers/iommu/iommu-sva.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * SVA library for IOMMU drivers
+ */
+#ifndef _IOMMU_SVA_H
+#define _IOMMU_SVA_H
+
+#include 
+#include 
+
+int iommu_sva_alloc_pasid(struct mm_struct *mm, ioasid_t min, ioasid_t max);
+void iommu_sva_free_pasid(struct mm_struct *mm);
+struct mm_struct *iommu_sva_find(ioasid_t pasid);
+
+#endif /* _IOMMU_SVA_H */
diff --git a/drivers/iommu/iommu-sva.c b/drivers/iommu/iommu-sva.c
new file mode 100644
index 0..3e07b71bde918
--- /dev/null
+++ b/drivers/iommu/iommu-sva.c
@@ -0,0 +1,85 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Helpers for IOMMU drivers implementing SVA
+ */
+#include 
+#include 
+
+#include "iommu-sva.h"
+
+static DEFINE_MUTEX(iommu_sva_lock);
+static DECLARE_IOASID_SET(shared_pasid);
+
+/**
+ * iommu_sva_alloc_pasid - Allocate a PASID for the mm
+ * @mm: the mm
+ * @min: minimum PASID value (inclusive)
+ * @max: maximum PASID value (inclusive)
+ *
+ * Try to allocate a PASID for this mm, or take a reference to the existing one
+ * provided it fits within the [min, max] range. On success the PASID is
+ * available in mm->pasid, and must be released with iommu_sva_free_pasid().
+ *
+ * Returns 0 on success and < 0 on error.
+ */
+int iommu_sva_alloc_pasid(struct mm_struct *mm, ioasid_t min, ioasid_t max)
+{
+   int ret = 0;
+   ioasid_t pasid;
+
+   if (min == INVALID_IOASID || max == INVALID_IOASID ||
+   min == 0 || max < min)
+   return -EINVAL;
+
+	mutex_lock(&iommu_sva_lock);
+   if (mm->pasid) {
+   if (mm->pasid >= min && mm->pasid <= max)
+   ioasid_get(mm->pasid);
+   else
+   ret = -EOVERFLOW;
+   } else {
+		pasid = ioasid_alloc(&shared_pasid, min, max, mm);
+   if (pasid == INVALID_IOASID)
+   ret = -ENOMEM;
+   else
+   mm->pasid = pasid;
+   }
+	mutex_unlock(&iommu_sva_lock);
+   return ret;
+}
+EXPORT_SYMBOL_GPL(iommu_sva_alloc_pasid);
+
+/**
+ * iommu_sva_free_pasid - Release the mm's PASID
+ * @mm: the mm.
+ *
+ * Drop one reference to a PASID allocated with iommu_sva_alloc_pasid()
+ */
+void iommu_sva_free_pasid(struct mm_struct *mm)
+{
+	mutex_lock(&iommu_sva_lock);
+	if (ioasid_free(mm->pasid))
+		mm->pasid = 0;
+	mutex_unlock(&iommu_sva_lock);
+}
+EXPORT_SYMBOL_GPL(iommu_sva_free_pasid);
+
+/* ioasid wants a void * argument */
+static bool __mmget_not_zero(void *mm)
+{
+   return mmget_not_zero(mm);
+}
+
+/**
+ * iommu_sva_find() - Find mm associated to the given PASID
+ * @pasid: Process Address Space ID assigned to the mm
+ *
+ * On success a reference to the mm is taken, and must be released with mmput().
+ *
+ * Returns the mm corresponding to this PASID, or an error if not found.
+ */
+struct mm_struct *iommu_sva_find(ioasid_t pasid)
+{
+	return ioasid_find(&shared_pasid, pasid, __mmget_not_zero);
+}
+EXPORT_SYMBOL_GPL(iommu_sva_find);
-- 
2.26.2

___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


Re: [Intel-gfx] [RFC 06/17] drm: i915: fix sg_table nents vs. orig_nents misuse

2020-04-30 Thread Marek Szyprowski
Hi

On 28.04.2020 16:27, Tvrtko Ursulin wrote:
>
> On 28/04/2020 14:19, Marek Szyprowski wrote:
>> The Documentation/DMA-API-HOWTO.txt states that dma_map_sg returns the
>> number of the created entries in the DMA address space. However the
>> subsequent calls to dma_sync_sg_for_{device,cpu} and dma_unmap_sg must
>> be called with the original number of entries passed to dma_map_sg. The
>> sg_table->nents in turn holds the result of the dma_map_sg call as
>> stated in include/linux/scatterlist.h. Adapt the code to obey those
>> rules.
>>
>> Signed-off-by: Marek Szyprowski 
>> ---
>>   drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c   | 13 +++--
>>   drivers/gpu/drm/i915/gem/i915_gem_internal.c |  4 ++--
>>   drivers/gpu/drm/i915/gem/i915_gem_region.c   |  4 ++--
>>   drivers/gpu/drm/i915/gem/i915_gem_shmem.c    |  5 +++--
>>   drivers/gpu/drm/i915/gem/selftests/huge_pages.c  | 10 +-
>>   drivers/gpu/drm/i915/gem/selftests/mock_dmabuf.c |  5 +++--
>>   drivers/gpu/drm/i915/gt/intel_ggtt.c | 12 ++--
>>   drivers/gpu/drm/i915/i915_gem_gtt.c  | 12 +++-
>>   drivers/gpu/drm/i915/i915_scatterlist.c  |  4 ++--
>>   drivers/gpu/drm/i915/selftests/scatterlist.c |  8 
>>   10 files changed, 41 insertions(+), 36 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c 
>> b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>> index 7db5a79..d829852 100644
>> --- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>> @@ -36,21 +36,22 @@ static struct sg_table *i915_gem_map_dma_buf(struct dma_buf_attachment *attachme
>>   goto err_unpin_pages;
>>   }
>>   -    ret = sg_alloc_table(st, obj->mm.pages->nents, GFP_KERNEL);
>> +    ret = sg_alloc_table(st, obj->mm.pages->orig_nents, GFP_KERNEL);
>>   if (ret)
>>   goto err_free;
>>     src = obj->mm.pages->sgl;
>>   dst = st->sgl;
>> -    for (i = 0; i < obj->mm.pages->nents; i++) {
>> +    for (i = 0; i < obj->mm.pages->orig_nents; i++) {
>>   sg_set_page(dst, sg_page(src), src->length, 0);
>>   dst = sg_next(dst);
>>   src = sg_next(src);
>>   }
>>   -    if (!dma_map_sg_attrs(attachment->dev,
>> -  st->sgl, st->nents, dir,
>> -  DMA_ATTR_SKIP_CPU_SYNC)) {
>> +    st->nents = dma_map_sg_attrs(attachment->dev,
>> + st->sgl, st->orig_nents, dir,
>> + DMA_ATTR_SKIP_CPU_SYNC);
>> +    if (!st->nents) {
>>   ret = -ENOMEM;
>>   goto err_free_sg;
>>   }
>> @@ -74,7 +75,7 @@ static void i915_gem_unmap_dma_buf(struct dma_buf_attachment *attachment,
>>   struct drm_i915_gem_object *obj = dma_buf_to_obj(attachment->dmabuf);
>>     dma_unmap_sg_attrs(attachment->dev,
>> -   sg->sgl, sg->nents, dir,
>> +   sg->sgl, sg->orig_nents, dir,
>>  DMA_ATTR_SKIP_CPU_SYNC);
>>   sg_free_table(sg);
>>   kfree(sg);
>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
>> index cbbff81..a8ebfdd 100644
>> --- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c
>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
>> @@ -73,7 +73,7 @@ static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
>>   }
>>     sg = st->sgl;
>> -    st->nents = 0;
>> +    st->nents = st->orig_nents = 0;
>>   sg_page_sizes = 0;
>>     do {
>> @@ -94,7 +94,7 @@ static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
>>     sg_set_page(sg, page, PAGE_SIZE << order, 0);
>>   sg_page_sizes |= PAGE_SIZE << order;
>> -    st->nents++;
>> +    st->nents = st->orig_nents = st->nents + 1;
>>     npages -= 1 << order;
>>   if (!npages) {
>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_region.c b/drivers/gpu/drm/i915/gem/i915_gem_region.c
>> index 1515384..58ca560 100644
>> --- a/drivers/gpu/drm/i915/gem/i915_gem_region.c
>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_region.c
>> @@ -53,7 +53,7 @@
>>   GEM_BUG_ON(list_empty(blocks));
>>     sg = st->sgl;
>> -    st->nents = 0;
>> +    st->nents = st->orig_nents = 0;
>>   sg_page_sizes = 0;
>>   prev_end = (resource_size_t)-1;
>>   @@ -78,7 +78,7 @@
>>     sg->length = block_size;
>>   -    st->nents++;
>> +    st->nents = st->orig_nents = st->nents + 1;
>>   } else {
>>   sg->length += block_size;
>>   sg_dma_len(sg) += block_size;
>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
>> index 5d5d7ee..851a732 100644
>> --- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
>> @@ -80,7 +80,7 @@ static int shmem_get_pages(struct drm_i915_gem_object *obj)
>>   noreclaim |= __GFP_NORETRY | __GFP_NOWARN;

Re: [RFC/PATCH 1/1] virtio: Introduce MMIO ops

2020-04-30 Thread Srivatsa Vaddagiri
* Jan Kiszka  [2020-04-30 14:59:50]:

> >I believe ivshmem2_virtio requires the hypervisor to support PCI device
> >emulation (for life-cycle management of VMs), which our hypervisor may not
> >support. A simple shared memory and doorbell or message-queue based
> >transport will work for us.
> 
> As written in our private conversation, a mapping of the ivshmem2 device
> discovery to platform mechanism (device tree etc.) and maybe even the
> register access for doorbell and life-cycle management to something
> hypercall-like would be imaginable. What would count more from virtio
> perspective is a common mapping on a shared memory transport.

Yes that sounds simpler for us.

> That said, I also warned about all the features that PCI already defined
> (such as message-based interrupts) which you may have to add when going a
> different way for the shared memory device.

Is it really required to present this shared memory as belonging to a PCI
device? I would expect the device tree to indicate the presence of this shared
memory region, which we should be able to present to ivshmem2 as the shared
memory region to use (along with some handles for doorbell or message-queue use).
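
Purely as a sketch of what I mean (all names hypothetical; this assumes the
transport's node carries a standard "memory-region" phandle to the
hypervisor-provided region):

#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/io.h>

static void __iomem *shmem_map_from_dt(struct device_node *np,
				       resource_size_t *size)
{
	struct device_node *rmem;
	struct resource res;
	void __iomem *base = NULL;

	/* Follow the phandle to the reserved shared-memory node. */
	rmem = of_parse_phandle(np, "memory-region", 0);
	if (!rmem)
		return NULL;

	if (!of_address_to_resource(rmem, 0, &res)) {
		*size = resource_size(&res);
		base = ioremap(res.start, *size);
	}

	of_node_put(rmem);
	return base;
}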

I understand the usefulness of modeling the shared memory as part of a device,
so that the hypervisor can send events related to peers going down or coming
up. In our case there will be other means to discover those events, and
avoiding this requirement on the hypervisor (to emulate PCI) will simplify the
solution for us.

Any idea when we can expect virtio over ivshmem2 to become available?!
 
-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation
___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


Re: [RFC/PATCH 1/1] virtio: Introduce MMIO ops

2020-04-30 Thread Jan Kiszka

On 30.04.20 13:11, Srivatsa Vaddagiri wrote:

* Will Deacon  [2020-04-30 11:41:50]:


On Thu, Apr 30, 2020 at 04:04:46PM +0530, Srivatsa Vaddagiri wrote:

If CONFIG_VIRTIO_MMIO_OPS is defined, then I expect this to be unconditionally
set to 'magic_qcom_ops' that uses hypervisor-supported interface for IO (for
example: message_queue_send() and message_queue_receive() hypercalls).


Hmm, but then how would such a kernel work as a guest under all the
spec-compliant hypervisors out there?


Ok, I see your point; yes, for better binary compatibility the ops have to be
set based on runtime detection of hypervisor capabilities.


Ok. I guess the other option is to standardize on a new virtio transport (like
ivshmem2-virtio)?


I haven't looked at that, but I suppose it depends on what your hypervisor
folks are willing to accommodate.


I believe ivshmem2_virtio requires the hypervisor to support PCI device emulation
(for life-cycle management of VMs), which our hypervisor may not support. A
simple shared memory and doorbell or message-queue based transport will work for
us.


As written in our private conversation, a mapping of the ivshmem2 device 
discovery to platform mechanism (device tree etc.) and maybe even the 
register access for doorbell and life-cycle management to something 
hypercall-like would be imaginable. What would count more from virtio 
perspective is a common mapping on a shared memory transport.


That said, I also warned about all the features that PCI already defined 
(such as message-based interrupts) which you may have to add when going 
a different way for the shared memory device.


Jan

--
Siemens AG, Corporate Technology, CT RDA IOT SES-DE
Corporate Competence Center Embedded Linux
___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


Re: [PATCH] drivers/iommu: properly export iommu_group_get_for_dev

2020-04-30 Thread Joerg Roedel
On Thu, Apr 30, 2020 at 01:17:53PM +0100, Will Deacon wrote:
> Thanks, not sure how I managed to screw this up in the original patch!
> 
> Acked-by: Will Deacon 
> 
> Joerg -- can you pick this one up please?

Yes, will send it as a fix for 5.7, but note that this function will be
unexported in 5.8.


Regards,

Joerg

___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


Re: [PATCH] drivers/iommu: properly export iommu_group_get_for_dev

2020-04-30 Thread Will Deacon
On Thu, Apr 30, 2020 at 02:01:20PM +0200, Greg Kroah-Hartman wrote:
> In commit a7ba5c3d008d ("drivers/iommu: Export core IOMMU API symbols to
> permit modular drivers") a bunch of iommu symbols were exported, all
> with _GPL markings except iommu_group_get_for_dev().  That export should
> also be _GPL like the others.
> 
> Cc: Will Deacon 
> Cc: Joerg Roedel 
> Cc: John Garry 
> Fixes: a7ba5c3d008d ("drivers/iommu: Export core IOMMU API symbols to permit 
> modular drivers")
> Signed-off-by: Greg Kroah-Hartman 
> ---
>  drivers/iommu/iommu.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
> index 2b471419e26c..1ecbc8788fe7 100644
> --- a/drivers/iommu/iommu.c
> +++ b/drivers/iommu/iommu.c
> @@ -1428,7 +1428,7 @@ struct iommu_group *iommu_group_get_for_dev(struct 
> device *dev)
>  
>   return group;
>  }
> -EXPORT_SYMBOL(iommu_group_get_for_dev);
> +EXPORT_SYMBOL_GPL(iommu_group_get_for_dev);
>  
>  struct iommu_domain *iommu_group_default_domain(struct iommu_group *group)
>  {

Thanks, not sure how I managed to screw this up in the original patch!

Acked-by: Will Deacon 

Joerg -- can you pick this one up please?

Cheers,

Will
___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


[PATCH] drivers/iommu: properly export iommu_group_get_for_dev

2020-04-30 Thread Greg Kroah-Hartman
In commit a7ba5c3d008d ("drivers/iommu: Export core IOMMU API symbols to
permit modular drivers") a bunch of iommu symbols were exported, all
with _GPL markings except iommu_group_get_for_dev().  That export should
also be _GPL like the others.
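
For reference, a modular driver would typically use the symbol along these
lines (an illustrative fragment, not taken from any in-tree driver), which is
why the GPL-only marking matters:

	struct iommu_group *group;

	/* With EXPORT_SYMBOL_GPL, only GPL-compatible modules may call this. */
	group = iommu_group_get_for_dev(dev);
	if (IS_ERR(group))
		return PTR_ERR(group);
	/* ... the device is now attached to its group ... */
	iommu_group_put(group);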

Cc: Will Deacon 
Cc: Joerg Roedel 
Cc: John Garry 
Fixes: a7ba5c3d008d ("drivers/iommu: Export core IOMMU API symbols to permit 
modular drivers")
Signed-off-by: Greg Kroah-Hartman 
---
 drivers/iommu/iommu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 2b471419e26c..1ecbc8788fe7 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -1428,7 +1428,7 @@ struct iommu_group *iommu_group_get_for_dev(struct device 
*dev)
 
return group;
 }
-EXPORT_SYMBOL(iommu_group_get_for_dev);
+EXPORT_SYMBOL_GPL(iommu_group_get_for_dev);
 
 struct iommu_domain *iommu_group_default_domain(struct iommu_group *group)
 {
-- 
2.26.2

___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


Re: [RFC/PATCH 1/1] virtio: Introduce MMIO ops

2020-04-30 Thread Srivatsa Vaddagiri
* Will Deacon  [2020-04-30 11:41:50]:

> On Thu, Apr 30, 2020 at 04:04:46PM +0530, Srivatsa Vaddagiri wrote:
> > If CONFIG_VIRTIO_MMIO_OPS is defined, then I expect this to be 
> > unconditionally
> > set to 'magic_qcom_ops' that uses hypervisor-supported interface for IO (for
> > example: message_queue_send() and message_queue_receive() hypercalls).
> 
> Hmm, but then how would such a kernel work as a guest under all the
> spec-compliant hypervisors out there?

Ok, I see your point; yes, for better binary compatibility the ops have to be
set based on runtime detection of hypervisor capabilities.
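
Something along these lines, perhaps (only register_virtio_mmio_ops() comes
from the posted patch; every other name below is hypothetical, in particular
the capability-detection hook):

static struct virtio_mmio_ops trap_ops = {
	.name        = "trap",
	.mmio_readl  = trap_readl,	/* thin wrappers around readl()/writel() */
	.mmio_writel = trap_writel,
	/* ... byte/word accessors likewise ... */
};

static struct virtio_mmio_ops hvc_ops = {
	.name        = "qcom-hvc",
	.mmio_readl  = hvc_readl,	/* hypercall-backed accessors */
	.mmio_writel = hvc_writel,
	/* ... */
};

static int __init virtio_mmio_select_ops(void)
{
	/*
	 * Default to the spec-compliant trap-and-emulate path; switch only
	 * when the hypervisor positively advertises its message-queue API.
	 */
	if (hypervisor_advertises_msgq())
		register_virtio_mmio_ops(&hvc_ops);
	else
		register_virtio_mmio_ops(&trap_ops);

	return 0;
}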

> > Ok. I guess the other option is to standardize on a new virtio transport 
> > (like
> > ivshmem2-virtio)?
> 
> I haven't looked at that, but I suppose it depends on what your hypervisor
> > folks are willing to accommodate.

I believe ivshmem2_virtio requires the hypervisor to support PCI device emulation
(for life-cycle management of VMs), which our hypervisor may not support. A
simple shared memory and doorbell or message-queue based transport will work for
us.

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation
___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


Re: [RFC/PATCH 0/1] virtio_mmio: hypervisor specific interfaces for MMIO

2020-04-30 Thread Srivatsa Vaddagiri
* Will Deacon  [2020-04-30 11:39:19]:

> Hi Vatsa,
> 
> On Thu, Apr 30, 2020 at 03:59:39PM +0530, Srivatsa Vaddagiri wrote:
> > > What's stopping you from implementing the trapping support in the
> > > hypervisor? Unlike the other patches you sent out, where the guest memory
> > > is not accessible to the host, there doesn't seem to be any advantage to
> > > not having trapping support, or am I missing something here?
> > 
> > I have had this discussion with hypervisor folks. They seem to be
> > concerned about complexity of having a VM's fault be handled in another
> > untrusted VM. They are not keen to add MMIO support.
> 
> Right, but I'm concerned about forking the implementation from the spec
> and I'm not keen to add these hooks ;)
> 
> What does your hook actually do? I'm assuming an HVC? 

Yes, it will issue message-queue related hypercalls.

> If so, then where the
> fault is handled seems to be unrelated and whether the guest exit is due to
> an HVC or a stage-2 fault should be immaterial. 

A stage-2 fault would normally be considered fatal and result in termination
of the guest. Modifying that behavior to allow resumption in the case of a
virtio config-space access, especially when an untrusted VM is included in
that flow, is perhaps the concern. HVC calls, OTOH, are more vetted interfaces
that the hypervisor can handle without doing anything additional.
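
To illustrate (a sketch only: message_queue_send()/message_queue_receive() are
the hypercalls mentioned earlier in the thread, while BACKEND_VM_ID and
VIRTIO_CFG_READ are hypothetical identifiers):

static u32 hvc_readl(void __iomem *addr)
{
	u32 val = 0;

	/*
	 * No stage-2 trap is involved: the guest explicitly asks the backend
	 * VM for the value, and the hypervisor merely relays the message.
	 */
	message_queue_send(BACKEND_VM_ID, VIRTIO_CFG_READ,
			   (unsigned long)addr, sizeof(val));
	message_queue_receive(BACKEND_VM_ID, &val, sizeof(val));

	return val;
}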

> In other words, I don't
> follow why the trapping mechanism necessitates the way in which the fault is
> handled.

Let me check with our hypervisor folks again. Thanks for your inputs.

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation
___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


Re: [RFC/PATCH 0/1] virtio_mmio: hypervisor specific interfaces for MMIO

2020-04-30 Thread Jason Wang


On 2020/4/30 6:07 PM, Michael S. Tsirkin wrote:

On Thu, Apr 30, 2020 at 03:32:55PM +0530, Srivatsa Vaddagiri wrote:

The Type-1 hypervisor we are dealing with does not allow for MMIO transport.

How about PCI then?



Or maybe you can use virtio-vdpa.

Thanks


___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu

Re: [RFC/PATCH 1/1] virtio: Introduce MMIO ops

2020-04-30 Thread Will Deacon
On Thu, Apr 30, 2020 at 04:04:46PM +0530, Srivatsa Vaddagiri wrote:
> * Will Deacon  [2020-04-30 11:14:32]:
> 
> > > +#ifdef CONFIG_VIRTIO_MMIO_OPS
> > >  
> > > +static struct virtio_mmio_ops *mmio_ops;
> > > +
> > > +#define virtio_readb(a)	mmio_ops->mmio_readb((a))
> > > +#define virtio_readw(a)	mmio_ops->mmio_readw((a))
> > > +#define virtio_readl(a)	mmio_ops->mmio_readl((a))
> > > +#define virtio_writeb(val, a)	mmio_ops->mmio_writeb((val), (a))
> > > +#define virtio_writew(val, a)	mmio_ops->mmio_writew((val), (a))
> > > +#define virtio_writel(val, a)	mmio_ops->mmio_writel((val), (a))
> > 
> > How exactly are these ops hooked up? I'm envisaging something like:
> > 
> > ops = spec_compliant_ops;
> > [...]
> > if (firmware_says_hypervisor_is_buggy())
> > ops = magic_qcom_ops;
> > 
> > am I wrong?
> 
> If CONFIG_VIRTIO_MMIO_OPS is defined, then I expect this to be unconditionally
> set to 'magic_qcom_ops' that uses hypervisor-supported interface for IO (for
> example: message_queue_send() and message_queue_receive() hypercalls).

Hmm, but then how would such a kernel work as a guest under all the
spec-compliant hypervisors out there?

> > > +int register_virtio_mmio_ops(struct virtio_mmio_ops *ops)
> > > +{
> > > + pr_info("Registered %s as mmio ops\n", ops->name);
> > > + mmio_ops = ops;
> > 
> > Not looking good, and really defeats the point of standardising this stuff
> > imo.
> 
> Ok. I guess the other option is to standardize on a new virtio transport (like
> ivshmem2-virtio)?

I haven't looked at that, but I suppose it depends on what your hypervisor
folks are willing to accommodate.

Will
___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


Re: [RFC/PATCH 0/1] virtio_mmio: hypervisor specific interfaces for MMIO

2020-04-30 Thread Srivatsa Vaddagiri
* Michael S. Tsirkin  [2020-04-30 06:07:56]:

> On Thu, Apr 30, 2020 at 03:32:55PM +0530, Srivatsa Vaddagiri wrote:
> > The Type-1 hypervisor we are dealing with does not allow for MMIO 
> > transport. 
> 
> How about PCI then?

Correct me if I am wrong, but basically virtio_pci uses the same low-level
primitives as readl/writel on a platform such as ARM64? So similar issues
exist there also.

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation
___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


Re: [RFC/PATCH 0/1] virtio_mmio: hypervisor specific interfaces for MMIO

2020-04-30 Thread Will Deacon
Hi Vatsa,

On Thu, Apr 30, 2020 at 03:59:39PM +0530, Srivatsa Vaddagiri wrote:
> * Will Deacon  [2020-04-30 11:08:22]:
> 
> > > This patch is meant to seek comments. If it's considered to be in the right
> > > direction, I will work on making it more complete and send the next version!
> > 
> > What's stopping you from implementing the trapping support in the
> > hypervisor? Unlike the other patches you sent out, where the guest memory
> > is not accessible to the host, there doesn't seem to be any advantage to
> > not having trapping support, or am I missing something here?
> 
>   I have had this discussion with hypervisor folks. They seem to be
> concerned about complexity of having a VM's fault be handled in another
> untrusted VM. They are not keen to add MMIO support.

Right, but I'm concerned about forking the implementation from the spec
and I'm not keen to add these hooks ;)

What does your hook actually do? I'm assuming an HVC? If so, then where the
fault is handled seems to be unrelated and whether the guest exit is due to
an HVC or a stage-2 fault should be immaterial. In other words, I don't
follow why the trapping mechanism necessitates the way in which the fault is
handled.

Will
___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


Re: [RFC/PATCH 1/1] virtio: Introduce MMIO ops

2020-04-30 Thread Srivatsa Vaddagiri
* Will Deacon  [2020-04-30 11:14:32]:

> > +#ifdef CONFIG_VIRTIO_MMIO_OPS
> >  
> > +static struct virtio_mmio_ops *mmio_ops;
> > +
> > +#define virtio_readb(a)	mmio_ops->mmio_readb((a))
> > +#define virtio_readw(a)	mmio_ops->mmio_readw((a))
> > +#define virtio_readl(a)	mmio_ops->mmio_readl((a))
> > +#define virtio_writeb(val, a)	mmio_ops->mmio_writeb((val), (a))
> > +#define virtio_writew(val, a)	mmio_ops->mmio_writew((val), (a))
> > +#define virtio_writel(val, a)	mmio_ops->mmio_writel((val), (a))
> 
> How exactly are these ops hooked up? I'm envisaging something like:
> 
>   ops = spec_compliant_ops;
>   [...]
>   if (firmware_says_hypervisor_is_buggy())
>   ops = magic_qcom_ops;
> 
> am I wrong?

If CONFIG_VIRTIO_MMIO_OPS is defined, then I expect this to be unconditionally
set to 'magic_qcom_ops' that uses hypervisor-supported interface for IO (for
example: message_queue_send() and message_queue_receive() hypercalls).

> > +int register_virtio_mmio_ops(struct virtio_mmio_ops *ops)
> > +{
> > +   pr_info("Registered %s as mmio ops\n", ops->name);
> > +   mmio_ops = ops;
> 
> Not looking good, and really defeats the point of standardising this stuff
> imo.

Ok. I guess the other option is to standardize on a new virtio transport (like
ivshmem2-virtio)?

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation
___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


Re: [RFC/PATCH 0/1] virtio_mmio: hypervisor specific interfaces for MMIO

2020-04-30 Thread Srivatsa Vaddagiri
* Will Deacon  [2020-04-30 11:08:22]:

> > This patch is meant to seek comments. If it's considered to be in the right
> > direction, I will work on making it more complete and send the next version!
> 
> What's stopping you from implementing the trapping support in the
> hypervisor? Unlike the other patches you sent out, where the guest memory
> is not accessible to the host, there doesn't seem to be any advantage to
> not having trapping support, or am I missing something here?

Hi Will,
I have had this discussion with hypervisor folks. They seem to be
concerned about complexity of having a VM's fault be handled in another
untrusted VM. They are not keen to add MMIO support.

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation
___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


Re: [RFC/PATCH 1/1] virtio: Introduce MMIO ops

2020-04-30 Thread Will Deacon
On Thu, Apr 30, 2020 at 03:32:56PM +0530, Srivatsa Vaddagiri wrote:
> Some hypervisors may not support MMIO transport, i.e. trap config
> space access and have it be handled by a backend driver. They may
> allow other ways to interact with the backend, such as a message-queue
> or doorbell API. This patch allows for hypervisor-specific
> methods for config space IO.
> 
> Signed-off-by: Srivatsa Vaddagiri 
> ---
>  drivers/virtio/virtio_mmio.c | 131 
> ++-
>  include/linux/virtio.h   |  14 +
>  2 files changed, 94 insertions(+), 51 deletions(-)
> 
> diff --git a/drivers/virtio/virtio_mmio.c b/drivers/virtio/virtio_mmio.c
> index 97d5725..69bfa35 100644
> --- a/drivers/virtio/virtio_mmio.c
> +++ b/drivers/virtio/virtio_mmio.c
> @@ -100,7 +100,35 @@ struct virtio_mmio_vq_info {
>   struct list_head node;
>  };
>  
> +#ifdef CONFIG_VIRTIO_MMIO_OPS
>  
> +static struct virtio_mmio_ops *mmio_ops;
> +
> +#define virtio_readb(a)	mmio_ops->mmio_readb((a))
> +#define virtio_readw(a)	mmio_ops->mmio_readw((a))
> +#define virtio_readl(a)	mmio_ops->mmio_readl((a))
> +#define virtio_writeb(val, a)	mmio_ops->mmio_writeb((val), (a))
> +#define virtio_writew(val, a)	mmio_ops->mmio_writew((val), (a))
> +#define virtio_writel(val, a)	mmio_ops->mmio_writel((val), (a))

How exactly are these ops hooked up? I'm envisaging something like:

ops = spec_compliant_ops;
[...]
if (firmware_says_hypervisor_is_buggy())
ops = magic_qcom_ops;

am I wrong?

> +int register_virtio_mmio_ops(struct virtio_mmio_ops *ops)
> +{
> + pr_info("Registered %s as mmio ops\n", ops->name);
> + mmio_ops = ops;

Not looking good, and really defeats the point of standardising this stuff
imo.

Will
___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


Re: [RFC/PATCH 0/1] virtio_mmio: hypervisor specific interfaces for MMIO

2020-04-30 Thread Will Deacon
On Thu, Apr 30, 2020 at 03:32:55PM +0530, Srivatsa Vaddagiri wrote:
> The Type-1 hypervisor we are dealing with does not allow for MMIO transport. 
> [1] summarizes some of the problems we have in making virtio work on such
> hypervisors. This patch proposes a solution to the transport problem, viz.
> how we can do config space IO on such a hypervisor. The hypervisor-specific
> methods introduced allow for seamless IO of config space.

Seamless huh? You'd hope that might obviate the need for extra patches...

> This patch is meant to seek comments. If it's considered to be in the right
> direction, I will work on making it more complete and send the next version!

What's stopping you from implementing the trapping support in the
hypervisor? Unlike the other patches you sent out, where the guest memory
is not accessible to the host, there doesn't seem to be any advantage to
not having trapping support, or am I missing something here?

Will
___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


Re: [RFC/PATCH 0/1] virtio_mmio: hypervisor specific interfaces for MMIO

2020-04-30 Thread Michael S. Tsirkin
On Thu, Apr 30, 2020 at 03:32:55PM +0530, Srivatsa Vaddagiri wrote:
> The Type-1 hypervisor we are dealing with does not allow for MMIO transport. 

How about PCI then?

-- 
MST

___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


[RFC/PATCH 1/1] virtio: Introduce MMIO ops

2020-04-30 Thread Srivatsa Vaddagiri
Some hypervisors may not support MMIO transport, i.e. trap config
space access and have it be handled by a backend driver. They may
allow other ways to interact with the backend, such as a message-queue
or doorbell API. This patch allows for hypervisor-specific
methods for config space IO.

Signed-off-by: Srivatsa Vaddagiri 
---
 drivers/virtio/virtio_mmio.c | 131 ++-
 include/linux/virtio.h   |  14 +
 2 files changed, 94 insertions(+), 51 deletions(-)

diff --git a/drivers/virtio/virtio_mmio.c b/drivers/virtio/virtio_mmio.c
index 97d5725..69bfa35 100644
--- a/drivers/virtio/virtio_mmio.c
+++ b/drivers/virtio/virtio_mmio.c
@@ -100,7 +100,35 @@ struct virtio_mmio_vq_info {
struct list_head node;
 };
 
+#ifdef CONFIG_VIRTIO_MMIO_OPS
 
+static struct virtio_mmio_ops *mmio_ops;
+
+#define virtio_readb(a)		mmio_ops->mmio_readb((a))
+#define virtio_readw(a)		mmio_ops->mmio_readw((a))
+#define virtio_readl(a)		mmio_ops->mmio_readl((a))
+#define virtio_writeb(val, a)	mmio_ops->mmio_writeb((val), (a))
+#define virtio_writew(val, a)	mmio_ops->mmio_writew((val), (a))
+#define virtio_writel(val, a)	mmio_ops->mmio_writel((val), (a))
+
+int register_virtio_mmio_ops(struct virtio_mmio_ops *ops)
+{
+   pr_info("Registered %s as mmio ops\n", ops->name);
+   mmio_ops = ops;
+
+   return 0;
+}
+
+#else  /* CONFIG_VIRTIO_MMIO_OPS */
+
+#define virtio_readb(a)		readb((a))
+#define virtio_readw(a)		readw((a))
+#define virtio_readl(a)		readl((a))
+#define virtio_writeb(val, a)	writeb((val), (a))
+#define virtio_writew(val, a)	writew((val), (a))
+#define virtio_writel(val, a)	writel((val), (a))
+
+#endif /* CONFIG_VIRTIO_MMIO_OPS */
 
 /* Configuration interface */
 
@@ -109,12 +137,12 @@ static u64 vm_get_features(struct virtio_device *vdev)
struct virtio_mmio_device *vm_dev = to_virtio_mmio_device(vdev);
u64 features;
 
-   writel(1, vm_dev->base + VIRTIO_MMIO_DEVICE_FEATURES_SEL);
-   features = readl(vm_dev->base + VIRTIO_MMIO_DEVICE_FEATURES);
+   virtio_writel(1, vm_dev->base + VIRTIO_MMIO_DEVICE_FEATURES_SEL);
+   features = virtio_readl(vm_dev->base + VIRTIO_MMIO_DEVICE_FEATURES);
features <<= 32;
 
-   writel(0, vm_dev->base + VIRTIO_MMIO_DEVICE_FEATURES_SEL);
-   features |= readl(vm_dev->base + VIRTIO_MMIO_DEVICE_FEATURES);
+   virtio_writel(0, vm_dev->base + VIRTIO_MMIO_DEVICE_FEATURES_SEL);
+   features |= virtio_readl(vm_dev->base + VIRTIO_MMIO_DEVICE_FEATURES);
 
return features;
 }
@@ -133,12 +161,12 @@ static int vm_finalize_features(struct virtio_device 
*vdev)
return -EINVAL;
}
 
-   writel(1, vm_dev->base + VIRTIO_MMIO_DRIVER_FEATURES_SEL);
-   writel((u32)(vdev->features >> 32),
+   virtio_writel(1, vm_dev->base + VIRTIO_MMIO_DRIVER_FEATURES_SEL);
+   virtio_writel((u32)(vdev->features >> 32),
vm_dev->base + VIRTIO_MMIO_DRIVER_FEATURES);
 
-   writel(0, vm_dev->base + VIRTIO_MMIO_DRIVER_FEATURES_SEL);
-   writel((u32)vdev->features,
+   virtio_writel(0, vm_dev->base + VIRTIO_MMIO_DRIVER_FEATURES_SEL);
+   virtio_writel((u32)vdev->features,
vm_dev->base + VIRTIO_MMIO_DRIVER_FEATURES);
 
return 0;
@@ -158,25 +186,25 @@ static void vm_get(struct virtio_device *vdev, unsigned 
offset,
int i;
 
for (i = 0; i < len; i++)
-   ptr[i] = readb(base + offset + i);
+   ptr[i] = virtio_readb(base + offset + i);
return;
}
 
switch (len) {
case 1:
-   b = readb(base + offset);
+   b = virtio_readb(base + offset);
		memcpy(buf, &b, sizeof b);
break;
case 2:
-   w = cpu_to_le16(readw(base + offset));
+   w = cpu_to_le16(virtio_readw(base + offset));
		memcpy(buf, &w, sizeof w);
break;
case 4:
-   l = cpu_to_le32(readl(base + offset));
+   l = cpu_to_le32(virtio_readl(base + offset));
		memcpy(buf, &l, sizeof l);
break;
case 8:
-   l = cpu_to_le32(readl(base + offset));
+   l = cpu_to_le32(virtio_readl(base + offset));
		memcpy(buf, &l, sizeof l);
l = cpu_to_le32(ioread32(base + offset + sizeof l));
		memcpy(buf + sizeof l, &l, sizeof l);
@@ -200,7 +228,7 @@ static void vm_set(struct virtio_device *vdev, unsigned 
offset,
int i;
 
for (i = 0; i < len; i++)
-   writeb(ptr[i], base + offset + i);
+   virtio_writeb(ptr[i], base + offset + i);
 
return;
}
@@ -208,21 +236,21 @@ static void vm_set(struct virtio_device *vdev, unsigned 
offset,
switch (len) {
   

[RFC/PATCH 0/1] virtio_mmio: hypervisor specific interfaces for MMIO

2020-04-30 Thread Srivatsa Vaddagiri
The Type-1 hypervisor we are dealing with does not allow for MMIO transport. 
[1] summarizes some of the problems we have in making virtio work on such
hypervisors. This patch proposes a solution to the transport problem, viz. how
we can do config space IO on such a hypervisor. The hypervisor-specific
methods introduced allow for seamless IO of config space.

This patch is meant to seek comments. If it's considered to be in the right
direction, I will work on making it more complete and send the next version!

1. https://lkml.org/lkml/2020/4/28/427

Srivatsa Vaddagiri (1):
  virtio: Introduce MMIO ops

 drivers/virtio/virtio_mmio.c | 131 ++-
 include/linux/virtio.h   |  14 +
 2 files changed, 94 insertions(+), 51 deletions(-)

-- 
2.7.4

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation
___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


RE: [PATCH v11 00/13] SMMUv3 Nested Stage Setup (IOMMU part)

2020-04-30 Thread Shameerali Kolothum Thodi
Hi Eric,

> -Original Message-
> From: Auger Eric [mailto:eric.au...@redhat.com]
> Sent: 16 April 2020 08:45
> To: Zhangfei Gao ; eric.auger@gmail.com;
> iommu@lists.linux-foundation.org; linux-ker...@vger.kernel.org;
> k...@vger.kernel.org; kvm...@lists.cs.columbia.edu; w...@kernel.org;
> j...@8bytes.org; m...@kernel.org; robin.mur...@arm.com
> Cc: jean-phili...@linaro.org; Shameerali Kolothum Thodi
> ; alex.william...@redhat.com;
> jacob.jun@linux.intel.com; yi.l@intel.com; peter.mayd...@linaro.org;
> t...@semihalf.com; bbhush...@marvell.com
> Subject: Re: [PATCH v11 00/13] SMMUv3 Nested Stage Setup (IOMMU part)
> 
> Hi Zhangfei,
> 
> On 4/16/20 6:25 AM, Zhangfei Gao wrote:
> >
> >
> >> On 2020/4/14 11:05 PM, Eric Auger wrote:
> >> This version fixes an issue observed by Shameer on an SMMU 3.2,
> >> when moving from dual stage config to stage 1 only config.
> >> The 2 high 64b of the STE now get reset. Otherwise, leaving the
> >> S2TTB set may cause a C_BAD_STE error.
> >>
> >> This series can be found at:
> >> https://github.com/eauger/linux/tree/v5.6-2stage-v11_10.1
> >> (including the VFIO part)
> >> The QEMU fellow series still can be found at:
> >> https://github.com/eauger/qemu/tree/v4.2.0-2stage-rfcv6
> >>
> >> Users have expressed interest in that work and tested v9/v10:
> >> - https://patchwork.kernel.org/cover/11039995/#23012381
> >> - https://patchwork.kernel.org/cover/11039995/#23197235
> >>
> >> Background:
> >>
> >> This series brings the IOMMU part of HW nested paging support
> >> in the SMMUv3. The VFIO part is submitted separately.
> >>
> >> The IOMMU API is extended to support 2 new API functionalities:
> >> 1) pass the guest stage 1 configuration
> >> 2) pass stage 1 MSI bindings
> >>
> >> Then those capabilities gets implemented in the SMMUv3 driver.
> >>
> >> The virtualizer passes information through the VFIO user API
> >> which cascades them to the iommu subsystem. This allows the guest
> >> to own stage 1 tables and context descriptors (so-called PASID
> >> table) while the host owns stage 2 tables and main configuration
> >> structures (STE).
> >>
> >>
> >
> > Thanks Eric
> >
> > Tested v11 on Hisilicon kunpeng920 board via hardware zip accelerator.
> > 1. no-sva works, where the guest app directly uses physical addresses via ioctl.
> Thank you for the testing. Glad it works for you.
> > 2. vSVA still does not work, same as v10,
> Yes, that's normal; this series is not meant to support vSVM at this stage.
> 
> I intend to add the missing pieces during the next weeks.

Thanks for that. I have made an attempt to add vSVA support based on
your v10 + JPB's SVA patches. The host kernel and QEMU changes can
be found here [1][2].

This basically adds multiple-PASID support on top of your changes.
I have done some basic sanity testing, and we have had some initial success
with the zip VF device on our D06 platform. Please note that the STALL event
is not yet supported, though things work fine if we mlock() guest user memory.

I also noted that the Intel patches for vSVA have a couple of changes in the
VFIO interfaces, and I hope there will be convergence soon. Please let me know
your plans for a respin of this series, and whether incorporating the changes
for multiple PASIDs makes sense for now.

Thanks,
Shameer

[1]https://github.com/hisilicon/qemu/tree/v4.2.0-2stage-rfcv6-vsva-prototype-v1
[2]https://github.com/hisilicon/kernel-dev/tree/vsva-prototype-host-v1

> Thanks
> 
> Eric
> > 3. the v10 issue reported by Shameer has been solved: first start qemu
> > with iommu=smmuv3, then start qemu without iommu=smmuv3
> > 4. no-sva also works without iommu=smmuv3
> >
> > Test details in https://docs.qq.com/doc/DRU5oR1NtUERseFNL
> >
> > Thanks
> >

___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu