Re: xen-swiotlb vs phys_to_dma

2020-10-06 Thread Stefano Stabellini
On Tue, 6 Oct 2020, Christoph Hellwig wrote:
> On Fri, Oct 02, 2020 at 01:21:25PM -0700, Stefano Stabellini wrote:
> > On Fri, 2 Oct 2020, Christoph Hellwig wrote:
> > > Hi Stefano,
> > > 
> > > I've looked over xen-swiotlb in linux-next, that is with your recent
> > > changes to take dma offsets into account.  One thing that puzzles me
> > > is that xen_swiotlb_map_page passes virt_to_phys(xen_io_tlb_start) as
> > > the tbl_dma_addr argument to swiotlb_tbl_map_single, despite the fact
> > > that the argument is a dma_addr_t and both other callers translate
> > > from a physical to the dma address.  Was this an oversight?
> > 
> > Hi Christoph,
> > 
> > It was not an oversight, it was done on purpose, although maybe I could
> > have been wrong. There was a brief discussion on this topic here: 
> > 
> > https://marc.info/?l=linux-kernel&m=159011972107683&w=2
> > https://marc.info/?l=linux-kernel&m=159018047129198&w=2
> > 
> > I'll repeat and summarize here for convenience. 
> > 
> > swiotlb_init_with_tbl is called by xen_swiotlb_init, passing a virtual
> > address (xen_io_tlb_start), which gets converted to phys and stored in
> > io_tlb_start as a physical address at the beginning of 
> > swiotlb_init_with_tbl.
> 
> Yes.
> 
> > Afterwards, xen_swiotlb_map_page calls swiotlb_tbl_map_single. The
> > second parameter, dma_addr_t tbl_dma_addr, is used to calculate the
> > right slot in the swiotlb buffer to use, comparing it against
> > io_tlb_start.
> 
> It is not compared against io_tlb_start.  It is just used to pick
> a slot that fits the dma_get_seg_boundary limitation in a somewhat
> awkward way.
> 
> > Thus, I think it makes sense for xen_swiotlb_map_page to call
> > swiotlb_tbl_map_single passing an address meant to be compared with
> > io_tlb_start, which is __pa(xen_io_tlb_start), so
> > virt_to_phys(xen_io_tlb_start) seems to be what we want.
> 
> No, it doesn't.  tlb_addr is used to ensure the picked slots satisfies
> the segment boundary, and for that you need a dma_addr_t.
> 
> The index variable in swiotlb_tbl_map_single is derived from
> io_tlb_index, not io_tlb_start.
> 
> > However, you are right that it is strange that tbl_dma_addr is a
> > dma_addr_t, and maybe it shouldn't be? Maybe the tbl_dma_addr parameter
> > to swiotlb_tbl_map_single should be a phys address instead?
> > Or it could be swiotlb_init_with_tbl to be wrong and it should take a
> > dma address to initialize the swiotlb buffer.
> 
> No, it must be a dma_addr_t so that the dma_get_seg_boundary check works.
>
> I think we need something like this (against linux-next):
> 
> ---
> From 07b39a62b235ed2d4b2215700d99968998fbf6c0 Mon Sep 17 00:00:00 2001
> From: Christoph Hellwig 
> Date: Tue, 6 Oct 2020 10:22:19 +0200
> Subject: swiotlb: remove the tlb_addr argument to swiotlb_tbl_map_single
> 
> The tlb_addr always must be the dma view of io_tlb_start so that the
> segment boundary checks work.  Remove the argument and do the right
> thing inside swiotlb_tbl_map_single.  This fixes the swiotlb-xen case
> that failed to take DMA offset into account.  The issue probably did
> not show up very much in practice as the typical dma offsets are
> large enough to not affect the segment boundaries for most devices.

OK, this makes a lot of sense, and I like the patch because it makes the
swiotlb interface clearer.

Just one comment below.


> Signed-off-by: Christoph Hellwig 
> ---
>  drivers/iommu/intel/iommu.c |  5 ++---
>  drivers/xen/swiotlb-xen.c   |  3 +--
>  include/linux/swiotlb.h | 10 +++---
>  kernel/dma/swiotlb.c| 16 ++--
>  4 files changed, 12 insertions(+), 22 deletions(-)
> 
> diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
> index 5ee0b7921b0b37..d473811fcfacd5 100644
> --- a/drivers/iommu/intel/iommu.c
> +++ b/drivers/iommu/intel/iommu.c
> @@ -3815,9 +3815,8 @@ bounce_map_single(struct device *dev, phys_addr_t paddr, size_t size,
>* page aligned, we don't need to use a bounce page.
>*/
>   if (!IS_ALIGNED(paddr | size, VTD_PAGE_SIZE)) {
> - tlb_addr = swiotlb_tbl_map_single(dev,
> - phys_to_dma_unencrypted(dev, io_tlb_start),
> - paddr, size, aligned_size, dir, attrs);
> + tlb_addr = swiotlb_tbl_map_single(dev, paddr, size,
> +   aligned_size, dir, attrs);
>   if (tlb_addr == DMA_MAPPING_ERROR) {
>   goto swiotlb_error;
>   } else {
> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> index 030a225624b060..953186f6d7d222 100644
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -395,8 +395,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
>*/
>   trace_swiotlb_bounced(dev, dev_addr, size, swiotlb_force);
>  
> - map = swiotlb_tbl_map_single(dev, virt_to_phys(xen_io_tlb_start),
> -   

Re: [PATCH 10/13] x86/irq: Limit IOAPIC and MSI domains' affinity without IR

2020-10-06 Thread Thomas Gleixner
On Mon, Oct 05 2020 at 16:28, David Woodhouse wrote:

> From: David Woodhouse 
>
> When interrupt remapping isn't enabled, only the first 255 CPUs can

No, only CPUs with an APIC ID < 255.

> receive external interrupts. Set the appropriate max affinity for
> the IOAPIC and MSI IRQ domains accordingly.
>
> This also fixes the case where interrupt remapping is enabled but some
> devices are not within the scope of any active IOMMU.

What? If this fixes a pre-existing problem then

  1) Explain the problem proper
  2) Have a patch at the beginning of the series which fixes it
 independently of this pile

If it's fixing a problem in your pile, then you got the ordering wrong.

You didn't start kernel programming as of yesterday, so you really know
how that works.

>   ip->irqdomain->parent = parent;
> + if (parent == x86_vector_domain)
> + irq_domain_set_affinity(ip->irqdomain, _non_ir_cpumask);

OMG

>   if (cfg->type == IOAPIC_DOMAIN_LEGACY ||
>   cfg->type == IOAPIC_DOMAIN_STRICT)
> diff --git a/arch/x86/kernel/apic/msi.c b/arch/x86/kernel/apic/msi.c
> index 4d891967bea4..af5ce5c4da02 100644
> --- a/arch/x86/kernel/apic/msi.c
> +++ b/arch/x86/kernel/apic/msi.c
> @@ -259,6 +259,7 @@ struct irq_domain * __init native_create_pci_msi_domain(void)
>   pr_warn("Failed to initialize PCI-MSI irqdomain.\n");
>   } else {
>   d->flags |= IRQ_DOMAIN_MSI_NOMASK_QUIRK;
> + irq_domain_set_affinity(d, &x86_non_ir_cpumask);

So here it's unconditional

>   }
>   return d;
>  }
> @@ -479,6 +480,8 @@ struct irq_domain *hpet_create_irq_domain(int hpet_id)
>   irq_domain_free_fwnode(fn);
>   kfree(domain_info);
>   }
> + if (parent == x86_vector_domain)
> + irq_domain_set_affinity(d, &x86_non_ir_cpumask);

And here we need a condition again. Completely obvious and reviewable - NOT.

Thanks,

tglx
___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


Re: [PATCH 09/13] x86/irq: Add x86_non_ir_cpumask

2020-10-06 Thread Thomas Gleixner
On Mon, Oct 05 2020 at 16:28, David Woodhouse wrote:
> From: David Woodhouse 
>
> This is the mask of CPUs to which IRQs can be delivered without interrupt
> remapping.
>  
> +/* Mask of CPUs which can be targeted by non-remapped interrupts. */
> +cpumask_t x86_non_ir_cpumask = { CPU_BITS_ALL };

What?

>  #ifdef CONFIG_X86_32
>  
>  /*
> @@ -1838,6 +1841,7 @@ static __init void x2apic_enable(void)
>  static __init void try_to_enable_x2apic(int remap_mode)
>  {
>   u32 apic_limit = 0;
> + int i;
>  
>   if (x2apic_state == X2APIC_DISABLED)
>   return;
> @@ -1880,6 +1884,14 @@ static __init void try_to_enable_x2apic(int remap_mode)
>   if (apic_limit)
>   x2apic_set_max_apicid(apic_limit);
>  
> + /* Build the affinity mask for interrupts that can't be remapped. */
> + cpumask_clear(&x86_non_ir_cpumask);
> + i = min_t(unsigned int, num_possible_cpus() - 1, apic_limit);
> + for ( ; i >= 0; i--) {
> + if (cpu_physical_id(i) <= apic_limit)
> + cpumask_set_cpu(i, &x86_non_ir_cpumask);
> + }

Blink. If the APIC id is not linear with the cpu numbers then this
results in a reduced addressable set of CPUs. WHY?

> diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
> index aa9a3b54a96c..4d0ef46fedb9 100644
> --- a/arch/x86/kernel/apic/io_apic.c
> +++ b/arch/x86/kernel/apic/io_apic.c
> @@ -2098,6 +2098,8 @@ static int mp_alloc_timer_irq(int ioapic, int pin)
>   struct irq_alloc_info info;
>  
>   ioapic_set_alloc_attr(&info, NUMA_NO_NODE, 0, 0);
> + if (domain->parent == x86_vector_domain)
> + info.mask = &x86_non_ir_cpumask;

We are not going to sprinkle such domain checks all over the
place. Again, the mask is a property of the interrupt domain.

Thanks,

tglx


Re: [PATCH 08/13] genirq: Add irq_domain_set_affinity()

2020-10-06 Thread Thomas Gleixner
On Mon, Oct 05 2020 at 16:28, David Woodhouse wrote:
> +/**
> + * irq_domain_set_affinity - Set maximum CPU affinity for domain
> + * @domain:   Domain to set affinity for
> + * @affinity: Pointer to cpumask, consumed by domain
> + *
> + * Sets the maximal set of CPUs to which interrupts in this domain may
> + * be delivered. Must only be called after creation, before any interrupts
> + * have been allocated in the domain.
> + *
> + * This function retains a pointer to the cpumask which is passed in.
> + */
> +int irq_domain_set_affinity(struct irq_domain *domain,
> + const struct cpumask *affinity)
> +{
> + if (cpumask_empty(affinity))
> + return -EINVAL;
> + domain->max_affinity = affinity;
> + return 0;
> +}
> +EXPORT_SYMBOL_GPL(irq_domain_set_affinity);

What the heck? Why does this need a setter function which is exported?
So that random driver writers can fiddle with it?

The affinity mask restriction of an irq domain is already known when the
domain is created.

Thanks,

tglx


Re: [PATCH 07/13] irqdomain: Add max_affinity argument to irq_domain_alloc_descs()

2020-10-06 Thread Thomas Gleixner
On Mon, Oct 05 2020 at 16:28, David Woodhouse wrote:
> From: David Woodhouse 
>
> This is the maximum possible set of CPUs which can be used. Use it
> to calculate the default affinity requested from __irq_alloc_descs()
> by first attempting to find the intersection with irq_default_affinity,
> or falling back to using just the max_affinity if the intersection
> would be empty.

And why do we need that as yet another argument?

This is an optional property of the irq domain, really and no caller has
any business with that. 

>  int irq_domain_alloc_descs(int virq, unsigned int cnt, irq_hw_number_t hwirq,
> -int node, const struct irq_affinity_desc *affinity)
> +int node, const struct irq_affinity_desc *affinity,
> +const struct cpumask *max_affinity)
>  {
> + cpumask_var_t default_affinity;
>   unsigned int hint;
> + int i;
> +
> + /* Check requested per-IRQ affinities are in the possible range */
> + if (affinity && max_affinity) {
> + for (i = 0; i < cnt; i++)
> + if (!cpumask_subset(&affinity[i].mask, max_affinity))
> + return -EINVAL;

https://lore.kernel.org/r/alpine.DEB.2.20.1701171956290.3645@nanos

What is preventing the affinity spreading code from spreading the masks
out to unusable CPUs? The changelog is silent about that part.

> + /*
> +  * Generate default affinity. Either the possible subset of
> +  * irq_default_affinity if such a subset is non-empty, or fall
> +  * back to the provided max_affinity if there is no intersection.
..
> +  * And just a copy of irq_default_affinity in the
> +  * !CONFIG_CPUMASK_OFFSTACK case.

We know that already...

> +  */
> + memset(&default_affinity, 0, sizeof(default_affinity));

Right, memset() before allocating is useful.

> + if ((max_affinity &&
> +  !cpumask_subset(irq_default_affinity, max_affinity))) {
> + if (!alloc_cpumask_var(&default_affinity, GFP_KERNEL))
> + return -ENOMEM;
> + cpumask_and(default_affinity, max_affinity,
> + irq_default_affinity);
> + if (cpumask_empty(default_affinity))
> + cpumask_copy(default_affinity, max_affinity);
> + } else if (cpumask_available(default_affinity))
> + cpumask_copy(default_affinity, irq_default_affinity);

That's garbage and unreadable.

Thanks,

tglx


Re: [PATCH 05/13] genirq: Prepare for default affinity to be passed to __irq_alloc_descs()

2020-10-06 Thread David Woodhouse



On 6 October 2020 22:01:18 BST, Thomas Gleixner  wrote:
>On Mon, Oct 05 2020 at 16:28, David Woodhouse wrote:
>>  
>>  #else /* CONFIG_SMP */
>>  
>> +#define irq_default_affinity (NULL)
>
>...
>
>>  static int alloc_descs(unsigned int start, unsigned int cnt, int node,
>> const struct irq_affinity_desc *affinity,
>> +   const struct cpumask *default_affinity,
>> struct module *owner)
>>  {
>>  struct irq_desc *desc;
>>  int i;
>>  
>>  /* Validate affinity mask(s) */
>> +if (!default_affinity || cpumask_empty(default_affinity))
>> +return -EINVAL;
>
>How is that supposed to work on UP?

Hm, good point.

>Aside of that I really hate to have yet another argument just
>because.

Yeah, I was trying to avoid having to allocate a whole array of 
irq_affinity_desc just to fill them all in with the same default.

But perhaps I need to treat the "affinity_max" like we do cpu_online_mask, and 
allow affinity to be set even to offline/unreachable CPUs at this point. Then 
we can be more relaxed about the default affinities.

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.


Re: [PATCH 06/13] genirq: Add default_affinity argument to __irq_alloc_descs()

2020-10-06 Thread Thomas Gleixner
On Mon, Oct 05 2020 at 16:28, David Woodhouse wrote:
> From: David Woodhouse 
> It already takes an array of affinities for each individual irq being
> allocated but that's awkward to allocate and populate in the case
> where they're all the same and inherited from the irqdomain, so pass
> the default in separately as a simple cpumask.

So we need another cpumask argument exposed to the world just because
it's so hard to extend struct irq_affinity_desc so it supports that use
case as well. It's not written in stone that this struct can only
support arrays.

> Signed-off-by: David Woodhouse 
> ---
>  include/linux/irq.h| 10 ++
>  kernel/irq/devres.c|  8 ++--
>  kernel/irq/irqdesc.c   | 10 --
>  kernel/irq/irqdomain.c |  6 +++---

git grep __irq_alloc_descs() might help you to find _all_ instances ...

Thanks,

tglx


Re: [PATCH 05/13] genirq: Prepare for default affinity to be passed to __irq_alloc_descs()

2020-10-06 Thread Thomas Gleixner
On Mon, Oct 05 2020 at 16:28, David Woodhouse wrote:
>  
>  #else /* CONFIG_SMP */
>  
> +#define irq_default_affinity (NULL)

...

>  static int alloc_descs(unsigned int start, unsigned int cnt, int node,
>  const struct irq_affinity_desc *affinity,
> +const struct cpumask *default_affinity,
>  struct module *owner)
>  {
>   struct irq_desc *desc;
>   int i;
>  
>   /* Validate affinity mask(s) */
> + if (!default_affinity || cpumask_empty(default_affinity))
> + return -EINVAL;

How is that supposed to work on UP?

Aside of that I really hate to have yet another argument just
because.

Thanks,

tglx


Re: [PATCH 8/8] WIP: add a dma_alloc_contiguous API

2020-10-06 Thread Tomasz Figa
On Mon, Oct 5, 2020 at 10:26 AM Christoph Hellwig  wrote:
>
> On Fri, Oct 02, 2020 at 05:50:40PM +, Tomasz Figa wrote:
> > Hi Christoph,
> >
> > On Wed, Sep 30, 2020 at 06:09:17PM +0200, Christoph Hellwig wrote:
> > > Add a new API that returns a virtually non-contigous array of pages
> > > and dma address.  This API is only implemented for dma-iommu and will
> > > not be implemented for non-iommu DMA API instances that have to allocate
> > > contiguous memory.  It is up to the caller to check if the API is
> > > available.
> >
> > Would you mind shedding some more light on what made the previous attempt
> > not work well? I liked the previous API because it was more consistent with
> > the regular dma_alloc_coherent().
>
> The problem is that with a dma_alloc_noncoherent that can return pages
> not in the kernel mapping we can't just use virt_to_page to fill in
> scatterlists or mmap the buffer to userspace, but would need new helpers
> and another two methods.
>
> > >  - no kernel mapping or only temporary kernel mappings are required.
> > >That is as a better replacement for DMA_ATTR_NO_KERNEL_MAPPING
> > >  - a kernel mapping is required for cached and DMA mapped pages, but
> > >the driver also needs the pages to e.g. map them to userspace.
> > >In that sense it is a replacement for some aspects of the recently
> > >removed and never fully implemented DMA_ATTR_NON_CONSISTENT
> >
> > What's the expected allocation and mapping flow with the latter? Would that be
> >
> > pages = dma_alloc_noncoherent(...)
> > vaddr = vmap(pages, ...);
> >
> > ?
>
> Yes.  With the vmap step optional for replacements of
> DMA_ATTR_NO_KERNEL_MAPPING, which is another nightmare to deal with.
>
> > Would one just use the usual dma_sync_for_{cpu,device}() for cache
> > invalidate/clean, while keeping the mapping in place?
>
> Yes.  And make sure the API isn't implemented when VIVT caches are
> used, but that isn't really different from the current interface.

Okay, thanks. Let's see if we can make necessary changes to the videobuf2.

+Sergey Senozhatsky for awareness too.

Best regards,
Tomasz


Re: [PATCH 02/13] x86/msi: Only use high bits of MSI address for DMAR unit

2020-10-06 Thread Thomas Gleixner
On Mon, Oct 05 2020 at 16:28, David Woodhouse wrote:
> -static void __irq_msi_compose_msg(struct irq_cfg *cfg, struct msi_msg *msg)
> +static void __irq_msi_compose_msg(struct irq_cfg *cfg, struct msi_msg *msg, int dmar)

bool dmar?

> +/*
> + * The Intel IOMMU (ab)uses the high bits of the MSI address to contain the
> + * high bits of the destination APIC ID. This can't be done in the general
> + * case for MSIs as it would be targeting real memory above 4GiB not the
> + * APIC.
> + */
> +static void dmar_msi_compose_msg(struct irq_data *data, struct msi_msg *msg)
> +{
> + __irq_msi_compose_msg(irqd_cfg(data), msg, 1);
> +
> +
> +

Lots of stray newlines...



Re: [PATCH v4 0/4] Add system mmu support for Armada-806

2020-10-06 Thread Denis Odintsov
Hi,

> Am 15.07.2020 um 09:06 schrieb Tomasz Nowicki :
> 
> The series is meant to support SMMU for AP806 and a workaround
> for accessing ARM SMMU 64bit registers is the gist of it.
> 
> For the record, AP-806 can't access SMMU registers with 64bit width.
> These patches split the readq/writeq into two 32-bit accesses instead
> and update DT bindings.
> 
> The series was successfully tested on a vanilla v5.8-rc3 kernel and
> Intel e1000e PCIe NIC. The same for platform devices like SATA and USB.
> 
> For reference, previous versions are listed below:
> V1: https://lkml.org/lkml/2018/10/15/373
> V2: https://lkml.org/lkml/2019/7/11/426
> V3: https://lkml.org/lkml/2020/7/2/1114
> 

1) After enabling SMMU on Armada 8040, with ARM_SMMU_DISABLE_BYPASS_BY_DEFAULT=y 
(the kernel default since 954a03be033c7cef80ddc232e7cbdb17df735663),
the internal eMMC is prevented from being initialised (as there is no iommus 
property for ap_sdhci0).
Disabling "Disable bypass by default" makes it work, but that option strongly 
suggests doing it properly instead.
I wasn't able to find the correct iommus path for ap_sdhci in any publicly 
available documentation; it would be highly appreciated if this could be 
addressed properly, thank you!

2) The second issue I got (btw, I have a ClearFog GT 8K Armada-8040-based board) is 
an mPCIe ath10k card.
It is found, it is enumerated, it is visible in lspci, but it fails to be 
initialised. Here is the log:

[1.743754] armada8k-pcie f260.pcie: host bridge /cp0/pcie@f260 ranges:
[1.751116] armada8k-pcie f260.pcie:  MEM 0x00f600..0x00f6ef -> 0x00f600
[1.964690] armada8k-pcie f260.pcie: Link up
[1.969379] armada8k-pcie f260.pcie: PCI host bridge to bus :00
[1.976026] pci_bus :00: root bus resource [bus 00-ff]
[1.981537] pci_bus :00: root bus resource [mem 0xf600-0xf6ef]
[1.988462] pci :00:00.0: [11ab:0110] type 01 class 0x060400
[1.994504] pci :00:00.0: reg 0x10: [mem 0x-0x000f]
[2.000843] pci :00:00.0: supports D1 D2
[2.005132] pci :00:00.0: PME# supported from D0 D1 D3hot
[2.011853] pci :01:00.0: [168c:003c] type 00 class 0x028000
[2.018001] pci :01:00.0: reg 0x10: [mem 0x-0x001f 64bit]
[2.025002] pci :01:00.0: reg 0x30: [mem 0x-0x pref]
[2.032111] pci :01:00.0: supports D1 D2
[2.049409] pci :00:00.0: BAR 14: assigned [mem 0xf600-0xf61f]
[2.056322] pci :00:00.0: BAR 0: assigned [mem 0xf620-0xf62f]
[2.063142] pci :00:00.0: BAR 15: assigned [mem 0xf630-0xf63f pref]
[2.070484] pci :01:00.0: BAR 0: assigned [mem 0xf600-0xf61f 64bit]
[2.077880] pci :01:00.0: BAR 6: assigned [mem 0xf630-0xf630 pref]
[2.085135] pci :00:00.0: PCI bridge to [bus 01-ff]
[2.090384] pci :00:00.0:   bridge window [mem 0xf600-0xf61f]
[2.097202] pci :00:00.0:   bridge window [mem 0xf630-0xf63f pref]
[2.104539] pcieport :00:00.0: Adding to iommu group 4
[2.110232] pcieport :00:00.0: PME: Signaling with IRQ 38
[2.116141] pcieport :00:00.0: AER: enabled with IRQ 38
[8.131135] ath10k_pci :01:00.0: Adding to iommu group 4
[8.131874] ath10k_pci :01:00.0: enabling device ( -> 0002)
[8.132203] ath10k_pci :01:00.0: pci irq msi oper_irq_mode 2 irq_mode 0 reset_mode 0

up to that point the log is the same as without SMMU enabled, except "Adding to iommu group N" lines, and IRQ being 37

[8.221328] ath10k_pci :01:00.0: failed to poke copy engine: -16
[8.313362] ath10k_pci :01:00.0: failed to poke copy engine: -16
[8.409373] ath10k_pci :01:00.0: failed to poke copy engine: -16
[8.553433] ath10k_pci :01:00.0: failed to poke copy engine: -16
[8.641370] ath10k_pci :01:00.0: failed to poke copy engine: -16
[8.737979] ath10k_pci :01:00.0: failed to poke copy engine: -16
[8.807356] ath10k_pci :01:00.0: Failed to get pcie state addr: -16
[8.814032] ath10k_pci :01:00.0: failed to setup init config: -16
[8.820605] ath10k_pci :01:00.0: could not power on hif bus (-16)
[8.827111] ath10k_pci :01:00.0: could not probe fw (-16)

Thank you!

> v3 -> v4
> - call cfg_probe() impl hook a bit earlier which simplifies errata handling
> - use hi_lo_readq_relaxed() and hi_lo_writeq_relaxed() for register accessors
> - keep SMMU status disabled by default and enable where possible (DTS changes)
> - commit logs improvements and other minor fixes
> 
> Hanna Hawa (1):
>  iommu/arm-smmu: Workaround for Marvell Armada-AP806 SoC erratum
>#582743
> 
> Marcin Wojtas (1):
>  arm64: dts: marvell: add SMMU support
> 
> Tomasz Nowicki (2):
>  iommu/arm-smmu: Call configuration impl hook before consuming features
>  dt-bindings: arm-smmu: add compatible string for Marvell Armada-AP806
>SMMU-500
> 
> Documentation/arm64/silicon-errata.rst|  3 ++
> 

Re: [RFC 0/3] iommu: Reserved regions for IOVAs beyond dma_mask and iommu aperture

2020-10-06 Thread Auger Eric
Hi all,

On 10/5/20 3:08 PM, Christoph Hellwig wrote:
> On Mon, Oct 05, 2020 at 11:44:10AM +0100, Lorenzo Pieralisi wrote:
>>> I see that there are both OF and ACPI hooks in pci_dma_configure() and
>>> both modify dev->dma_mask, which is what pci-sysfs is exposing here,
>>> but I'm not convinced this even does what it's intended to do.  The
>>> driver core calls this via the bus->dma_configure callback before
>>> probing a driver, but then what happens when the driver calls
>>> pci_set_dma_mask()?  This is just a wrapper for dma_set_mask() and I
>>> don't see anywhere that would take into account the existing
>>> dev->dma_mask.  It seems for example that pci_dma_configure() could
>>> produce a 42 bit mask as we have here, then the driver could override
>>> that with anything that the dma_ops.dma_supported() callback finds
>>> acceptable, and I don't see any instances where the current
>>> dev->dma_mask is considered.  Am I overlooking something? 
>>
>> I don't think so but Christoph and Robin can provide more input on
>> this - it is a long story.
>>
>> ACPI and OF bindings set a default dma_mask (and dev->bus_dma_limit),
>> this does not prevent a driver from overriding the dev->dma_mask but DMA
>> mapping code still takes into account the dev->bus_dma_limit.
>>
>> This may help:
>>
>> git log -p 03bfdc31176c

Thank you Lorenzo for the pointer.
> 
> This is at best a historic artefact.  Bus drivers have no business
> messing with the DMA mask, dev->bus_dma_limit is the way to communicate
> addressing limits on the bus (or another interconnect closer to the CPU).
> 
Then could I envision to use the dev->bus_dma_limit instead of the
dev->dma_mask?

Nevertheless, I would need a way to let the userspace know that the
usable IOVA ranges reported by VFIO_IOMMU_TYPE1_INFO_CAP_IOVA_RANGE
takes into account the addressing limits of the bus.

Thanks

Eric



Re: [PATCH v10 11/11] vfio: Document nested stage control

2020-10-06 Thread Auger Eric
Hi Zenghui,

On 9/24/20 3:42 PM, Zenghui Yu wrote:
> Hi Eric,
> 
> On 2020/3/21 0:19, Eric Auger wrote:
>> The VFIO API was enhanced to support nested stage control: a bunch of
>> new ioctls, one DMA FAULT region and an associated specific IRQ.
>>
>> Let's document the process to follow to set up nested mode.
>>
>> Signed-off-by: Eric Auger 
> 
> [...]
> 
>> +The userspace must be prepared to receive faults. The VFIO-PCI device
>> +exposes one dedicated DMA FAULT region: it contains a ring buffer and
>> +its header that allows managing the head/tail indices. The region is
>> +identified by the following index/subindex:
>> +- VFIO_REGION_TYPE_NESTED/VFIO_REGION_SUBTYPE_NESTED_DMA_FAULT
>> +
>> +The DMA FAULT region exposes a VFIO_REGION_INFO_CAP_PRODUCER_FAULT
>> +region capability that allows the userspace to retrieve the ABI version
>> +of the fault records filled by the host.
> 
> Nit: I don't see this capability in the code.

Thank you very much for the review. I am late doing the respin but I
will take into account all your comments.

Thanks!

Eric
> 
> 
> Thanks,
> Zenghui
> 



Re: [PATCH v3 0/6] Add virtio-iommu built-in topology

2020-10-06 Thread Auger Eric
Hello Al,

On 10/2/20 8:23 PM, Al Stone wrote:
> On 24 Sep 2020 11:54, Auger Eric wrote:
>> Hi,
>>
>> Adding Al in the loop
>>
>> On 9/24/20 11:38 AM, Michael S. Tsirkin wrote:
>>> On Thu, Sep 24, 2020 at 11:21:29AM +0200, Joerg Roedel wrote:
 On Thu, Sep 24, 2020 at 05:00:35AM -0400, Michael S. Tsirkin wrote:
> OK so this looks good. Can you pls repost with the minor tweak
> suggested and all acks included, and I will queue this?

 My NACK still stands, as long as a few questions are open:

1) The format used here will be the same as in the ACPI table? I
   think the answer to this questions must be Yes, so this leads
   to the real question:
>>>
>>> I am not sure it's a must.
>>> We can always tweak the parser if there are slight differences
>>> between ACPI and virtio formats.
>>>
>>> But we do want the virtio format used here to be approved by the virtio
>>> TC, so it won't change.
>>>
>>> Eric, Jean-Philippe, does one of you intend to create a github issue
>>> and request a ballot for the TC? It's been posted end of August with no
>>> changes ...
>> Jean-Philippe, would you?
>>>
2) Has the ACPI table format stabalized already? If and only if
   the answer is Yes I will Ack these patches. We don't need to
   wait until the ACPI table format is published in a
   specification update, but at least some certainty that it
   will not change in incompatible ways anymore is needed.

>>
>> Al, do you have any news about the the VIOT definition submission to
>> the UEFI ASWG?
>>
>> Thank you in advance
>>
>> Best Regards
>>
>> Eric
> 
> A follow-up to my earlier post 
> 
> Hearing no objection, I've submitted the VIOT table description to
> the ASWG for consideration under what they call the "code first"
> process.  The "first reading" -- a brief discussion on what the
> table is and why we would like to add it -- was held yesterday.
> No concerns have been raised as yet.  Given the discussions that
> have already occurred, I don't expect any, either.  I have been
> wrong at least once before, however.
> 
> At this point, ASWG will revisit the request to add VIOT each
> week.  If there have been no comments in the prior week, and no
> further discussion during the meeting, then a vote will be taken.
> Otherwise, there will be discussion and we try again the next
> week.
> 
> The ASWG was also told that the likelihood of this definition of
> the table changing is pretty low, and that it has been thought out
> pretty well already.  ASWG's consideration will therefore start
> from the assumption that it would be best _not_ to make changes.
> 
> So, I'll let you know what happens next week.

Thank you very much for the updates and for your support backing the
proposal in the best delays.

Best Regards

Eric
> 
>>
>>>
>>> Not that I know, but I don't see why it's a must.
>>>
 So what progress has been made with the ACPI table specification, is it
 just a matter of time to get it approved or are there concerns?

 Regards,

Joerg
>>>
>>
> 



Re: [PATCH 2/3] drm/msm: add DRM_MSM_GEM_SYNC_CACHE for non-coherent cache maintenance

2020-10-06 Thread Jonathan Marek

On 10/6/20 3:23 AM, Christoph Hellwig wrote:

On Mon, Oct 05, 2020 at 10:35:43AM -0400, Jonathan Marek wrote:

The cache synchronization doesn't have anything to do with IOMMU (for
example: cache synchronization would be useful in cases where drm/msm
doesn't use IOMMU).


It has to do with doing DMA.  And we have two frameworks for doing DMA:
either the DMA API, which is for general driver use, and which as part of
the design includes cache maintenance hidden behind the concept of
ownership transfers.  And we have the much more bare-bones IOMMU API.

If people want to use the "raw" IOMMU API with non-cache-coherent
devices we'll need a cache maintenance API that goes along with it.
It could either be formally part of the IOMMU API or be separate.


What is needed is to call arch_sync_dma_for_{cpu,device} (which is what I
went with initially, but then decided to re-use drm/msm's
sync_for_{cpu,device}). But you are also saying those functions aren't for
driver use, and I doubt IOMMU maintainers will want to add wrappers for
these functions just to satisfy this "not for driver use" requirement.


arch_sync_dma_for_{cpu,device} are low-level helpers (and not very
great ones at that).  They definitely should not be used by drivers.
They would be very useful building blocks for an IOMMU cache maintenance
API.

Of course the best outcome would be if we could find a way for the MSM
drm driver to just use DMA API and not deal with the lower level
abstractions.  Do you remember why the driver went for use of the IOMMU
API?



One example why drm/msm can't use DMA API is multiple page table support 
(that is landing in 5.10), which is something that definitely couldn't 
work with DMA API.


Another one is being able to choose the address for mappings, which 
AFAIK the DMA API can't do. Somewhat related to this: qcom hardware often 
has ranges of allowed addresses, which the dma_mask mechanism fails to 
represent. What I see is drivers using dma_mask as a "maximum address", 
and since addresses are allocated from the top it generally works.


But let us imagine drm/msm switches to using DMA API. a2xx GPUs have 
their own very basic MMU (implemented by msm_gpummu.c), that will need 
to implement dma_map_ops, which will have to call 
arch_sync_dma_for_{cpu,device}. So drm/msm still needs to call 
arch_sync_dma_for_{cpu,device} in that scenario.








___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


Re: xen-swiotlb vs phys_to_dma

2020-10-06 Thread Christoph Hellwig
On Fri, Oct 02, 2020 at 01:21:25PM -0700, Stefano Stabellini wrote:
> On Fri, 2 Oct 2020, Christoph Hellwig wrote:
> > Hi Stefano,
> > 
> > I've looked over xen-swiotlb in linux-next, that is with your recent
> > changes to take dma offsets into account.  One thing that puzzles me
> > is that xen_swiotlb_map_page passes virt_to_phys(xen_io_tlb_start) as
> > the tbl_dma_addr argument to swiotlb_tbl_map_single, despite the fact
> > that the argument is a dma_addr_t and both other callers translate
> > from a physical to the dma address.  Was this an oversight?
> 
> Hi Christoph,
> 
> It was not an oversight, it was done on purpose, although maybe I could
> have been wrong. There was a brief discussion on this topic here: 
> 
> https://marc.info/?l=linux-kernel=159011972107683=2
> https://marc.info/?l=linux-kernel=159018047129198=2
> 
> I'll repeat and summarize here for convenience. 
> 
> swiotlb_init_with_tbl is called by xen_swiotlb_init, passing a virtual
> address (xen_io_tlb_start), which gets converted to phys and stored in
> io_tlb_start as a physical address at the beginning of swiotlb_init_with_tbl.

Yes.

> Afterwards, xen_swiotlb_map_page calls swiotlb_tbl_map_single. The
> second parameter, dma_addr_t tbl_dma_addr, is used to calculate the
> right slot in the swiotlb buffer to use, comparing it against
> io_tlb_start.

It is not compared against io_tlb_start.  It is just used to pick
a slot that fits the dma_get_seg_boundary limitation in a somewhat
awkward way.

> Thus, I think it makes sense for xen_swiotlb_map_page to call
> swiotlb_tbl_map_single passing an address meant to be compared with
> io_tlb_start, which is __pa(xen_io_tlb_start), so
> virt_to_phys(xen_io_tlb_start) seems to be what we want.

No, it doesn't.  tlb_addr is used to ensure the picked slots satisfy
the segment boundary, and for that you need a dma_addr_t.

The index variable in swiotlb_tbl_map_single is derived from
io_tlb_index, not io_tlb_start.

> However, you are right that it is strange that tbl_dma_addr is a
> dma_addr_t, and maybe it shouldn't be? Maybe the tbl_dma_addr parameter
> to swiotlb_tbl_map_single should be a phys address instead?
> Or it could be swiotlb_init_with_tbl to be wrong and it should take a
> dma address to initialize the swiotlb buffer.

No, it must be a dma_addr_t so that the dma_get_seg_boundary check works.

I think we need something like this (against linux-next):

---
From 07b39a62b235ed2d4b2215700d99968998fbf6c0 Mon Sep 17 00:00:00 2001
From: Christoph Hellwig 
Date: Tue, 6 Oct 2020 10:22:19 +0200
Subject: swiotlb: remove the tlb_addr argument to swiotlb_tbl_map_single

The tlb_addr must always be the DMA view of io_tlb_start so that the
segment boundary checks work.  Remove the argument and do the right
thing inside swiotlb_tbl_map_single.  This fixes the swiotlb-xen case
that failed to take DMA offset into account.  The issue probably did
not show up very much in practice as the typical dma offsets are
large enough to not affect the segment boundaries for most devices.

Signed-off-by: Christoph Hellwig 
---
 drivers/iommu/intel/iommu.c |  5 ++---
 drivers/xen/swiotlb-xen.c   |  3 +--
 include/linux/swiotlb.h | 10 +++---
 kernel/dma/swiotlb.c| 16 ++--
 4 files changed, 12 insertions(+), 22 deletions(-)

diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 5ee0b7921b0b37..d473811fcfacd5 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -3815,9 +3815,8 @@ bounce_map_single(struct device *dev, phys_addr_t paddr, size_t size,
 * page aligned, we don't need to use a bounce page.
 */
if (!IS_ALIGNED(paddr | size, VTD_PAGE_SIZE)) {
-   tlb_addr = swiotlb_tbl_map_single(dev,
-   phys_to_dma_unencrypted(dev, io_tlb_start),
-   paddr, size, aligned_size, dir, attrs);
+   tlb_addr = swiotlb_tbl_map_single(dev, paddr, size,
+ aligned_size, dir, attrs);
if (tlb_addr == DMA_MAPPING_ERROR) {
goto swiotlb_error;
} else {
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 030a225624b060..953186f6d7d222 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -395,8 +395,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
 */
trace_swiotlb_bounced(dev, dev_addr, size, swiotlb_force);
 
-   map = swiotlb_tbl_map_single(dev, virt_to_phys(xen_io_tlb_start),
-phys, size, size, dir, attrs);
+   map = swiotlb_tbl_map_single(dev, phys, size, size, dir, attrs);
if (map == (phys_addr_t)DMA_MAPPING_ERROR)
return DMA_MAPPING_ERROR;
 
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 513913ff748626..3bb72266a75a1d 100644
--- a/include/linux/swiotlb.h
+++ 

Re: [PATCH 4/8] dma-direct: use __GFP_ZERO in dma_direct_alloc_pages

2020-10-06 Thread Hillf Danton


On Wed, 30 Sep 2020 18:09:13 Christoph Hellwig wrote:
> 
> Prepare for supporting the DMA_ATTR_NO_KERNEL_MAPPING flag in
> dma_alloc_pages.
> 
> Signed-off-by: Christoph Hellwig 
> ---
>  kernel/dma/direct.c | 9 -
>  1 file changed, 4 insertions(+), 5 deletions(-)
> 
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index b5f20781d3a96f..b5d56810130b22 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -296,9 +296,10 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
>   dma_addr_t *dma_handle, enum dma_data_direction dir, gfp_t gfp)
>  {
>   struct page *page;
> - void *ret;
>  
>   if (dma_should_alloc_from_pool(dev, gfp, 0)) {
> + void *ret;
> +
>   page = dma_alloc_from_pool(dev, size, &ret, gfp,
>   dma_coherent_ok);
>   if (!page)
> @@ -306,7 +307,7 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
>   goto done;
>   }
>  
> - page = __dma_direct_alloc_pages(dev, size, gfp);
> + page = __dma_direct_alloc_pages(dev, size, gfp | __GFP_ZERO);
>   if (!page)
>   return NULL;
>   if (PageHighMem(page)) {
> @@ -320,13 +321,11 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
>   goto out_free_pages;
>   }
>  
> - ret = page_address(page);
>   if (force_dma_unencrypted(dev)) {
> - if (set_memory_decrypted((unsigned long)ret,
> + if (set_memory_decrypted((unsigned long)page_address(page),
>   1 << get_order(size)))
>   goto out_free_pages;
>   }
> - memset(ret, 0, size);

Not sure this works without changes in cma_alloc().

>  done:
>   *dma_handle = phys_to_dma_direct(dev, page_to_phys(page));
>   return page;
> -- 
> 2.28.0



Re: [PATCH 2/3] drm/msm: add DRM_MSM_GEM_SYNC_CACHE for non-coherent cache maintenance

2020-10-06 Thread Christoph Hellwig
On Mon, Oct 05, 2020 at 10:35:43AM -0400, Jonathan Marek wrote:
> The cache synchronization doesn't have anything to do with IOMMU (for
> example: cache synchronization would be useful in cases where drm/msm
> doesn't use IOMMU).

It has to do with doing DMA.  And we have two frameworks for doing DMA:
the DMA API, which is for general driver use and which as part of its
design includes cache maintenance hidden behind the concept of
ownership transfers; and the much more bare-bones IOMMU API.

If people want to use the "raw" IOMMU API with non-cache-coherent
devices we'll need a cache maintenance API that goes along with it.
It could either be formally part of the IOMMU API or be separate.

> What is needed is to call arch_sync_dma_for_{cpu,device} (which is what I
> went with initially, but then decided to re-use drm/msm's
> sync_for_{cpu,device}). But you are also saying those functions aren't for
> driver use, and I doubt IOMMU maintainers will want to add wrappers for
> these functions just to satisfy this "not for driver use" requirement.

arch_sync_dma_for_{cpu,device} are low-level helpers (and not very
great ones at that).  They definitely should not be used by drivers.
They would be very useful building blocks for an IOMMU cache
maintenance API.

Of course the best outcome would be if we could find a way for the MSM
drm driver to just use the DMA API and not deal with the lower level
abstractions.  Do you remember why the driver went for use of the IOMMU
API?


Re: [PATCH v3 06/24] dt-bindings: mediatek: Add binding for mt8192 IOMMU

2020-10-06 Thread Krzysztof Kozlowski
On Tue, Oct 06, 2020 at 12:26:45PM +0800, Yong Wu wrote:
> Hi Krzysztof,
> 
> On Fri, 2020-10-02 at 13:10 +0200, Krzysztof Kozlowski wrote:
> > On Wed, Sep 30, 2020 at 03:06:29PM +0800, Yong Wu wrote:
> > > This patch adds descriptions for the mt8192 IOMMU and SMI.
> > > 
> > > mt8192 is also MTK IOMMU gen2, which uses the ARM Short-Descriptor
> > > translation table format. The M4U-SMI HW diagram is as below:
> > > 
> > >                          EMI
> > >                           |
> > >                          M4U
> > >                           |
> > >        ------------------------------------
> > >                      SMI Common
> > >        ------------------------------------
> > >                           |
> > >   +-------+------+------+------ .. ------+--------+
> > >   |       |      |      |                |        |
> > > larb0   larb1  larb2  larb4    ..     larb19   larb20
> > > disp0   disp1   mdp    vdec           IPE      IPE
> > > 
> > > All the connections are HW fixed, SW can NOT adjust it.
> > > 
> > > mt8192 M4U supports a 0~16GB iova range. We preassign different engines
> > > into different iova ranges:
> > > 
> > > domain-id  module   iova-range          larbs
> > >    0       disp     0 ~ 4G              larb0/1
> > >    1       vcodec   4G ~ 8G             larb4/5/7
> > >    2       cam/mdp  8G ~ 12G            larb2/9/11/13/14/16/17/18/19/20
> > >    3       CCU0     0x4000_ ~ 0x43ff_   larb13: port 9/10
> > >    4       CCU1     0x4400_ ~ 0x47ff_   larb14: port 4/5
> > > 
> > > The iova range for CCU0/1 (camera control unit) is a HW requirement.
> > > 
> > > Signed-off-by: Yong Wu 
> > > Reviewed-by: Rob Herring 
> > > ---
> > >  .../bindings/iommu/mediatek,iommu.yaml|   9 +-
> > >  .../mediatek,smi-common.yaml  |   5 +-
> > >  .../memory-controllers/mediatek,smi-larb.yaml |   3 +-
> > >  include/dt-bindings/memory/mt8192-larb-port.h | 239 ++
> > >  4 files changed, 251 insertions(+), 5 deletions(-)
> > >  create mode 100644 include/dt-bindings/memory/mt8192-larb-port.h
> > 
> > I see it depends on previous patches but does it have to be within one
> > commit? Is it not bisectable? The memory changes/bindings could go via
> > memory tree if this is split.
> 
> Thanks for the view.
> 
> I can split this into two patchsets in the next version, one for iommu
> and the other for smi.
> 
> Only patch [18/24] changes both the iommu and smi code. I don't plan
> to split it, and the smi patch [24/24] doesn't depend on it (won't
> conflict).

It got too late in the cycle, so I am not going to take the 24/24 now.

> 
> Since 18/24 also touches the smi code, I expect it could get an Acked-by
> from you or Matthias, and then Joerg could take it.

Sure. I acked it.

Best regards,
Krzysztof


Re: [PATCH v3 18/24] iommu/mediatek: Support master use iova over 32bit

2020-10-06 Thread Krzysztof Kozlowski
On Wed, Sep 30, 2020 at 03:06:41PM +0800, Yong Wu wrote:
> After extending v7s, our pagetable already supports an iova reaching
> 16GB (34bit). The iova a master gets via dma_alloc_attrs may reach
> 34 bits, but its HW register is still 32bit. Then how to set the
> bit32/bit33 of the iova? This depends on a SMI larb setting (bank_sel).
> 
> We separate the whole 16GB iova space into four banks:
> bank 0: 0~4G; 1: 4~8G; 2: 8~12G; 3: 12~16G.
> The bank number is (iova >> 32).
> 
> We will preassign which bank the larbs belong to. Currently we don't
> have an interface for a master to adjust its bank number.
> 
> Each bank is an iova_region, which is an independent iommu-domain.
> The iova range for each iommu-domain can't cross 4G.
> 
> Signed-off-by: Yong Wu 
> ---
>  drivers/iommu/mtk_iommu.c  | 12 +---
>  drivers/memory/mtk-smi.c   |  7 +++
>  include/soc/mediatek/smi.h |  1 +
>  3 files changed, 17 insertions(+), 3 deletions(-)


For the memory part:
Acked-by: Krzysztof Kozlowski 

Best regards,
Krzysztof


Re: [PATCH v3 02/24] dt-bindings: memory: mediatek: Convert SMI to DT schema

2020-10-06 Thread Krzysztof Kozlowski
On Tue, 6 Oct 2020 at 06:27, Yong Wu  wrote:
>
> On Fri, 2020-10-02 at 13:08 +0200, Krzysztof Kozlowski wrote:
> > On Wed, Sep 30, 2020 at 03:06:25PM +0800, Yong Wu wrote:
> > > Convert MediaTek SMI to DT schema.
> > >
> > > Signed-off-by: Yong Wu 
> > > ---
> > >  .../mediatek,smi-common.txt   |  49 -
> > >  .../mediatek,smi-common.yaml  | 100 ++
> > >  .../memory-controllers/mediatek,smi-larb.txt  |  49 -
> > >  .../memory-controllers/mediatek,smi-larb.yaml |  91 
> > >  4 files changed, 191 insertions(+), 98 deletions(-)
> > >  delete mode 100644 
> > > Documentation/devicetree/bindings/memory-controllers/mediatek,smi-common.txt
> > >  create mode 100644 
> > > Documentation/devicetree/bindings/memory-controllers/mediatek,smi-common.yaml
> > >  delete mode 100644 
> > > Documentation/devicetree/bindings/memory-controllers/mediatek,smi-larb.txt
> > >  create mode 100644 
> > > Documentation/devicetree/bindings/memory-controllers/mediatek,smi-larb.yaml
> ...
> > > +properties:
> > > +  compatible:
> > > +oneOf:
> > > +  - enum:
> > > +  - mediatek,mt2701-smi-common
> > > +  - mediatek,mt2712-smi-common
> > > +  - mediatek,mt6779-smi-common
> > > +  - mediatek,mt8173-smi-common
> > > +  - mediatek,mt8183-smi-common
> > > +
> > > +  - description: for mt7623
> > > +items:
> > > +  - const: mediatek,mt7623-smi-common
> > > +  - const: mediatek,mt2701-smi-common
> > > +
> > > +  reg:
> > > +maxItems: 1
> > > +
> > > +  clocks:
> > > +    description: |
> > > +      apb and smi are mandatory. async is only needed for generation 1
> > > +      SMI HW. gals (global async local sync) is also optional; the SoCs
> > > +      which require gals are mt6779 and mt8183.
> > > +minItems: 2
> > > +maxItems: 4
> > > +    items:
> > > +      - description: apb is the Advanced Peripheral Bus clock, the clock
> > > +          for setting the registers.
> > > +      - description: smi is the clock for transferring data and commands.
> > > +      - description: async is an asynchronous clock; it helps transform
> > > +          the smi clock into the emi clock domain.
> > > +      - description: gals0 is the path0 clock of gals.
> > > +      - description: gals1 is the path1 clock of gals.
> > > +
> > > +  clock-names:
> > > +oneOf:
> > > +  - items:
> > > +  - const: apb
> > > +  - const: smi
> > > +  - items:
> > > +  - const: apb
> > > +  - const: smi
> > > +  - const: async
> > > +  - items:
> > > +  - const: apb
> > > +  - const: smi
> > > +  - const: gals0
> > > +  - const: gals1
> >
> > Similarly to my comment to other properties, this requirement per
> > compatible should be part of the schema within 'if-then'.
>
> I'm not so familiar with this format. Does this have
> "if-then-'else if'-then-else"?

These are mutually exclusive conditions, so you can skip else:
 - if-then
 - if-then
 - if-then
It will be more readable than stacking 'if' under 'else'.
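For illustration only, a shortened sketch of the mutually exclusive if-then style under allOf (not the actual binding; compatibles and clock names copied from the discussion above):

```yaml
allOf:
  - if:
      properties:
        compatible:
          contains:
            enum:
              - mediatek,mt6779-smi-common
              - mediatek,mt8183-smi-common
    then:
      properties:
        clock-names:
          items:
            - const: apb
            - const: smi
            - const: gals0
            - const: gals1
  - if:
      properties:
        compatible:
          contains:
            const: mediatek,mt2701-smi-common
    then:
      properties:
        clock-names:
          items:
            - const: apb
            - const: smi
            - const: async
```

Each if-then matches a disjoint set of compatibles, so no else chain is needed.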

>
> I tried below instead of the clocks segment above:
>
> ===
> if:
>   properties:
> compatible:

Missing contains. Just take an example from some existing schema.

>   enum:
> - mediatek,mt6779-smi-common
> - mediatek,mt8183-smi-common
>
> then:
>   properties:
> clock:
>   items:
>     - description: apb is the clock for setting the register.
>     - description: smi is the clock for transfer data and command.
>     - description: gals0 is the path0 clock of gals (global async
>         local sync).
>     - description: gals1 is the path1 clock of gals.
> clock-names:
>   items:
> - const: apb
> - const: smi
> - const: gals0
> - const: gals1
> else:
>   if:
> properties:
>   compatible:
> contains:
>   enum:
> - mediatek,mt2701-smi-common
>
>   then:
> properties:
>   clocks:
> items:
>   - description: apb is the clock for setting the register.
>   - description: smi is the clock for transfer data and command.
>   - description: async is asynchronous clock, it helps transform
>       the smi clock into the emi clock domain.
>   clock-names:
> items:
>   - const: apb
>   - const: smi
>   - const: async
>   else:
> properties:
>   clocks:
> items:
>   - description: apb is the clock for setting the register.
>   - description: smi is the clock for transfer data and command.
>   clock-names:
> items:
>   - const: apb
>   - const: smi
> 
>
> But I got a warning when dt_binding_check:
>
  CHKDT   Documentation/devicetree/bindings/memory-controllers/mediatek,smi-common.yaml
>