PCIe transfer issue with fsl_pamu

2021-07-17 Thread Thomas JOURDAN
Hi

I am trying to run the radeon and/or amdgpu DRM driver on a Freescale
T2080RDB with an AMD E8860 GPU. I am using the linux-qoriq 5.10 kernel.

On modprobe, the driver tests the GPU command ring: it fills a command
buffer in system memory, then the GPU fetches the buffer and executes it.
As a result, a scratch register on the GPU is updated and the GPU advances
its read pointer into the ring.

Without fsl_pamu enabled, this test passes without triggering any
access error, which indicates all address translations are set up properly
(outbound, inbound and LAW).

But with fsl_pamu enabled, the test fails: the command buffer isn't
executed, and neither the scratch value nor the read pointer into the ring
is updated. Moreover, the fsl_pamu driver doesn't report any access error.

My guess is a cache coherency issue: the GPU doesn't fetch the proper
values from system memory, hence nothing is executed. However, it's only a
guess, as I have no expertise on the PAMU setup for this processor.

Any suggestions?

Regards
Thomas
___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu

Re: [PATCH v2 2/5] iommu: Implement of_iommu_get_resv_regions()

2021-07-17 Thread Dmitry Osipenko
On 16.07.2021 17:41, Rob Herring wrote:
> On Fri, Jul 2, 2021 at 8:05 AM Dmitry Osipenko  wrote:
>>
>> On 23.04.2021 19:32, Thierry Reding wrote:
>>> +void of_iommu_get_resv_regions(struct device *dev, struct list_head *list)
>>> +{
>>> +	struct of_phandle_iterator it;
>>> +	int err;
>>> +
>>> +	of_for_each_phandle(&it, err, dev->of_node, "memory-region",
>>> +			    "#memory-region-cells", 0) {
>>> +		struct iommu_resv_region *region;
>>> +		struct of_phandle_args args;
>>> +		struct resource res;
>>> +
>>> +		args.args_count = of_phandle_iterator_args(&it, args.args,
>>> +							   MAX_PHANDLE_ARGS);
>>> +
>>> +		err = of_address_to_resource(it.node, 0, &res);
>>> +		if (err < 0) {
>>> +			dev_err(dev, "failed to parse memory region %pOF: %d\n",
>>> +				it.node, err);
>>> +			continue;
>>> +		}
>>> +
>>> +		if (args.args_count > 0) {
>>> +			/*
>>> +			 * Active memory regions are expected to be accessed by
>>> +			 * hardware during boot and must therefore have an
>>> +			 * identity mapping created prior to the driver taking
>>> +			 * control of the hardware. This ensures that
>>> +			 * non-quiescent hardware doesn't cause IOMMU faults
>>> +			 * during boot.
>>> +			 */
>>> +			if (args.args[0] & MEMORY_REGION_IDENTITY_MAPPING) {
>>> +				region = iommu_alloc_resv_region(res.start,
>>> +						resource_size(&res),
>>> +						IOMMU_READ | IOMMU_WRITE,
>>> +						IOMMU_RESV_DIRECT_RELAXABLE);
>>> +				if (!region)
>>> +					continue;
>>> +
>>> +				list_add_tail(&region->list, list);
>>> +			}
>>> +		}
>>> +	}
>>> +}
>>> +EXPORT_SYMBOL(of_iommu_get_resv_regions);
>>
>> Any reason why this is not EXPORT_SYMBOL_GPL? I'm curious what is the
>> logic behind the OF symbols in general since it looks like half of them
>> are GPL.
> 
> Generally, new ones are _GPL. Old ones probably predate _GPL.
> 
> This one is up to the IOMMU maintainers.

Thank you.


Re: [PATCH v2] dma-mapping: use vmalloc_to_page for vmalloc addresses

2021-07-17 Thread Roman Skakun
> We can merge this patch and create a new one for
> xen_swiotlb_free_coherent() later.
> Yeah, no worries, I didn't know that exposing dma_common_vaddr_to_page
> was problematic.
>
> This patch is fine by me.

Good, I agree too. Waiting for Christoph.

On Fri, Jul 16, 2021 at 18:29, Stefano Stabellini wrote:
>
> On Fri, 16 Jul 2021, Roman Skakun wrote:
> > > Technically this looks good.  But given that exposing a helper
> > > that does either vmalloc_to_page or virt_to_page is one of the
> > > never ending MM discussions I don't want to get into that discussion
> > > and just keep it local in the DMA code.
> > >
> > > Are you fine with me applying this version?
> >
> > Looks good to me, thanks!
> > But Stefano asked me about using the new helper in
> > xen_swiotlb_free_coherent(),
> > and I created a patch according to that suggestion.
> >
> > We can merge this patch and create a new one for
> > xen_swiotlb_free_coherent() later.
>
> Yeah, no worries, I didn't know that exposing dma_common_vaddr_to_page
> was problematic.
>
> This patch is fine by me.
>
>
> > On Fri, Jul 16, 2021 at 12:35, Christoph Hellwig wrote:
> > >
> > > Technically this looks good.  But given that exposing a helper
> > > that does either vmalloc_to_page or virt_to_page is one of the
> > > never ending MM discussions I don't want to get into that discussion
> > > and just keep it local in the DMA code.
> > >
> > > Are you fine with me applying this version?
> > >
> > > ---
> > > From 40ac971eab89330d6153e7721e88acd2d98833f9 Mon Sep 17 00:00:00 2001
> > > From: Roman Skakun 
> > > Date: Fri, 16 Jul 2021 11:39:34 +0300
> > > Subject: dma-mapping: handle vmalloc addresses in
> > >  dma_common_{mmap,get_sgtable}
> > >
> > > xen-swiotlb can use vmalloc backed addresses for dma coherent allocations
> > > and uses the common helpers.  Properly handle them to unbreak Xen on
> > > ARM platforms.
> > >
> > > Fixes: 1b65c4e5a9af ("swiotlb-xen: use xen_alloc/free_coherent_pages")
> > > Signed-off-by: Roman Skakun 
> > > Reviewed-by: Andrii Anisov 
> > > [hch: split the patch, renamed the helpers]
> > > Signed-off-by: Christoph Hellwig 
> > > ---
> > >  kernel/dma/ops_helpers.c | 12 ++--
> > >  1 file changed, 10 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/kernel/dma/ops_helpers.c b/kernel/dma/ops_helpers.c
> > > index 910ae69cae77..af4a6ef48ce0 100644
> > > --- a/kernel/dma/ops_helpers.c
> > > +++ b/kernel/dma/ops_helpers.c
> > > @@ -5,6 +5,13 @@
> > >   */
> > >  #include 
> > >
> > > +static struct page *dma_common_vaddr_to_page(void *cpu_addr)
> > > +{
> > > +   if (is_vmalloc_addr(cpu_addr))
> > > +   return vmalloc_to_page(cpu_addr);
> > > +   return virt_to_page(cpu_addr);
> > > +}
> > > +
> > >  /*
> > >   * Create scatter-list for the already allocated DMA buffer.
> > >   */
> > > @@ -12,7 +19,7 @@ int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
> > >  void *cpu_addr, dma_addr_t dma_addr, size_t size,
> > >  unsigned long attrs)
> > >  {
> > > -   struct page *page = virt_to_page(cpu_addr);
> > > +   struct page *page = dma_common_vaddr_to_page(cpu_addr);
> > > int ret;
> > >
> > > ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
> > > @@ -32,6 +39,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
> > > unsigned long user_count = vma_pages(vma);
> > > unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
> > > unsigned long off = vma->vm_pgoff;
> > > +   struct page *page = dma_common_vaddr_to_page(cpu_addr);
> > > int ret = -ENXIO;
> > >
> > > vma->vm_page_prot = dma_pgprot(dev, vma->vm_page_prot, attrs);
> > > @@ -43,7 +51,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
> > > return -ENXIO;
> > >
> > > return remap_pfn_range(vma, vma->vm_start,
> > > -   page_to_pfn(virt_to_page(cpu_addr)) + vma->vm_pgoff,
> > > +   page_to_pfn(page) + vma->vm_pgoff,
> > > user_count << PAGE_SHIFT, vma->vm_page_prot);
> > >  #else
> > > return -ENXIO;
> > > --
> > > 2.30.2
> > >
> >
> >
> > --
> > Best Regards, Roman.
> >



-- 
Best Regards, Roman.