Hi Jacopo,

On 09/04/18 08:25, jacopo mondi wrote:
Hi Robin, Laurent,
     a long time has passed, sorry about this.

On Wed, Nov 15, 2017 at 01:38:23PM +0000, Robin Murphy wrote:
On 14/11/17 17:08, Jacopo Mondi wrote:
On the SH4 architecture, with the SPARSEMEM memory model, translating a
page to a pfn hangs the CPU. Postpone the page-to-pfn translation until
after the dma_mmap_from_dev_coherent() call, since when that call
succeeds the translation is not needed at all.

This patch was suggested by Laurent Pinchart, who is working on
submitting a proper fix mainline. Not sending for inclusion at the moment.

Y'know, I think this patch does have some merit by itself - until we know
that cpu_addr *doesn't* represent some device-private memory which is not
guaranteed to be backed by a struct page, calling virt_to_page() on it is
arguably semantically incorrect, even if it might happen to be benign in
most cases.

I still need to carry this patch in my trees to have working DMA memory
on SH4 platforms. My understanding from your comment is that there may
be a way forward for this patch; do you still think the same? Do you
have any suggestions on how to improve it for eventual inclusion?

As before, the change itself does seem reasonable; it might be worth rewording the commit message in more general terms rather than making it sound like an SH-specific workaround (which I really don't think it is), but otherwise I'd say just repost it as a non-RFC patch.

Suggested-by: Laurent Pinchart <laurent.pinch...@ideasonboard.com>
Signed-off-by: Jacopo Mondi <jacopo+rene...@jmondi.org>
---
  drivers/base/dma-mapping.c | 3 ++-
  1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/base/dma-mapping.c b/drivers/base/dma-mapping.c
index e584edd..73d64d3 100644
--- a/drivers/base/dma-mapping.c
+++ b/drivers/base/dma-mapping.c
@@ -227,8 +227,8 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
        unsigned long user_count = vma_pages(vma);
        unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
-       unsigned long pfn = page_to_pfn(virt_to_page(cpu_addr));
        unsigned long off = vma->vm_pgoff;
+       unsigned long pfn;

        vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);

@@ -236,6 +236,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
                return ret;

        if (off < count && user_count <= (count - off)) {
+               pfn = page_to_pfn(virt_to_page(cpu_addr));
                ret = remap_pfn_range(vma, vma->vm_start,
                                      pfn + off,
                                      user_count << PAGE_SHIFT,

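For context, with both hunks applied the relevant part of dma_common_mmap() ends up roughly as below. This is a sketch reconstructed from the hunks above, not the full upstream function; the surrounding declarations and config guards may differ across kernel versions:

```c
int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
		    void *cpu_addr, dma_addr_t dma_addr, size_t size)
{
	int ret = -ENXIO;
	unsigned long user_count = vma_pages(vma);
	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
	unsigned long off = vma->vm_pgoff;
	unsigned long pfn;

	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);

	/*
	 * If the allocation came from per-device coherent memory, the
	 * helper maps it itself and returns through ret; in that case
	 * cpu_addr need not be backed by a struct page at all.
	 */
	if (dma_mmap_from_dev_coherent(dev, vma, cpu_addr, size, &ret))
		return ret;

	if (off < count && user_count <= (count - off)) {
		/*
		 * Only reached when the coherent-memory helper did not
		 * handle the mapping, so cpu_addr is kernel memory and
		 * the virt_to_page()/page_to_pfn() lookup is safe.
		 */
		pfn = page_to_pfn(virt_to_page(cpu_addr));
		ret = remap_pfn_range(vma, vma->vm_start,
				      pfn + off,
				      user_count << PAGE_SHIFT,
				      vma->vm_page_prot);
	}

	return ret;
}
```

The key point of the reordering is visible here: the pfn lookup now happens only on the fallback path, after dma_mmap_from_dev_coherent() has had the chance to claim the mapping.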
iommu mailing list