Re: [PATCH 5.2 073/131] dma-direct: correct the physical addr in dma_direct_sync_sg_for_cpu/device

2019-08-06 Thread Christoph Hellwig
On Tue, Aug 06, 2019 at 06:04:48PM -0400, Sasha Levin wrote:
> On Tue, Aug 06, 2019 at 01:57:56PM +0100, Robin Murphy wrote:
>> Given that the two commits touch entirely separate files I'm not sure what 
>> the imagined dependency could be :/
>
> From the commit message of 3de433c5b38a ("drm/msm: Use the correct
> dma_sync calls in msm_gem"):
>
>Fixes the combination of two patches:
>
>Fixes: 0036bc73ccbe (drm/msm: stop abusing dma_map/unmap for cache)
>Fixes: 449fa54d6815 (dma-direct: correct the physical addr in 
> dma_direct_sync_sg_for_cpu/device)
>
>> 0036bc73ccbe is indeed not a fix (frankly I'm not convinced it's even a 
>> valid change at all) but even conceptually it bears no relation whatsoever 
>> to the genuine bug fixed by 449fa54d6815.
>
> Given that Rob Clark asked me to drop 0036bc73ccbe not because it's
> irrelevant but because it's potentially dangerous, I did not feel
> confident enough ignoring the statement in the commit message and
> dropped this patch instead.

449fa54d6815 fixes swiotlb misbehaving vs. the API spec for the call,
something that real users on x86 caught.  Rob's fix works around the
fact that msm is badly abusing the DMA API.  So even if both are genuine
bugs, it is pretty clear we need to decide the matter in favor of the
proper users of the API and not the single abuser.


Re: [PATCH 5.2 073/131] dma-direct: correct the physical addr in dma_direct_sync_sg_for_cpu/device

2019-08-06 Thread Sasha Levin

On Tue, Aug 06, 2019 at 01:57:56PM +0100, Robin Murphy wrote:

On 06/08/2019 13:41, Sasha Levin wrote:

On Mon, Aug 05, 2019 at 03:02:40PM +0200, Greg Kroah-Hartman wrote:

[ Upstream commit 449fa54d6815be8c2c1f68fa9dbbae9384a7c03e ]

dma_map_sg() may use a swiotlb bounce buffer when the kernel command line
includes "swiotlb=force" or when the dma_addr is outside the dev->dma_mask
range.  After the device-to-memory DMA completes, the user calls
dma_sync_sg_for_cpu() to sync with the DMA buffer and then copies the
original virtual buffer elsewhere.

So dma_direct_sync_sg_for_cpu() should use the swiotlb physical addr, not
the original physical addr from sg_phys(sg).

dma_direct_sync_sg_for_device() has the same issue; correct it as well.

Fixes: 55897af63091 ("dma-direct: merge swiotlb_dma_ops into the dma_direct code")

Signed-off-by: Fugang Duan 
Reviewed-by: Robin Murphy 
Signed-off-by: Christoph Hellwig 
Signed-off-by: Sasha Levin 


I'm going to drop this one. There's a fix to it upstream, but the fix
also seems to want 0036bc73ccbe ("drm/msm: stop abusing dma_map/unmap for
cache") which we're not taking, so I'm just going to drop this one as
well.


Given that the two commits touch entirely separate files I'm not sure 
what the imagined dependency could be :/



From the commit message of 3de433c5b38a ("drm/msm: Use the correct
dma_sync calls in msm_gem"):

   Fixes the combination of two patches:

   Fixes: 0036bc73ccbe (drm/msm: stop abusing dma_map/unmap for cache)
   Fixes: 449fa54d6815 (dma-direct: correct the physical addr in dma_direct_sync_sg_for_cpu/device)

0036bc73ccbe is indeed not a fix (frankly I'm not convinced it's even 
a valid change at all) but even conceptually it bears no relation 
whatsoever to the genuine bug fixed by 449fa54d6815.


Given that Rob Clark asked me to drop 0036bc73ccbe not because it's
irrelevant but because it's potentially dangerous, I did not feel
confident enough ignoring the statement in the commit message and
dropped this patch instead.

If I'm wrong here, I'd be happy to take these two patches if someone
acks them.

--
Thanks,
Sasha


Re: [PATCH 5.2 073/131] dma-direct: correct the physical addr in dma_direct_sync_sg_for_cpu/device

2019-08-06 Thread Robin Murphy

On 06/08/2019 13:41, Sasha Levin wrote:

On Mon, Aug 05, 2019 at 03:02:40PM +0200, Greg Kroah-Hartman wrote:

[ Upstream commit 449fa54d6815be8c2c1f68fa9dbbae9384a7c03e ]

dma_map_sg() may use a swiotlb bounce buffer when the kernel command line
includes "swiotlb=force" or when the dma_addr is outside the dev->dma_mask
range.  After the device-to-memory DMA completes, the user calls
dma_sync_sg_for_cpu() to sync with the DMA buffer and then copies the
original virtual buffer elsewhere.

So dma_direct_sync_sg_for_cpu() should use the swiotlb physical addr, not
the original physical addr from sg_phys(sg).

dma_direct_sync_sg_for_device() has the same issue; correct it as well.

Fixes: 55897af63091 ("dma-direct: merge swiotlb_dma_ops into the dma_direct code")

Signed-off-by: Fugang Duan 
Reviewed-by: Robin Murphy 
Signed-off-by: Christoph Hellwig 
Signed-off-by: Sasha Levin 


I'm going to drop this one. There's a fix to it upstream, but the fix
also seems to want 0036bc73ccbe ("drm/msm: stop abusing dma_map/unmap for
cache") which we're not taking, so I'm just going to drop this one as
well.


Given that the two commits touch entirely separate files I'm not sure 
what the imagined dependency could be :/


0036bc73ccbe is indeed not a fix (frankly I'm not convinced it's even a 
valid change at all) but even conceptually it bears no relation 
whatsoever to the genuine bug fixed by 449fa54d6815.


Robin.



If someone wants it in the stable trees, please send a tested backport.

--
Thanks,
Sasha


Re: [PATCH 5.2 073/131] dma-direct: correct the physical addr in dma_direct_sync_sg_for_cpu/device

2019-08-06 Thread Sasha Levin

On Mon, Aug 05, 2019 at 03:02:40PM +0200, Greg Kroah-Hartman wrote:

[ Upstream commit 449fa54d6815be8c2c1f68fa9dbbae9384a7c03e ]

dma_map_sg() may use a swiotlb bounce buffer when the kernel command line
includes "swiotlb=force" or when the dma_addr is outside the dev->dma_mask
range.  After the device-to-memory DMA completes, the user calls
dma_sync_sg_for_cpu() to sync with the DMA buffer and then copies the
original virtual buffer elsewhere.

So dma_direct_sync_sg_for_cpu() should use the swiotlb physical addr, not
the original physical addr from sg_phys(sg).

dma_direct_sync_sg_for_device() has the same issue; correct it as well.

Fixes: 55897af63091 ("dma-direct: merge swiotlb_dma_ops into the dma_direct code")
Signed-off-by: Fugang Duan 
Reviewed-by: Robin Murphy 
Signed-off-by: Christoph Hellwig 
Signed-off-by: Sasha Levin 


I'm going to drop this one. There's a fix to it upstream, but the fix
also seems to want 0036bc73ccbe ("drm/msm: stop abusing dma_map/unmap for
cache") which we're not taking, so I'm just going to drop this one as
well.

If someone wants it in the stable trees, please send a tested backport.

--
Thanks,
Sasha


[PATCH 5.2 073/131] dma-direct: correct the physical addr in dma_direct_sync_sg_for_cpu/device

2019-08-05 Thread Greg Kroah-Hartman
[ Upstream commit 449fa54d6815be8c2c1f68fa9dbbae9384a7c03e ]

dma_map_sg() may use a swiotlb bounce buffer when the kernel command line
includes "swiotlb=force" or when the dma_addr is outside the dev->dma_mask
range.  After the device-to-memory DMA completes, the user calls
dma_sync_sg_for_cpu() to sync with the DMA buffer and then copies the
original virtual buffer elsewhere.

So dma_direct_sync_sg_for_cpu() should use the swiotlb physical addr, not
the original physical addr from sg_phys(sg).

dma_direct_sync_sg_for_device() has the same issue; correct it as well.

Fixes: 55897af63091 ("dma-direct: merge swiotlb_dma_ops into the dma_direct code")
Signed-off-by: Fugang Duan 
Reviewed-by: Robin Murphy 
Signed-off-by: Christoph Hellwig 
Signed-off-by: Sasha Levin 
---
 kernel/dma/direct.c | 18 +++---
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 2c2772e9702ab..2c6d51a62251d 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -231,12 +231,14 @@ void dma_direct_sync_sg_for_device(struct device *dev,
 	int i;
 
 	for_each_sg(sgl, sg, nents, i) {
-		if (unlikely(is_swiotlb_buffer(sg_phys(sg))))
-			swiotlb_tbl_sync_single(dev, sg_phys(sg), sg->length,
+		phys_addr_t paddr = dma_to_phys(dev, sg_dma_address(sg));
+
+		if (unlikely(is_swiotlb_buffer(paddr)))
+			swiotlb_tbl_sync_single(dev, paddr, sg->length,
 					dir, SYNC_FOR_DEVICE);
 
 		if (!dev_is_dma_coherent(dev))
-			arch_sync_dma_for_device(dev, sg_phys(sg), sg->length,
+			arch_sync_dma_for_device(dev, paddr, sg->length,
 					dir);
 	}
 }
@@ -268,11 +270,13 @@ void dma_direct_sync_sg_for_cpu(struct device *dev,
 	int i;
 
 	for_each_sg(sgl, sg, nents, i) {
+		phys_addr_t paddr = dma_to_phys(dev, sg_dma_address(sg));
+
 		if (!dev_is_dma_coherent(dev))
-			arch_sync_dma_for_cpu(dev, sg_phys(sg), sg->length, dir);
-
-		if (unlikely(is_swiotlb_buffer(sg_phys(sg))))
-			swiotlb_tbl_sync_single(dev, sg_phys(sg), sg->length, dir,
+			arch_sync_dma_for_cpu(dev, paddr, sg->length, dir);
+
+		if (unlikely(is_swiotlb_buffer(paddr)))
+			swiotlb_tbl_sync_single(dev, paddr, sg->length, dir,
 					SYNC_FOR_CPU);
 	}
 
-- 
2.20.1