The current code for dma_sync_single_for_device(), when called with dir
set to DMA_FROM_DEVICE, will first invalidate the given region of memory
and then clean+invalidate it as a second step. While that second step
should be harmless, it is an unnecessary no-op that can be avoided.
The analogous code in the Linux kernel (4.18), in arch/arm64/mm/cache.S:
ENTRY(__dma_map_area)
	cmp	w2, #DMA_FROM_DEVICE
	b.eq	__dma_inv_area
	b	__dma_clean_area
ENDPIPROC(__dma_map_area)
is written to perform only one of the two operations, invalidate or
clean, depending on the direction, so change dma_sync_single_for_device()
to behave in the same vein and perform _either_ an invalidate _or_ a
flush of the given region.
Signed-off-by: Andrey Smirnov <[email protected]>
---
arch/arm/cpu/mmu_64.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/arm/cpu/mmu_64.c b/arch/arm/cpu/mmu_64.c
index b6287aec8..69d1b2071 100644
--- a/arch/arm/cpu/mmu_64.c
+++ b/arch/arm/cpu/mmu_64.c
@@ -297,7 +297,8 @@ void dma_sync_single_for_device(dma_addr_t address, size_t size,
 {
 	if (dir == DMA_FROM_DEVICE)
 		v8_inv_dcache_range(address, address + size - 1);
-	v8_flush_dcache_range(address, address + size - 1);
+	else
+		v8_flush_dcache_range(address, address + size - 1);
 }
 
 dma_addr_t dma_map_single(struct device_d *dev, void *ptr, size_t size,
--
2.17.1
_______________________________________________
barebox mailing list
[email protected]
http://lists.infradead.org/mailman/listinfo/barebox