Right now, the BO is mapped as a cached region when ->vmap() is called
and the underlying object is not a dmabuf.
Doing that makes cache management more complicated (you would need to
call dma_map/unmap_sg() on the ->sgt field every time the BO is about
to be passed to the GPU or back to the CPU), so let's map the BO with
writecombine attributes instead (as done in most drivers).

Signed-off-by: Boris Brezillon <boris.brezil...@collabora.com>
---
Found this issue while working on panfrost perfcnt where the GPU dumps
perf counter values in memory and the CPU reads them back in
kernel-space. This patch seems to solve the unpredictable behavior I
had.

I can also go for the other option (call dma_map/unmap_sg() when
needed) if you think that's more appropriate; a rough sketch of that
approach is included below.
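
For reference, here is an untested sketch of what that other option
could look like on the panfrost perfcnt side. It assumes the BO's ->sgt
has already been mapped with dma_map_sg(), and the helper names are
made up for illustration only:

    #include <linux/dma-mapping.h>
    #include <drm/drm_gem_shmem_helper.h>

    /* Hand the dump buffer to the GPU before a perfcnt dump. */
    static void perfcnt_dump_prepare(struct device *dev,
                                     struct drm_gem_shmem_object *bo)
    {
            /* Write back/invalidate CPU caches so GPU writes land in RAM. */
            dma_sync_sg_for_device(dev, bo->sgt->sgl, bo->sgt->nents,
                                   DMA_FROM_DEVICE);
    }

    /* Hand the buffer back to the CPU once the dump is done. */
    static void perfcnt_dump_finish(struct device *dev,
                                    struct drm_gem_shmem_object *bo)
    {
            /* Make sure the CPU doesn't read stale cache lines. */
            dma_sync_sg_for_cpu(dev, bo->sgt->sgl, bo->sgt->nents,
                                DMA_FROM_DEVICE);
    }

It works, but it has to be repeated around every dump, which is why the
writecombine mapping looks simpler to me.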
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 1ee208c2c85e..472ea5d81f82 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -255,7 +255,8 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
        if (obj->import_attach)
                shmem->vaddr = dma_buf_vmap(obj->import_attach->dmabuf);
        else
-               shmem->vaddr = vmap(shmem->pages, obj->size >> PAGE_SHIFT, VM_MAP, PAGE_KERNEL);
+               shmem->vaddr = vmap(shmem->pages, obj->size >> PAGE_SHIFT,
+                                   VM_MAP, pgprot_writecombine(PAGE_KERNEL));
 
        if (!shmem->vaddr) {
                DRM_DEBUG_KMS("Failed to vmap pages\n");
-- 
2.20.1
