Re: drm: msm: run into issues

2015-07-23 Thread Rob Clark
On Thu, Jul 23, 2015 at 5:02 AM, Stanimir Varbanov
stanimir.varba...@linaro.org wrote:
 Hi Rob,

 I ran into issues with the msm drm driver while implementing a test
 application which uses the v4l2 vidc (venus) decoder driver to decode videos
 and the msm drm driver to display the decoded frames. The v4l2 test
 application uses msm drm as the dmabuf exporter via DRM_IOCTL_MODE_CREATE_DUMB
 and DRM_IOCTL_PRIME_HANDLE_TO_FD from libdrm.

 So far the test app is able to decode and display using dmabuf type
 buffers (to be honest it is slower than using mmap & memcpy, but that is
 another story). The problems start when destroying drm buffers by a call
 to the DRM_IOCTL_MODE_DESTROY_DUMB ioctl and also when closing the dmabuf fd.
 The issues I'm seeing are:

 - the first and major one happens in msm_gem_free_object(), where we
 are trying to free an sg table which has already been freed by the drm_prime
 core in the dmabuf .detach callback. In fact msm_gem_free_object is
 called from the dmabuf .release callback, which happens after
 .detach. I find it weird to call sg_free_table in the .detach callback and did
 not understand why it is there.

so, I think in msm_gem_prime_get_sg_table() we need to create a
duplicate sgt.. since dma_buf_map_attachment() (and therefore
->gem_prime_get_sg_table()) returns ownership of the sgt.  So it is
not correct to be returning our own internal copy.
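
(for reference, here is a paraphrased sketch of what the prime core does with
the sgt it gets from the driver -- not the exact mainline code, and the helper
names below are made up -- which is why handing back msm_obj->sgt ends in a
double free and a wrong-device unmap:)

-
#include <drm/drmP.h>
#include <linux/dma-buf.h>
#include <linux/dma-mapping.h>
#include <linux/err.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

/* Sketch of the exporter-side path in drm_prime.c: the prime core takes
 * ownership of whatever ->gem_prime_get_sg_table() returns. */
static struct sg_table *prime_map_sketch(struct dma_buf_attachment *attach,
		struct drm_gem_object *obj, enum dma_data_direction dir)
{
	struct sg_table *sgt = obj->dev->driver->gem_prime_get_sg_table(obj);

	if (IS_ERR_OR_NULL(sgt))
		return sgt;

	/* mapped for the importer (attach->dev == vidc), not the drm device */
	if (!dma_map_sg(attach->dev, sgt->sgl, sgt->nents, dir)) {
		sg_free_table(sgt);
		kfree(sgt);
		return ERR_PTR(-ENOMEM);
	}

	return sgt;
}

static void prime_detach_sketch(struct dma_buf_attachment *attach,
		struct sg_table *sgt, enum dma_data_direction dir)
{
	dma_unmap_sg(attach->dev, sgt->sgl, sgt->nents, dir);
	sg_free_table(sgt);	/* if this is msm_obj->sgt, it is gone now.. */
	kfree(sgt);		/* ..and msm_gem_free_object() frees it again */
}
-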

 - the second one is again in msm_gem.c::put_pages, in the dma_unmap_sg call.
 Some background: the vidc (venus) driver uses dma-iommu infrastructure copied
 from the qcom downstream kernel to abstract iommu map/unmap [1].
 On the other side, when the drm driver is the exporter it uses swiotlb [2] dma
 map operations, and dma_map_sg will fill sg->dma_address with physical
 addresses. So when vidc calls dma_map_sg the sg dma_address is filled with an
 iova, but then drm calls dma_unmap_sg expecting physical addresses.

hmm.. so drm_gem_map_dma_buf() should dma_map_sg() using attach->dev
(which should be vidc, not drm).  Maybe something unexpected was
happening there since we were incorrectly returning our own sgt (which
had already been dma_map_sg()'d w/ the drm device for cache
maintenance)?
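
(and for context -- an illustrative importer-side sequence; hypothetical
helper, not the actual venus/videobuf2 code -- showing that attach->dev is the
vidc device, so whatever the exporter maps has to be mapped/unmapped with that
device's dma ops:)

-
#include <linux/dma-buf.h>
#include <linux/dma-direction.h>
#include <linux/err.h>

/* The importer attaches its own struct device and gets back an sg table
 * that the exporter mapped for *that* device, so sg_dma_address() holds
 * addresses valid for the importer's IOMMU (or physical addresses if the
 * importer has no IOMMU). */
static struct sg_table *import_dmabuf_sketch(struct device *importer_dev,
		int fd, struct dma_buf_attachment **out_attach)
{
	struct dma_buf *buf = dma_buf_get(fd);
	struct dma_buf_attachment *attach;
	struct sg_table *sgt;

	if (IS_ERR(buf))
		return ERR_CAST(buf);

	attach = dma_buf_attach(buf, importer_dev);	/* attach->dev = vidc */
	if (IS_ERR(attach)) {
		dma_buf_put(buf);
		return ERR_CAST(attach);
	}

	/* the exporter's .map_dma_buf runs here and must map for attach->dev */
	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
	if (IS_ERR(sgt)) {
		dma_buf_detach(buf, attach);
		dma_buf_put(buf);
		return sgt;
	}

	*out_attach = attach;	/* attach->dmabuf keeps the buf reference */
	return sgt;
}
-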

Could you try:

-
diff --git a/drivers/gpu/drm/msm/msm_gem_prime.c b/drivers/gpu/drm/msm/msm_gem_prime.c
index dd7a7ab..831461b 100644
--- a/drivers/gpu/drm/msm/msm_gem_prime.c
+++ b/drivers/gpu/drm/msm/msm_gem_prime.c
@@ -23,8 +23,12 @@
 struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj)
 {
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
-	BUG_ON(!msm_obj->sgt);  /* should have already pinned! */
-	return msm_obj->sgt;
+	int npages = obj->size >> PAGE_SHIFT;
+
+	if (WARN_ON(!msm_obj->pages))  /* should have already pinned! */
+		return NULL;
+
+	return drm_prime_pages_to_sg(msm_obj->pages, npages);
 }

 void *msm_gem_prime_vmap(struct drm_gem_object *obj)
-
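
(design note: with that change msm_obj->sgt stays private to the exporter -- it
remains dma_map_sg()'d against the drm device for cache maintenance -- while
each attachment gets a fresh table from drm_prime_pages_to_sg() that the prime
core can map for attach->dev and then sg_free_table()/kfree() in .detach, so
both the double free and the wrong-device dma_unmap_sg() should go away.)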

 If I didn't express myself well, see the hack below, which I made to
 prevent the kernel crash.

 Any thoughts?

 diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
 index 7482b06cd08f..c4c0f8084b11 100644
 --- a/drivers/gpu/drm/drm_prime.c
 +++ b/drivers/gpu/drm/drm_prime.c
 @@ -155,10 +155,10 @@ static void drm_gem_map_detach(struct dma_buf *dma_buf,
  		if (prime_attach->dir != DMA_NONE)
  			dma_unmap_sg(attach->dev, sgt->sgl, sgt->nents,
  					prime_attach->dir);
 -		sg_free_table(sgt);
 +		/* sg_free_table(sgt); */
  	}
 
 -	kfree(sgt);
 +	/* kfree(sgt); */
  	kfree(prime_attach);
  	attach->priv = NULL;
  }
 diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
 index 9b6220140e99..a78a24ca2fda 100644
 --- a/drivers/gpu/drm/msm/msm_gem.c
 +++ b/drivers/gpu/drm/msm/msm_gem.c
 @@ -116,9 +116,10 @@ static void put_pages(struct drm_gem_object *obj)
  		/* For non-cached buffers, ensure the new pages are clean
  		 * because display controller, GPU, etc. are not coherent:
  		 */
 -		if (msm_obj->flags & (MSM_BO_WC|MSM_BO_UNCACHED))
 +/*		if (msm_obj->flags & (MSM_BO_WC|MSM_BO_UNCACHED))
  			dma_unmap_sg(obj->dev->dev, msm_obj->sgt->sgl,
  					msm_obj->sgt->nents,
  					DMA_BIDIRECTIONAL);
 +*/
  		sg_free_table(msm_obj->sgt);
  		kfree(msm_obj->sgt);

 --
 regards,
 Stan

 [1]
 https://git.linaro.org/landing-teams/working/qualcomm/kernel.git/blob/refs/heads/release/qcomlt-4.0:/arch/arm64/mm/dma-mapping.c#l1315

 [2]
 https://git.linaro.org/landing-teams/working/qualcomm/kernel.git/blob/refs/heads/release/qcomlt-4.0:/arch/arm64/mm/dma-mapping.c#l350


Re: drm: msm: run into issues

2015-07-23 Thread Rob Clark
On Thu, Jul 23, 2015 at 12:23 PM, Stanimir Varbanov
svarba...@mm-sol.com wrote:
 On 07/23/2015 06:59 PM, Rob Clark wrote:
 On Thu, Jul 23, 2015 at 5:02 AM, Stanimir Varbanov
 stanimir.varba...@linaro.org wrote:
 Hi Rob,

 I ran into issues with the msm drm driver while implementing a test
 application which uses the v4l2 vidc (venus) decoder driver to decode videos
 and the msm drm driver to display the decoded frames. The v4l2 test
 application uses msm drm as the dmabuf exporter via DRM_IOCTL_MODE_CREATE_DUMB
 and DRM_IOCTL_PRIME_HANDLE_TO_FD from libdrm.

 So far the test app is able to decode and display using dmabuf type
 buffers (to be honest it is slower than using mmap & memcpy, but that is
 another story). The problems start when destroying drm buffers by a call
 to the DRM_IOCTL_MODE_DESTROY_DUMB ioctl and also when closing the dmabuf fd.
 The issues I'm seeing are:

 - the first and major one happens in msm_gem_free_object(), where we
 are trying to free an sg table which has already been freed by the drm_prime
 core in the dmabuf .detach callback. In fact msm_gem_free_object is
 called from the dmabuf .release callback, which happens after
 .detach. I find it weird to call sg_free_table in the .detach callback and did
 not understand why it is there.

 so, I think in msm_gem_prime_get_sg_table() we need to create a
 duplicate sgt.. since dma_buf_map_attachment() (and therefore
 ->gem_prime_get_sg_table()) returns ownership of the sgt.  So it is
 not correct to be returning our own internal copy.

 - the second one is again in msm_gem.c::put_pages, in the dma_unmap_sg call.
 Some background: the vidc (venus) driver uses dma-iommu infrastructure copied
 from the qcom downstream kernel to abstract iommu map/unmap [1].
 On the other side, when the drm driver is the exporter it uses swiotlb [2] dma
 map operations, and dma_map_sg will fill sg->dma_address with physical
 addresses. So when vidc calls dma_map_sg the sg dma_address is filled with an
 iova, but then drm calls dma_unmap_sg expecting physical addresses.

 hmm.. so drm_gem_map_dma_buf() should dma_map_sg() using attach->dev
 (which should be vidc, not drm).  Maybe something unexpected was
 happening there since we were incorrectly returning our own sgt (which
 had already been dma_map_sg()'d w/ the drm device for cache
 maintenance)?

 In fact this is a consequence of the first issue.


 Could you try:

 -
 diff --git a/drivers/gpu/drm/msm/msm_gem_prime.c b/drivers/gpu/drm/msm/msm_gem_prime.c
 index dd7a7ab..831461b 100644
 --- a/drivers/gpu/drm/msm/msm_gem_prime.c
 +++ b/drivers/gpu/drm/msm/msm_gem_prime.c
 @@ -23,8 +23,12 @@
  struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj)
  {
  	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 -	BUG_ON(!msm_obj->sgt);  /* should have already pinned! */
 -	return msm_obj->sgt;
 +	int npages = obj->size >> PAGE_SHIFT;
 +
 +	if (WARN_ON(!msm_obj->pages))  /* should have already pinned! */
 +		return NULL;
 +
 +	return drm_prime_pages_to_sg(msm_obj->pages, npages);
  }


 Brilliant diff, duplicating the sgt fixed both issues!

 Care to fix this in mainline?

will do, just sent a patch..  I'll include it in 4.2-fixes shortly

BR,
-R

 --
 regards,
 Stan


Re: drm: msm: run into issues

2015-07-23 Thread Stanimir Varbanov
On 07/23/2015 06:59 PM, Rob Clark wrote:
 On Thu, Jul 23, 2015 at 5:02 AM, Stanimir Varbanov
 stanimir.varba...@linaro.org wrote:
 Hi Rob,

 I ran into issues with the msm drm driver while implementing a test
 application which uses the v4l2 vidc (venus) decoder driver to decode videos
 and the msm drm driver to display the decoded frames. The v4l2 test
 application uses msm drm as the dmabuf exporter via DRM_IOCTL_MODE_CREATE_DUMB
 and DRM_IOCTL_PRIME_HANDLE_TO_FD from libdrm.

 So far the test app is able to decode and display using dmabuf type
 buffers (to be honest it is slower than using mmap & memcpy, but that is
 another story). The problems start when destroying drm buffers by a call
 to the DRM_IOCTL_MODE_DESTROY_DUMB ioctl and also when closing the dmabuf fd.
 The issues I'm seeing are:

 - the first and major one happens in msm_gem_free_object(), where we
 are trying to free an sg table which has already been freed by the drm_prime
 core in the dmabuf .detach callback. In fact msm_gem_free_object is
 called from the dmabuf .release callback, which happens after
 .detach. I find it weird to call sg_free_table in the .detach callback and did
 not understand why it is there.
 
 so, I think in msm_gem_prime_get_sg_table() we need to create a
 duplicate sgt.. since dma_buf_map_attachment() (and therefore
 ->gem_prime_get_sg_table()) returns ownership of the sgt.  So it is
 not correct to be returning our own internal copy.
 
 - the second one is again in msm_gem.c::put_pages, in the dma_unmap_sg call.
 Some background: the vidc (venus) driver uses dma-iommu infrastructure copied
 from the qcom downstream kernel to abstract iommu map/unmap [1].
 On the other side, when the drm driver is the exporter it uses swiotlb [2] dma
 map operations, and dma_map_sg will fill sg->dma_address with physical
 addresses. So when vidc calls dma_map_sg the sg dma_address is filled with an
 iova, but then drm calls dma_unmap_sg expecting physical addresses.
 
 hmm.. so drm_gem_map_dma_buf() should dma_map_sg() using attach->dev
 (which should be vidc, not drm).  Maybe something unexpected was
 happening there since we were incorrectly returning our own sgt (which
 had already been dma_map_sg()'d w/ the drm device for cache
 maintenance)?

In fact this is a consequence of the first issue.

 
 Could you try:
 
 -
 diff --git a/drivers/gpu/drm/msm/msm_gem_prime.c b/drivers/gpu/drm/msm/msm_gem_prime.c
 index dd7a7ab..831461b 100644
 --- a/drivers/gpu/drm/msm/msm_gem_prime.c
 +++ b/drivers/gpu/drm/msm/msm_gem_prime.c
 @@ -23,8 +23,12 @@
  struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj)
  {
  	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 -	BUG_ON(!msm_obj->sgt);  /* should have already pinned! */
 -	return msm_obj->sgt;
 +	int npages = obj->size >> PAGE_SHIFT;
 +
 +	if (WARN_ON(!msm_obj->pages))  /* should have already pinned! */
 +		return NULL;
 +
 +	return drm_prime_pages_to_sg(msm_obj->pages, npages);
  }
 

Brilliant diff, duplicating the sgt fixed both issues!

Care to fix this in mainline?

-- 
regards,
Stan