Re: [PATCH v19 17/30] drm/panfrost: Fix the error path in panfrost_mmu_map_fault_addr()

2024-04-04 Thread Dmitry Osipenko
On 1/5/24 21:46, Dmitry Osipenko wrote:
> From: Boris Brezillon 
> 
> If some of the pages or the sgt allocation failed, we shouldn't release the
> pages ref we got earlier, otherwise we will end up with unbalanced
> get/put_pages() calls. We should instead leave everything in place
> and let the BO release function deal with extra cleanup when the object
> is destroyed, or let the fault handler try again next time it's called.
> 
> Fixes: 187d2929206e ("drm/panfrost: Add support for GPU heap allocations")
> Cc: 
> Signed-off-by: Boris Brezillon 
> Co-developed-by: Dmitry Osipenko 
> Signed-off-by: Dmitry Osipenko 
> ---
>  drivers/gpu/drm/panfrost/panfrost_mmu.c | 13 +
>  1 file changed, 9 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
> index bd5a0073009d..4a0b4bf03f1a 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
> @@ -502,11 +502,18 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
>   mapping_set_unevictable(mapping);
>  
>   for (i = page_offset; i < page_offset + NUM_FAULT_PAGES; i++) {
> + /* Can happen if the last fault only partially filled this
> +  * section of the pages array before failing. In that case
> +  * we skip already filled pages.
> +  */
> + if (pages[i])
> + continue;
> +
>   pages[i] = shmem_read_mapping_page(mapping, i);
>   if (IS_ERR(pages[i])) {
>   ret = PTR_ERR(pages[i]);
>   pages[i] = NULL;
> - goto err_pages;
> + goto err_unlock;
>   }
>   }
>  
> @@ -514,7 +521,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
>   ret = sg_alloc_table_from_pages(sgt, pages + page_offset,
>   NUM_FAULT_PAGES, 0, SZ_2M, GFP_KERNEL);
>   if (ret)
> - goto err_pages;
> + goto err_unlock;
>  
>   ret = dma_map_sgtable(pfdev->dev, sgt, DMA_BIDIRECTIONAL, 0);
>   if (ret)
> @@ -537,8 +544,6 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
>  
>  err_map:
>   sg_free_table(sgt);
> -err_pages:
> - drm_gem_shmem_put_pages_locked(&bo->base);
>  err_unlock:
>   dma_resv_unlock(obj->resv);
>  err_bo:

Applied to misc-fixes

Forgot that this patch doesn't depend on others in this series, sorry
for not doing it earlier

-- 
Best regards,
Dmitry



Re: [PATCH v19 17/30] drm/panfrost: Fix the error path in panfrost_mmu_map_fault_addr()

2024-01-26 Thread AngeloGioacchino Del Regno

On 05/01/24 19:46, Dmitry Osipenko wrote:

From: Boris Brezillon 

If some of the pages or the sgt allocation failed, we shouldn't release the
pages ref we got earlier, otherwise we will end up with unbalanced
get/put_pages() calls. We should instead leave everything in place
and let the BO release function deal with extra cleanup when the object
is destroyed, or let the fault handler try again next time it's called.

Fixes: 187d2929206e ("drm/panfrost: Add support for GPU heap allocations")
Cc: 
Signed-off-by: Boris Brezillon 
Co-developed-by: Dmitry Osipenko 
Signed-off-by: Dmitry Osipenko 


Reviewed-by: AngeloGioacchino Del Regno 





Re: [PATCH v19 17/30] drm/panfrost: Fix the error path in panfrost_mmu_map_fault_addr()

2024-01-25 Thread Dmitry Osipenko
On 1/26/24 00:41, Dmitry Osipenko wrote:
> On 1/5/24 21:46, Dmitry Osipenko wrote:
>>  for (i = page_offset; i < page_offset + NUM_FAULT_PAGES; i++) {
>> +/* Can happen if the last fault only partially filled this
>> + * section of the pages array before failing. In that case
>> + * we skip already filled pages.
>> + */
>> +if (pages[i])
>> +continue;
>> +
>>  pages[i] = shmem_read_mapping_page(mapping, i);
> 
> Although, shmem_read_mapping_page() should return the same page if it
> was already allocated, shouldn't it? I.e., there was no bug here and the
> fixes/stable tags are not needed.

Scratch that, I forgot that the patch is about the unbalanced
get/put_pages

-- 
Best regards,
Dmitry



Re: [PATCH v19 17/30] drm/panfrost: Fix the error path in panfrost_mmu_map_fault_addr()

2024-01-25 Thread Dmitry Osipenko
On 1/5/24 21:46, Dmitry Osipenko wrote:
>   for (i = page_offset; i < page_offset + NUM_FAULT_PAGES; i++) {
> + /* Can happen if the last fault only partially filled this
> +  * section of the pages array before failing. In that case
> +  * we skip already filled pages.
> +  */
> + if (pages[i])
> + continue;
> +
>   pages[i] = shmem_read_mapping_page(mapping, i);

Although, shmem_read_mapping_page() should return the same page if it
was already allocated, shouldn't it? I.e., there was no bug here and the
fixes/stable tags are not needed.

-- 
Best regards,
Dmitry




Re: [PATCH v19 17/30] drm/panfrost: Fix the error path in panfrost_mmu_map_fault_addr()

2024-01-25 Thread Steven Price
On 05/01/2024 18:46, Dmitry Osipenko wrote:
> From: Boris Brezillon 
> 
> If some of the pages or the sgt allocation failed, we shouldn't release the
> pages ref we got earlier, otherwise we will end up with unbalanced
> get/put_pages() calls. We should instead leave everything in place
> and let the BO release function deal with extra cleanup when the object
> is destroyed, or let the fault handler try again next time it's called.
> 
> Fixes: 187d2929206e ("drm/panfrost: Add support for GPU heap allocations")
> Cc: 
> Signed-off-by: Boris Brezillon 
> Co-developed-by: Dmitry Osipenko 
> Signed-off-by: Dmitry Osipenko 

Reviewed-by: Steven Price 

> ---
>  drivers/gpu/drm/panfrost/panfrost_mmu.c | 13 +
>  1 file changed, 9 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
> index bd5a0073009d..4a0b4bf03f1a 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
> @@ -502,11 +502,18 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
>   mapping_set_unevictable(mapping);
>  
>   for (i = page_offset; i < page_offset + NUM_FAULT_PAGES; i++) {
> + /* Can happen if the last fault only partially filled this
> +  * section of the pages array before failing. In that case
> +  * we skip already filled pages.
> +  */
> + if (pages[i])
> + continue;
> +
>   pages[i] = shmem_read_mapping_page(mapping, i);
>   if (IS_ERR(pages[i])) {
>   ret = PTR_ERR(pages[i]);
>   pages[i] = NULL;
> - goto err_pages;
> + goto err_unlock;
>   }
>   }
>  
> @@ -514,7 +521,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
>   ret = sg_alloc_table_from_pages(sgt, pages + page_offset,
>   NUM_FAULT_PAGES, 0, SZ_2M, GFP_KERNEL);
>   if (ret)
> - goto err_pages;
> + goto err_unlock;
>  
>   ret = dma_map_sgtable(pfdev->dev, sgt, DMA_BIDIRECTIONAL, 0);
>   if (ret)
> @@ -537,8 +544,6 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
>  
>  err_map:
>   sg_free_table(sgt);
> -err_pages:
> - drm_gem_shmem_put_pages_locked(&bo->base);
>  err_unlock:
>   dma_resv_unlock(obj->resv);
>  err_bo:



[PATCH v19 17/30] drm/panfrost: Fix the error path in panfrost_mmu_map_fault_addr()

2024-01-05 Thread Dmitry Osipenko
From: Boris Brezillon 

If some of the pages or the sgt allocation failed, we shouldn't release the
pages ref we got earlier, otherwise we will end up with unbalanced
get/put_pages() calls. We should instead leave everything in place
and let the BO release function deal with extra cleanup when the object
is destroyed, or let the fault handler try again next time it's called.

Fixes: 187d2929206e ("drm/panfrost: Add support for GPU heap allocations")
Cc: 
Signed-off-by: Boris Brezillon 
Co-developed-by: Dmitry Osipenko 
Signed-off-by: Dmitry Osipenko 
---
 drivers/gpu/drm/panfrost/panfrost_mmu.c | 13 +
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index bd5a0073009d..4a0b4bf03f1a 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -502,11 +502,18 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
mapping_set_unevictable(mapping);
 
for (i = page_offset; i < page_offset + NUM_FAULT_PAGES; i++) {
+   /* Can happen if the last fault only partially filled this
+* section of the pages array before failing. In that case
+* we skip already filled pages.
+*/
+   if (pages[i])
+   continue;
+
pages[i] = shmem_read_mapping_page(mapping, i);
if (IS_ERR(pages[i])) {
ret = PTR_ERR(pages[i]);
pages[i] = NULL;
-   goto err_pages;
+   goto err_unlock;
}
}
 
@@ -514,7 +521,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
ret = sg_alloc_table_from_pages(sgt, pages + page_offset,
NUM_FAULT_PAGES, 0, SZ_2M, GFP_KERNEL);
if (ret)
-   goto err_pages;
+   goto err_unlock;
 
ret = dma_map_sgtable(pfdev->dev, sgt, DMA_BIDIRECTIONAL, 0);
if (ret)
@@ -537,8 +544,6 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 
 err_map:
sg_free_table(sgt);
-err_pages:
-   drm_gem_shmem_put_pages_locked(&bo->base);
 err_unlock:
dma_resv_unlock(obj->resv);
 err_bo:
-- 
2.43.0