01.02.2019 16:28, Thierry Reding wrote:
> From: Thierry Reding <tred...@nvidia.com>
> 
> On Tegra186 and later, the ARM SMMU provides an input address space that
> is 48 bits wide. However, memory clients can only address up to 40 bits.
> If the geometry is used as-is, allocations of IOVA space can end up in a
> region that cannot be addressed by the memory clients.
> 
> To fix this, restrict the IOVA space to the DMA mask of the host1x
> device. Note that, technically, the IOVA space needs to be restricted to
> the intersection of the DMA masks for all clients that are attached to
> the IOMMU domain. In practice using the DMA mask of the host1x device is
> sufficient because all host1x clients share the same DMA mask.
> 
> Signed-off-by: Thierry Reding <tred...@nvidia.com>
> ---
>  drivers/gpu/drm/tegra/drm.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/tegra/drm.c b/drivers/gpu/drm/tegra/drm.c
> index 271c7a5fc954..0c5f1e6a0446 100644
> --- a/drivers/gpu/drm/tegra/drm.c
> +++ b/drivers/gpu/drm/tegra/drm.c
> @@ -136,11 +136,12 @@ static int tegra_drm_load(struct drm_device *drm, unsigned long flags)
>  
>       if (tegra->domain) {
>               u64 carveout_start, carveout_end, gem_start, gem_end;
> +             u64 dma_mask = dma_get_mask(&device->dev);
>               dma_addr_t start, end;
>               unsigned long order;
>  
> -             start = tegra->domain->geometry.aperture_start;
> -             end = tegra->domain->geometry.aperture_end;
> +             start = tegra->domain->geometry.aperture_start & dma_mask;
> +             end = tegra->domain->geometry.aperture_end & dma_mask;
>  
>               gem_start = start;
>               gem_end = end - CARVEOUT_SZ;
> 

For older Tegras:

Reviewed-by: Dmitry Osipenko <dig...@gmail.com>
Tested-by: Dmitry Osipenko <dig...@gmail.com>
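
For reference, a minimal userspace sketch of the arithmetic the patch
relies on. The 48-bit aperture and 40-bit client mask are assumptions
taken from the commit message, not values read back from the driver:

/* Sketch only, not kernel code: shows how masking a 48-bit SMMU
 * aperture with a 40-bit DMA mask clamps the usable IOVA range. */
#include <stdint.h>
#include <stdio.h>

/* Assumed stand-in for the kernel's DMA_BIT_MASK() */
#define MASK(n) (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))

int main(void)
{
	uint64_t aperture_start = 0;
	uint64_t aperture_end = MASK(48);  /* 48-bit SMMU input space */
	uint64_t dma_mask = MASK(40);      /* what host1x clients can address */

	uint64_t start = aperture_start & dma_mask;
	uint64_t end = aperture_end & dma_mask;

	/* end comes out as 0xffffffffff, the top of the 40-bit range */
	printf("IOVA range: %#llx-%#llx\n",
	       (unsigned long long)start,
	       (unsigned long long)end);
	return 0;
}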