On Mon, Apr 25, 2022 at 10:30:33AM +0200, Jan Beulich wrote:
> Recent changes (likely 5fafa6cf529a ["AMD/IOMMU: have callers specify
> the target level for page table walks"]) have made Coverity notice a
> shift count in iommu_pde_from_dfn() which might in theory grow too
> large. While this isn't a problem in practice, address the concern
> nevertheless to not leave dangling breakage in case very large
> superpages would be enabled at some point.
> 
> Coverity ID: 1504264
> 
> While there also address a similar issue in set_iommu_ptes_present().
> It's not clear to me why Coverity hasn't spotted that one.
> 
> Signed-off-by: Jan Beulich <jbeul...@suse.com>

Reviewed-by: Roger Pau Monné <roger....@citrix.com>

> ---
> v4: New.
> 
> --- a/xen/drivers/passthrough/amd/iommu_map.c
> +++ b/xen/drivers/passthrough/amd/iommu_map.c
> @@ -89,11 +89,11 @@ static unsigned int set_iommu_ptes_prese
>                                             bool iw, bool ir)
>  {
>      union amd_iommu_pte *table, *pde;
> -    unsigned int page_sz, flush_flags = 0;
> +    unsigned long page_sz = 1UL << (PTE_PER_TABLE_SHIFT * (pde_level - 1));

Seeing the discussion in Andrew's reply, nr_pages might be a more
appropriate name, while still being quite short.

I'm not making my Rb conditional on that change, though.

Thanks, Roger.
