On Wed, Jul 06, 2022 at 09:50:27PM +0200, Andrea Parri (Microsoft) wrote:
> @@ -305,6 +306,21 @@ void *dma_direct_alloc(struct device *dev, size_t size,
>               ret = page_address(page);
>               if (dma_set_decrypted(dev, ret, size))
>                       goto out_free_pages;
> +#ifdef CONFIG_HAS_IOMEM
> +             /*
> +              * Remap the pages in the unencrypted physical address space
> +              * when dma_unencrypted_base is set (e.g., for Hyper-V AMD
> +              * SEV-SNP isolated guests).
> +              */
> +             if (dma_unencrypted_base) {
> +                     phys_addr_t ret_pa = virt_to_phys(ret);
> +
> +                     ret_pa += dma_unencrypted_base;
> +                     ret = memremap(ret_pa, size, MEMREMAP_WB);
> +                     if (!ret)
> +                             goto out_encrypt_pages;
> +             }
> +#endif


So:

This needs to move into dma_set_decrypted, otherwise we don't handle the
dma_alloc_pages case (never mind that the open-coded ifdef block is pretty
unreadable).
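
Roughly what I have in mind for the alloc side, completely untested; the
void ** argument is just my way of handing the remapped address back and
isn't in the current code:

static int dma_set_decrypted(struct device *dev, void **vaddr, size_t size)
{
	void *remap;
	int ret;

	if (!force_dma_unencrypted(dev))
		return 0;

	ret = set_memory_decrypted((unsigned long)*vaddr, PFN_UP(size));
	if (ret)
		return ret;

	/*
	 * Remap in the unencrypted physical address space when
	 * dma_unencrypted_base is set (e.g. Hyper-V AMD SEV-SNP isolated
	 * guests) and hand the new address back to the caller.
	 */
	if (IS_ENABLED(CONFIG_HAS_IOMEM) && dma_unencrypted_base) {
		remap = memremap(virt_to_phys(*vaddr) + dma_unencrypted_base,
				 size, MEMREMAP_WB);
		if (!remap)
			return -ENOMEM;
		*vaddr = remap;
	}
	return 0;
}

Callers would then do dma_set_decrypted(dev, &ret, size), so
dma_direct_alloc_pages picks it up as well.  I'm glossing over the
unwinding for the decrypted-but-not-remapped error case here.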

The remap logic in the quoted hunk also largely duplicates the code in
swiotlb.  So I think what we need here is a low-level helper that does the
set_memory_decrypted and the memremap.  I'm not quite sure where it should
go, but maybe some of the people involved with memory encryption have good
ideas.  dma_unencrypted_base should go with it, and then both swiotlb and
dma-direct can call it.
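
Just to illustrate what I mean, a bare sketch; the name and where it lives
are made up, and the error handling when the memremap fails after the
decrypt is hand-waved away:

/*
 * Decrypt @vaddr and, if dma_unencrypted_base is set, remap it in the
 * unencrypted physical address space.  Returns the address to use from
 * now on, or NULL on failure.
 */
void *memory_decrypted_remap(void *vaddr, size_t size)
{
	if (set_memory_decrypted((unsigned long)vaddr, PFN_UP(size)))
		return NULL;

	if (!dma_unencrypted_base)
		return vaddr;

	return memremap(virt_to_phys(vaddr) + dma_unencrypted_base,
			size, MEMREMAP_WB);
}

Then swiotlb and dma-direct both just call that instead of open coding the
ifdef blocks.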

> +     /*
> +      * If dma_unencrypted_base is set, the virtual address returned by
> +      * dma_direct_alloc() is in the vmalloc address range.
> +      */
> +     if (!dma_unencrypted_base && is_vmalloc_addr(cpu_addr)) {
>               vunmap(cpu_addr);
>       } else {
>               if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_CLEAR_UNCACHED))
>                       arch_dma_clear_uncached(cpu_addr, size);
> +#ifdef CONFIG_HAS_IOMEM
> +             if (dma_unencrypted_base) {
> +                     memunmap(cpu_addr);
> +                     /* re-encrypt the pages using the original address */
> +                     cpu_addr = page_address(pfn_to_page(PHYS_PFN(
> +                                     dma_to_phys(dev, dma_addr))));
> +             }
> +#endif
>               if (dma_set_encrypted(dev, cpu_addr, size))

Same on the unmap side.  It might also be worth looking into reordering
the checks in some form instead of that raw dma_unencrypted_base check
before the unmap.
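
For the unmap side the counterpart could look something like this, again
just a sketch with an invented name, where the caller passes the original
physical address (dma_to_phys(dev, dma_addr) in dma-direct); whether that
lets the checks in dma_direct_free be reordered more cleanly is worth
checking:

/*
 * Undo memory_decrypted_remap(): drop the remapping if there is one and
 * re-encrypt the pages at their original physical address.
 */
int memory_encrypted_unmap(void *vaddr, phys_addr_t phys, size_t size)
{
	if (dma_unencrypted_base) {
		memunmap(vaddr);
		/* re-encrypt using the original, unremapped address */
		vaddr = page_address(pfn_to_page(PHYS_PFN(phys)));
	}
	return set_memory_encrypted((unsigned long)vaddr, PFN_UP(size));
}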