Re: [RFC PATCH 2/2] dma-direct: Fix dma_direct_{alloc,free}() for Hyper-V IVMs

2022-07-07 Thread Andrea Parri
> > @@ -305,6 +306,21 @@ void *dma_direct_alloc(struct device *dev, size_t size,
> > ret = page_address(page);
> > if (dma_set_decrypted(dev, ret, size))
> > goto out_free_pages;
> > +#ifdef CONFIG_HAS_IOMEM
> > +   /*
> > +* Remap the pages in the unencrypted physical address space
> > +* when dma_unencrypted_base is set (e.g., for Hyper-V AMD
> > +* SEV-SNP isolated guests).
> > +*/
> > +   if (dma_unencrypted_base) {
> > +   phys_addr_t ret_pa = virt_to_phys(ret);
> > +
> > +   ret_pa += dma_unencrypted_base;
> > +   ret = memremap(ret_pa, size, MEMREMAP_WB);
> > +   if (!ret)
> > +   goto out_encrypt_pages;
> > +   }
> > +#endif
> 
> 
> So:
> 
> this needs to move into dma_set_decrypted, otherwise we don't handle
> the dma_alloc_pages case (never mind that this is pretty unreadable).
> 
> Which then again largely duplicates the code in swiotlb.  So I think
> what we need here is a low-level helper that does the
> set_memory_decrypted and memremap.  I'm not quite sure where it
> should go, but maybe some of the people involved with memory
> encryption might have good ideas.  unencrypted_base should go with
> it and then both swiotlb and dma-direct can call it.

Agreed, will look into this more (other people's ideas welcome).


> > +   /*
> > +* If dma_unencrypted_base is set, the virtual address returned by
> > +* dma_direct_alloc() is in the vmalloc address range.
> > +*/
> > +   if (!dma_unencrypted_base && is_vmalloc_addr(cpu_addr)) {
> > vunmap(cpu_addr);
> > } else {
> > if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_CLEAR_UNCACHED))
> > arch_dma_clear_uncached(cpu_addr, size);
> > +#ifdef CONFIG_HAS_IOMEM
> > +   if (dma_unencrypted_base) {
> > +   memunmap(cpu_addr);
> > +   /* re-encrypt the pages using the original address */
> > +   cpu_addr = page_address(pfn_to_page(PHYS_PFN(
> > +   dma_to_phys(dev, dma_addr))));
> > +   }
> > +#endif
> > if (dma_set_encrypted(dev, cpu_addr, size))
> 
> Same on the unmap side.  It might also be worth looking into reordering
> the checks in some form instead of that raw dma_unencrypted_base check
> before the unmap.

Got it.

Thanks,
  Andrea
___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


Re: [RFC PATCH 2/2] dma-direct: Fix dma_direct_{alloc,free}() for Hyper-V IVMs

2022-07-06 Thread Christoph Hellwig
On Wed, Jul 06, 2022 at 09:50:27PM +0200, Andrea Parri (Microsoft) wrote:
> @@ -305,6 +306,21 @@ void *dma_direct_alloc(struct device *dev, size_t size,
>   ret = page_address(page);
>   if (dma_set_decrypted(dev, ret, size))
>   goto out_free_pages;
> +#ifdef CONFIG_HAS_IOMEM
> + /*
> +  * Remap the pages in the unencrypted physical address space
> +  * when dma_unencrypted_base is set (e.g., for Hyper-V AMD
> +  * SEV-SNP isolated guests).
> +  */
> + if (dma_unencrypted_base) {
> + phys_addr_t ret_pa = virt_to_phys(ret);
> +
> + ret_pa += dma_unencrypted_base;
> + ret = memremap(ret_pa, size, MEMREMAP_WB);
> + if (!ret)
> + goto out_encrypt_pages;
> + }
> +#endif


So:

this needs to move into dma_set_decrypted, otherwise we don't handle
the dma_alloc_pages case (never mind that this is pretty unreadable).

Which then again largely duplicates the code in swiotlb.  So I think
what we need here is a low-level helper that does the
set_memory_decrypted and memremap.  I'm not quite sure where it
should go, but maybe some of the people involved with memory
encryption might have good ideas.  unencrypted_base should go with
it and then both swiotlb and dma-direct can call it.

> + /*
> +  * If dma_unencrypted_base is set, the virtual address returned by
> +  * dma_direct_alloc() is in the vmalloc address range.
> +  */
> + if (!dma_unencrypted_base && is_vmalloc_addr(cpu_addr)) {
>   vunmap(cpu_addr);
>   } else {
>   if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_CLEAR_UNCACHED))
>   arch_dma_clear_uncached(cpu_addr, size);
> +#ifdef CONFIG_HAS_IOMEM
> + if (dma_unencrypted_base) {
> + memunmap(cpu_addr);
> + /* re-encrypt the pages using the original address */
> + cpu_addr = page_address(pfn_to_page(PHYS_PFN(
> + dma_to_phys(dev, dma_addr))));
> + }
> +#endif
>   if (dma_set_encrypted(dev, cpu_addr, size))

Same on the unmap side.  It might also be worth looking into reordering
the checks in some form instead of that raw dma_unencrypted_base check
before the unmap.


[RFC PATCH 2/2] dma-direct: Fix dma_direct_{alloc,free}() for Hyper-V IVMs

2022-07-06 Thread Andrea Parri (Microsoft)
In Hyper-V AMD SEV-SNP Isolated VMs, the virtual address returned by
dma_direct_alloc() must map above dma_unencrypted_base because the
memory is shared with the hardware device and must not be encrypted.

Modify dma_direct_alloc() to do the necessary remapping.  In
dma_direct_free(), use the (unmodified) DMA address to derive the
original virtual address and re-encrypt the pages.

Suggested-by: Michael Kelley 
Co-developed-by: Dexuan Cui 
Signed-off-by: Dexuan Cui 
Signed-off-by: Andrea Parri (Microsoft) 
---
 kernel/dma/direct.c | 30 +-
 1 file changed, 29 insertions(+), 1 deletion(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 06b2b901e37a3..c4ce277687a49 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -13,6 +13,7 @@
 #include 
 #include 
 #include 
+#include <linux/io.h> /* for memremap() */
 #include "direct.h"
 
 /*
@@ -305,6 +306,21 @@ void *dma_direct_alloc(struct device *dev, size_t size,
ret = page_address(page);
if (dma_set_decrypted(dev, ret, size))
goto out_free_pages;
+#ifdef CONFIG_HAS_IOMEM
+   /*
+* Remap the pages in the unencrypted physical address space
+* when dma_unencrypted_base is set (e.g., for Hyper-V AMD
+* SEV-SNP isolated guests).
+*/
+   if (dma_unencrypted_base) {
+   phys_addr_t ret_pa = virt_to_phys(ret);
+
+   ret_pa += dma_unencrypted_base;
+   ret = memremap(ret_pa, size, MEMREMAP_WB);
+   if (!ret)
+   goto out_encrypt_pages;
+   }
+#endif
}
 
memset(ret, 0, size);
@@ -360,11 +376,23 @@ void dma_direct_free(struct device *dev, size_t size,
dma_free_from_pool(dev, cpu_addr, PAGE_ALIGN(size)))
return;
 
-   if (is_vmalloc_addr(cpu_addr)) {
+   /*
+* If dma_unencrypted_base is set, the virtual address returned by
+* dma_direct_alloc() is in the vmalloc address range.
+*/
+   if (!dma_unencrypted_base && is_vmalloc_addr(cpu_addr)) {
vunmap(cpu_addr);
} else {
if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_CLEAR_UNCACHED))
arch_dma_clear_uncached(cpu_addr, size);
+#ifdef CONFIG_HAS_IOMEM
+   if (dma_unencrypted_base) {
+   memunmap(cpu_addr);
+   /* re-encrypt the pages using the original address */
+   cpu_addr = page_address(pfn_to_page(PHYS_PFN(
+   dma_to_phys(dev, dma_addr))));
+   }
+#endif
if (dma_set_encrypted(dev, cpu_addr, size))
return;
}
-- 
2.25.1
