Re: [PATCH V5 12/12] net: netvsc: Add Isolation VM support for netvsc driver

2021-09-29 Thread Christoph Hellwig
On Tue, Sep 28, 2021 at 05:23:31PM +0800, Tianyu Lan wrote:
>>
>>   - the bare memremap usage in swiotlb looks strange and I'd
>> definitely expect a well-documented wrapper.
>
> OK. Should the wrapper be in the DMA code? How about the dma_map_decrypted()
> introduced in v4?

As mentioned then, the name is a pretty bad choice as it touches the dma_map*
namespace that it is not related to.  I suspect just a little helper
in the swiotlb code that explains how it is used might be enough for now.
___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


Re: [PATCH V5 12/12] net: netvsc: Add Isolation VM support for netvsc driver

2021-09-28 Thread Tianyu Lan

On 9/28/2021 1:39 PM, Christoph Hellwig wrote:

On Mon, Sep 27, 2021 at 10:26:43PM +0800, Tianyu Lan wrote:

Hi Christoph:
 Gentle ping. The swiotlb and shared memory mapping changes in this
patchset need your review. Could you have a look?

I'm a little too busy for a review of such a huge patchset right now.
That being said here are my comments from a very quick review:

Hi Christoph:
  Thanks for your comments. Most patches in the series are Hyper-V
changes. I will split the patchset to make it easier to review.




  - the bare memremap usage in swiotlb looks strange and I'd
definitely expect a well-documented wrapper.


OK. Should the wrapper be in the DMA code? How about the dma_map_decrypted()
introduced in v4?

https://lkml.org/lkml/2021/8/27/605


  - given that we can now hand out swiotlb memory for coherent mappings
we need to carefully audit what happens when this memremapped
memory gets mmapped or used through dma_get_sgtable


OK. I will check that.


  - the netvsc changes I'm not happy with at all.  A large part of it
is that the driver already has a bad structure, but this series
is making it significantly worse.  We'll need to find a way
to use the proper dma mapping abstractions here.  One option
if you want to stick to the double vmapped buffer would be something
like using dma_alloc_noncontiguous plus a variant of
dma_vmap_noncontiguous that takes the shared_gpa_boundary into
account.



OK. I will do that.




Re: [PATCH V5 12/12] net: netvsc: Add Isolation VM support for netvsc driver

2021-09-27 Thread Christoph Hellwig
On Mon, Sep 27, 2021 at 10:26:43PM +0800, Tianyu Lan wrote:
> Hi Christoph:
> Gentle ping. The swiotlb and shared memory mapping changes in this
> patchset need your review. Could you have a look?

I'm a little too busy for a review of such a huge patchset right now.
That being said here are my comments from a very quick review:

 - the bare memremap usage in swiotlb looks strange and I'd
   definitely expect a well-documented wrapper.
 - given that we can now hand out swiotlb memory for coherent mappings
   we need to carefully audit what happens when this memremapped
   memory gets mmapped or used through dma_get_sgtable
 - the netvsc changes I'm not happy with at all.  A large part of it
   is that the driver already has a bad structure, but this series
   is making it significantly worse.  We'll need to find a way
   to use the proper dma mapping abstractions here.  One option
   if you want to stick to the double vmapped buffer would be something
   like using dma_alloc_noncontiguous plus a variant of
   dma_vmap_noncontiguous that takes the shared_gpa_boundary into
   account.


Re: [PATCH V5 12/12] net: netvsc: Add Isolation VM support for netvsc driver

2021-09-27 Thread Tianyu Lan

Hi Christoph:
Gentle ping. The swiotlb and shared memory mapping changes in this
patchset need your review. Could you have a look?

Thanks.

On 9/22/2021 6:34 PM, Tianyu Lan wrote:

Hi Christoph:
     This patch follows your proposal in the previous discussion.
Could you have a look?
     "use vmap_pfn as in the current series.  But in that case I think
     we should get rid of the other mapping created by vmalloc.  I
     thought a bit about finding a way to apply the offset in vmalloc
     itself, but I think it would be too invasive to the normal fast
     path.  So the other sub-option would be to allocate the pages
     manually (maybe even using high order allocations to reduce TLB
     pressure) and then remap them." (https://lkml.org/lkml/2021/9/2/112)

Otherwise, I merged your previous change for swiotlb into patch 9
“x86/Swiotlb: Add Swiotlb bounce buffer remap function for HV IVM”.
Your previous change is here:
(http://git.infradead.org/users/hch/misc.git/commit/8248f295928aded3364a1e54a4e0022e93d3610c)
Please have a look.



Thanks.


On 9/16/2021 12:21 AM, Michael Kelley wrote:
From: Tianyu Lan   Sent: Tuesday, September 14, 2021 6:39 AM


In Isolation VM, all memory shared with the host needs to be marked
visible to the host via a hvcall. vmbus_establish_gpadl() has already
done this for the netvsc rx/tx ring buffers. The page buffers used by
vmbus_sendpacket_pagebuffer() still need to be handled. Use the DMA API
to map/unmap this memory when sending/receiving packets; the Hyper-V
swiotlb bounce buffer DMA address will be returned. The swiotlb bounce
buffer has been marked visible to the host during boot.

Allocate the rx/tx ring buffers via alloc_pages() in Isolation VM and
map these pages via vmap(). After calling vmbus_establish_gpadl(), which
marks these pages visible to the host, unmap the pages to release the
virtual addresses mapped to physical addresses below shared_gpa_boundary,
and map them in the extra address space via vmap_pfn().

Signed-off-by: Tianyu Lan 
---
Change since v4:
* Allocate rx/tx ring buffer via alloc_pages() in Isolation VM
* Map pages after calling vmbus_establish_gpadl().
* set dma_set_min_align_mask for netvsc driver.

Change since v3:
* Add comment to explain why not to use dma_map_sg()
* Fix some error handle.
---
  drivers/net/hyperv/hyperv_net.h   |   7 +
  drivers/net/hyperv/netvsc.c   | 287 +-
  drivers/net/hyperv/netvsc_drv.c   |   1 +
  drivers/net/hyperv/rndis_filter.c |   2 +
  include/linux/hyperv.h    |   5 +
  5 files changed, 296 insertions(+), 6 deletions(-)

diff --git a/drivers/net/hyperv/hyperv_net.h b/drivers/net/hyperv/hyperv_net.h

index 315278a7cf88..87e8c74398a5 100644
--- a/drivers/net/hyperv/hyperv_net.h
+++ b/drivers/net/hyperv/hyperv_net.h
@@ -164,6 +164,7 @@ struct hv_netvsc_packet {
  u32 total_bytes;
  u32 send_buf_index;
  u32 total_data_buflen;
+    struct hv_dma_range *dma_range;
  };

  #define NETVSC_HASH_KEYLEN 40
@@ -1074,6 +1075,8 @@ struct netvsc_device {

  /* Receive buffer allocated by us but manages by NetVSP */
  void *recv_buf;
+    struct page **recv_pages;
+    u32 recv_page_count;
  u32 recv_buf_size; /* allocated bytes */
  struct vmbus_gpadl recv_buf_gpadl_handle;
  u32 recv_section_cnt;
@@ -1082,6 +1085,8 @@ struct netvsc_device {

  /* Send buffer allocated by us */
  void *send_buf;
+    struct page **send_pages;
+    u32 send_page_count;
  u32 send_buf_size;
  struct vmbus_gpadl send_buf_gpadl_handle;
  u32 send_section_cnt;
@@ -1731,4 +1736,6 @@ struct rndis_message {
  #define RETRY_US_HI    1
  #define RETRY_MAX    2000    /* >10 sec */

+void netvsc_dma_unmap(struct hv_device *hv_dev,
+  struct hv_netvsc_packet *packet);
  #endif /* _HYPERV_NET_H */
diff --git a/drivers/net/hyperv/netvsc.c b/drivers/net/hyperv/netvsc.c
index 1f87e570ed2b..7d5254bf043e 100644
--- a/drivers/net/hyperv/netvsc.c
+++ b/drivers/net/hyperv/netvsc.c
@@ -20,6 +20,7 @@
  #include 
  #include 
  #include 
+#include 

  #include 
  #include 
  @@ -150,11 +151,33 @@ static void free_netvsc_device(struct rcu_head *head)

  {
  struct netvsc_device *nvdev
  = container_of(head, struct netvsc_device, rcu);
+    unsigned int alloc_unit;
  int i;

  kfree(nvdev->extension);
-    vfree(nvdev->recv_buf);
-    vfree(nvdev->send_buf);
+
+    if (nvdev->recv_pages) {
+    alloc_unit = (nvdev->recv_buf_size /
+    nvdev->recv_page_count) >> PAGE_SHIFT;
+
+    vunmap(nvdev->recv_buf);
+    for (i = 0; i < nvdev->recv_page_count; i++)
+    __free_pages(nvdev->recv_pages[i], alloc_unit);
+    } else {
+    vfree(nvdev->recv_buf);
+    }
+
+    if (nvdev->send_pages) {
+    alloc_unit = (nvdev->send_buf_size /
+    nvdev->send_page_count) >> PAGE_SHIFT;
+
+    vunmap(nvdev->send_buf);
+    for (i = 0; i < nvdev->send_page_count; i++)
+    __free_pages(nvdev->send_pages[i], 

Re: [PATCH V5 12/12] net: netvsc: Add Isolation VM support for netvsc driver

2021-09-22 Thread Tianyu Lan

Hi Christoph:
This patch follows your proposal in the previous discussion.
Could you have a look?
"use vmap_pfn as in the current series.  But in that case I think
we should get rid of the other mapping created by vmalloc.  I
thought a bit about finding a way to apply the offset in vmalloc
itself, but I think it would be too invasive to the normal fast
path.  So the other sub-option would be to allocate the pages
manually (maybe even using high order allocations to reduce TLB
pressure) and then remap them." (https://lkml.org/lkml/2021/9/2/112)

Otherwise, I merged your previous change for swiotlb into patch 9
“x86/Swiotlb: Add Swiotlb bounce buffer remap function for HV IVM”.
Your previous change is here:
(http://git.infradead.org/users/hch/misc.git/commit/8248f295928aded3364a1e54a4e0022e93d3610c)
Please have a look.



Thanks.


On 9/16/2021 12:21 AM, Michael Kelley wrote:

From: Tianyu Lan   Sent: Tuesday, September 14, 2021 6:39 AM


In Isolation VM, all memory shared with the host needs to be marked
visible to the host via a hvcall. vmbus_establish_gpadl() has already
done this for the netvsc rx/tx ring buffers. The page buffers used by
vmbus_sendpacket_pagebuffer() still need to be handled. Use the DMA API
to map/unmap this memory when sending/receiving packets; the Hyper-V
swiotlb bounce buffer DMA address will be returned. The swiotlb bounce
buffer has been marked visible to the host during boot.

Allocate the rx/tx ring buffers via alloc_pages() in Isolation VM and
map these pages via vmap(). After calling vmbus_establish_gpadl(), which
marks these pages visible to the host, unmap the pages to release the
virtual addresses mapped to physical addresses below shared_gpa_boundary,
and map them in the extra address space via vmap_pfn().

Signed-off-by: Tianyu Lan 
---
Change since v4:
* Allocate rx/tx ring buffer via alloc_pages() in Isolation VM
* Map pages after calling vmbus_establish_gpadl().
* set dma_set_min_align_mask for netvsc driver.

Change since v3:
* Add comment to explain why not to use dma_map_sg()
* Fix some error handle.
---
  drivers/net/hyperv/hyperv_net.h   |   7 +
  drivers/net/hyperv/netvsc.c   | 287 +-
  drivers/net/hyperv/netvsc_drv.c   |   1 +
  drivers/net/hyperv/rndis_filter.c |   2 +
  include/linux/hyperv.h|   5 +
  5 files changed, 296 insertions(+), 6 deletions(-)

diff --git a/drivers/net/hyperv/hyperv_net.h b/drivers/net/hyperv/hyperv_net.h
index 315278a7cf88..87e8c74398a5 100644
--- a/drivers/net/hyperv/hyperv_net.h
+++ b/drivers/net/hyperv/hyperv_net.h
@@ -164,6 +164,7 @@ struct hv_netvsc_packet {
u32 total_bytes;
u32 send_buf_index;
u32 total_data_buflen;
+   struct hv_dma_range *dma_range;
  };

  #define NETVSC_HASH_KEYLEN 40
@@ -1074,6 +1075,8 @@ struct netvsc_device {

/* Receive buffer allocated by us but manages by NetVSP */
void *recv_buf;
+   struct page **recv_pages;
+   u32 recv_page_count;
u32 recv_buf_size; /* allocated bytes */
struct vmbus_gpadl recv_buf_gpadl_handle;
u32 recv_section_cnt;
@@ -1082,6 +1085,8 @@ struct netvsc_device {

/* Send buffer allocated by us */
void *send_buf;
+   struct page **send_pages;
+   u32 send_page_count;
u32 send_buf_size;
struct vmbus_gpadl send_buf_gpadl_handle;
u32 send_section_cnt;
@@ -1731,4 +1736,6 @@ struct rndis_message {
  #define RETRY_US_HI   1
  #define RETRY_MAX 2000/* >10 sec */

+void netvsc_dma_unmap(struct hv_device *hv_dev,
+ struct hv_netvsc_packet *packet);
  #endif /* _HYPERV_NET_H */
diff --git a/drivers/net/hyperv/netvsc.c b/drivers/net/hyperv/netvsc.c
index 1f87e570ed2b..7d5254bf043e 100644
--- a/drivers/net/hyperv/netvsc.c
+++ b/drivers/net/hyperv/netvsc.c
@@ -20,6 +20,7 @@
  #include 
  #include 
  #include 
+#include 

  #include 
  #include 
@@ -150,11 +151,33 @@ static void free_netvsc_device(struct rcu_head *head)
  {
struct netvsc_device *nvdev
= container_of(head, struct netvsc_device, rcu);
+   unsigned int alloc_unit;
int i;

kfree(nvdev->extension);
-   vfree(nvdev->recv_buf);
-   vfree(nvdev->send_buf);
+
+   if (nvdev->recv_pages) {
+   alloc_unit = (nvdev->recv_buf_size /
+   nvdev->recv_page_count) >> PAGE_SHIFT;
+
+   vunmap(nvdev->recv_buf);
+   for (i = 0; i < nvdev->recv_page_count; i++)
+   __free_pages(nvdev->recv_pages[i], alloc_unit);
+   } else {
+   vfree(nvdev->recv_buf);
+   }
+
+   if (nvdev->send_pages) {
+   alloc_unit = (nvdev->send_buf_size /
+   nvdev->send_page_count) >> PAGE_SHIFT;
+
+   vunmap(nvdev->send_buf);
+   for (i = 0; i < nvdev->send_page_count; i++)
+   __free_pages(nvdev->send_pages[i], 

Re: [PATCH V5 12/12] net: netvsc: Add Isolation VM support for netvsc driver

2021-09-16 Thread Tianyu Lan

On 9/16/2021 12:21 AM, Michael Kelley wrote:

I think you are proposing this approach to allocating memory for the send
and receive buffers so that you can avoid having two virtual mappings for
the memory, per comments from Christoph Hellwig.  But overall, the approach
seems a bit complex and I wonder if it is worth it.  If allocating large 
contiguous
chunks of physical memory is successful, then there is some memory savings
in that the data structures needed to keep track of the physical pages is
smaller than the equivalent page tables might be.  But if you have to revert
to allocating individual pages, then the memory savings is reduced.



Yes, this version follows the idea from Christoph in the previous
discussion (https://lkml.org/lkml/2021/9/2/112).
This patch shows the implementation so we can check whether this is
the right direction.



Ultimately, the list of actual PFNs has to be kept somewhere.  Another approach
would be to do the reverse of what hv_map_memory() from the v4 patch
series does.  I.e., you could do virt_to_phys() on each virtual address that
maps above VTOM, and subtract out the shared_gpa_boundary to get the
list of actual PFNs that need to be freed.


virt_to_phys() doesn't work for virtual addresses returned by
vmap()/vmap_pfn() (just as it doesn't work for a va returned by
vmalloc()). A pfn above vTOM doesn't have struct page backing, and
vmap_pfn() populates the pfn directly in the pte (please see
vmap_pfn_apply()). So it's not easy to convert the va back to a pa.


  This way you don't have two copies
of the list of PFNs -- one with and one without the shared_gpa_boundary added.
But it comes at the cost of additional code so that may not be a great idea.

I think what you have here works, and I don't have a clearly better solution
at the moment except perhaps to revert to the v4 solution and just have two
virtual mappings.  I'll keep thinking about it.  Maybe Christoph has other
thoughts.






Re: [PATCH V5 12/12] net: netvsc: Add Isolation VM support for netvsc driver

2021-09-16 Thread Tianyu Lan

On 9/16/2021 12:46 AM, Haiyang Zhang wrote:

+   memset(vmap_pages, 0,
+  sizeof(*vmap_pages) * vmap_page_index);
+   vmap_page_index = 0;
+
+   for (j = 0; j < i; j++)
+   __free_pages(pages[j], alloc_unit);
+
+   kfree(pages);
+   alloc_unit = 1;

This is the case where a large enough contiguous physical memory chunk
could not be found.  But rather than dropping all the way down to single
pages, would it make sense to try something smaller, but not 1?  For
example, cut the alloc_unit in half and try again.  But I'm not sure of
all the implications.

I had the same question. But probably gradually decrementing uses too much
time?



This version is to propose the solution. We may optimize it to try
smaller sizes, down to a single page, if this is the right direction.




RE: [PATCH V5 12/12] net: netvsc: Add Isolation VM support for netvsc driver

2021-09-15 Thread Haiyang Zhang via iommu



> -Original Message-
> From: Michael Kelley 
> Sent: Wednesday, September 15, 2021 12:22 PM
> To: Tianyu Lan ; KY Srinivasan ;

> > +   memset(vmap_pages, 0,
> > +  sizeof(*vmap_pages) * vmap_page_index);
> > +   vmap_page_index = 0;
> > +
> > +   for (j = 0; j < i; j++)
> > +   __free_pages(pages[j], alloc_unit);
> > +
> > +   kfree(pages);
> > +   alloc_unit = 1;
> 
> This is the case where a large enough contiguous physical memory chunk
> could not be found.  But rather than dropping all the way down to single
> pages, would it make sense to try something smaller, but not 1?  For
> example, cut the alloc_unit in half and try again.  But I'm not sure of
> all the implications.

I had the same question. But probably gradually decrementing uses too much
time?

> 
> > +   goto retry;
> > +   }
> > +   }
> > +
> > +   pages[i] = page;
> > +   for (j = 0; j < alloc_unit; j++)
> > +   vmap_pages[vmap_page_index++] = page++;
> > +   }
> > +
> > +   vaddr = vmap(vmap_pages, vmap_page_index, VM_MAP, PAGE_KERNEL);
> > +   kfree(vmap_pages);
> > +
> > +   *pages_array = pages;
> > +   return vaddr;
> > +
> > +cleanup:
> > +   for (j = 0; j < i; j++)
> > +   __free_pages(pages[i], alloc_unit);
> > +
> > +   kfree(pages);
> > +   kfree(vmap_pages);
> > +   return NULL;
> > +}
> > +
> > +static void *netvsc_map_pages(struct page **pages, int count, int
> > +alloc_unit) {
> > +   int pg_count = count * alloc_unit;
> > +   struct page *page;
> > +   unsigned long *pfns;
> > +   int pfn_index = 0;
> > +   void *vaddr;
> > +   int i, j;
> > +
> > +   if (!pages)
> > +   return NULL;
> > +
> > +   pfns = kcalloc(pg_count, sizeof(*pfns), GFP_KERNEL);
> > +   if (!pfns)
> > +   return NULL;
> > +
> > +   for (i = 0; i < count; i++) {
> > +   page = pages[i];
> > +   if (!page) {
> > +   pr_warn("page is not available %d.\n", i);
> > +   return NULL;
> > +   }
> > +
> > +   for (j = 0; j < alloc_unit; j++) {
> > +   pfns[pfn_index++] = page_to_pfn(page++) +
> > +   (ms_hyperv.shared_gpa_boundary >> PAGE_SHIFT);
> > +   }
> > +   }
> > +
> > +   vaddr = vmap_pfn(pfns, pg_count, PAGE_KERNEL_IO);
> > +   kfree(pfns);
> > +   return vaddr;
> > +}
> > +
> 
> I think you are proposing this approach to allocating memory for the
> send and receive buffers so that you can avoid having two virtual
> mappings for the memory, per comments from Christoph Hellwig.  But
> overall, the approach seems a bit complex and I wonder if it is worth it.
> If allocating large contiguous chunks of physical memory is successful,
> then there is some memory savings in that the data structures needed to
> keep track of the physical pages is smaller than the equivalent page
> tables might be.  But if you have to revert to allocating individual
> pages, then the memory savings is reduced.
> 
> Ultimately, the list of actual PFNs has to be kept somewhere.  Another
> approach would be to do the reverse of what hv_map_memory() from the v4
> patch series does.  I.e., you could do virt_to_phys() on each virtual
> address that maps above VTOM, and subtract out the shared_gpa_boundary
> to get the
> list of actual PFNs that need to be freed.   This way you don't have two
> copies
> of the list of PFNs -- one with and one without the shared_gpa_boundary
> added.
> But it comes at the cost of additional code so that may not be a great
> idea.
> 
> I think what you have here works, and I don't have a clearly better
> solution at the moment except perhaps to revert to the v4 solution and
> just have two virtual mappings.  I'll keep thinking about it.  Maybe
> Christoph has other thoughts.
> 
> >  static int netvsc_init_buf(struct hv_device *device,
> >struct netvsc_device *net_device,
> >const struct netvsc_device_info *device_info)
> > @@ -337,7 +462,7 @@ static int netvsc_init_buf(struct hv_device *device,
> > struct nvsp_1_message_send_receive_buffer_complete *resp;
> > struct net_device *ndev = hv_get_drvdata(device);
> > struct nvsp_message *init_packet;
> > -   unsigned int buf_size;
> > +   unsigned int buf_size, alloc_unit;
> > size_t map_words;
> > int i, ret = 0;
> >
> > @@ -350,7 +475,14 @@ static int netvsc_init_buf(struct hv_device
> *device,
> > buf_size = min_t(unsigned int, buf_size,
> >  NETVSC_RECEIVE_BUFFER_SIZE_LEGACY);
> >
> > -   net_device->recv_buf = vzalloc(buf_size);
> > +   if (hv_isolation_type_snp())
> > +   net_device->recv_buf =
> > +   netvsc_alloc_pages(&net_device->recv_pages,
> > +  

RE: [PATCH V5 12/12] net: netvsc: Add Isolation VM support for netvsc driver

2021-09-15 Thread Michael Kelley via iommu
From: Tianyu Lan   Sent: Tuesday, September 14, 2021 6:39 AM
> 
> In Isolation VM, all memory shared with the host needs to be marked
> visible to the host via a hvcall. vmbus_establish_gpadl() has already
> done this for the netvsc rx/tx ring buffers. The page buffers used by
> vmbus_sendpacket_pagebuffer() still need to be handled. Use the DMA API
> to map/unmap this memory when sending/receiving packets; the Hyper-V
> swiotlb bounce buffer DMA address will be returned. The swiotlb bounce
> buffer has been marked visible to the host during boot.
> 
> Allocate the rx/tx ring buffers via alloc_pages() in Isolation VM and
> map these pages via vmap(). After calling vmbus_establish_gpadl(), which
> marks these pages visible to the host, unmap the pages to release the
> virtual addresses mapped to physical addresses below shared_gpa_boundary,
> and map them in the extra address space via vmap_pfn().
> 
> Signed-off-by: Tianyu Lan 
> ---
> Change since v4:
>   * Allocate rx/tx ring buffer via alloc_pages() in Isolation VM
>   * Map pages after calling vmbus_establish_gpadl().
>   * set dma_set_min_align_mask for netvsc driver.
> 
> Change since v3:
>   * Add comment to explain why not to use dma_map_sg()
>   * Fix some error handle.
> ---
>  drivers/net/hyperv/hyperv_net.h   |   7 +
>  drivers/net/hyperv/netvsc.c   | 287 +-
>  drivers/net/hyperv/netvsc_drv.c   |   1 +
>  drivers/net/hyperv/rndis_filter.c |   2 +
>  include/linux/hyperv.h|   5 +
>  5 files changed, 296 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/net/hyperv/hyperv_net.h b/drivers/net/hyperv/hyperv_net.h
> index 315278a7cf88..87e8c74398a5 100644
> --- a/drivers/net/hyperv/hyperv_net.h
> +++ b/drivers/net/hyperv/hyperv_net.h
> @@ -164,6 +164,7 @@ struct hv_netvsc_packet {
>   u32 total_bytes;
>   u32 send_buf_index;
>   u32 total_data_buflen;
> + struct hv_dma_range *dma_range;
>  };
> 
>  #define NETVSC_HASH_KEYLEN 40
> @@ -1074,6 +1075,8 @@ struct netvsc_device {
> 
>   /* Receive buffer allocated by us but manages by NetVSP */
>   void *recv_buf;
> + struct page **recv_pages;
> + u32 recv_page_count;
>   u32 recv_buf_size; /* allocated bytes */
>   struct vmbus_gpadl recv_buf_gpadl_handle;
>   u32 recv_section_cnt;
> @@ -1082,6 +1085,8 @@ struct netvsc_device {
> 
>   /* Send buffer allocated by us */
>   void *send_buf;
> + struct page **send_pages;
> + u32 send_page_count;
>   u32 send_buf_size;
>   struct vmbus_gpadl send_buf_gpadl_handle;
>   u32 send_section_cnt;
> @@ -1731,4 +1736,6 @@ struct rndis_message {
>  #define RETRY_US_HI  1
>  #define RETRY_MAX2000/* >10 sec */
> 
> +void netvsc_dma_unmap(struct hv_device *hv_dev,
> +   struct hv_netvsc_packet *packet);
>  #endif /* _HYPERV_NET_H */
> diff --git a/drivers/net/hyperv/netvsc.c b/drivers/net/hyperv/netvsc.c
> index 1f87e570ed2b..7d5254bf043e 100644
> --- a/drivers/net/hyperv/netvsc.c
> +++ b/drivers/net/hyperv/netvsc.c
> @@ -20,6 +20,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
> 
>  #include 
>  #include 
> @@ -150,11 +151,33 @@ static void free_netvsc_device(struct rcu_head *head)
>  {
>   struct netvsc_device *nvdev
>   = container_of(head, struct netvsc_device, rcu);
> + unsigned int alloc_unit;
>   int i;
> 
>   kfree(nvdev->extension);
> - vfree(nvdev->recv_buf);
> - vfree(nvdev->send_buf);
> +
> + if (nvdev->recv_pages) {
> + alloc_unit = (nvdev->recv_buf_size /
> + nvdev->recv_page_count) >> PAGE_SHIFT;
> +
> + vunmap(nvdev->recv_buf);
> + for (i = 0; i < nvdev->recv_page_count; i++)
> + __free_pages(nvdev->recv_pages[i], alloc_unit);
> + } else {
> + vfree(nvdev->recv_buf);
> + }
> +
> + if (nvdev->send_pages) {
> + alloc_unit = (nvdev->send_buf_size /
> + nvdev->send_page_count) >> PAGE_SHIFT;
> +
> + vunmap(nvdev->send_buf);
> + for (i = 0; i < nvdev->send_page_count; i++)
> + __free_pages(nvdev->send_pages[i], alloc_unit);
> + } else {
> + vfree(nvdev->send_buf);
> + }
> +
>   kfree(nvdev->send_section_map);
> 
>   for (i = 0; i < VRSS_CHANNEL_MAX; i++) {
> @@ -330,6 +353,108 @@ int netvsc_alloc_recv_comp_ring(struct netvsc_device 
> *net_device, u32 q_idx)
>   return nvchan->mrc.slots ? 0 : -ENOMEM;
>  }
> 
> +void *netvsc_alloc_pages(struct page ***pages_array, unsigned int *array_len,
> +  unsigned long size)
> +{
> + struct page *page, **pages, **vmap_pages;
> + unsigned long pg_count = size >> PAGE_SHIFT;
> + int alloc_unit = MAX_ORDER_NR_PAGES;
> + int i, j, vmap_page_index = 0;
> + void *vaddr;
> +
> + if (pg_count < alloc_unit)
> + alloc_unit = 1;
> +
> + /* vmap() accepts page array with PAGE_SIZE as 

RE: [PATCH V5 12/12] net: netvsc: Add Isolation VM support for netvsc driver

2021-09-14 Thread Haiyang Zhang via iommu



> -Original Message-
> From: Tianyu Lan 
> Sent: Tuesday, September 14, 2021 9:39 AM
> Subject: [PATCH V5 12/12] net: netvsc: Add Isolation VM support for
> netvsc driver
> 
> From: Tianyu Lan 
> 
> In Isolation VM, all memory shared with the host needs to be marked
> visible to the host via a hvcall. vmbus_establish_gpadl() has already
> done this for the netvsc rx/tx ring buffers. The page buffers used by
> vmbus_sendpacket_pagebuffer() still need to be handled. Use the DMA API
> to map/unmap this memory when sending/receiving packets; the Hyper-V
> swiotlb bounce buffer DMA address will be returned. The swiotlb bounce
> buffer has been marked visible to the host during boot.
> 
> Allocate the rx/tx ring buffers via alloc_pages() in Isolation VM and
> map these pages via vmap(). After calling vmbus_establish_gpadl(), which
> marks these pages visible to the host, unmap the pages to release the
> virtual addresses mapped to physical addresses below shared_gpa_boundary,
> and map them in the extra address space via vmap_pfn().
> 
> Signed-off-by: Tianyu Lan 
> ---
> Change since v4:
>   * Allocate rx/tx ring buffer via alloc_pages() in Isolation VM
>   * Map pages after calling vmbus_establish_gpadl().
>   * set dma_set_min_align_mask for netvsc driver.
> 
> Change since v3:
>   * Add comment to explain why not to use dma_map_sg()
>   * Fix some error handle.
> ---

Reviewed-by: Haiyang Zhang 

Thank you!