On 7/8/2022 1:07 AM, Christoph Hellwig wrote:
Thanks, this looks much better. I think there is a small problem
with how default_nareas is set - we need to use 0 as the default
so that an explicit command line value of 1 works. Also, have you
checked the interaction with swiotlb_adjust_size in
From: Tianyu Lan
Traditionally swiotlb was not performance critical because it was only
used for slow devices. But in some setups, like TDX/SEV confidential
guests, all IO has to go through swiotlb. Currently swiotlb only has a
single lock. Under high IO load with multiple CPUs this can lead
On 7/6/2022 5:02 PM, Christoph Hellwig wrote:
On Wed, Jul 06, 2022 at 04:57:33PM +0800, Tianyu Lan wrote:
swiotlb_init() is called in the mem_init() of different architectures, and
memblock free pages are released to the buddy allocator just after
calling swiotlb_init() via memblock_free_all()
On 7/6/2022 4:00 PM, Christoph Hellwig wrote:
On Fri, Jul 01, 2022 at 01:02:21AM +0800, Tianyu Lan wrote:
Can we reorder that initialization? Because I really hate having
to have an arch hook in every architecture.
How about using "flags" parameter of swiotlb_init() to pass a
On 6/29/2022 10:04 PM, Christoph Hellwig wrote:
On Mon, Jun 27, 2022 at 11:31:50AM -0400, Tianyu Lan wrote:
From: Tianyu Lan
When initializing the swiotlb bounce buffer, smp_init() has not been
called and the CPU number cannot be obtained from num_online_cpus().
Use the number of LAPIC entries to set the swiotlb
From: Tianyu Lan
When initializing the swiotlb bounce buffer, smp_init() has not been
called and the CPU number cannot be obtained from num_online_cpus().
Use the number of LAPIC entries to set the swiotlb area number and
keep the swiotlb area number equal to the CPU number on the x86 platform.
Based-on-idea-by: Andi
On 6/22/2022 6:54 PM, Christoph Hellwig wrote:
Thanks,
this looks pretty good to me. A few comments below:
Thanks for your review.
On Fri, Jun 17, 2022 at 10:47:41AM -0400, Tianyu Lan wrote:
+/**
+ * struct io_tlb_area - IO TLB memory area descriptor
+ *
+ * This is a single area
On 5/27/2022 2:43 AM, Dexuan Cui wrote:
From: Tianyu Lan
Sent: Thursday, May 26, 2022 5:01 AM
...
@@ -119,6 +124,10 @@ static void netvsc_subchan_work(struct work_struct *w)
nvdev->max_chn = 1;
nvdev->num_c
From: Tianyu Lan
The netvsc driver allocates device io tlb mem by calling
swiotlb_device_allocate() and sets the child io tlb mem number according
to the device queue number. Child io tlb mems may reduce the overhead of
the single spin lock in the device io tlb mem among multiple device queues.
Signed-off-by: Tianyu Lan
On 5/16/2022 3:34 PM, Christoph Hellwig wrote:
I don't really understand how 'childs' fit in here. The code also
doesn't seem to be usable without patch 2 and a caller of the
new functions added in patch 2, so it is rather impossible to review.
Hi Christoph:
OK. I will merge two patches
From: Tianyu Lan
swiotlb_find_slots() skips slots according to the io tlb aligned mask
calculated from the min aligned mask and the original physical address
offset. This affects the max mapping size: the mapping size can't
reach IO_TLB_SEGSIZE * IO_TLB_SIZE when the original offset is
non-zero
On 5/2/2022 8:54 PM, Tianyu Lan wrote:
From: Tianyu Lan
In a SEV/TDX Confidential VM, device DMA transactions need to use the
swiotlb bounce buffer to share data with the host/hypervisor. The swiotlb
spinlock introduces overhead among devices if they share io tlb mem. To
avoid this issue, introduce swiotlb_device_allocate() to allocate device bounce
On 4/29/2022 10:21 PM, Tianyu Lan wrote:
On 4/28/2022 10:44 PM, Robin Murphy wrote:
On 2022-04-28 15:14, Tianyu Lan wrote:
On 4/28/2022 10:14 PM, Tianyu Lan wrote:
On 3/1/2022 7:53 PM, Christoph Hellwig wrote:
On Fri, Feb 25, 2022 at 10:28:54PM +0800, Tianyu Lan wrote:
One more perspective is that one device may have multiple queues and
each queue should have an independent swiotlb bounce buffer to avoid spin
lock overhead. The number of queues
On 2/23/2022 5:46 PM, Tianyu Lan wrote:
On 2/23/2022 12:00 AM, Christoph Hellwig wrote:
On Tue, Feb 22, 2022 at 11:07:19PM +0800, Tianyu Lan wrote:
Thanks for your comment. That means we need to expose an
swiotlb_device_init() interface to allocate bounce buffer and initialize
io tlb mem
On 2/22/2022 4:05 PM, Christoph Hellwig wrote:
On Mon, Feb 21, 2022 at 11:14:58PM +0800, Tianyu Lan wrote:
Sorry. The boot failure is not related to these patches and the issue
has been fixed in the latest upstream code.
There is a performance bottleneck due to io tlb mem's spin lock
On 2/14/2022 4:19 PM, Christoph Hellwig wrote:
Adding a function to set the flag doesn't really change much. As Robin
pointed out last time, you should find a way to just call
swiotlb_init_with_tbl directly with the memory allocated the way you
like it. Or given that we have quite a few of
From: Tianyu Lan
In a Hyper-V Isolation VM, the swiotlb bounce buffer size may be 1G at most
and there may not be enough memory from 0 to 4G according to the memory
layout. Devices in an Isolation VM can use memory above 4G as DMA memory and
call swiotlb_alloc_from_low_pages() to allocate the swiotlb bounce buffer
From: Tianyu Lan
Hyper-V Isolation VMs and AMD SEV VMs use the swiotlb bounce buffer to
share memory with the hypervisor. The current swiotlb bounce buffer is only
allocated from 0 to ARCH_LOW_ADDRESS_LIMIT, which defaults to
0xffffffffUL. Isolation VMs and AMD SEV VMs need a 1G bounce buffer at most
From: Tianyu Lan
A Hyper-V Isolation VM may fail to allocate the swiotlb bounce buffer
because there is not enough contiguous memory from 0 to 4G in some cases.
The current swiotlb code allocates the bounce buffer in low end memory.
This patchset adds a new function swiotlb_set_alloc_from_low_pages
On 2/3/2022 1:05 AM, Michael Kelley (LINUX) wrote:
From: Tianyu Lan Sent: Tuesday, February 1, 2022 8:32 AM
netvsc_device_remove() calls vunmap(), which should not be
called in interrupt context. The current code calls hv_unmap_memory()
in free_netvsc_device(), which is an RCU callback
On 2/2/2022 4:12 PM, Christoph Hellwig wrote:
I think this interface is a little too hacky. In the end all the
non-trusted hypervisor schemes (including the per-device swiotlb one)
can allocate the memory from everywhere and want to force use of
swiotlb. I think we need some kind of proper
From: Tianyu Lan
netvsc_device_remove() calls vunmap(), which should not be
called in interrupt context. The current code calls hv_unmap_memory()
in free_netvsc_device(), which is an RCU callback and may be called
in interrupt context. This will trigger BUG_ON(in_interrupt
From: Tianyu Lan
In a Hyper-V Isolation VM, the swiotlb bounce buffer size may be 1G at most
and there may not be enough memory from 0 to 4G according to the memory
layout. Devices in an Isolation VM can use memory above 4G as DMA memory.
Set swiotlb_alloc_from_low_pages to false in an Isolation VM.
Signed-off
From: Tianyu Lan
A Hyper-V Isolation VM may fail to allocate the swiotlb bounce buffer
because there is not enough contiguous memory from 0 to 4G in some cases.
The current swiotlb code allocates the bounce buffer in low end memory.
This patchset adds a switch "swiotlb_alloc_from_low_pages" an
From: Tianyu Lan
The HAS_IOMEM option may not be selected on some platforms (e.g., s390) and
this will cause a compile error due to the missing memremap() implementation.
Fix it by adding a HAS_IOMEM check around memremap() in swiotlb.c.
Reported-by: kernel test robot
Signed-off-by: Tianyu Lan
On 12/15/2021 6:40 AM, Dave Hansen wrote:
On 12/14/21 2:23 PM, Tom Lendacky wrote:
I don't really understand how this can be more general and *not* get
utilized by the existing SEV support.
The Virtual Top-of-Memory (VTOM) support is an SEV-SNP feature that is
meant to be used with a
On 12/14/2021 12:45 AM, Dave Hansen wrote:
On 12/12/21 11:14 PM, Tianyu Lan wrote:
In Isolation VM with AMD SEV, bounce buffer needs to be accessed via
extra address space which is above shared_gpa_boundary (E.G 39 bit
address line) reported by Hyper-V CPUID ISOLATION_CONFIG. The access
From: Tianyu Lan
In an Isolation VM, all memory shared with the host needs to be marked
visible to the host via a hvcall. vmbus_establish_gpadl() has already done
it for the netvsc rx/tx ring buffer. The page buffer used by
vmbus_sendpacket_pagebuffer() still needs to be handled. Use the DMA API
to map/unmap this memory
From: Tianyu Lan
In an Isolation VM, all memory shared with the host needs to be marked
visible to the host via a hvcall. vmbus_establish_gpadl() has already done
it for the storvsc rx/tx ring buffer. The page buffer used by
vmbus_sendpacket_mpb_desc() still needs to be handled. Use the DMA API (scsi_dma_map/unmap
From: Tianyu Lan
A Hyper-V Isolation VM requires bounce buffer support to copy
data from/to encrypted memory, so enable swiotlb force
mode to use the swiotlb bounce buffer for DMA transactions.
In an Isolation VM with AMD SEV, the bounce buffer needs to be
accessed via an extra address space which
From: Tianyu Lan
Hyper-V provides Isolation VMs for confidential computing support, and
guest memory is encrypted in them. Places checking cc_platform_has()
with the GUEST_MEM_ENCRYPT attr should return "True" in an Isolation VM;
e.g., the swiotlb bounce buffer size needs to be adjusted according to m
From: Tianyu Lan
Hyper-V provides two kinds of Isolation VMs: VBS (Virtualization-Based
Security) and AMD SEV-SNP unenlightened Isolation VMs. This patchset
adds support for these Isolation VMs in Linux.
The memory of these VMs is encrypted and the host can't access guest
memory
From: Tianyu Lan
In an Isolation VM with AMD SEV, the bounce buffer needs to be accessed via
an extra address space which is above shared_gpa_boundary (e.g. a 39-bit
address line) reported by the Hyper-V CPUID ISOLATION_CONFIG. The accessed
physical address will be the original physical address + shared_gpa_boundary
On 12/10/2021 9:25 PM, Tianyu Lan wrote:
@@ -319,8 +320,16 @@ static void __init ms_hyperv_init_platform(void)
	pr_info("Hyper-V: Isolation Config: Group A 0x%x, Group B 0x%x\n",
		ms_hyperv.isolation_config_a,
		ms_hyperv.isolatio
On 12/10/2021 4:09 AM, Michael Kelley (LINUX) wrote:
@@ -319,8 +320,16 @@ static void __init ms_hyperv_init_platform(void)
	pr_info("Hyper-V: Isolation Config: Group A 0x%x, Group B 0x%x\n",
		ms_hyperv.isolation_config_a,
		ms_hyperv.isolation_config_b);
-
On 12/10/2021 4:38 AM, Michael Kelley (LINUX) wrote:
From: Tianyu Lan Sent: Monday, December 6, 2021 11:56 PM
Hyper-V provides Isolation VMs which have memory encryption support. Add
hyperv_cc_platform_has() and return true for checks of the GUEST_MEM_ENCRYPT
attribute.
Signed-off-by: Tianyu Lan
On 12/9/2021 4:00 PM, Long Li wrote:
@@ -1848,21 +1851,22 @@ static int storvsc_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scmnd)
payload->range.len = length;
payload->range.offset = offset_in_hvpg;
+ sg_count = scsi_dma_map(scmnd);
+
On 12/9/2021 4:14 AM, Haiyang Zhang wrote:
From: Tianyu Lan
Sent: Tuesday, December 7, 2021 2:56 AM
To: KY Srinivasan ; Haiyang Zhang ;
Stephen
Hemminger ; wei@kernel.org; Dexuan Cui
;
t...@linutronix.de; mi...@redhat.com; b...@alien8.de;
dave.han...@linux.intel.com;
x...@kernel.org
Hi Borislav:
Thanks for your review.
On 12/7/2021 5:47 PM, Borislav Petkov wrote:
On Tue, Dec 07, 2021 at 02:55:58AM -0500, Tianyu Lan wrote:
From: Tianyu Lan
Hyper-V provides Isolation VMs which have memory encryption support. Add
hyperv_cc_platform_has() and return true for check
From: Tianyu Lan
Hyper-V provides Isolation VMs which have memory encryption support. Add
hyperv_cc_platform_has() and return true for checks of the GUEST_MEM_ENCRYPT
attribute.
Signed-off-by: Tianyu Lan
---
Change since v3:
* Change code style of checking GUEST_MEM attribute
On 12/5/2021 6:31 PM, Juergen Gross wrote:
On 05.12.21 09:48, Tianyu Lan wrote:
On 12/5/2021 4:34 PM, Juergen Gross wrote:
On 05.12.21 09:18, Tianyu Lan wrote:
On 12/6/2021 10:09 PM, Christoph Hellwig wrote:
Please spell swiotlb with a lower case s. Otherwise this look good
Acked-by: Christoph Hellwig
Feel free to carry this in whatever tree is suitable for the rest of the
patches.
Sure. Thanks for your ack; I will update "swiotlb" in the next
Hi Christoph:
Thanks for your review.
On 12/6/2021 10:06 PM, Christoph Hellwig wrote:
On Sun, Dec 05, 2021 at 03:18:10AM -0500, Tianyu Lan wrote:
+static bool hyperv_cc_platform_has(enum cc_attr attr)
+{
+#ifdef CONFIG_HYPERV
+ return attr == CC_ATTR_GUEST_MEM_ENCRYPT;
+#else
On 12/4/2021 2:59 AM, Michael Kelley (LINUX) wrote:
+
+/*
+ * hv_map_memory - map memory to extra space in the AMD SEV-SNP Isolation VM.
+ */
+void *hv_map_memory(void *addr, unsigned long size)
+{
+ unsigned long *pfns = kcalloc(size / HV_HYP_PAGE_SIZE,
This should be just PAGE_SIZE, as
On 12/4/2021 3:17 AM, Michael Kelley (LINUX) wrote:
+static void __init hyperv_iommu_swiotlb_init(void)
+{
+	unsigned long hyperv_io_tlb_size;
+	void *hyperv_io_tlb_start;
+
+	/*
+	 * Allocate Hyper-V swiotlb bounce buffer at early place
+	 * to reserve large
On 12/4/2021 4:06 AM, Tom Lendacky wrote:
Hi Tom:
Thanks for your test. Could you help test the following
patch and check whether it fixes the issue?
The patch is mangled. Is the only difference where
set_memory_decrypted() is called?
I de-mangled the patch. No more stack
On 12/2/2021 10:43 PM, Wei Liu wrote:
On Wed, Dec 01, 2021 at 11:02:54AM -0500, Tianyu Lan wrote:
[...]
diff --git a/arch/x86/xen/pci-swiotlb-xen.c b/arch/x86/xen/pci-swiotlb-xen.c
index 46df59aeaa06..30fd0600b008 100644
--- a/arch/x86/xen/pci-swiotlb-xen.c
+++ b/arch/x86/xen/pci-swiotlb-xen.c
On 12/2/2021 10:39 PM, Wei Liu wrote:
+static bool hyperv_cc_platform_has(enum cc_attr attr)
+{
+#ifdef CONFIG_HYPERV
+ if (attr == CC_ATTR_GUEST_MEM_ENCRYPT)
+ return true;
+ else
+ return false;
This can be simplified as
return attr ==
On 12/2/2021 10:42 PM, Tom Lendacky wrote:
On 12/1/21 10:02 AM, Tianyu Lan wrote: