On Thu, Jan 17, 2019 at 01:55:31PM +, Zhuangyanying wrote:
> From: Zhuang Yanying
>
> When live-migration with large-memory guests, vcpu may hang for a long
> time while starting migration, such as 9s for 2T
> (linux-5.0.0-rc2+qemu-3.1.0).
> The reason is memory_global_dirty_log_start()
On Thu, Jan 17, 2019 at 01:55:29PM +, Zhuangyanying wrote:
> From: Xiao Guangrong
>
> It is used to track possible writable sptes on the shadow page on
> which the bit is set to 1 for the sptes that are already writable
> or can be locklessly updated to writable on the fast_page_fault
>
On Thu, Jan 17, 2019 at 01:55:30PM +, Zhuangyanying wrote:
> From: Xiao Guangrong
>
> The original idea is from Avi. kvm_mmu_write_protect_all_pages() is
> extremely fast to write protect all the guest memory. Comparing with
> the ordinary algorithm which write protects last level sptes
On Thu, Jan 17, 2019 at 01:55:28PM +, Zhuangyanying wrote:
> From: Xiao Guangrong
>
> Current behavior of mmu_spte_update_no_track() does not match
> the name of _no_track() as actually the A/D bits are tracked
> and returned to the caller
Sentences should be terminated with periods.
>
On Mon, Jan 21, 2019 at 06:37:36AM +, Zhuangyanying wrote:
>
> > > u64 wp_all_indicator, kvm_wp_all_gen;
> > >
> > > - mutex_lock(&kvm->slots_lock);
> > > wp_all_indicator = get_write_protect_all_indicator(kvm);
> > > kvm_wp_all_gen = get_write_protect_all_gen(wp_all_indicator);
> > >
> > >
On Fri, Sep 06, 2019 at 09:49:44PM +, Larry Dewey wrote:
> I was playing with the new objects, etc, and found if the user
> specifies -sgx-epc, and a memory device, but does not specify -cpu
> host, +sgx, the vm runs without any warnings, while obviously not doing
> anything to the memory.
- ENCLV instruction set for VMM oversubscription of EPC
- ENCLS-C instruction set for thread safe variants of ENCLS
Signed-off-by: Sean Christopherson
---
target/i386/cpu.c | 20
target/i386/cpu.h | 1 +
2 files changed, 21 insertions(+)
diff --git a/target/i386/cpu.c b/target/
ware support for SGX was limited to single
socket systems).
Sean Christopherson (20):
hostmem: Add hostmem-epc as a backend for SGX EPC
i386: Add 'sgx-epc' device to expose EPC sections to guest
vl: Add "sgx-epc" option to expose SGX EPC sections to guest
i386: Add primary SG
).
Signed-off-by: Sean Christopherson
---
target/i386/cpu.c | 4 ++--
target/i386/cpu.h | 10 ++
2 files changed, 12 insertions(+), 2 deletions(-)
diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index 19751e37a7..f529fb0dc8 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -1041,7
to the guest as
KVM blocks access to the PROVISIONKEY by default and requires userspace
to provide additional credentials (via ioctl()) to expose PROVISIONKEY.
Signed-off-by: Sean Christopherson
---
hw/i386/sgx-epc.c | 17 +
include/hw/i386/sgx-epc.h | 1 +
target/i386/cpu.c
CPUID leaf 12_1_EBX is an Intel-defined feature bits leaf enumerating
the platform's SGX extended capabilities. Currently there is a single
capability:
- EXINFO: record information about #PFs and #GPs in the enclave's SSA
Signed-off-by: Sean Christopherson
---
target/i386/cpu.c | 20
Add helpers to detect if SGX EPC exists above 4g, and if so, where SGX
EPC above 4g ends. Use the helpers to adjust the device memory range
if SGX EPC exists above 4g.
Note that SGX EPC is currently hardcoded to reside above 4g.
Signed-off-by: Sean Christopherson
---
hw/i386/pc.c
should report support for
PROVISIONKEY via CPUID if and only if it supports KVM_CAP_SGX_ATTRIBUTE.
Signed-off-by: Sean Christopherson
---
target/i386/cpu.c | 5 -
target/i386/kvm-stub.c | 5 +
target/i386/kvm.c | 25 +
target/i386/kvm_i386.h | 3 +++
4
Exposing EPC to guests does not require -maxmem,
and last but not least allows all of EPC to be enumerated in a single
ACPI entry, which is expected by some kernels, e.g. Windows 7 and 8.
Signed-off-by: Sean Christopherson
---
hw/i386/sgx-epc.c | 107 +-
inclu
Note that SGX EPC is currently guaranteed to reside in a single
contiguous chunk of memory regardless of the number of EPC sections.
Signed-off-by: Sean Christopherson
---
hw/i386/pc.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index 614d464394
is not yet defined for bare metal.
Signed-off-by: Sean Christopherson
---
hw/i386/acpi-build.c | 22 ++
1 file changed, 22 insertions(+)
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index f3fdfefcd5..73d5321e0e 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi
The 'sgx-epc' device is essentially a placeholder at this time; it will
be fully implemented in a future patch along with a dedicated command
to create 'sgx-epc' devices.
Signed-off-by: Sean Christopherson
---
hw/i386/Makefile.objs | 1 +
hw/i386/sgx-epc.c
Request SGX and SGX Launch Control to be enabled in FEATURE_CONTROL when
the features are exposed to the guest.
Signed-off-by: Sean Christopherson
---
hw/i386/pc.c | 10 +-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index 549c437050
KVM_CAP_SGX_ATTRIBUTE is a proposed capability for Intel SGX that can be
used by userspace to enable privileged attributes, e.g. access to the
PROVISIONKEY. The capability number is a placeholder defined well above
existing capabilities so that it's stable during development.
Signed-off-by: Sean
SGX EPC virtualization is currently only supported by KVM.
Signed-off-by: Sean Christopherson
---
hw/i386/pc_q35.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/hw/i386/pc_q35.c b/hw/i386/pc_q35.c
index 397e1fdd2f..ed385b8ca2 100644
--- a/hw/i386/pc_q35.c
+++ b/hw/i386/pc_q35.c
@@ -178,6
FEAT_12_1_EAX). Bits 63:32 are currently all
reserved and bits 127:64 correspond to the allowed XSAVE Feature Request
Mask, which is calculated based on other CPU features, e.g. XSAVE, MPX,
AVX, etc... and is not exposed to the user.
Signed-off-by: Sean Christopherson
---
target/i386/cpu.c | 20
MSRs won't be set to a random or hardware specific value.
Likewise, migrate the MSRs if they are exposed to the guest.
Signed-off-by: Sean Christopherson
---
target/i386/cpu.h | 1 +
target/i386/kvm.c | 21 +
target/i386/machine.c | 20
3 files
.
Signed-off-by: Sean Christopherson
---
backends/Makefile.objs | 1 +
backends/hostmem-epc.c | 91 ++
2 files changed, 92 insertions(+)
create mode 100644 backends/hostmem-epc.c
diff --git a/backends/Makefile.objs b/backends/Makefile.objs
index 981e8e122f
SGX adds multiple flags to FEATURE_CONTROL to enable SGX and Flexible
Launch Control.
Signed-off-by: Sean Christopherson
---
target/i386/kvm.c | 5 +
1 file changed, 5 insertions(+)
diff --git a/target/i386/kvm.c b/target/i386/kvm.c
index 07565820bd..e40c4fd673 100644
--- a/target/i386
SGX capabilities are enumerated through CPUID_0x12.
Signed-off-by: Sean Christopherson
---
target/i386/cpu.c | 5 +
1 file changed, 5 insertions(+)
diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index a951a02baa..0e6b9980d9 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
ber of SGX sub-leafs is "NULL" terminated.
Signed-off-by: Sean Christopherson
---
target/i386/kvm.c | 19 +++
1 file changed, 19 insertions(+)
diff --git a/target/i386/kvm.c b/target/i386/kvm.c
index dcda0bb0e9..8a3dccf54e 100644
--- a/target/i386/kvm.c
+++ b/targ
SGX EPC virtualization is currently only supported by KVM.
Signed-off-by: Sean Christopherson
---
hw/i386/pc_piix.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index c2280c72ef..3e70c6e311 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
On Tue, Oct 01, 2019 at 07:20:17AM -0700, Jim Mattson wrote:
> On Mon, Sep 30, 2019 at 5:45 PM Huang, Kai wrote:
> >
> > On Mon, 2019-09-30 at 12:23 -0700, Jim Mattson wrote:
> > > On Mon, Sep 30, 2019 at 10:54 AM Eduardo Habkost
> > > wrote:
> > > I had only looked at the SVM implementation of
On Tue, Oct 01, 2019 at 10:23:31AM -0700, Jim Mattson wrote:
> On Tue, Oct 1, 2019 at 10:06 AM Sean Christopherson
> wrote:
> >
> > On Tue, Oct 01, 2019 at 07:20:17AM -0700, Jim Mattson wrote:
> > > On Mon, Sep 30, 2019 at 5:45 PM Huang, Kai wrote:
> > > &
On Tue, Oct 13, 2020 at 01:19:30PM +0800, Yang Weijiang wrote:
> With more components in XSS being developed on Intel platform,
> it's necessary to clean up existing XSAVE related feature words to
> make the name clearer. It's to prepare for adding CET related support
> in following patches.
>
>
On Sun, Oct 11, 2020 at 10:11:39AM -0400, harry harry wrote:
> Hi Maxim,
>
> Thanks much for your reply.
>
> On Sun, Oct 11, 2020 at 3:29 AM Maxim Levitsky wrote:
> >
> > On Sun, 2020-10-11 at 01:26 -0400, harry harry wrote:
> > > Hi QEMU/KVM developers,
> > >
> > > I am sorry if my email
On Tue, Oct 13, 2020 at 12:30:39AM -0400, harry harry wrote:
> Hi Sean,
>
> Thank you very much for your thorough explanations. Please see my
> inline replies as follows. Thanks!
>
> On Mon, Oct 12, 2020 at 12:54 PM Sean Christopherson
> wrote:
> >
> > No,
On Tue, Oct 13, 2020 at 01:33:28AM -0400, harry harry wrote:
> > > Do you mean that GPAs are different from their corresponding HVAs when
> > > KVM does the walks (as you said above) in software?
> >
> > What do you mean by "different"? GPAs and HVAs are two completely
> different
> > address
On Thu, Sep 17, 2020 at 01:56:21PM -0500, Tom Lendacky wrote:
> On 9/17/20 12:28 PM, Dr. David Alan Gilbert wrote:
> > * Tom Lendacky (thomas.lenda...@amd.com) wrote:
> > > From: Tom Lendacky
> > >
> > > This patch series provides support for launching an SEV-ES guest.
> > >
> > > Secure
maintenance of the code.
>
> Suggested-by: Vitaly Kuznetsov
> Suggested-by: Paolo Bonzini
> Signed-off-by: Sean Christopherson
> Signed-off-by: Krish Sadhukhan
> ---
> arch/x86/kvm/vmx/nested.c | 2 +-
> arch/x86/kvm/vmx/vmx.c| 234
> +++
This needs a changelog.
I would also split the non-x86 parts, i.e. the kvm_arch_* renames, to a
separate patch.
On Tue, Jul 28, 2020 at 12:10:45AM +, Krish Sadhukhan wrote:
> Suggested-by: Vitaly Kuznetsov
> Suggested-by: Paolo Bonzini
> Signed-off-by: Sean Christopherson
>
maintenance of the code.
I'd probably prefer to split this into two patches, one to rename all the
functions and then the second to introduce the autofill macros. Ditto for
VMX.
> Suggested-by: Vitaly Kuznetsov
> Suggested-by: Paolo Bonzini
> Signed-off-by: Sean Christopherson
All o
On Thu, May 21, 2020 at 01:42:46PM +1000, David Gibson wrote:
> A number of hardware platforms are implementing mechanisms whereby the
> hypervisor does not have unfettered access to guest memory, in order
> to mitigate the security impact of a compromised hypervisor.
>
> AMD's SEV implements
On Wed, Jun 17, 2020 at 05:09:31PM +0200, Paolo Bonzini wrote:
> In order to allow everyone to present at KVM Forum, including people
> who might not have been able to travel to Dublin, we are extending the
> submission deadline for presentations for 6 more weeks!
>
> * CFP Closes: Sunday, August
On Thu, Jun 04, 2020 at 01:11:29PM +1000, David Gibson wrote:
> On Mon, Jun 01, 2020 at 10:16:18AM +0100, Dr. David Alan Gilbert wrote:
> > * Sean Christopherson (sean.j.christopher...@intel.com) wrote:
> > > On Thu, May 21, 2020 at 01:42:46PM +1000, David Gibson wrote:
>
On Tue, Jun 09, 2020 at 02:42:59PM -0400, Michael S. Tsirkin wrote:
> On Tue, Jun 09, 2020 at 08:38:15PM +0200, David Hildenbrand wrote:
> > On 09.06.20 18:18, Eduardo Habkost wrote:
> > > On Tue, Jun 09, 2020 at 11:59:04AM -0400, Michael S. Tsirkin wrote:
> > >> On Tue, Jun 09, 2020 at 03:26:08PM
On Mon, Jul 12, 2021, Maxim Levitsky wrote:
> On Mon, 2021-07-12 at 08:02 -0500, harry harry wrote:
> > Dear Maxim,
> >
> > Thanks for your reply. I knew, in our current design/implementation,
> > EPT/NPT is enabled by a module param. I think it is possible to modify
> > the QEMU/KVM code to let
On Mon, May 03, 2021, Paolo Bonzini wrote:
> On 30/04/21 08:24, Yang Zhong wrote:
> > +void pc_machine_init_sgx_epc(PCMachineState *pcms)
> > +{
> > +SGXEPCState *sgx_epc;
> > +X86MachineState *x86ms = X86_MACHINE(pcms);
> > +
> > +sgx_epc = g_malloc0(sizeof(*sgx_epc));
> > +
On Tue, May 04, 2021, Paolo Bonzini wrote:
> On 04/05/21 02:09, Sean Christopherson wrote:
> > Is there a way to process "-device sgx-epc..." before vCPUs are realized?
> > The
> > ordering problem was the only reason I added a dedicated option.
>
> If i
On Tue, Apr 06, 2021, Michael Tokarev wrote:
> Hi!
>
> It looks like this commit:
>
> commit 87fa7f3e98a1310ef1ac1900e7ee7f9610a038bc
> Author: Thomas Gleixner
> Date: Wed Jul 8 21:51:54 2020 +0200
>
> x86/kvm: Move context tracking where it belongs
>
> Context tracking for KVM
On Fri, Sep 10, 2021, Paolo Bonzini wrote:
> On 19/07/21 13:21, Yang Zhong wrote:
> > +void sgx_memory_backend_reset(HostMemoryBackend *backend, int fd,
> > + Error **errp)
> > +{
> > > +MemoryRegion *mr = &backend->mr;
> > +
> > +mr->enabled = false;
> > +
> > +/*
On Fri, Sep 10, 2021, Paolo Bonzini wrote:
> On 10/09/21 17:34, Sean Christopherson wrote:
> > > Yang explained to me (offlist) that this is needed because Windows fails
> > > to
> > > reboot without it. We would need a way to ask Linux to reinitialize the
> >
On Fri, Sep 10, 2021, Paolo Bonzini wrote:
> On 10/09/21 19:34, Sean Christopherson wrote:
> > On Fri, Sep 10, 2021, Paolo Bonzini wrote:
> > > On 10/09/21 17:34, Sean Christopherson wrote:
> > > > The only other option that comes to mind is a dedicated ioctl().
On Mon, Sep 13, 2021, Jarkko Sakkinen wrote:
> On Fri, 2021-09-10 at 17:10 +0200, Paolo Bonzini wrote:
> > On 19/07/21 13:21, Yang Zhong wrote:
> > > +void sgx_memory_backend_reset(HostMemoryBackend *backend, int fd,
> > > + Error **errp)
> > > +{
> > > +
On Wed, Jul 14, 2021, harry harry wrote:
> > Heh, because the MMUs are all per-vCPU, it actually wouldn't be that much
> > effort
> > beyond supporting !TDP and TDP for different VMs...
>
> Sorry, may I know what do you mean by "MMUs are all per-vCPU"? Do you
> mean the MMUs walk the page tables
On Wed, Jul 28, 2021, harry harry wrote:
> Sean, sorry for the late reply. Thanks for your careful explanations.
>
> > For emulation of any instruction/flow that starts with a guest virtual
> > address.
> > On Intel CPUs, that includes quite literally any "full" instruction
> > emulation,
> >
On Fri, Dec 31, 2021, Chao Peng wrote:
> On Fri, Dec 24, 2021 at 12:13:51PM +0800, Chao Peng wrote:
> > On Thu, Dec 23, 2021 at 06:06:19PM +0000, Sean Christopherson wrote:
> > > On Thu, Dec 23, 2021, Chao Peng wrote:
> > > > This new function establishes
On Fri, Dec 31, 2021, Chao Peng wrote:
> On Fri, Dec 24, 2021 at 11:53:15AM +0800, Robert Hoo wrote:
> > On Thu, 2021-12-23 at 20:29 +0800, Chao Peng wrote:
> > > From: "Kirill A. Shutemov"
> > >
> > > +static void notify_fallocate(struct inode *inode, pgoff_t start,
> > > pgoff_t end)
> > > +{
On Fri, Dec 31, 2021, Chao Peng wrote:
> On Thu, Dec 23, 2021 at 05:35:37PM +0000, Sean Christopherson wrote:
> > On Thu, Dec 23, 2021, Chao Peng wrote:
> > > diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> > > index 1daa45268de2..41434322fa23 1006
On Fri, Dec 31, 2021, Chao Peng wrote:
> On Tue, Dec 28, 2021 at 09:48:08PM +0000, Sean Christopherson wrote:
> > KVM handles
> > reverse engineering the memslot to get the offset and whatever else it
> > needs.
> > notify_fallocate() and other callbacks are unch
On Wed, Jan 05, 2022, Chao Peng wrote:
> On Tue, Jan 04, 2022 at 05:31:30PM +0000, Sean Christopherson wrote:
> > On Fri, Dec 31, 2021, Chao Peng wrote:
> > > On Fri, Dec 24, 2021 at 12:13:51PM +0800, Chao Peng wrote:
> > > > On Thu, Dec 23, 2021 at 06:06:19PM +0
On Tue, Dec 21, 2021, Chao Peng wrote:
> This is the third version of this series which try to implement the
> fd-based KVM guest private memory.
...
> Test
>
> This code has been tested with latest TDX code patches hosted at
> (https://github.com/intel/tdx/tree/kvm-upstream) with minimal
On Thu, Dec 23, 2021, Chao Peng wrote:
> Extend the memslot definition to provide fd-based private memory support
> by adding two new fields(fd/ofs). The memslot then can maintain memory
> for both shared and private pages in a single memslot. Shared pages are
> provided in the existing way by
On Thu, Dec 23, 2021, Chao Peng wrote:
> Similar to hva_tree for hva range, maintain interval tree ofs_tree for
> offset range of a fd-based memslot so the lookup by offset range can be
> faster when memslot count is high.
This won't work. The hva_tree relies on there being exactly one virtual
On Fri, Dec 24, 2021, Chao Peng wrote:
> On Thu, Dec 23, 2021 at 06:02:33PM +0000, Sean Christopherson wrote:
> > On Thu, Dec 23, 2021, Chao Peng wrote:
> > > Similar to hva_tree for hva range, maintain interval tree ofs_tree for
> > > offset range of a fd-based mems
On Fri, Dec 24, 2021, Chao Peng wrote:
> On Fri, Dec 24, 2021 at 12:09:47AM +0100, Paolo Bonzini wrote:
> > On 12/23/21 19:34, Sean Christopherson wrote:
> > > > select HAVE_KVM_PM_NOTIFIER if PM
> > > > + select MEMFD_OPS
> > > MEMFD_O
On Thu, Dec 23, 2021, Chao Peng wrote:
> This new function establishes the mapping in KVM page tables for a
> given gfn range. It can be used in the memory fallocate callback for
> memfd based memory to establish the mapping for KVM secondary MMU when
> the pages are allocated in the memory
On Thu, Dec 23, 2021, Chao Peng wrote:
> This new exit allows user space to handle memory-related errors.
> Currently it supports two types (KVM_EXIT_MEM_MAP_SHARED/PRIVATE) of
> errors which are used for shared memory <-> private memory conversion
> in memory encryption usage.
>
> After private
On Thu, Dec 23, 2021, Chao Peng wrote:
> diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
> index 03b2ce34e7f4..86655cd660ca 100644
> --- a/arch/x86/kvm/Kconfig
> +++ b/arch/x86/kvm/Kconfig
> @@ -46,6 +46,7 @@ config KVM
> select SRCU
> select INTERVAL_TREE
> select
On Fri, Nov 19, 2021, David Hildenbrand wrote:
> On 19.11.21 16:19, Jason Gunthorpe wrote:
> > As designed the above looks useful to import a memfd to a VFIO
> > container but could you consider some more generic naming than calling
> > this 'guest' ?
>
> +1 the guest terminology is somewhat
On Fri, Nov 19, 2021, Jason Gunthorpe wrote:
> On Fri, Nov 19, 2021 at 07:18:00PM +0000, Sean Christopherson wrote:
> > No ideas for the kernel API, but that's also less concerning since
> > it's not set in stone. I'm also not sure that dedicated APIs for
> > each high-ish
On Fri, Nov 19, 2021, Jason Gunthorpe wrote:
> On Fri, Nov 19, 2021 at 10:21:39PM +0000, Sean Christopherson wrote:
> > On Fri, Nov 19, 2021, Jason Gunthorpe wrote:
> > > On Fri, Nov 19, 2021 at 07:18:00PM +, Sean Christopherson wrote:
> > > > No ideas for the ke
On Tue, Dec 07, 2021, Chris Murphy wrote:
> cc: qemu-devel
>
> Hi,
>
> I'm trying to help progress a very troublesome and so far elusive bug
> we're seeing in Fedora infrastructure. When running dozens of qemu-kvm
> VMs simultaneously, eventually they become unresponsive, as well as
> new
On Thu, Jul 15, 2021, harry harry wrote:
> Hi Sean,
>
> Thanks for the explanations. Please see my comments below. Thanks!
>
> > When TDP (EPT) is used, the hardware MMU has two parts: the TDP PTEs that
> > are controlled by KVM, and the IA32 PTEs that are controlled by the guest.
> > And
On Thu, Jul 15, 2021, harry harry wrote:
> Hi Sean,
>
> > No, each vCPU has its own MMU instance, where an "MMU instance" is (mostly)
> > a KVM
> > construct. Per-vCPU MMU instances are necessary because each vCPU has its
> > own
> > relevant state, e.g. CR0, CR4, EFER, etc..., that affects
On Wed, Jan 05, 2022, Yan Zhao wrote:
> Sorry, maybe I didn't express it clearly.
>
> As in the kvm_faultin_pfn_private(),
> static bool kvm_faultin_pfn_private(struct kvm_vcpu *vcpu,
> struct kvm_page_fault *fault,
> bool
On Wed, Mar 30, 2022, Quentin Perret wrote:
> On Wednesday 30 Mar 2022 at 09:58:27 (+0100), Steven Price wrote:
> > On 29/03/2022 18:01, Quentin Perret wrote:
> > > Is implicit sharing a thing? E.g., if a guest makes a memory access in
> > > the shared gpa range at an address that doesn't have a
On Wed, Mar 30, 2022, Steven Price wrote:
> On 29/03/2022 18:01, Quentin Perret wrote:
> > Is implicit sharing a thing? E.g., if a guest makes a memory access in
> > the shared gpa range at an address that doesn't have a backing memslot,
> > will KVM check whether there is a corresponding private
On Fri, Apr 01, 2022, Quentin Perret wrote:
> On Friday 01 Apr 2022 at 17:14:21 (+), Sean Christopherson wrote:
> > On Fri, Apr 01, 2022, Quentin Perret wrote:
> > I assume there is a scenario where a page can be converted from
> > shared=>private?
> > If
On Fri, Apr 01, 2022, Quentin Perret wrote:
> The typical flow is as follows:
>
> - the host asks the hypervisor to run a guest;
>
> - the hypervisor does the context switch, which includes switching
>stage-2 page-tables;
>
> - initially the guest has an empty stage-2 (we don't require
>
On Mon, Apr 04, 2022, Quentin Perret wrote:
> On Friday 01 Apr 2022 at 12:56:50 (-0700), Andy Lutomirski wrote:
> FWIW, there are a couple of reasons why I'd like to have in-place
> conversions:
>
> - one goal of pKVM is to migrate some things away from the Arm
>Trustzone environment (e.g.
On Mon, Mar 28, 2022, Quentin Perret wrote:
> Hi Sean,
>
> Thanks for the reply, this helps a lot.
>
> On Monday 28 Mar 2022 at 17:13:10 (+), Sean Christopherson wrote:
> > On Thu, Mar 24, 2022, Quentin Perret wrote:
> > > For Protected KVM (and I suspect mos
On Thu, Mar 24, 2022, Quentin Perret wrote:
> For Protected KVM (and I suspect most other confidential computing
> solutions), guests have the ability to share some of their pages back
> with the host kernel using a dedicated hypercall. This is necessary
> for e.g. virtio communications, so these
On Thu, Mar 10, 2022, Chao Peng wrote:
> Extend the memslot definition to provide fd-based private memory support
> by adding two new fields (private_fd/private_offset). The memslot then
> can maintain memory for both shared pages and private pages in a single
> memslot. Shared pages are provided
On Mon, Mar 28, 2022, Nakajima, Jun wrote:
> > On Mar 28, 2022, at 1:16 PM, Andy Lutomirski wrote:
> >
> > On Thu, Mar 10, 2022 at 6:09 AM Chao Peng
> > wrote:
> >>
> >> This is the v5 of this series which tries to implement the fd-based KVM
> >> guest private memory. The patches are based on
On Thu, Mar 10, 2022, Chao Peng wrote:
> @@ -2217,4 +2220,34 @@ static inline void kvm_handle_signal_exit(struct
> kvm_vcpu *vcpu)
> /* Max number of entries allowed for each kvm dirty ring */
> #define KVM_DIRTY_RING_MAX_ENTRIES 65536
>
> +#ifdef CONFIG_MEMFILE_NOTIFIER
> +static inline
On Thu, Mar 10, 2022, Chao Peng wrote:
> @@ -3890,7 +3893,59 @@ static bool kvm_arch_setup_async_pf(struct kvm_vcpu
> *vcpu, gpa_t cr2_or_gpa,
> kvm_vcpu_gfn_to_hva(vcpu, gfn), );
> }
>
> -static bool kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault
On Thu, Mar 10, 2022, Chao Peng wrote:
> @@ -4476,14 +4477,23 @@ static long kvm_vm_ioctl(struct file *filp,
> break;
> }
> case KVM_SET_USER_MEMORY_REGION: {
> - struct kvm_userspace_memory_region kvm_userspace_mem;
> + struct
On Thu, Mar 10, 2022, Chao Peng wrote:
> This new KVM exit allows userspace to handle memory-related errors. It
> indicates an error happens in KVM at guest memory range [gpa, gpa+size).
> The flags includes additional information for userspace to handle the
> error. Currently bit 0 is defined as
On Thu, Mar 10, 2022, Chao Peng wrote:
> diff --git a/mm/Makefile b/mm/Makefile
> index 70d4309c9ce3..f628256dce0d 100644
> +void memfile_notifier_invalidate(struct memfile_notifier_list *list,
> + pgoff_t start, pgoff_t end)
> +{
> + struct memfile_notifier
On Thu, Mar 10, 2022, Chao Peng wrote:
> KVM_MEM_PRIVATE is not exposed by default but architecture code can turn
> on it by implementing kvm_arch_private_memory_supported().
>
> Signed-off-by: Yu Zhang
> Signed-off-by: Chao Peng
> ---
> include/linux/kvm_host.h | 1 +
> virt/kvm/kvm_main.c
On Thu, Mar 10, 2022, Chao Peng wrote:
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 67349421eae3..52319f49d58a 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -841,8 +841,43 @@ static int kvm_init_mmu_notifier(struct kvm *kvm)
> #endif /*
On Thu, Mar 10, 2022, Chao Peng wrote:
> Add 'notifier' to memslot to make it a memfile_notifier node and then
> register it to memory backing store via memfile_register_notifier() when
> memslot gets created. When memslot is deleted, do the reverse with
> memfile_unregister_notifier(). Note each
On Tue, Apr 05, 2022, Quentin Perret wrote:
> On Monday 04 Apr 2022 at 15:04:17 (-0700), Andy Lutomirski wrote:
> > >> - it can be very useful for protected VMs to do shared=>private
> > >>conversions. Think of a VM receiving some data from the host in a
> > >>shared buffer, and then it
On Tue, Apr 05, 2022, Andy Lutomirski wrote:
> On Tue, Apr 5, 2022, at 3:36 AM, Quentin Perret wrote:
> > On Monday 04 Apr 2022 at 15:04:17 (-0700), Andy Lutomirski wrote:
> >> The best I can come up with is a special type of shared page that is not
> >> GUP-able and maybe not even mmappable,
On Fri, Apr 08, 2022, Chao Peng wrote:
> On Mon, Mar 28, 2022 at 09:56:33PM +0000, Sean Christopherson wrote:
> > struct kvm_userspace_memory_region_ext {
> > #ifdef __KERNEL__
>
> Is this #ifndef? As I think anonymous struct is only for kernel?
Doh, yes, I inverted th
On Thu, Apr 07, 2022, Andy Lutomirski wrote:
>
> On Thu, Apr 7, 2022, at 9:05 AM, Sean Christopherson wrote:
> > On Thu, Mar 10, 2022, Chao Peng wrote:
> >> Since page migration / swapping is not supported yet, MFD_INACCESSIBLE
> >> memory behave like longter
On Thu, Mar 10, 2022, Chao Peng wrote:
> Since page migration / swapping is not supported yet, MFD_INACCESSIBLE
> memory behave like longterm pinned pages and thus should be accounted to
> mm->pinned_vm and be restricted by RLIMIT_MEMLOCK.
>
> Signed-off-by: Chao Peng
> ---
> mm/shmem.c | 25
On Tue, Apr 05, 2022, Michael Roth wrote:
> On Thu, Mar 10, 2022 at 10:09:09PM +0800, Chao Peng wrote:
> > static inline bool kvm_slot_is_private(const struct kvm_memory_slot *slot)
> > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > index 67349421eae3..52319f49d58a 100644
> > ---
On Thu, Sep 14, 2023, David Hildenbrand wrote:
> On 14.09.23 05:50, Xiaoyao Li wrote:
> > It's the v2 RFC of enabling KVM gmem[1] as the backend for private
> > memory.
> >
> > For confidential-computing, KVM provides gmem/guest_mem interfaces for
> > userspace, like QEMU, to allocate
On Mon, Apr 25, 2022, Andy Lutomirski wrote:
>
>
> On Mon, Apr 25, 2022, at 6:40 AM, Chao Peng wrote:
> > On Sun, Apr 24, 2022 at 09:59:37AM -0700, Andy Lutomirski wrote:
> >>
>
> >>
> >> 2. Bind the memfile to a VM (or at least to a VM technology). Now it's in
> >> the initial state
On Fri, May 20, 2022, Andy Lutomirski wrote:
> The alternative would be to have some kind of separate table or bitmap (part
> of the memslot?) that tells KVM whether a GPA should map to the fd.
>
> What do you all think?
My original proposal was to have explicit shared vs. private memslots, and
On Mon, May 23, 2022, Chao Peng wrote:
> On Fri, May 20, 2022 at 06:31:02PM +0000, Sean Christopherson wrote:
> > On Fri, May 20, 2022, Andy Lutomirski wrote:
> > > The alternative would be to have some kind of separate table or bitmap
> > > (part
> > > of
On Fri, May 20, 2022, zhenwei pi wrote:
> @@ -59,6 +60,12 @@ enum virtio_balloon_config_read {
> VIRTIO_BALLOON_CONFIG_READ_CMD_ID = 0,
> };
>
> +/* the request body to communicate with the host side */
> +struct __virtio_balloon_recover {
> + struct virtio_balloon_recover vbr;
> +