On Thu, Sep 24, 2020 at 02:37:33PM -0500, Brijesh Singh wrote:
>
> On 9/24/20 2:06 PM, Ashish Kalra wrote:
> > Hello Dave,
> >
> > Thanks for your response, please see my replies inline :
> >
> > On Thu, Sep 24, 2020 at 02:53:42PM +0100, Dr. David Ala
Hello Dave,
Thanks for your response, please see my replies inline:
On Thu, Sep 24, 2020 at 02:53:42PM +0100, Dr. David Alan Gilbert wrote:
> * Ashish Kalra (ashish.ka...@amd.com) wrote:
> > Hello Alan, Paolo,
> >
> > I am following up on Brijesh’s patches for SEV gu
Hello Paolo,
Thanks for your response.
On Fri, Sep 25, 2020 at 10:51:05AM +0200, Paolo Bonzini wrote:
> On 22/09/20 22:11, Ashish Kalra wrote:
> > This internally invokes the address_space_rw() accessor functions
> > which we had "fixed" internally (as part of the ear
Hello Alan, Paolo,
I am following up on Brijesh’s patches for SEV guest debugging support for Qemu
using gdb and/or qemu monitor.
I believe the QEMU SEV debug patches were not applied last time; I have
attached the link to the email thread and Paolo's feedback below for reference
[1].
I
Hello Paolo,
On Sat, Sep 26, 2020 at 02:02:20AM +0200, Paolo Bonzini wrote:
> On 26/09/20 01:48, Ashish Kalra wrote:
> > Thanks for your input, I have one additional query with reference to this
> > support :
> >
> > For all explicitly unencrypted guest memory regio
Hello Paolo,
On Fri, Sep 25, 2020 at 10:56:10PM +0200, Paolo Bonzini wrote:
> On 25/09/20 22:46, Ashish Kalra wrote:
> > I was also considering abstracting this vendor/SEV specific debug
> > interface via the CPUClass object, the CPUClass object already has cpu
> > speci
On Tue, Dec 01, 2020 at 11:48:23AM +, Peter Maydell wrote:
> On Mon, 16 Nov 2020 at 19:07, Ashish Kalra wrote:
> >
> > From: Ashish Kalra
> >
> > Introduce new MemoryDebugOps which hook into guest virtual and physical
> > memory debug interfaces such as cpu
On Tue, Dec 01, 2020 at 12:08:28PM +, Peter Maydell wrote:
> On Mon, 16 Nov 2020 at 19:19, Ashish Kalra wrote:
> >
> > From: Brijesh Singh
> >
> > Currently, guest memory access for debugging purposes is performed
On Tue, Dec 01, 2020 at 02:38:30PM +, Peter Maydell wrote:
> On Tue, 1 Dec 2020 at 14:28, Ashish Kalra wrote:
> > On Tue, Dec 01, 2020 at 11:48:23AM +, Peter Maydell wrote:
> > > This seems like a weird place to insert these hooks. Not
> > > all debug related a
From: Ashish Kalra
Add SEV specific MemoryDebugOps which override the default MemoryDebugOps
when SEV memory encryption is enabled. The SEV specific MemoryDebugOps
invoke the generic address_space_rw_debug helpers which will then invoke
the memory region specific callbacks to handle and access
From: Ashish Kalra
This patchset adds QEMU debug support for SEV guests. Debugging requires
access to guest pages, which are encrypted when SEV is enabled.
KVM_SEV_DBG_DECRYPT and KVM_SEV_DBG_ENCRYPT commands are available to
decrypt/encrypt the guest pages, if the guest policy allows
  MemoryRegionRAMReadWriteOps ops;
  ops.read = mem_read;
  ops.write = mem_write;
  memory_region_init_ram(mem, NULL, "memory", size, NULL);
  memory_region_set_ram_debug_ops(mem, &ops);
Signed-off-by: Brijesh Singh
Signed-off-by: Ashish Kalra
---
include/exec/memory.h | 27 +++
1 file changed, 27
From: Ashish Kalra
Add new address_space_read and address_space_write debug helper
interfaces which can be invoked by vendor specific guest memory
debug assist/hooks to do guest RAM memory accesses using the
added MemoryRegion callbacks.
Signed-off-by: Ashish Kalra
---
include/exec/memory.h
when debugging an
SEV guest.
Signed-off-by: Brijesh Singh
Signed-off-by: Ashish Kalra
---
include/exec/cpu-common.h | 15 +
softmmu/physmem.c | 47 +++
2 files changed, 62 insertions(+)
diff --git a/include/exec/cpu-common.h b/include
From: Ashish Kalra
Introduce new MemoryDebugOps which hook into guest virtual and physical
memory debug interfaces such as cpu_memory_rw_debug, to allow vendor-specific
assists/hooks for debugging and for delegating guest memory access.
This is required, for example, on the AMD SEV platform
walker callback.
Signed-off-by: Brijesh Singh
Signed-off-by: Ashish Kalra
---
accel/kvm/kvm-all.c| 19 +++
accel/stubs/kvm-stub.c | 8
include/sysemu/kvm.h | 15 +++
3 files changed, 42 insertions(+)
diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm
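The MemoryDebugOps mechanism described in these patches — a replaceable table of debug accessors, so that an encrypted-guest backend can decrypt pages on the fly — can be sketched in miniature. This is a conceptual Python stand-in: the names, the ops table, and the XOR "cipher" are all illustrative, not the actual QEMU interface.

```python
# Toy model of the MemoryDebugOps idea: debug reads go through a
# replaceable ops table, so an encrypted-guest backend can decrypt
# on the fly. Names and the XOR "cipher" are illustrative only.
KEY = 0x5A

# "Encrypted" guest RAM: plaintext bytes 0..15, XORed with KEY.
guest_ram = bytearray(b ^ KEY for b in range(16))

def default_debug_read(addr, length):
    # Default ops: return raw (still-encrypted) bytes.
    return bytes(guest_ram[addr:addr + length])

def sev_debug_read(addr, length):
    # SEV-style override: decrypt before handing bytes to the debugger.
    return bytes(b ^ KEY for b in guest_ram[addr:addr + length])

debug_ops = {"read": default_debug_read}

def cpu_memory_rw_debug(addr, length):
    # The generic entry point only ever calls through the ops table.
    return debug_ops["read"](addr, length)

raw = cpu_memory_rw_debug(0, 4)     # ciphertext with default ops
debug_ops["read"] = sev_debug_read  # vendor-specific hook installed
clear = cpu_memory_rw_debug(0, 4)   # now plaintext
```

The point of the indirection is that callers like the gdbstub never change; only the ops table does.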
From: Brijesh Singh
The KVM_SEV_DBG_DECRYPT and KVM_SEV_DBG_ENCRYPT commands are used for
decrypting and encrypting guest memory. The commands work only if the
guest policy allows debugging.
Signed-off-by: Brijesh Singh
Signed-off-by: Ashish Kalra
---
accel/kvm/kvm-all.c | 2
page table walk is added as a
vendor specific assist/hook as part of the new MemoryDebugOps and
available via the new debug API interface cpu_physical_memory_pte_mask_debug().
Signed-off-by: Brijesh Singh
Signed-off-by: Ashish Kalra
---
include/exec/cpu-common.h | 3 ++
include/exec/memory.h
From: Brijesh Singh
When memory encryption is enabled, the guest RAM and boot flash ROM will
contain encrypted data. Setting the debug ops allows us to invoke the
encryption APIs when accessing memory for debug purposes.
Signed-off-by: Brijesh Singh
Signed-off-by: Ashish Kalra
. This is a prerequisite to support
debugging an encrypted guest. When a request with debug=1 is seen, the
encryption APIs will be used to access the guest memory.
Signed-off-by: Brijesh Singh
Signed-off-by: Ashish Kalra
---
include/exec/memattrs.h | 2 ++
1 file changed, 2 insertions(+)
diff --git a/include
From: Brijesh Singh
Update the HMP commands to use the debug version of APIs when accessing
guest memory.
Signed-off-by: Brijesh Singh
Signed-off-by: Ashish Kalra
---
monitor/misc.c| 4 ++--
softmmu/cpus.c| 2 +-
target/i386/monitor.c | 54
On Wed, Aug 18, 2021 at 12:37:32AM +0200, Paolo Bonzini wrote:
> On Tue, Aug 17, 2021 at 11:54 PM Steve Rutherford
> wrote:
> > > 1) the easy one: the bottom 4G of guest memory are mapped in the mirror
> > > VM 1:1. The ram_addr_t-based addresses are shifted by either 4G or a
> > > huge value
On Wed, Aug 18, 2021 at 02:06:25PM +, Ashish Kalra wrote:
> On Wed, Aug 18, 2021 at 12:37:32AM +0200, Paolo Bonzini wrote:
> > On Tue, Aug 17, 2021 at 11:54 PM Steve Rutherford
> > wrote:
> > > > 1) the easy one: the bottom 4G of guest memory are mapped i
Hello Dave, Steve,
On Tue, Aug 17, 2021 at 09:38:24AM +0100, Dr. David Alan Gilbert wrote:
> * Steve Rutherford (srutherf...@google.com) wrote:
> > On Mon, Aug 16, 2021 at 6:37 AM Ashish Kalra wrote:
> > >
> > > From: Ashish Kalra
> > >
> > >
From: Dov Murik
The mirror field indicates mirror VCPUs. This will allow QEMU to act
differently on mirror VCPUs.
Signed-off-by: Dov Murik
Co-developed-by: Ashish Kalra
Signed-off-by: Ashish Kalra
---
hw/core/cpu-common.c | 1 +
include/hw/core/cpu.h | 3 +++
2 files changed, 4 insertions
From: Dov Murik
The mirror_vcpu flag indicates whether a vcpu is a mirror.
Signed-off-by: Dov Murik
Co-developed-by: Ashish Kalra
Signed-off-by: Ashish Kalra
---
include/hw/boards.h | 2 ++
1 file changed, 2 insertions(+)
diff --git a/include/hw/boards.h b/include/hw/boards.h
index
From: Ashish Kalra
Create the Mirror VM and share the primary VM's encryption context
with it using the KVM_CAP_VM_COPY_ENC_CONTEXT_FROM capability.
Signed-off-by: Ashish Kalra
---
accel/kvm/kvm-all.c | 30 ++
1 file changed, 30 insertions(+)
diff --git a/accel
From: Ashish Kalra
Add a new kvm_mirror_vcpu_thread_fn(), which is QEMU's mirror vcpu
thread, the corresponding kvm_init_mirror_vcpu(), which creates
the vcpus for the mirror VM, and a different KVM run loop,
kvm_mirror_cpu_exec(), which differs from the main KVM run loop as
it currently mainly
From: Ashish Kalra
OVMF expects both fw_cfg and the modern CPU hotplug interface to
return the same boot CPU count. We reduce the fw_cfg boot CPU count
by the number of mirror vcpus, which fails the OVMF sanity check
as the fw_cfg boot CPU count and the modern CPU hotplug interface boot
count no longer match
From: Ashish Kalra
Skip mirror vcpus for vcpu pause, resume and synchronization
operations.
Signed-off-by: Ashish Kalra
---
softmmu/cpus.c | 27 +++
1 file changed, 27 insertions(+)
diff --git a/softmmu/cpus.c b/softmmu/cpus.c
index 071085f840..caed382669 100644
...
Signed-off-by: Dov Murik
Co-developed-by: Ashish Kalra
Signed-off-by: Ashish Kalra
---
hw/core/machine.c | 7 +++
hw/i386/pc.c| 7 +++
include/hw/boards.h | 1 +
qapi/machine.json | 5 -
softmmu/vl.c| 3 +++
5 files changed, 22 insertions(+), 1 deletion(-)
diff
From: Dov Murik
Mark the last mirror_vcpus vcpus in the machine state's possible_cpus as
mirror.
Signed-off-by: Dov Murik
Co-developed-by: Ashish Kalra
Signed-off-by: Ashish Kalra
---
hw/i386/x86.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/hw/i386/x86.c b/hw/i386/x86.c
index
From: Ashish Kalra
Add VM ioctl and enable-cap support for mirror VMs and
a new VM file descriptor for mirror VMs in KVMState.
The VCPU ioctl interface for a mirror VM works as-is,
as it uses a CPUState and a VCPU file descriptor allocated
and set up for mirror vcpus.
Signed-off-by: Ashish
From: Tobin Feldman-Fitzthum
By excluding mirror vcpus from the ACPI tables, we hide them from the
guest OS.
Signed-off-by: Tobin Feldman-Fitzthum
Signed-off-by: Dov Murik
Co-developed-by: Ashish Kalra
Signed-off-by: Ashish Kalra
---
hw/acpi/cpu.c | 10 ++
hw/i386/acpi
From: Ashish Kalra
This is an RFC series for Mirror VM support: mirror VMs are
essentially secondary VMs sharing the encryption context
(ASID) with a primary VM. The patch-set creates a new
VM and shares the primary VM's encryption context
with it using the KVM_CAP_VM_COPY_ENC_CONTEXT_FROM
From: Dov Murik
On x86 machines, when initializing the CPUState structs, set the
mirror_vcpu flag to true for mirror vcpus.
Signed-off-by: Dov Murik
Co-developed-by: Ashish Kalra
Signed-off-by: Ashish Kalra
---
hw/i386/x86.c | 9 +++--
include/hw/i386/x86.h | 3 ++-
2 files
From: Ashish Kalra
Mirror VM does not support any interrupt controller and this
requires disabling the in-kernel APIC support on mirror vcpus.
Signed-off-by: Ashish Kalra
---
hw/i386/kvm/apic.c | 15 +++
1 file changed, 15 insertions(+)
diff --git a/hw/i386/kvm/apic.c b/hw/i386
Hello Paolo,
On Mon, Aug 16, 2021 at 04:15:46PM +0200, Paolo Bonzini wrote:
> On 16/08/21 15:25, Ashish Kalra wrote:
> > From: Ashish Kalra
> >
> > This is an RFC series for Mirror VM support that are
> > essentially secondary VMs sharing the encryption context
&
From: Ashish Kalra
Signed-off-by: Ashish Kalra
---
hw/i386/pc.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index 3856a47390..2c353becb7 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -962,6 +962,9 @@ void pc_memory_init(PCMachineState *pcms
Hello Paolo,
On Mon, Aug 16, 2021 at 04:58:02PM +0200, Paolo Bonzini wrote:
> On 16/08/21 16:44, Ashish Kalra wrote:
> > I think that once the mirror VM starts booting and running the UEFI
> > code, it might be only during the PEI or DXE phase where it will
> > start actuall
Hello Paolo,
On Mon, Aug 16, 2021 at 05:38:55PM +0200, Paolo Bonzini wrote:
> On 16/08/21 17:13, Ashish Kalra wrote:
> > > > I think that once the mirror VM starts booting and running the UEFI
> > > > code, it might be only during the PEI or DXE phase where it will
>
> - We introduce another new vm level ioctl focus on the encrypted
> guest memory accessing:
>
> KVM_MEMORY_ENCRYPT_{READ,WRITE}_MEMORY
>
> struct kvm_rw_memory rw;
> rw.addr = gpa_OR_hva;
> rw.buf = (__u64)src;
> rw.len = len;
> kvm_vm_ioctl(kvm_state,
>
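The quoted KVM_MEMORY_ENCRYPT_{READ,WRITE}_MEMORY proposal fills a small struct and hands it to a VM ioctl. A ctypes sketch of that argument follows; the layout is inferred purely from the quoted pseudo-code (this ioctl never landed upstream in this form, so the field layout and request number are assumptions).

```python
import ctypes

# Layout inferred from the quoted proposal (addr / buf / len, with buf
# carried as a __u64 pointer value). Illustrative, not a real kernel ABI.
class kvm_rw_memory(ctypes.Structure):
    _fields_ = [
        ("addr", ctypes.c_uint64),  # gpa_OR_hva
        ("buf",  ctypes.c_uint64),  # userspace buffer, cast to __u64
        ("len",  ctypes.c_uint64),
    ]

src = ctypes.create_string_buffer(4096)
rw = kvm_rw_memory()
rw.addr = 0x100000
rw.buf = ctypes.addressof(src)
rw.len = ctypes.sizeof(src)
# A real caller would now issue something like:
#   fcntl.ioctl(vm_fd, KVM_MEMORY_ENCRYPT_READ_MEMORY, rw)
# (request number omitted: it was never assigned upstream)
```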
page is
> > private or shared. This list is built during the VM bootup and must be
> > migrated
> > to the target host so that hypervisor on target host can use it for future
> > migration.
> >
> > Signed-off-by: Brijesh Singh
> > Co-developed-by: Ashish Kalra
On Fri, Sep 10, 2021 at 07:56:36AM +, Wang, Wei W wrote:
> On Wednesday, August 4, 2021 8:00 PM, Ashish Kalra wrote:
> > +/*
> > + * Currently this exit is only used by SEV guests for
> > + * MSR_KVM_MIGRATION_CONTROL to indicate if the guest
> >
On Fri, Sep 10, 2021 at 09:11:09AM +, Wang, Wei W wrote:
> On Friday, September 10, 2021 4:48 PM, Ashish Kalra wrote:
> > On Fri, Sep 10, 2021 at 07:54:10AM +, Wang, Wei W wrote:
> > There has been a long discussion on this implementation on KVM mailing list.
> > Track
Hello Yuan,
On Thu, Sep 02, 2021 at 11:23:50PM +, Yao, Yuan wrote:
> >-Original Message-
> >From: Ashish Kalra
> >Sent: Thursday, September 02, 2021 22:05
> >To: yuan@linux.intel.com
> >Cc: thomas.lenda...@amd.com; arm...@redhat.com; ashish
On Thu, Aug 05, 2021 at 04:06:27PM +0300, Dov Murik wrote:
>
>
> On 04/08/2021 14:56, Ashish Kalra wrote:
> > From: Brijesh Singh
> >
> > The user provides the target machine's Platform Diffie-Hellman key (PDH)
> > and certificate chain before starti
Hello Dov,
On Thu, Aug 05, 2021 at 03:20:50PM +0300, Dov Murik wrote:
>
>
> On 04/08/2021 14:55, Ashish Kalra wrote:
> > From: Brijesh Singh
> >
> > When memory encryption is enabled in VM, the guest RAM will be encrypted
> > with the guest-specific key, to pr
Hello Dov,
On Thu, Aug 05, 2021 at 12:42:50PM +0300, Dov Murik wrote:
>
>
> On 04/08/2021 14:54, Ashish Kalra wrote:
> > From: Brijesh Singh
> >
> > AMD SEV migration flow requires that target machine's public Diffie-Hellman
> > key (PDH) and certificate chain
creating the outgoing
encryption context.
Signed-off-by: Brijesh Singh
Signed-off-by: Ashish Kalra
---
migration/migration.c | 61 +++
monitor/hmp-cmds.c| 18 +
qapi/migration.json | 40 +---
3 files changed, 116
ConfidentialGuestMemoryEncryptionOps in this patch
which will be later used by the encrypted guest for migration.
Signed-off-by: Brijesh Singh
Co-developed-by: Ashish Kalra
Signed-off-by: Ashish Kalra
---
include/exec/confidential-guest-support.h | 27 +++
1 file changed, 27 insertions(+)
diff --git
From: Brijesh Singh
The LAUNCH_START is used for creating an encryption context to encrypt
newly created guest, for an incoming guest the RECEIVE_START should be
used.
Reviewed-by: Dr. David Alan Gilbert
Signed-off-by: Brijesh Singh
Signed-off-by: Ashish Kalra
---
target/i386/sev.c | 15
From: Brijesh Singh
Signed-off-by: Brijesh Singh
Signed-off-by: Ashish Kalra
---
docs/amd-memory-encryption.txt | 46 +-
1 file changed, 45 insertions(+), 1 deletion(-)
diff --git a/docs/amd-memory-encryption.txt b/docs/amd-memory-encryption.txt
index
From: Ashish Kalra
AMD SEV encrypts the memory of VMs and because this encryption is done using
an address tweak, the hypervisor will not be able to simply copy ciphertext
between machines to migrate a VM. Instead the AMD SEV Key Management API
provides a set of functions which the hypervisor
From: Brijesh Singh
Reviewed-by: Dr. David Alan Gilbert
Signed-off-by: Brijesh Singh
Signed-off-by: Ashish Kalra
---
docs/amd-memory-encryption.txt | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/docs/amd-memory-encryption.txt b/docs/amd-memory-encryption.txt
index
() is used
by the sender to write the encrypted pages onto the socket, similarly the
sev_load_incoming_page() is used by the target to read the
encrypted pages from the socket and load into the guest memory.
Signed-off-by: Brijesh Singh
Co-developed-by: Ashish Kalra
Signed-off-by: Ashish Kalra
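The save/load flow above streams encrypted pages over the migration socket. A toy framing of that idea is below; the record layout is invented for illustration, while the real sev_save_outgoing_page()/sev_load_incoming_page() operate on a QEMUFile and use the SEV SEND/RECEIVE firmware commands.

```python
import io
import struct

# Toy framing: the sender writes (gpa, length, ciphertext) records and
# the receiver reads them back. Real SEV pages stay encrypted end to
# end; this header layout is illustrative only.
def save_outgoing_page(f, gpa, data):
    f.write(struct.pack("<QI", gpa, len(data)))  # 8-byte gpa, 4-byte len
    f.write(data)

def load_incoming_page(f):
    gpa, length = struct.unpack("<QI", f.read(12))
    return gpa, f.read(length)

buf = io.BytesIO()                       # stand-in for the socket
save_outgoing_page(buf, 0x1000, b"\xaa" * 4096)
buf.seek(0)
gpa, page = load_incoming_page(buf)
```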
Kalra
Signed-off-by: Ashish Kalra
---
include/sysemu/sev.h | 2 ++
target/i386/sev.c| 61
2 files changed, 63 insertions(+)
diff --git a/include/sysemu/sev.h b/include/sysemu/sev.h
index 94d821d737..64fc88d3c5 100644
--- a/include/sysemu/sev.h
+++ b
the
RECEIVE_UPDATE_DATA command to load the encrypted pages into the guest
memory. After migration is completed, we issue the RECEIVE_FINISH command
to transition the SEV guest to the runnable state so that it can be
executed.
Signed-off-by: Brijesh Singh
Co-developed-by: Ashish Kalra
Signed-off-by: Ashish
on target host can
use it for future migration.
Signed-off-by: Brijesh Singh
Co-developed-by: Ashish Kalra
Signed-off-by: Ashish Kalra
---
include/sysemu/sev.h | 2 ++
target/i386/sev.c| 43 +++
2 files changed, 45 insertions(+)
diff --git a/include/sysemu
-by: Brijesh Singh
Co-developed-by: Ashish Kalra
Signed-off-by: Ashish Kalra
---
include/sysemu/sev.h | 2 +
target/i386/sev.c| 221 +++
target/i386/trace-events | 3 +
3 files changed, 226 insertions(+)
diff --git a/include/sysemu/sev.h b
From: Ashish Kalra
KVM_HC_MAP_GPA_RANGE hypercall is used by the SEV guest to notify a
change in the page encryption status to the hypervisor. The hypercall
should be invoked only when the encryption attribute is changed from
encrypted -> decrypted and vice versa. By default all guest pa
From: Ashish Kalra
Add support for userspace MSR filtering using KVM_X86_SET_MSR_FILTER
ioctl and handling of MSRs in userspace. Currently this is only used
for SEV guests which use MSR_KVM_MIGRATION_CONTROL to indicate if the
guest is enabled and ready for migration.
KVM arch code calls
From: Ashish Kalra
Currently OVMF clears the C-bit and marks NonExistent memory space
as decrypted in the page encryption bitmap. By marking the
NonExistent memory space as decrypted it guarantees any future MMIO adds
will work correctly, but this marks flash0 device space as decrypted.
At reset
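The page encryption bitmap mentioned above is conceptually one bit per guest page frame, updated as ranges flip between encrypted and shared. A toy version of the bookkeeping is sketched here; the 4 KiB page size and the set-bit-means-encrypted polarity are assumptions (the real bitmap and its semantics live in KVM/OVMF, not in QEMU-side Python).

```python
PAGE_SHIFT = 12  # assume 4 KiB guest pages

class PageEncBitmap:
    """Toy page-encryption bitmap: a set bit means the page is encrypted."""
    def __init__(self, nr_pages):
        self.bits = bytearray((nr_pages + 7) // 8)

    def set_encrypted(self, gpa, size, encrypted):
        first = gpa >> PAGE_SHIFT
        last = (gpa + size - 1) >> PAGE_SHIFT
        for pfn in range(first, last + 1):
            if encrypted:
                self.bits[pfn >> 3] |= 1 << (pfn & 7)
            else:
                self.bits[pfn >> 3] &= ~(1 << (pfn & 7))

    def is_encrypted(self, gpa):
        pfn = gpa >> PAGE_SHIFT
        return bool(self.bits[pfn >> 3] & (1 << (pfn & 7)))

bm = PageEncBitmap(1024)
bm.set_encrypted(0, 1024 << PAGE_SHIFT, True)  # guest RAM starts encrypted
bm.set_encrypted(0x1000, 2 * 4096, False)      # e.g. a shared/unencrypted range
```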
From: Ashish Kalra
Now, QEMU has a default expected downtime of 300 ms, and
SEV live migration has a bandwidth of 350-450 pages per second
(SEV live migration is generally slow because guest RAM pages are
migrated after encryption by the security processor).
With this expected
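The figures above imply a very small page budget for the final stop-and-copy pass; a quick back-of-the-envelope check, assuming 4 KiB pages:

```python
# At 350-450 pages/s, a 300 ms downtime window only covers roughly
# 105-135 pages (~420-540 KiB of 4 KiB pages), which is why the
# default expected downtime is far too tight for SEV live migration.
def pages_in_downtime(pages_per_sec, downtime_ms):
    return int(pages_per_sec * downtime_ms / 1000)

low = pages_in_downtime(350, 300)      # slowest quoted rate
high = pages_in_downtime(450, 300)     # fastest quoted rate
kib_low, kib_high = low * 4, high * 4  # 4 KiB per page
```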
Hello Dov,
On Thu, Aug 05, 2021 at 09:34:42AM +0300, Dov Murik wrote:
>
>
> On 04/08/2021 14:53, Ashish Kalra wrote:
> > From: Brijesh Singh
> >
> > Signed-off-by: Brijesh Singh
> > Signed-off-by: Ashish Kalra
> > ---
&
On Tue, Aug 24, 2021 at 06:00:51PM -0400, Tobin Feldman-Fitzthum wrote:
> On Mon, Aug 16, 2021 at 04:15:46PM +0200, Paolo Bonzini wrote:
>
> > Hi,
> >
> > first of all, thanks for posting this work and starting the discussion.
> >
> > However, I am not sure if the in-guest migration helper vCPUs
On Fri, Sep 10, 2021 at 10:43:50AM +0100, Daniel P. Berrangé wrote:
> On Wed, Aug 04, 2021 at 11:59:47AM +0000, Ashish Kalra wrote:
> > From: Ashish Kalra
> >
> > Now, qemu has a default expected downtime of 300 ms and
> > SEV Live migration has a page-per-second