@@ -10710,8 +10732,7 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
mutex_unlock(&kvm->slots_lock);
}
static_call_cond(kvm_x86_vm_destroy)(kvm);
- for (i = 0; i < kvm->arch.msr_filter.count; i++)
- kfree(kvm->arch.msr_filter.ranges[i].bitmap);
+ kvm_free_msr_filter(srcu_dereference_check(kvm->arch.msr_filter,
&kvm->srcu, 1));
kvm_pic_destroy(kvm);
kvm_ioapic_destroy(kvm);
kvm_free_vcpus(kvm);
Reviewed-by: Paolo Bonzini
On 16/03/21 19:44, Sean Christopherson wrote:
+ return (ret)true; \
I'm not sure if (void)true is amazing or disgusting, but anyway...
+BUILD_VMX_MSR_BITMAP_HELPER(bool, test, read)
+BUILD_VMX_MSR_BITMAP_HELPER(bool, test, write)
+BUIL
On 17/03/21 08:44, Emanuele Giuseppe Esposito wrote:
+ printf("vcpu executing...\n");
+ vcpu_run(vm, vcpuid);
+ printf("vcpu executed\n");
+
+ switch (get_ucall(vm, vcpuid, &uc)) {
+ case UCALL_SYNC:
+ printf("stage %d sync %ld\n", stage, uc.args[1]);
+
On 17/03/21 11:53, Marc Zyngier wrote:
On Wed, 17 Mar 2021 10:40:23 +,
Paolo Bonzini wrote:
On 17/03/21 10:10, Marc Zyngier wrote:
@@ -366,7 +366,7 @@ static int hyp_map_walker(u64 addr, u64 end, u32 level,
kvm_pte_t *ptep,
if (WARN_ON(level == KVM_PGTABLE_MAX_LEVELS - 1
On 17/03/21 08:45, Emanuele Giuseppe Esposito wrote:
+ struct kvm_msr_list features_list;
buffer.header.nmsrs = 1;
buffer.entry.index = msr_index;
+ features_list.nmsrs = 1;
+
kvm_fd = open(KVM_DEV_PATH, O_RDONLY);
if (kvm_fd < 0)
exit(KS
On 17/03/21 10:10, Marc Zyngier wrote:
@@ -366,7 +366,7 @@ static int hyp_map_walker(u64 addr, u64 end, u32 level,
kvm_pte_t *ptep,
if (WARN_ON(level == KVM_PGTABLE_MAX_LEVELS - 1))
return -EINVAL;
- childp = (kvm_pte_t *)get_zeroed_page(GFP_KERNEL);
+ childp =
On 16/03/21 00:37, Ben Gardon wrote:
The Linux Test Robot found a few RCU warnings in the TDP MMU:
https://www.spinics.net/lists/kernel/msg3845500.html
https://www.spinics.net/lists/kernel/msg3845521.html
Fix these warnings and cleanup a hack in tdp_mmu_iter_cond_resched.
Tested by compiling as
On 16/03/21 18:52, Sean Christopherson wrote:
I don't
know that holding the fd instead of the kvm makes that much better though,
are there advantages to that I'm not seeing?
If there's no kvm pointer, it's much more difficult for someone to do the wrong
thing, and any such shenanigans stick out
On 15/03/21 19:19, Maxim Levitsky wrote:
On Mon, 2021-03-15 at 18:56 +0100, Paolo Bonzini wrote:
On 15/03/21 18:43, Maxim Levitsky wrote:
+ if (!guest_cpuid_is_intel(vcpu)) {
+ /*
+* If hardware supports Virtual VMLOAD VMSAVE then enable it
On 15/03/21 18:43, Maxim Levitsky wrote:
+ if (!guest_cpuid_is_intel(vcpu)) {
+ /*
+* If hardware supports Virtual VMLOAD VMSAVE then enable it
+* in VMCB and clear intercepts to avoid #VMEXIT.
+*/
+ if (vls) {
+
On 15/03/21 18:05, Tobin Feldman-Fitzthum wrote:
I can answer this part. I think this will actually be simpler than
with auxiliary vCPUs. There will be a separate pair of VM+vCPU file
descriptors within the same QEMU process, and some code to set up the
memory map using KVM_SET_USER_MEMORY_
Linus,
The following changes since commit 9e46f6c6c959d9bb45445c2e8f04a75324a0dfd0:
KVM: SVM: Clear the CR4 register on reset (2021-03-02 14:39:11 -0500)
are available in the Git repository at:
https://git.kernel.org/pub/scm/virt/kvm/kvm.git tags/for-linus
for you to fetch changes up to 35
On 13/03/21 01:57, Wanpeng Li wrote:
A third option would be to split the paths. In the end, it's only the ptr/val
line that's shared.
I just sent out a formal patch for my alternative fix, I think the
whole logic in kvm_wait is more clear w/ my version.
I don't know, having three "if"s in 1
On 09/03/21 23:42, Sean Christopherson wrote:
A few stragglers bundled together to hopefully avoid more messy conflicts.
v2 (relative to the fixup mini-series):
- Moved SME fixes from "PCID fixup" to its correct location, in "Mark
PAE roots decrypted".
- Collected Reviewed/Tested-by t
On 24/02/21 02:37, Wanpeng Li wrote:
From: Wanpeng Li
# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 88
On-line CPU(s) list: 0-63
Off-line CPU(s) list: 64-87
# cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-5.10
On 05/03/21 20:11, Muhammad Usama Anjum wrote:
This patch adds the annotation to fix the following sparse errors:
arch/x86/kvm//x86.c:8147:15: error: incompatible types in comparison expression
(different address spaces):
arch/x86/kvm//x86.c:8147:15:struct kvm_apic_map [noderef] __rcu *
arch
On 04/03/21 01:35, Wanpeng Li wrote:
From: Wanpeng Li
Advancing the timer expiration should only be necessary on guest initiated
writes. When we cancel the timer and clear .pending during state restore,
clear expired_tscdeadline as well.
Reviewed-by: Sean Christopherson
Signed-off-by: Wanpeng
On 10/03/21 23:24, Ben Gardon wrote:
On Wed, Mar 10, 2021 at 1:14 PM Sean Christopherson wrote:
On Wed, Mar 10, 2021, Paolo Bonzini wrote:
On 10/03/21 01:30, Sean Christopherson wrote:
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 50ef757c5586..f0c99fa04ef2
On 12/03/21 00:16, Ben Gardon wrote:
The Linux Test Robot found a few RCU warnings in the TDP MMU:
https://www.spinics.net/lists/kernel/msg3845500.html
https://www.spinics.net/lists/kernel/msg3845521.html
Fix these warnings and cleanup a hack in tdp_mmu_iter_cond_resched.
Tested by compiling as
On 12/03/21 17:35, Sean Christopherson wrote:
What about calling it tdp_iter_restart()? Or tdp_iter_resume()? Or something
like tdp_iter_restart_at_next() if we want it to give a hint that the next_last
thing is where it restarts.
I think I like tdp_iter_restart() the best. It'd be easy enoug
On 12/03/21 16:37, Sean Christopherson wrote:
On Thu, Mar 11, 2021, Ben Gardon wrote:
The pt passed into handle_removed_tdp_mmu_page does not need RCU
protection, as it is not at any risk of being freed by another thread at
that point. However, the implicit cast from tdp_sptep_t to u64 * dropped
On 11/03/21 16:54, Sean Christopherson wrote:
On Tue, Feb 23, 2021, Wanpeng Li wrote:
On Tue, 23 Feb 2021 at 13:25, Wanpeng Li wrote:
From: Wanpeng Li
After commit 997acaf6b4b59c (lockdep: report broken irq restoration), the guest
triggers the splat below during boot:
raw_local_irq_restore() call
On 11/03/21 16:30, Tobin Feldman-Fitzthum wrote:
I am not sure how the mirror VM will be supported in QEMU. Usually there
is one QEMU process per-vm. Now we would need to run a second VM and
communicate with it during migration. Is there a way to do this without
adding significant complexity?
On 10/03/21 15:58, Babu Moger wrote:
There is no upstream version 4.9.258.
Sure there is, check out https://cdn.kernel.org/pub/linux/kernel/v4.x/
The easiest way to do it is to bisect on the linux-4.9.y branch of
git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git.
paolo
On 10/03/21 02:04, Babu Moger wrote:
Debian kernel 4.10 (tag 4.10~rc6-1~exp1) also works fine. It appears the
problem is in the Debian 4.9 kernel. I am not sure how to run git bisect on
the Debian kernel. Tried anyway. It is pointing to
47811c66356d875e76a6ca637a9d384779a659bb is the first bad commit
com
On 10/03/21 01:30, Sean Christopherson wrote:
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 50ef757c5586..f0c99fa04ef2 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -323,7 +323,18 @@ static void handle_removed_tdp_mmu_page(struct kvm *kvm
On 09/03/21 10:30, Borislav Petkov wrote:
On Tue, Mar 09, 2021 at 02:38:49PM +1300, Kai Huang wrote:
This series adds KVM SGX virtualization support. The first 14 patches starting
with x86/sgx or x86/cpu.. are necessary changes to x86 and SGX core/driver to
support KVM SGX virtualization, while
On 09/03/21 11:09, Maxim Levitsky wrote:
What happens if the mmio generation overflows (e.g. if userspace keeps on updating
the memslots)?
In theory, if we have a SPTE with a stale generation, it can become valid again, no?
I think that in the case of an overflow we should zap all mmio sptes.
What do you
On 09/03/21 02:18, Sean Christopherson wrote:
Maybe this series is cursed. The first patch got mangled and broke SME.
It shows up as two commits with the same changelog, so maybe you intended to
split the patch and things went sideways?
There was a conflict. I admit kvm/queue is not always th
On 08/03/21 21:43, Sean Christopherson wrote:
On Mon, Mar 08, 2021, Paolo Bonzini wrote:
On 08/03/21 17:44, Sean Christopherson wrote:
VMCALL is also probably ok
in most scenarios, but patching L2's code from L0 KVM is sketchy.
I agree that patching is sketchy and I'll send a patch
On 08/03/21 19:52, Tom Lendacky wrote:
On 2/25/21 2:47 PM, Sean Christopherson wrote:
Introduce MMU_PRESENT to explicitly track which SPTEs are "present" from
the MMU's perspective. Checking for shadow-present SPTEs is a very
common operation for the MMU, particularly in hot paths such as page
On 08/03/21 17:44, Sean Christopherson wrote:
VMCALL is also probably ok
in most scenarios, but patching L2's code from L0 KVM is sketchy.
I agree that patching is sketchy and I'll send a patch. However...
The same is true for the VMware #GP interception case.
I highly doubt that will ever
On 05/03/21 19:31, Sean Christopherson wrote:
Clean up KVM's PV TLB flushing when running with EPT on Hyper-V, i.e. as
a nested VMM. No real goal in mind other than the sole patch in v1, which
is a minor change to avoid a future mixup when TDX also wants to define
.remote_flush_tlb. Everything
On 05/03/21 23:57, Dongli Zhang wrote:
The new per-cpu stat 'nested_run' is introduced in order to track if L1 VM
is running or used to run L2 VM.
An example of the usage of 'nested_run' is to help the host administrator
to easily track if any L1 VM is used to run L2 VM. Suppose there is issue
t
On 06/03/21 02:39, Sean Christopherson wrote:
Unless KVM (L0) knowingly wants to override L1, e.g. KVM_GUESTDBG_* cases, KVM
shouldn't do a damn thing except forward the exception to L1 if L1 wants the
exception.
ud_interception() and gp_interception() do quite a bit before forwarding the
except
On 05/03/21 19:22, Sean Christopherson wrote:
On Fri, Mar 05, 2021, Paolo Bonzini wrote:
On 05/03/21 02:10, Sean Christopherson wrote:
Use '0' to denote an invalid pae_root instead of '0' or INVALID_PAGE.
Unlike root_hpa, the pae_roots hold permission bits and thus are
On 05/03/21 02:10, Sean Christopherson wrote:
Fix nested NPT (nSVM) with 32-bit L1 and SME with shadow paging, which
are completely broken. Opportunistically fix theoretical bugs related to
prematurely reloading/unloading the MMU.
If nNPT is enabled, L1 can crash the host simply by using 32-bit
On 05/03/21 02:10, Sean Christopherson wrote:
Use '0' to denote an invalid pae_root instead of '0' or INVALID_PAGE.
Unlike root_hpa, the pae_roots hold permission bits and thus are
guaranteed to be non-zero. Having to deal with both values leads to
bugs, e.g. failing to set back to INVALID_PAGE,
On 05/03/21 02:10, Sean Christopherson wrote:
@@ -5301,6 +5307,22 @@ static int __kvm_mmu_create(struct kvm_vcpu *vcpu,
struct kvm_mmu *mmu)
for (i = 0; i < 4; ++i)
mmu->pae_root[i] = 0;
I think this should be deleted, since you have another identical for
loop below?
On 05/03/21 02:10, Sean Christopherson wrote:
+ /*
+* This mess only works with 4-level paging and needs to be updated to
+* work with 5-level paging.
+*/
Planning for this, it's probably a good idea to rename lm_root to
pml4_root. Can be done on top.
Paolo
On 05/03/21 15:04, Ashish Kalra wrote:
+ /* Mirrors of mirrors should work, but let's not get silly */
+ if (is_mirroring_enc_context(kvm)) {
+ ret = -ENOTTY;
+ goto failed;
+ }
How will A->B->C->... type of live migration work if mirrors of
mirrors
On 05/03/21 03:16, Sean Christopherson wrote:
Directly connect the 'npt' param to the 'npt_enabled' variable so that
runtime adjustments to npt_enabled are reflected in sysfs. Move the
!PAE restriction to a runtime check to ensure NPT is forced off if the
host is using 2-level paging, and add a
On 05/03/21 03:18, Sean Christopherson wrote:
When posting a deadline timer interrupt, open code the checks guarding
__kvm_wait_lapic_expire() in order to skip the lapic_timer_int_injected()
check in kvm_wait_lapic_expire(). The injection check will always fail
since the interrupt has not yet be
on active_mmu_pages
Kai Huang (1):
KVM: Documentation: Fix index for KVM_CAP_PPC_DAWR1
Paolo Bonzini (3):
Documentation: kvm: fix messy conversion from .txt to .rst
KVM: xen: flush deferred static key before checking it
KVM: x86: allow compiling out the Xen hypercall interface
S
On 03/03/21 07:04, Yang Weijiang wrote:
These fields are rarely updated by L1 QEMU/KVM, so sync them when L1 is trying to
read/write them and after they're changed. If the CET guest entry-load bit is not
set by the L1 guest, migrate them to L2 manually.
Suggested-by: Sean Christopherson
Signed-off-by: Yan
The logic of update_cr0_intercept is pointlessly complicated.
All svm_set_cr0 needs to do is compute the effective cr0 and compare it with
the guest value.
Inlining the function and simplifying the condition
clarifies what it is doing.
Signed-off-by: Paolo Bonzini
---
arch/x86/kvm/svm/svm.c | 54
Most fields were going to be overwritten by vmcb12 control fields, or
do not matter at all because they are filled by the processor on vmexit.
Therefore, we need not copy them from vmcb01 to vmcb02 on vmentry.
Signed-off-by: Paolo Bonzini
---
arch/x86/kvm/svm/nested.c | 23
On 02/03/21 18:45, Sean Christopherson wrote:
If KVM (L0) intercepts #GP, but L1 does not, then L2 can kill L1 by
triggering triple fault. On both VMX and SVM, if the CPU hits a fault
while vectoring an injected #DF (or I suppose any #DF), any intercept
from the hypervisor takes priority over t
On 02/03/21 19:51, Babu Moger wrote:
This problem was reported on a SVM guest while executing kexec.
Kexec fails to load the new kernel when the PCID feature is enabled.
When kexec starts loading the new kernel, it starts the process by
resetting the vCPU's and then bringing each vCPU online one
erent. This prevents
the processor from using old cached data for a vmcb that may
have been updated on a prior run on a different processor.
It also moves the physical cpu check from svm_vcpu_load
to pre_svm_run as the check only needs to be done at run.
Suggested-by: Paolo Bonzini
Signed-o
The VMLOAD/VMSAVE data is not taken from userspace, since it will
not be restored on VMEXIT (it will be copied from VMCB02 to VMCB01).
For clarity, replace the wholesale copy of the VMCB save area
with a copy of that state only.
Signed-off-by: Paolo Bonzini
---
arch/x86/kvm/svm/nested.c | 20
zero.
Signed-off-by: Sean Christopherson
Message-Id: <20210205005750.3841462-10-sea...@google.com>
Signed-off-by: Paolo Bonzini
---
arch/x86/kvm/svm/svm.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 3725a4
Now that SVM is using a separate vmcb01 and vmcb02 (and also uses the vmcb12
naming) we can give clearer names to functions that write to and read
from those VMCBs. Likewise, variables and parameters can be renamed
from nested_vmcb to vmcb12.
Signed-off-by: Paolo Bonzini
---
arch/x86/kvm/svm
ed-off-by: Sean Christopherson
Message-Id: <20210204000117.3303214-12-sea...@google.com>
Signed-off-by: Paolo Bonzini
---
arch/x86/kvm/vmx/nested.c | 8 +---
arch/x86/kvm/x86.h| 8
2 files changed, 9 insertions(+), 7 deletions(-)
diff --git a/arch/x86/kvm/vmx/nested
abled.
Signed-off-by: Sean Christopherson
Message-Id: <20210205005750.3841462-9-sea...@google.com>
Signed-off-by: Paolo Bonzini
---
arch/x86/kvm/svm/svm.c | 10 +-
1 file changed, 1 insertion(+), 9 deletions(-)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 8
Thanks to the new macros that handle exception handling for SVM
instructions, it is easier to just do the VMLOAD/VMSAVE in C.
This is safe, as shown by the fact that the host reload is
already done outside the assembly source.
Signed-off-by: Paolo Bonzini
---
arch/x86/kvm/svm/svm.c | 2
: Borislav Petkov
Message-Id: <161188100272.28787.4097272856384825024.stgit@bmoger-ubuntu>
Signed-off-by: Paolo Bonzini
---
arch/x86/include/asm/cpufeatures.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/x86/include/asm/cpufeatures.h
b/arch/x86/include/asm/cpufeatures.h
index f1957b
d.
So, the guest will always see the proper value when it is read back.
Signed-off-by: Babu Moger
Message-Id: <161188100955.28787.11816849358413330720.stgit@bmoger-ubuntu>
Signed-off-by: Paolo Bonzini
---
arch/x86/include/asm/svm.h | 4 +++-
arch/x86/kvm/svm/nested.c | 15 +++
2-5-sea...@google.com>
Signed-off-by: Paolo Bonzini
---
arch/x86/kvm/svm/svm.c | 37 -
1 file changed, 12 insertions(+), 25 deletions(-)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index c2626babe575..5815fedf978e 100644
--- a/arch/x86/kvm/svm
Christopherson
Message-Id: <20210204000117.3303214-13-sea...@google.com>
Signed-off-by: Paolo Bonzini
---
arch/x86/kvm/svm/nested.c | 33 +++--
1 file changed, 19 insertions(+), 14 deletions(-)
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
Message-Id: <20201006190654.32305-3-krish.sadhuk...@oracle.com>
Signed-off-by: Paolo Bonzini
---
arch/x86/kvm/svm/nested.c | 54 ---
1 file changed, 39 insertions(+), 15 deletions(-)
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
a future patch.
Signed-off-by: Sean Christopherson
Message-Id: <20210205005750.3841462-8-sea...@google.com>
Signed-off-by: Paolo Bonzini
---
arch/x86/include/asm/kvm_host.h | 2 +-
arch/x86/kvm/svm/svm.c | 5 +
arch/x86/kvm/vmx/vmx.c | 10 +-
arch/x86/kvm
3841462-6-sea...@google.com>
Signed-off-by: Paolo Bonzini
---
arch/x86/include/asm/kvm_host.h | 2 +-
arch/x86/kvm/svm/svm.c | 11 +--
arch/x86/kvm/vmx/vmx.c | 11 +--
arch/x86/kvm/x86.c | 13 -
4 files changed, 11 insertions(+), 26 de
ctions are clean.
Signed-off-by: Paolo Bonzini
---
arch/x86/kvm/svm/nested.c | 18 --
1 file changed, 12 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 4fc742ba1f1f..945c2a48b591 100644
--- a/arch/x86/kvm/svm/nested.c
+++
From: Maxim Levitsky
This makes it possible to avoid copying these fields between vmcb01
and vmcb02 on nested guest entry/exit.
Signed-off-by: Maxim Levitsky
Signed-off-by: Paolo Bonzini
---
arch/x86/kvm/svm/nested.c | 3 --
arch/x86/kvm/svm/svm.c| 70 ---
2
riate.
No functional change intended.
Signed-off-by: Sean Christopherson
Message-Id: <20210205005750.3841462-7-sea...@google.com>
Signed-off-by: Paolo Bonzini
---
arch/x86/include/asm/kvm_host.h | 5 ++
arch/x86/kvm/svm/svm.c | 90 +
arch/x86/kvm/vmx/vmx
pointless casting.
No functional change intended.
Signed-off-by: Sean Christopherson
Message-Id: <20210205005750.3841462-4-sea...@google.com>
Signed-off-by: Paolo Bonzini
---
arch/x86/kvm/svm/avic.c | 24 +-
arch/x86/kvm/svm/nested.c | 126 -
arch/x86/kvm/svm/sev.c| 27 +-
ar
(1):
KVM: nSVM: Add missing checks for reserved bits to
svm_set_nested_state()
Maxim Levitsky (1):
KVM: nSVM: always use vmcb01 for vmsave/vmload of guest state
Paolo Bonzini (8):
KVM: nSVM: rename functions and variables according to vmcbXY
nomenclature
KVM: nSVM: do not copy
From: Cathy Avery
This patch moves the asid_generation from the vcpu to the vmcb
in order to track the ASID generation that was active the last
time the vmcb was run. If sd->asid_generation changes between
two runs, the old ASID is invalid and must be changed.
Suggested-by: Paolo Bonz
n fedora
Signed-off-by: Cathy Avery
Message-Id: <20201011184818.3609-3-cav...@redhat.com>
[Fix conflicts; keep VMCB02 G_PAT up to date whenever guest writes the
PAT MSR; do not copy CR4 over from VMCB01 as it is not needed anymore; add
a few more comments. - Paolo]
Signed-off-by: Paolo Bonz
Since L1 and L2 now use different VMCBs, most of the fields remain
the same from one L1 run to the next. svm_set_cr0 and other functions
called by nested_svm_vmexit already take care of clearing the
corresponding clean bits; only the TSC offset is special.
Signed-off-by: Paolo Bonzini
---
arch
On 02/03/21 01:59, Sean Christopherson wrote:
+ svm->vmcb->save.cr2 = svm->vcpu.arch.cr2 = vmcb12->save.cr2;
Same question for VMCB_CR2.
Besides the question of how much AMD processors actually use the clean
bits (a quick test suggests "not much"), in this specific case I suspect
that
On 02/03/21 13:56, Cathy Avery wrote:
On 3/1/21 7:59 PM, Sean Christopherson wrote:
On Mon, Mar 01, 2021, Cathy Avery wrote:
svm->nested.vmcb12_gpa = 0;
+ svm->nested.last_vmcb12_gpa = 0;
This should not be 0 to avoid a false match. "-1" should be okay.
kvm_set_rflags(&
On 02/03/21 10:05, Yang Weijiang wrote:
I got some description from MSFT as below, do you mean that:
GuestSsp uses clean field GUEST_BASIC (bit 10)
GuestSCet/GuestInterruptSspTableAddr uses GUEST_GRP1 (bit 11)
HostSCet/HostSsp/HostInterruptSspTableAddr uses HOST_GRP1 (bit 14)
If it is, should t
On 26/02/21 15:18, Thomas Lamprecht wrote:
Does that mean I should not take the patch here in this email and that
you will submit it after some timeperiod, or that I should take this
patch as-is?
The patch that Thomas requested (commit 841c2be09fe) does not apply cleanly, so
I'll take care of s
On 26/02/21 13:59, Greg Kroah-Hartman wrote:
So can you please add this patch to the stable trees that backported the
problematic upstream commit 6441fa6178f5456d1d4b512c0879f99db185 ?
If I should submit this in any other way just ask, was not sure about
what works best with a patch which ca
On 26/02/21 13:56, Uros Bizjak wrote:
Avoid jump by moving exception fixups out of line.
Cc: Sean Christopherson
Cc: Paolo Bonzini
Signed-off-by: Uros Bizjak
---
arch/x86/kvm/svm/vmenter.S | 35 ---
1 file changed, 20 insertions(+), 15 deletions(-)
diff
The Xen hypercall interface adds to the attack surface of the hypervisor
and will be used quite rarely. Allow compiling it out.
Suggested-by: Christoph Hellwig
Cc: David Woodhouse
Signed-off-by: Paolo Bonzini
---
v1->v2: do not use stubs for the ioctls, cull KVM_CAP_XEN_HVM too
a
On 26/02/21 12:03, Thomas Lamprecht wrote:
On 04.01.21 16:57, Greg Kroah-Hartman wrote:
From: Paolo Bonzini
[ Upstream commit 6441fa6178f5456d1d4b512c0879f99db185 ]
If the guest is configured to have SPEC_CTRL but the host does not
(which is a nonsensical configuration but these are not
The Xen hypercall interface adds to the attack surface of the hypervisor
and will be used quite rarely. Allow compiling it out.
Suggested-by: Christoph Hellwig
Cc: David Woodhouse
Signed-off-by: Paolo Bonzini
---
arch/x86/kvm/Kconfig | 9
arch/x86/kvm/Makefile | 3 ++-
arch/x86
A missing flush would cause the static branch to trigger incorrectly.
Cc: David Woodhouse
Signed-off-by: Paolo Bonzini
---
arch/x86/kvm/x86.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 1d2bc89431a2..bfc928495bd4 100644
--- a/arch/x86/kvm
On 25/02/21 21:47, Sean Christopherson wrote:
This series adds the simple idea of tagging shadow-present SPTEs with
a single bit, instead of looking for non-zero SPTEs that aren't MMIO and
aren't REMOVED. Doing so reduces KVM's code footprint by 2k bytes on
x86-64, and presumably adds a tiny per
On 26/02/21 02:03, Sean Christopherson wrote:
Effectively belated code review of a few pieces of the TDP MMU.
Sean Christopherson (5):
KVM: x86/mmu: Remove spurious TLB flush from TDP MMU's change_pte()
hook
KVM: x86/mmu: WARN if TDP MMU's set_tdp_spte() sees multiple GFNs
KVM: x86
On 26/02/21 07:19, Dongli Zhang wrote:
The 'mmu_page_hash' is used as a hash table while 'active_mmu_pages' is a
list. Remove the misplaced comment as it's mostly stating the obvious
anyway.
Signed-off-by: Dongli Zhang
Reviewed-by: Sean Christopherson
---
Changed since v1:
- change 'incorrec
On 26/02/21 08:55, Chenyi Qiang wrote:
Commit c32b1b896d2a ("KVM: X86: Add the Document for
KVM_CAP_X86_BUS_LOCK_EXIT") added a new flag in kvm_run->flags
documentation, and caused warning in make htmldocs:
Documentation/virt/kvm/api.rst:5004: WARNING: Unexpected indentation
Documentation/
KVM: VMX: Dynamically enable/disable PML based on memslot dirty logging
Maxim Levitsky (2):
KVM: VMX: read idt_vectoring_info a bit earlier
KVM: nSVM: move nested vmrun tracepoint to enter_svm_guest_mode
Paolo Bonzini (4):
selftests: kvm: avoid uninitialized variable warning
K
On 25/02/21 18:53, James Bottomley wrote:
https://lore.kernel.org/qemu-devel/8b824c44-6a51-c3a7-6596-921dc47fe...@linux.ibm.com/
It sounds like this mechanism can be used to boot a vCPU through a
mirror VM after the fact, which is very compatible with the above whose
mechanism is simply to ste
On 25/02/21 19:18, Ashish Kalra wrote:
I do believe that some of these alternative SEV live migration support
or Migration helper (MH) solutions will still use SEV PSP migration for
migrating the MH itself, therefore the SEV live migration patches
(currently v10 posted upstream) still make sense
On 25/02/21 17:06, Maxim Levitsky wrote:
On Thu, 2021-02-25 at 17:05 +0100, Paolo Bonzini wrote:
On 25/02/21 16:41, Maxim Levitsky wrote:
Injected events should not block a pending exception, but rather,
should either be lost or be delivered to the nested hypervisor as part of
exitintinfo
On 25/02/21 16:41, Maxim Levitsky wrote:
Injected events should not block a pending exception, but rather,
should either be lost or be delivered to the nested hypervisor as part of
exitintinfo/IDT_VECTORING_INFO
(if nested hypervisor intercepts the pending exception)
Signed-off-by: Maxim Levitsk
Building the documentation gives a warning that the KVM_PPC_RESIZE_HPT_PREPARE
label is defined twice. The root cause is that the KVM_PPC_RESIZE_HPT_PREPARE
API is present twice, the second being a mix of the prepare and commit APIs.
Fix it.
Signed-off-by: Paolo Bonzini
---
Documentation/virt
On 24/02/21 17:58, Sean Christopherson wrote:
That being said, is there a strong need to get this into 5.12? AIUI, this
hasn't
had any meaningful testing, selftests/kvm-unit-tests or otherwise. Pushing out
to 5.13 might give us a good chance of getting some real testing before merging,
dependi
On 24/02/21 01:56, Sean Christopherson wrote:
Fix the interpretation of nested_svm_vmexit()'s return value when
synthesizing a nested VM-Exit after intercepting an SVM instruction while
L2 was running. The helper returns '0' on success, whereas a return
value of '0' in the exit handler path means
[CCing Nathaniel McCallum]
On 24/02/21 09:59, Nathan Tempelman wrote:
+7.23 KVM_CAP_VM_COPY_ENC_CONTEXT_TO
+---
+
+Architectures: x86 SEV enabled
+Type: system
vm ioctl, not system (/dev/kvm). But, see below.
+Parameters: args[0] is the fd of the kvm to mirr
On 23/02/21 18:15, Sean Christopherson wrote:
If event
creation fails in that flow, I would think KVM would do its best to create an
event in future runs without waiting for additional actions from the guest.
Also, this bug suggests there's a big gaping hole in the test coverage. AFAICT,
event
On 23/02/21 17:38, Sean Christopherson wrote:
On Tue, Feb 23, 2021, Like Xu wrote:
When a processor that supports model-specific LBR generates a debug
breakpoint event, it automatically clears the LBR flag. This action
does not clear previously stored LBR stack MSRs. (Intel SDM 17.4.2)
Signed-
On 23/02/21 02:39, Like Xu wrote:
If lbr_desc->event is successfully created,
intel_pmu_create_guest_lbr_event() will return 0; otherwise it will return
-ENOENT, and then jump to the LBR msrs dummy handling.
Fixes: 1b5ac3226a1a ("KVM: vmx/pmu: Pass-through LBR msrs when the guest LBR event
is
Fixes: 1b5ac3226a1a ("KVM: vmx/pmu: Pass-through LBR msrs when the guest LBR event
is
On 22/02/21 03:45, David Stevens wrote:
These patches reduce how often mmu_notifier updates block guest page
faults. The primary benefit of this is the reduction in the likelihood
of extreme latency when handling a page fault due to another thread
having been preempted while modifying host virtua
On 19/02/21 16:33, Vitaly Kuznetsov wrote:
Stephen Rothwell writes:
Hi all,
Building Linus' tree, today's linux-next build (htmldocs) produced
these warnings:
Documentation/virt/kvm/api.rst:4537: WARNING: Unexpected indentation.
Documentation/virt/kvm/api.rst:4539: WARNING: Block quote ends