Skip PAUSE after interception to avoid unnecessarily re-executing the
instruction in the guest, e.g. after regaining control post-yield.
This is a benign bug as KVM disables PAUSE interception if filtering is
off, including the case where pause_filter_count is set to zero.
Signed-off-by: Sean Christopherson
Move the entirety of XSETBV emulation to x86.c, and assign the
function directly to both VMX's and SVM's exit handlers, i.e. drop the
unnecessary trampolines.
No functional change intended.
Signed-off-by: Sean Christopherson
---
arch/x86/include/asm/kvm_host.h | 2 +-
arch/x86/kvm/svm/svm.c | 5 +
arch/x86/kvm/vmx/vmx.c | 10 +-
arch/x86/kvm/x86.c | 15 ---
4 files changed, 11 insertions(+), 21 deletions(-)
diff --git a/arch/x86/include
Drop a defunct forward declaration of svm_complete_interrupts().
No functional change intended.
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/svm/svm.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 3fac9e77cca3..8c2ed1633350
No functional change intended.
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/svm/avic.c | 35 +++
1 file changed, 19 insertions(+), 16 deletions(-)
diff --git a/arch/x86/kvm/svm/avic.c b/arch/x86/kvm/svm/avic.c
index 0ef84d57b72e..78bdcfac4e40 100644
--- a/a
On Thu, Feb 04, 2021, Paolo Bonzini wrote:
> On 04/02/21 18:52, Sean Christopherson wrote:
> > > Alternatively there could be something like a is_rsvd_cr3_bits() helper
> > > that
> > > just uses reserved_gpa_bits for now. Probably put the comment in the wrong
>
On Thu, Feb 04, 2021, Edgecombe, Rick P wrote:
> On Thu, 2021-02-04 at 11:34 +0100, Paolo Bonzini wrote:
> > On 04/02/21 03:19, Sean Christopherson wrote:
> > > Ah, took me a few minutes, but I see what you're saying. LAM will
> > > introduce
> > >
tevens
Cc: Jann Horn
Cc: Jason Gunthorpe
Cc: Paolo Bonzini
Cc: k...@vger.kernel.org
Signed-off-by: Sean Christopherson
---
Paolo, maybe you can squash this with the appropriate acks?
mm/memory.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/mm/memory.c b/mm/memory.c
index feff48e1465a..15
On Thu, Feb 04, 2021, Paolo Bonzini wrote:
> On 03/02/21 22:46, Sean Christopherson wrote:
> >
> > diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> > index dbca1687ae8e..0b6dab6915a3 100644
> > --- a/arch/x86/kvm/vmx/nested.c
> > +++ b/arch/x8
emulation on an unknown instruction is obviously not good.
Fixes: b3f4e11adc7d ("KVM: SVM: Add emulation support for #GP triggered by SVM
instructions")
Cc: Bandan Das
Cc: Maxim Levitsky
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/svm/svm.c | 3 ++-
1 file changed, 2 insertions(+),
On Thu, Feb 04, 2021, Edgecombe, Rick P wrote:
> On Wed, 2021-02-03 at 16:01 -0800, Sean Christopherson wrote:
> >
> > - unsigned long cr3_lm_rsvd_bits;
> > + u64 reserved_gpa_bits;
>
> LAM defines bits above the GFN in CR3:
> https://software.intel
's C-bit is the only repurposed GPA bit,
and KVM doesn't support shadowing encrypted page tables (which is
theoretically possible via SEV debug APIs).
Cc: Rick Edgecombe
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/cpuid.c | 10 ++--
arch/x86
Add a helper to generate the mask of reserved PA bits in the host.
No functional change intended.
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/mmu/mmu.c | 14 +-
1 file changed, 9 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
Use trace_kvm_nested_vmenter_failed() and its macro magic to trace
consistency check failures on nested VMRUN. Tracing such failures by
running the buggy VMM as a KVM guest is often the only way to get a
precise explanation of why VMRUN failed.
Signed-off-by: Sean Christopherson
---
arch/x86
Move KVM's CC() macro to x86.h so that it can be reused by nSVM.
Debugging VM-Enter is as painful on SVM as it is on VMX.
Rename the more visible macro to KVM_NESTED_VMENTER_CONSISTENCY_CHECK
to avoid any collisions with the uber-concise "CC".
Signed-off-by: Sean Christopherson
read page tables, e.g. to load PDPTRs, the memory can't be encrypted if
L1 has any expectation of L0 doing the right thing.
Cc: Tom Lendacky
Cc: Brijesh Singh
Signed-off-by: Sean Christopherson
---
arch/x86/include/asm/kvm_host.h | 2 +-
arch/x86/kvm/cpuid.c | 2 +-
arch/x86/
Replace an open coded check for an invalid CR3 with its equivalent
helper.
No functional change intended.
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/svm/nested.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
Replace a variety of open coded GPA checks with the recently introduced
common helpers.
No functional change intended.
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/vmx/nested.c | 26 +++---
arch/x86/kvm/vmx/vmx.c | 2 +-
2 files changed, 8 insertions(+), 20
Add a helper to genericize checking for a legal GPA that also must
conform to an arbitrary alignment, and use it in the existing
page_address_valid(). Future patches will replace open coded variants
in VMX and SVM.
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/cpuid.h | 8 +++-
1
Add a helper to generate the mask of reserved GPA bits _without_ any
adjustments for repurposed bits, and use it to replace a variety of
open coded variants in the MTRR and APIC_BASE flows.
No functional change intended.
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/cpuid.c | 12
o interpreting encrypted data as a PDPTR.
Fixes: d0ec49d4de90 ("kvm/x86/svm: Support Secure Memory Encryption within KVM")
Cc: Tom Lendacky
Cc: Brijesh Singh
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/svm/nested.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --
")
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/x86.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 667d0042d0b7..e6fbf2f574a6 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10091,6 +10091,7 @@ int kvm_arch_vcpu_create(
Add a helper to check for a legal GPA, and use it to consolidate code
in existing, related helpers. Future patches will extend usage to
VMX and SVM code, properly handle exceptions to the maxphyaddr rule, and
add more helpers.
No functional change intended.
Signed-off-by: Sean Christopherson
on kvm/queue, commit 3f87cb8253c3 ("KVM: X86: Expose bus lock debug
exception to guest").
Sean Christopherson (12):
KVM: x86: Set so called 'reserved CR3 bits in LM mask' at vCPU reset
KVM: nSVM: Don't strip host's C-bit from guest's CR3 when reading
PDPTRs
On Wed, Feb 03, 2021, Yang Weijiang wrote:
> Add handling for Control Protection (#CP) exceptions, vector 21, used
> and introduced by Intel's Control-Flow Enforcement Technology (CET).
> relevant CET violation case. See Intel's SDM for details.
>
> Signed-off-by: Yang Weijiang
> ---
> arch/x86
On Wed, Feb 03, 2021, Paolo Bonzini wrote:
> Looks good! I'll wait for a few days of reviews,
I guess I know what I'm doing this afternoon :-)
> but I'd like to queue this for 5.12 and I plan to make it the default in 5.13
> or 5.12-rc (depending on when I can ask Red Hat QE to give it a shake).
One quick comment while it's on my mind, I'll give this a proper gander
tomorrow.
On Tue, Feb 02, 2021, Michael Roth wrote:
> diff --git a/arch/x86/kvm/svm/svm_ops.h b/arch/x86/kvm/svm/svm_ops.h
> index 0c8377aee52c..c2a05f56c8e4 100644
> --- a/arch/x86/kvm/svm/svm_ops.h
> +++ b/arch/x86/kvm/svm/
On Tue, Feb 02, 2021, Maciej S. Szmigiero wrote:
> On 02.02.2021 02:33, Sean Christopherson wrote:
> > > Making lookup and memslot management operations O(log(n)) brings
> > > some performance benefits (tested on a Xeon 8167M machine):
> > > 509 slots in u
On Tue, Feb 02, 2021, Sean Christopherson wrote:
> Take an 'unsigned long' instead of 'hpa_t' in the recently added vmsave()
> helper, as loading a 64-bit GPR isn't possible in 32-bit mode. This is
> properly reflected in the SVM ISA, which explicitly states
el test robot
Cc: Robert Hu
Cc: Farrah Chen
Cc: Danmei Wei
Cc: Tom Lendacky
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/svm/svm_ops.h | 7 ++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/svm/svm_ops.h b/arch/x86/kvm/svm/svm_ops.h
index 0c8377aee52c..9f
Note, VMLOAD, VMRUN, etc... will also #GP on GPAs with C-bit set, i.e. KVM
is doomed even if the SEV guest is debuggable and the hypervisor is willing
to decrypt the VMCB. This may or may not be fixed on CPUs that have the
SVME_ADDR_CHK fix.
Cc: sta...@vger.kernel.org
Signed-off-by: Sean Christopherson
encryption bit.
>
> Make the checks consistent with the above, and match them between
> nested_vmcb_checks and KVM_SET_SREGS.
>
Fixes + Cc:stable@?
> Signed-off-by: Paolo Bonzini
Reviewed-by: Sean Christopherson
> ---
> arch/x86/kvm/svm/nested.c | 12 ++--
> arch/x86/k
kvm_vcpu *vcpu,
> if (dbgregs->flags)
> return -EINVAL;
>
> - if (dbgregs->dr6 & ~0xffffffffull)
Oof, you weren't kidding.
Reviewed-by: Sean Christopherson
> + if (!kvm_dr6_valid(dbgregs->dr6))
> return -EINVAL;
>
On Tue, Feb 02, 2021, Paolo Bonzini wrote:
> Push the injection of #GP up to the callers, so that they can just use
> kvm_complete_insn_gp. __kvm_set_dr is pretty much what the callers can use
> together with kvm_complete_insn_gp, so rename it to kvm_set_dr and drop
> the old kvm_set_dr wrapper.
>
On Tue, Feb 02, 2021, Paolo Bonzini wrote:
> Push the injection of #GP up to the callers, so that they can just use
> kvm_complete_insn_gp.
>
> Signed-off-by: Paolo Bonzini
> ---
...
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 08568c47337c..edbeb162012b 100644
> --- a/arch/x8
On Tue, Feb 02, 2021, Paolo Bonzini wrote:
> Push the injection of #GP up to the callers, so that they can just use
> kvm_complete_insn_gp.
The SVM and VMX code is identical, IMO we should push all the code to x86.c
instead of shuffling it around.
I'd also like to change svm_exit_handlers to take
ote changelog]
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/emulate.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index 56cae1ff9e3f..66a08322988f 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -2879,6 +2879,8
The patch numbering and/or threading is messed up. This should either be a
standalone patch, or fully incorporated into the same series as the selftests
changes. But, that's somewhat of a moot point...
On Mon, Feb 01, 2021, Maciej S. Szmigiero wrote:
> From: "Maciej S. Szmigiero"
>
> The curre
On Mon, Feb 01, 2021, Paolo Bonzini wrote:
> On 01/02/21 17:38, Sean Christopherson wrote:
> > > > > /*
> > > > > * On TAA affected systems:
> > > > > * - nothing to do if TSX is disabled on the host.
> > > >
On Mon, Feb 01, 2021, Paolo Bonzini wrote:
> On 29/01/21 17:58, Sean Christopherson wrote:
> > On Fri, Jan 29, 2021, Paolo Bonzini wrote:
> > > */
> > > if (!boot_cpu_has(X86_FEATURE_RTM))
> > > - data &= ~(ARCH_CAP_TAA_NO | ARCH
On Mon, Feb 01, 2021, Paolo Bonzini wrote:
> On 01/02/21 09:46, Paolo Bonzini wrote:
> > >
> > > This comment be updated to call out the new TSX_CTRL behavior.
> > >
> > > /*
> > > * On TAA affected systems:
> > > * - nothing to do if TSX is disabled on the host.
> > > *
On Thu, Jan 28, 2021, ira.we...@intel.com wrote:
> From: Ira Weiny
>
> There is no reason to alloc a page and kmap it to store this temporary
> data from the user.
Actually, there is, it's just poorly documented. The sigstruct needs to be
page aligned, and the token needs to be 512-byte aligne
On Fri, Jan 29, 2021, Paolo Bonzini wrote:
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 76bce832cade..15733013b266 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -1401,7 +1401,7 @@ static u64 kvm_get_arch_capabilities(void)
> * This lets the guest
On Thu, Jan 28, 2021, Paolo Bonzini wrote:
> On 28/01/21 19:04, Sean Christopherson wrote:
> > On Thu, Jan 28, 2021, Paolo Bonzini wrote:
> > > On 06/11/20 02:16, Yang Weijiang wrote:
> > > > Control-flow Enforcement Technology (CET) provides protection agains
On Thu, Jan 28, 2021, Paolo Bonzini wrote:
> On 28/01/21 18:56, Sean Christopherson wrote:
> > On Thu, Jan 28, 2021, Paolo Bonzini wrote:
> > > - vmx->guest_uret_msrs[j].mask =
> > > ~(u64)TSX_CTRL_CPUID_CLEAR;
> > > +
On Thu, Jan 28, 2021, Paolo Bonzini wrote:
> On 06/11/20 02:16, Yang Weijiang wrote:
> > Control-flow Enforcement Technology (CET) provides protection against
> > Return/Jump-Oriented Programming (ROP/JOP) attack. There're two CET
> > sub-features: Shadow Stack (SHSTK) and Indirect Branch Tracking
On Thu, Jan 28, 2021, Paolo Bonzini wrote:
> Userspace that does not know about KVM_GET_MSR_FEATURE_INDEX_LIST will
> generally use the default value for MSR_IA32_ARCH_CAPABILITIES.
> When this happens and the host has tsx=on, it is possible to end up
> with virtual machines that have HLE and RTM d
On Thu, Jan 28, 2021, Paolo Bonzini wrote:
> On 14/01/21 01:36, Sean Christopherson wrote:
> > Add a reverse-CPUID entry for the memory encryption word, 0x8000001F.EAX,
> > and use it to override the supported CPUID flags reported to userspace.
> > Masking the reported CP
The following commit has been merged into the x86/cpu branch of tip:
Commit-ID: fb35d30fe5b06cc2f0405da8fbe0be5330d1
Gitweb:
https://git.kernel.org/tip/fb35d30fe5b06cc2f0405da8fbe0be5330d1
Author: Sean Christopherson
AuthorDate: Fri, 22 Jan 2021 12:40:46 -08:00
On Thu, Jan 28, 2021, Paolo Bonzini wrote:
> I can't find 00/14 in my inbox, so: queued 1-3 and 6-14, thanks.
If it's not too late, v3 has a few tweaks that would be nice to have, as well as
a new patch to remove the CONFIG_AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT dependency.
https://lkml.kernel.org/r/2
to help prevent future regressions.
>
> Cc: Thomas Gleixner
> Cc: Ingo Molnar
> Cc: "H. Peter Anvin"
> Cc: Paolo Bonzini
> Cc: Joerg Roedel
> Cc: Tom Lendacky
> Cc: Brijesh Singh
> Cc: Sean Christopherson
> Cc: x...@kernel.org
> Cc: k...
' is on the left-hand side for both
checks, i.e.
	if (unlikely(kvm->mmu_notifier_count) &&
	    hva >= kvm->mmu_notifier_range_start &&
	    hva < kvm->mmu_notifier_range_end)
> + return 1;
> + if (kvm->mmu_notifier_seq != mmu_seq)
>
On Tue, Jan 26, 2021, Ben Gardon wrote:
> On Wed, Jan 20, 2021 at 4:56 PM Sean Christopherson wrote:
> >
> > On Tue, Jan 12, 2021, Ben Gardon wrote:
> > > Make the last few changes necessary to enable the TDP MMU to handle page
> > > faults in parallel while
On Tue, Jan 26, 2021, Tom Lendacky wrote:
> On 1/26/21 12:54 PM, Peter Gonda wrote:
> > sev_pin_memory assumes that callers hold the kvm->lock. This was true for
> > all callers except svm_register_enc_region since it does not originate
> > from svm_mem_enc_op. Also added lockdep annotation to help
On Wed, Jan 20, 2021, Babu Moger wrote:
>
> On 1/19/21 5:45 PM, Sean Christopherson wrote:
> > Potentially harebrained alternative...
> >
> > From an architectural SVM perspective, what are the rules for VMCB fields
> > that
> > don't exist (on the c
On Tue, Jan 26, 2021, David Stevens wrote:
> > This needs a comment to explicitly state that 'count > 1' cannot be done at
> > this time. My initial thought is that it would be more intuitive to check
> > for
> > 'count > 1' here, but that would potentially check the wrong wrange when
> > count
On Tue, Jan 26, 2021, Sean Christopherson wrote:
> On Tue, Jan 26, 2021, Paolo Bonzini wrote:
> > On 21/01/21 22:32, Sean Christopherson wrote:
> > > Coming back to this series, I wonder if the RCU approach is truly
> > > necessary to
> > > g
On Tue, Jan 26, 2021, Paolo Bonzini wrote:
> On 21/01/21 22:32, Sean Christopherson wrote:
> > Coming back to this series, I wonder if the RCU approach is truly necessary
> > to
> > get the desired scalability. If both zap_collapsible_sptes() and NX huge
> > page
On Tue, Jan 26, 2021, Paolo Bonzini wrote:
> On 30/09/20 06:36, Sean Christopherson wrote:
> > > CR4.PKS is not in the list of CR4 bits that result in a PDPTE load.
> > > Since it has no effect on PAE paging, I would be surprised if it did
> > > result in a PDPTE l
V enabled VM on host.
Personally I'd just leave this out. Unless stated otherwise, it's implied that
you've tested the patch.
> Cc: Thomas Gleixner
> Cc: Ingo Molnar
> Cc: "H. Peter Anvin"
> Cc: Paolo Bonzini
> Cc: Joerg Roedel
> Cc: Tom Lendack
On Tue, Jan 26, 2021, Paolo Bonzini wrote:
> On 11/01/21 18:15, Vitaly Kuznetsov wrote:
> > kvm_no_apic_vcpu is different, we actually need to increase it with
> > every vCPU which doesn't have LAPIC but maybe we can at least switch to
> > static_branch_inc()/static_branch_dec(). It is still weird
On Mon, Jan 25, 2021, Paolo Bonzini wrote:
> +static bool nested_get_vmcs12_pages(struct kvm_vcpu *vcpu)
> +{
> + struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
> + struct vcpu_vmx *vmx = to_vmx(vcpu);
> + struct kvm_host_map *map;
> + struct page *page;
> + u64 hpa;
> +
> + if (
On Sun, Jan 24, 2021, Greg KH wrote:
> On Sun, Jan 24, 2021 at 02:29:07PM +0800, Tianjia Zhang wrote:
> > In this scenario, there is no case where va_page is NULL, and
> > the error has been checked. The if condition statement here is
> > redundant, so remove the condition detection.
> >
> > Signe
On Mon, Jan 25, 2021, Paolo Bonzini wrote:
> On 25/01/21 10:54, Vitaly Kuznetsov wrote:
> >
> > What if we do something like (completely untested):
> >
> > diff --git a/arch/x86/kvm/mmu/mmu_internal.h
> > b/arch/x86/kvm/mmu/mmu_internal.h
> > index bfc6389edc28..5ec15e4160b1 100644
> > --- a/arc
On Mon, Jan 25, 2021, Maxim Levitsky wrote:
> On Thu, 2021-01-21 at 14:27 -0800, Sean Christopherson wrote:
> > I still don't see why VMX can't share this with SVM. "npt' can easily be
> > "tdp",
> > differentiating between VMCB and VMCS ca
On Mon, Jan 25, 2021, Paolo Bonzini wrote:
> On 25/01/21 20:16, Sean Christopherson wrote:
> > > }
> > > +static bool vmx_get_nested_state_pages(struct kvm_vcpu *vcpu)
> > > +{
> > > + if (!nested_get_evmcs_page(vcpu))
> > > + retu
+Cc the other architectures, I'm guessing this would be a helpful optimization
for all archs.
Quite a few comments, but they're all little more than nits. Nice!
On Mon, Jan 25, 2021, David Stevens wrote:
> From: David Stevens
>
> Use the range passed to mmu_notifer's invalidate_range_start to
ly squash
the entire check since casting to an int is guaranteed to yield a
return value of '0'.
Opportunistically refactor is_last_spte() so that it naturally returns
a bool value instead of letting it implicitly cast 0/1 to false/true.
No functional change intended.
Signed-off-by: Sean Christopherson
On Fri, Jan 22, 2021, Tom Lendacky wrote:
> On 1/22/21 5:50 PM, Sean Christopherson wrote:
> > Sync GPRs to the GHCB on VMRUN only if a sync is needed, i.e. if the
> > previous exit was a VMGEXIT and the guest is expecting some data back.
> >
>
> The start of sev_es_s
proven correct.
The SEV-ES changes are effectively compile tested only, but unless I've
overlooked a code path, patch 1 is a nop. Patch 3 definitely needs
testing.
Paolo, I'd really like to get patches 1 and 2 into 5.11, the code cost of
the dirty/available tracking is not trivial.
Sean
c, thus it would be a bug if KVM tried to resolve a new pfn, i.e.
we want the splat that would be reached via might_fault().
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/x86.c | 10 --
1 file changed, 10 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 9a
Cleanups and a minor optimization to kvm_steal_time_set_preempted() that
were made possible by the switch to using a cache gfn->pfn translation.
Sean Christopherson (2):
KVM: x86: Remove obsolete disabling of page faults in
kvm_arch_vcpu_put()
KVM: x86: Take KVM's SRCU lock only
more precise as to exactly why memslots will
be queried.
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/x86.c | 20 +++-
1 file changed, 11 insertions(+), 9 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 3f4b09d9f25b..4efaa858a8bb 100644
--- a/arch
EXIT")
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/svm/sev.c | 15 ++-
1 file changed, 6 insertions(+), 9 deletions(-)
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index c8ffdbc81709..ac652bc476ae 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/
This reverts commit 1c04d8c986567c27c56c05205dceadc92efb14ff.
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/kvm_cache_regs.h | 51 +--
1 file changed, 25 insertions(+), 26 deletions(-)
diff --git a/arch/x86/kvm/kvm_cache_regs.h b/arch/x86/kvm/kvm_cac
Sync GPRs to the GHCB on VMRUN only if a sync is needed, i.e. if the
previous exit was a VMGEXIT and the guest is expecting some data back.
Cc: Brijesh Singh
Cc: Tom Lendacky
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/svm/sev.c | 15 ++-
arch/x86/kvm/svm/svm.h | 1 +
2
VM_AMD_SEV)
check in svm_sev_enabled(), which will be dropped in a future patch.
Reviewed-by: Tom Lendacky
Reviewed-by: Brijesh Singh
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/svm/sev.c | 9 -
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/svm/sev.c b/ar
E_BY_DEFAULT has the
unfortunate side effect of enabling all the SEV-ES _guest_ code due to
it being dependent on CONFIG_AMD_MEM_ENCRYPT=y.
Cc: Borislav Petkov
Cc: Tom Lendacky
Cc: Brijesh Singh
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/svm/sev.c | 4 ++--
1 file changed, 2 insertions(+),
Move the allocation of the SEV VMCB array to sev.c to help pave the way
toward encapsulating SEV enabling wholly within sev.c.
No functional change intended.
Reviewed-by: Tom Lendacky
Reviewed-by: Brijesh Singh
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/svm/sev.c | 12
VM if
SEV_INIT fails, but that's a problem for another day.
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/svm/sev.c | 23 +++
1 file changed, 11 insertions(+), 12 deletions(-)
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 73da2af1e25d..0a4715e60
supported features to userspace.
No functional change intended.
Reviewed-by: Brijesh Singh
Signed-off-by: Sean Christopherson
---
arch/x86/include/asm/cpufeature.h | 7 +--
arch/x86/include/asm/cpufeatures.h | 17 +++--
arch/x86/include/asm/disabled
userspace.
Cc: Tom Lendacky
Reviewed-by: Brijesh Singh
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/cpuid.c | 2 ++
arch/x86/kvm/cpuid.h | 1 +
2 files changed, 3 insertions(+)
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index 13036cf0b912..b7618cdd06b5 100644
--- a/arch
a larger KVM SEV cleanup series[*], thus the somewhat
questionable v3 tag.
Based on v5.11-rc4.
[*] https://lkml.kernel.org/r/20210114003708.3798992-1-sea...@google.com
Sean Christopherson (2):
x86/cpufeatures: Assign dedicated feature word for
CPUID_0x8000001F[EAX]
KVM: x86: Override
Remove the forward declaration of sev_flush_asids(), which is only a few
lines above the function itself.
No functional change intended.
Reviewed-by: Tom Lendacky
Reviewed-by: Brijesh Singh
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/svm/sev.c | 1 -
1 file changed, 1 deletion
: Brijesh Singh
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/svm/sev.c | 6 +++---
arch/x86/kvm/svm/svm.h | 5 -
2 files changed, 3 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 55a47a34a0ef..15bdc97454ab 100644
--- a/arch/x86/kvm/svm
Query max_sev_asid directly after setting it instead of bouncing through
its wrapper, svm_sev_enabled(). Using the wrapper is unnecessary
obfuscation.
No functional change intended.
Reviewed-by: Tom Lendacky
Reviewed-by: Brijesh Singh
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/svm
Singh
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/svm/svm.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index d223db3a77b0..751785b156ab 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -886,8
ed.
Acked-by: Tom Lendacky
Reviewed-by: Brijesh Singh
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/svm/sev.c | 20 ++--
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 4595f04310e2..ef2ae734b6bc 100644
" for its own
purposes.
No functional change intended.
Reviewed-by: Tom Lendacky
Reviewed-by: Brijesh Singh
Signed-off-by: Sean Christopherson
---
arch/x86/include/asm/mem_encrypt.h | 1 -
arch/x86/mm/mem_encrypt.c | 12 +---
arch/x86/mm/mem_encrypt_identity.c | 1 -
3 file
' flag directly. While sev_hardware_enabled() checks max_sev_asid,
which is true even if KVM setup fails, 'sev' will be true if and only
if KVM setup fully succeeds.
Fixes: 33af3a7ef9e6 ("KVM: SVM: Reduce WBINVD/DF_FLUSH invocations")
Cc: Tom Lendacky
Signed-off-by: Sean Christopherson
t
side of things has already laid claim to 'sev_enabled'.
Reviewed-by: Tom Lendacky
Reviewed-by: Brijesh Singh
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/svm/sev.c | 11 +++
arch/x86/kvm/svm/svm.c | 15 +--
arch/x86/kvm/svm/svm.h | 2 --
3 files changed,
Borislav Petkov
Reviewed-by: Tom Lendacky
Reviewed-by: Brijesh Singh
Fixes: 70cd94e60c73 ("KVM: SVM: VMRUN should use associated ASID when SEV is
enabled")
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/svm/svm.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/a
dropped later in the series. It's still arguably a fix since KVM will
unnecessarily keep memory, but it's not stable material. [Tom]
- Collect one Ack. [Tom]
v1:
- https://lkml.kernel.org/r/20210109004714.1341275-1-sea...@google.com
Sean Christopherson (13):
KVM: SVM: Zero out the
On Tue, Jan 12, 2021, Ben Gardon wrote:
> In order to protect TDP MMU PT memory with RCU, ensure that page table
> links are properly rcu_dereferenced.
>
> Reviewed-by: Peter Feiner
>
> Signed-off-by: Ben Gardon
> ---
> arch/x86/kvm/mmu/tdp_iter.c | 6 +-
> 1 file changed, 5 insertions(+), 1
On Thu, Jan 21, 2021, Tom Lendacky wrote:
> On 1/21/21 9:55 AM, Tejun Heo wrote:
> > Hello,
> >
> > On Thu, Jan 21, 2021 at 08:55:07AM -0600, Tom Lendacky wrote:
> > > The hardware will allow any SEV capable ASID to be run as SEV-ES, however,
> > > the SEV firmware will not allow the activation of
On Thu, Jan 21, 2021, Maxim Levitsky wrote:
> BTW, on unrelated note, currently the smap test is broken in kvm-unit tests.
> I bisected it to commit 322cdd6405250a2a3e48db199f97a45ef519e226
>
> It seems that the following hack (I have no idea why it works,
> since I haven't dug deep into the area
On Thu, Jan 21, 2021, Borislav Petkov wrote:
> On Mon, Dec 28, 2020 at 11:15:11AM -0800, Sean Christopherson wrote:
> > Alternatively, could the kernel case use insn_decode_regs()? If
> > vc_fetch_insn_kernel() were also modified to mirror insn_fetch_from_user(),
> > the
>
On Thu, Jan 21, 2021, Maxim Levitsky wrote:
> This is very helpful to debug nested VMX issues.
>
> Signed-off-by: Maxim Levitsky
> ---
> arch/x86/kvm/trace.h | 30 ++
> arch/x86/kvm/vmx/nested.c | 5 +
> arch/x86/kvm/x86.c | 3 ++-
> 3 files changed,
On Thu, Jan 21, 2021, Sean Christopherson wrote:
> On Tue, Jan 12, 2021, Ben Gardon wrote:
> > +static void tdp_mmu_unlink_page(struct kvm *kvm, struct kvm_mmu_page *sp,
> > + bool atomic)
> > +{
> > + if (atomic)
>
> Summarizin
On Tue, Jan 12, 2021, Ben Gardon wrote:
> Add a lock to protect the data structures that track the page table
> memory used by the TDP MMU. In order to handle multiple TDP MMU
> operations in parallel, pages of PT memory must be added and removed
> without the exclusive protection of the MMU lock.