On Wed, Oct 25, 2023 at 03:31:07PM +0200, Juergen Gross wrote:
> There is
>
> #define nop() asm volatile ("nop")
>
> in arch/x86/include/asm/special_insns.h already.
Then call it "nop_func" or so.
> It might not be needed now, but are you sure we won't need it in future?
No, I'm not.
What
On Thu, Oct 19, 2023 at 11:15:16AM +0200, Juergen Gross wrote:
> +/* Low-level backend functions usable from alternative code replacements. */
> +DEFINE_ASM_FUNC(x86_nop, "", .entry.text);
> +EXPORT_SYMBOL_GPL(x86_nop);
This is all x86 code so you don't really need the "x86_" prefix - "nop"
is
On Wed, Mar 08, 2023 at 04:42:10PM +0100, Juergen Gross wrote:
> All functions referenced via __PV_IS_CALLEE_SAVE() need to be assembler
> functions, as those function calls are hidden from gcc. In case the
> kernel is compiled with "-fzero-call-used-regs" the compiler will
> clobber caller-saved
On Fri, Mar 10, 2023 at 07:24:17AM +0100, Juergen Gross wrote:
> The "normal" cases not using alternatives should rather be switched to
> static calls.
Or that.
> Whether it is possible to mix a static call with alternatives needs to
> be evaluated.
I'd prefer not to mix them. Either should be
On Thu, Feb 23, 2023 at 04:05:51PM +0100, Juergen Gross wrote:
> x86 maintainers, I think this patch should be carried via the tip tree.
You missed a spot. I'll whack it.
diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index a8b323266179..c3ad8a526378 100644
Hi,
this goes ontop of x86/core as the issue is caused by one of the
includes in callthunks.c there.
Thx.
---
From: Borislav Petkov
Fix
./include/trace/events/xen.h:28:31: warning: ‘enum paravirt_lazy_mode’ \
declared inside parameter list will not be visible outside
+ kvm ML and leaving the whole mail quoted in for them.
On Fri, Sep 23, 2022 at 09:05:26AM +0200, Peter Zijlstra wrote:
> On Thu, Jul 21, 2022 at 01:44:33PM -0700, Srivatsa S. Bhat wrote:
> > From: Srivatsa S. Bhat (VMware)
> >
> > VMware ESXi allows enabling a passthru mwait CPU-idle state in
On Fri, May 20, 2022 at 07:33:30PM +0530, Shreenidhi Shedi wrote:
> I deliberately did it because I was lacking clarity on using my org
> mail & personal mail id.
You could have a look at Documentation/process/submitting-patches.rst
and everything under Documentation/process/ in case you don't
On Fri, May 20, 2022 at 12:58:57PM +0530, Shreenidhi Shedi wrote:
> Shifting signed 32-bit value by 31 bits is implementation-defined
> behaviour. Using unsigned is a better option for this.
>
> Signed-off-by: Shreenidhi Shedi
> ---
> arch/x86/kernel/cpu/vmware.c | 2 +-
> 1 file changed, 1
On Mon, Jan 10, 2022 at 02:26:18PM +0100, Juergen Gross wrote:
> Thomas, another ping. Didn't you want to take this patch more than a
> month ago? Cc-ing the other x86 maintainers, too.
I'll have a look after the merge window is over.
Thx.
--
Regards/Gruss,
Boris.
On Mon, Sep 13, 2021 at 05:56:00PM +0200, Joerg Roedel wrote:
> diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
> index 134a7c9d91b6..cd14b6e10f12 100644
> --- a/arch/x86/include/asm/sev.h
> +++ b/arch/x86/include/asm/sev.h
> @@ -81,12 +81,19 @@ static __always_inline void
On Mon, Sep 13, 2021 at 05:55:59PM +0200, Joerg Roedel wrote:
> From: Joerg Roedel
>
> GHCB protocol version 2 adds the MSR-based AP-reset-hold VMGEXIT which
> does not need a GHCB. Use that to park APs in 16-bit protected mode on
> the AP Jump Table.
>
> Signed-off-by: Joerg Roedel
> ---
>
On Mon, Sep 13, 2021 at 05:55:58PM +0200, Joerg Roedel wrote:
> From: Joerg Roedel
>
> The AP Jump Table under SEV-ES contains the reset vector where non-boot
> CPUs start executing when coming out of reset. This means that a CPU
> coming out of the AP-reset-hold VMGEXIT also needs to start
On Mon, Sep 13, 2021 at 05:55:57PM +0200, Joerg Roedel wrote:
> From: Joerg Roedel
>
> Store the physical address of the AP Jump Table in kernel memory so
> that it does not need to be fetched from the Hypervisor again.
>
> Signed-off-by: Joerg Roedel
> ---
> arch/x86/kernel/sev.c | 26
On Mon, Sep 13, 2021 at 05:55:56PM +0200, Joerg Roedel wrote:
> From: Joerg Roedel
>
> Check whether the hypervisor supports GHCB version 2 and use it if
> available.
>
> Signed-off-by: Joerg Roedel
> ---
> arch/x86/boot/compressed/sev.c | 10 --
> arch/x86/include/asm/sev.h | 4
On Mon, Sep 13, 2021 at 05:55:54PM +0200, Joerg Roedel wrote:
> From: Joerg Roedel
>
> Save the results of the GHCB protocol negotiation into a data structure
> and print information about versions supported and used to the kernel
> log.
Which is useful for?
> +/*
> + * struct
On Mon, Nov 01, 2021 at 04:11:42PM -0500, Eric W. Biederman wrote:
> I seem to remember the consensus when this was reviewed that it was
> unnecessary and there is already support for doing something like
> this at a more fine grained level so we don't need a new kexec hook.
Well, the executive
On Mon, Sep 13, 2021 at 05:55:52PM +0200, Joerg Roedel wrote:
> From: Joerg Roedel
>
> Allow a runtime opt-out of kexec support for architecture code in case
> the kernel is running in an environment where kexec is not properly
> supported yet.
>
> This will be used on x86 when the kernel is
https://lkml.kernel.org/r/20210622144825.27588-2-j...@8bytes.org too.
Simplify the brewing macro maze into readability. ]
Co-developed-by: Tom Lendacky
Signed-off-by: Tom Lendacky
Signed-off-by: Brijesh Singh
Signed-off-by: Joerg Roedel
Signed-off-by: Borislav Petkov
Link: https://lkml.
On Fri, Jun 18, 2021 at 01:54:08PM +0200, Joerg Roedel wrote:
> From: Joerg Roedel
>
> The #VC handler only cares about IRQs being disabled while the GHCB is
> active, as it must not be interrupted by something which could cause
> another #VC while it holds the GHCB (NMI is the exception for
On Mon, Jun 14, 2021 at 06:25:18PM +0200, Borislav Petkov wrote:
> ** underscored to mean, that callers need to disable local locks. There's
^^
"interrupts" ofc.
On Mon, Jun 14, 2021 at 03:53:23PM +0200, Joerg Roedel wrote:
> From: Joerg Roedel
>
> The #VC handler only cares about IRQs being disabled while the GHCB is
> active, as it must not be interrupted by something which could cause
> another #VC while it holds the GHCB (NMI is the exception for
On Fri, Jun 11, 2021 at 04:20:36PM +0200, Joerg Roedel wrote:
> I am not a fan of this, because its easily forgotten to add
> local_irq_save()/local_irq_restore() calls around those. Yes, we can add
> irqs_disabled() assertions to the functions, but we can as well just
> disable/enable IRQs in
On Thu, Jun 10, 2021 at 11:11:37AM +0200, Joerg Roedel wrote:
> From: Joerg Roedel
>
> The #VC handler only cares about IRQs being disabled while the GHCB is
> active, as it must not be interrupted by something which could cause
> another #VC while it holds the GHCB (NMI is the exception for
On Wed, May 19, 2021 at 03:52:50PM +0200, Joerg Roedel wrote:
> diff --git a/arch/x86/lib/insn-eval.c b/arch/x86/lib/insn-eval.c
> index 4eecb9c7c6a0..d8a057ba0895 100644
> --- a/arch/x86/lib/insn-eval.c
> +++ b/arch/x86/lib/insn-eval.c
> @@ -1442,27 +1442,36 @@ static int
On Thu, Mar 11, 2021 at 01:50:26PM +0100, Borislav Petkov wrote:
> and move the cleanups patches 13 and 14 to the beginning of the set?
Yeah, 14 needs ALTERNATIVE_TERNARY so I guess after patch 5, that is.
Thx.
On Tue, Mar 09, 2021 at 02:48:01PM +0100, Juergen Gross wrote:
> This is a major cleanup of the paravirt infrastructure aiming at
> eliminating all custom code patching via paravirt patching.
>
> This is achieved by using ALTERNATIVE instead, leading to the ability
> to give objtool access to the
On Wed, Mar 10, 2021 at 08:51:22AM +0100, Jürgen Groß wrote:
> It is combining the two needed actions: update the static call and
> set the paravirt_using_native_sched_clock boolean.
I actually meant what the point of using_native_sched_clock() is but put
this comment at the wrong place, sorry.
On Tue, Mar 09, 2021 at 02:48:03PM +0100, Juergen Gross wrote:
> @@ -167,6 +168,17 @@ static u64 native_steal_clock(int cpu)
> return 0;
> }
>
> +DEFINE_STATIC_CALL(pv_steal_clock, native_steal_clock);
> +DEFINE_STATIC_CALL(pv_sched_clock, native_sched_clock);
> +
> +bool
On Mon, Mar 08, 2021 at 01:28:43PM +0100, Juergen Gross wrote:
> diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
> index 36cd71fa097f..04b3067f31b5 100644
> --- a/arch/x86/include/asm/paravirt.h
> +++ b/arch/x86/include/asm/paravirt.h
> @@ -137,7 +137,8 @@ static
On Wed, Feb 10, 2021 at 11:21:34AM +0100, Joerg Roedel wrote:
> + /*
> + * Store the sme_me_mask as an indicator that SEV is active. It will be
> + * set again in startup_64().
So why bother? Or does something need it before that?
...
> +SYM_FUNC_START(sev_startup32_cbit_check)
On Wed, Feb 10, 2021 at 11:21:32AM +0100, Joerg Roedel wrote:
> From: Joerg Roedel
>
> Add a #VC exception handler which is used when the kernel still executes
> in protected mode. This boot-path already uses CPUID, which will cause #VC
> exceptions in an SEV-ES guest.
>
> Signed-off-by: Joerg
On Wed, Feb 10, 2021 at 11:21:31AM +0100, Joerg Roedel wrote:
> From: Joerg Roedel
>
> This boot path needs exception handling when it is used with SEV-ES.
For ?
Let's explain pls.
> Setup an IDT and provide a helper function to write IDT entries for
> use in 32-bit protected mode.
>
>
On Wed, Feb 17, 2021 at 01:01:43PM +0100, Joerg Roedel wrote:
> From: Joerg Roedel
>
> Better explain why this code is necessary and what it is doing.
>
> Signed-off-by: Joerg Roedel
> ---
> arch/x86/kernel/sev-es.c | 23 ---
> 1 file changed, 16 insertions(+), 7
On Wed, Feb 17, 2021 at 01:01:42PM +0100, Joerg Roedel wrote:
> From: Joerg Roedel
>
> The code in the NMI handler to adjust the #VC handler IST stack is
> needed in case an NMI hits when the #VC handler is still using its IST
> stack.
> But the check for this condition also needs to look if the
I guess subject prefix should be "x86/traps:" but I'll fix that up while
applying eventually.
On Wed, Feb 17, 2021 at 01:01:41PM +0100, Joerg Roedel wrote:
> From: Joerg Roedel
>
> Introduce a helper to check whether an exception came from the syscall
> gap and use it in the SEV-ES code
>
>
On Wed, Jan 20, 2021 at 02:55:47PM +0100, Juergen Gross wrote:
> The time pvops functions are the only ones left which might be
> used in 32-bit mode and which return a 64-bit value.
>
> Switch them to use the static_call() mechanism instead of pvops, as
> this allows quite some simplification of
On Thu, Dec 17, 2020 at 05:31:50PM +, Michael Kelley wrote:
> These Hyper-V changes are problematic as we want to keep hyperv_timer.c
> architecture independent. While only the code for x86/x64 is currently
> accepted upstream, code for ARM64 support is in progress. So we need
> to use
On Thu, Dec 17, 2020 at 10:31:24AM +0100, Juergen Gross wrote:
> The time pvops functions are the only ones left which might be
> used in 32-bit mode and which return a 64-bit value.
>
> Switch them to use the static_call() mechanism instead of pvops, as
> this allows quite some simplification of
On Wed, Dec 02, 2020 at 03:48:21PM +0100, Jürgen Groß wrote:
> I wanted to avoid the additional NOPs for the bare metal case.
Yeah, in that case it gets optimized to a single NOP:
[0.176692] SMP alternatives: 81a00068: [0:5) optimized NOPs: 0f 1f 44 00 00
which is nopl
On Fri, Nov 20, 2020 at 12:46:22PM +0100, Juergen Gross wrote:
> @@ -123,12 +115,15 @@ SYM_INNER_LABEL(entry_SYSCALL_64_after_hwframe, SYM_L_GLOBAL)
>	 * Try to use SYSRET instead of IRET if we're returning to
>	 * a completely clean 64-bit userspace context. If we're not,
>
2 --
> arch/x86/kernel/asm-offsets_64.c | 1 -
> arch/x86/kernel/paravirt.c| 1 -
> arch/x86/kernel/paravirt_patch.c | 3 ---
> arch/x86/xen/enlighten_pv.c | 3 ---
> 8 files changed, 13 insertions(+), 47 deletions(-)
I love patches like this one!
On Thu, Nov 05, 2020 at 06:31:53PM -0600, Michael Roth wrote:
> I can confirm that patch fixes the issue. It is indeed a 5.9.1 tree, but
> looks like the SEV-ES patches didn't go in until v5.10-rc1
Yes, they went into 5.10-rc1 during the merge window.
> (this tree had a backport of them), so
On Thu, Nov 05, 2020 at 10:24:37AM -0600, Michael Roth wrote:
> > out_set_gif:
> > svm_set_gif(svm, !!(kvm_state->flags & KVM_STATE_NESTED_GIF_SET));
> > - return 0;
> > +
> > + ret = 0;
> > +out_free:
> > + kfree(save);
> > + kfree(ctl);
>
> This change seems to
GFP_KERNEL);
> + new = krealloc_array(hw->dimms, hw->num_dimms + 16,
> + sizeof(struct dimm_info), GFP_KERNEL);
> if (!new) {
> WARN_ON_ONCE(1);
>
+ Ard so that he can ack the efi bits.
On Mon, Sep 07, 2020 at 03:16:12PM +0200, Joerg Roedel wrote:
> From: Tom Lendacky
>
> Calling down to EFI runtime services can result in the firmware performing
> VMGEXIT calls. The firmware is likely to use the GHCB of the OS (e.g., for
> setting EFI
On Mon, Sep 07, 2020 at 03:16:08PM +0200, Joerg Roedel wrote:
> From: Joerg Roedel
>
> The IDT on 64bit contains vectors which use paranoid_entry() and/or IST
> stacks. To make these vectors work the TSS and the getcpu GDT entry need
> to be set up before the IDT is loaded.
>
> Signed-off-by:
On Mon, Sep 07, 2020 at 03:15:42PM +0200, Joerg Roedel wrote:
> +void __init sev_es_init_vc_handling(void)
> +{
> + int cpu;
> +
> + BUILD_BUG_ON((offsetof(struct sev_es_runtime_data, ghcb_page) % PAGE_SIZE) != 0);
Simplified that to:
BUILD_BUG_ON(offsetof(struct
On Mon, Sep 07, 2020 at 03:15:37PM +0200, Joerg Roedel wrote:
> @@ -347,7 +348,13 @@ bool sme_active(void)
>
> bool sev_active(void)
> {
> - return sme_me_mask && sev_enabled;
> + return !!(sev_status & MSR_AMD64_SEV_ENABLED);
Dropped those "!!" here too while applying.
On Mon, Sep 07, 2020 at 03:15:20PM +0200, Joerg Roedel wrote:
> +static inline u64 sev_es_rd_ghcb_msr(void)
> +{
> + unsigned long low, high;
> +
> + asm volatile("rdmsr\n" : "=a" (low), "=d" (high) :
> + "c" (MSR_AMD64_SEV_ES_GHCB));
> +
> + return ((high << 32) |
On Tue, Sep 01, 2020 at 02:59:22PM +0200, Joerg Roedel wrote:
> True, but having a separate function might be handy when support for #VE
> and #HV is developed. Those might also need to setup their early
> handlers here, no?
Ok.
On Mon, Aug 24, 2020 at 10:54:43AM +0200, Joerg Roedel wrote:
> @@ -674,6 +675,56 @@ asmlinkage __visible noinstr struct pt_regs *sync_regs(struct pt_regs *eregs)
> return regs;
> }
>
> +#ifdef CONFIG_AMD_MEM_ENCRYPT
> +asmlinkage __visible noinstr struct pt_regs
On Mon, Aug 24, 2020 at 10:55:07AM +0200, Joerg Roedel wrote:
> From: Joerg Roedel
>
> For SEV-ES this entry point will be used for restarting APs after they
> have been offlined. Remove the '0' from the name to reflect that.
Sure but only for SEV-ES guests and your change is unconditional. I
On Mon, Aug 24, 2020 at 10:55:05AM +0200, Joerg Roedel wrote:
> @@ -1814,27 +1814,26 @@ static inline void ucode_cpu_init(int cpu)
> load_ucode_ap();
> }
>
> -static inline void tss_setup_ist(struct tss_struct *tss)
> +static inline void tss_setup_ist(struct tss_struct *tss,
> +
On Mon, Aug 24, 2020 at 10:55:04AM +0200, Joerg Roedel wrote:
> diff --git a/arch/x86/include/uapi/asm/svm.h b/arch/x86/include/uapi/asm/svm.h
> index 8f36ae021a7f..a19ce9681ec2 100644
> --- a/arch/x86/include/uapi/asm/svm.h
> +++ b/arch/x86/include/uapi/asm/svm.h
> @@ -84,6 +84,9 @@
> /* SEV-ES
On Mon, Aug 24, 2020 at 10:54:59AM +0200, Joerg Roedel wrote:
> From: Joerg Roedel
>
> Handle #VC exceptions caused by #DB exceptions in the guest. Those
> must be handled outside of instrumentation_begin()/end() so that the
> handler will not be raised recursively.
On Mon, Aug 24, 2020 at 10:54:47AM +0200, Joerg Roedel wrote:
> + if (bytes == 4)
> + *reg_data = 0; /* Zero-extend for 32-bit operation */
Please put all side-comments over the respective line. There are a
couple in this patch.
Thx.
On Mon, Aug 24, 2020 at 10:54:43AM +0200, Joerg Roedel wrote:
> @@ -446,6 +448,82 @@ _ASM_NOKPROBE(\asmsym)
> SYM_CODE_END(\asmsym)
> .endm
>
ifdeffery pls...
> +/**
> + * idtentry_vc - Macro to generate entry stub for #VC
> + * @vector: Vector number
> + * @asmsym: ASM
On Mon, Aug 24, 2020 at 10:54:42AM +0200, Joerg Roedel wrote:
> diff --git a/arch/x86/kernel/dumpstack_64.c b/arch/x86/kernel/dumpstack_64.c
> index c49cf594714b..5a85730eb0ca 100644
> --- a/arch/x86/kernel/dumpstack_64.c
> +++ b/arch/x86/kernel/dumpstack_64.c
> @@ -85,7 +85,7 @@ struct
On Mon, Aug 24, 2020 at 10:54:41AM +0200, Joerg Roedel wrote:
> diff --git a/arch/x86/kernel/nmi.c b/arch/x86/kernel/nmi.c
> index 4fc9954a9560..951f098a4bf5 100644
> --- a/arch/x86/kernel/nmi.c
> +++ b/arch/x86/kernel/nmi.c
> @@ -33,6 +33,7 @@
> #include
> #include
> #include
> +#include
>
On Mon, Aug 24, 2020 at 10:54:40AM +0200, Joerg Roedel wrote:
> From: Joerg Roedel
>
> Allocate and map an IST stack and an additional fall-back stack for
> the #VC handler. The memory for the stacks is allocated only when
> SEV-ES is active.
>
> The #VC handler needs to use an IST stack
On Mon, Aug 24, 2020 at 10:54:37AM +0200, Joerg Roedel wrote:
> +#ifdef CONFIG_AMD_MEM_ENCRYPT
> +static void set_early_idt_handler(gate_desc *idt, int n, void *handler)
> +{
> + struct idt_data data;
> + gate_desc desc;
> +
> + init_idt_data(&data, n, handler);
> + idt_init_desc(&desc, &data);
>
On Mon, Aug 31, 2020 at 10:58:10AM +0200, Joerg Roedel wrote:
> This is not needed on the boot CPU, but only on secondary CPUs. When
> those are brought up the alternatives have been patched already. The
> commit message should probably be more clear about that, I will fix
> that.
Hell yeah - you
On Mon, Aug 24, 2020 at 10:54:34AM +0200, Joerg Roedel wrote:
> +/* Needs to be called from non-instrumentable code */
> +bool noinstr sev_es_active(void)
> +{
> + return !!(sev_status & MSR_AMD64_SEV_ES_ENABLED);
You don't need the "!!" since you're returning bool.
On Mon, Aug 24, 2020 at 10:54:33AM +0200, Joerg Roedel wrote:
> From: Joerg Roedel
>
> Early exception handling will use rd/wrgsbase in paranoid_entry/exit.
> Enable the feature to avoid #UD exceptions on boot APs.
>
> Signed-off-by: Joerg Roedel
> Link:
On Mon, Aug 24, 2020 at 10:54:31AM +0200, Joerg Roedel wrote:
> @@ -385,3 +386,25 @@ void __init alloc_intr_gate(unsigned int n, const void *addr)
> if (!WARN_ON(test_and_set_bit(n, system_vectors)))
> set_intr_gate(n, addr);
> }
> +
> +void __init
On Mon, Aug 24, 2020 at 10:54:26AM +0200, Joerg Roedel wrote:
> diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
> index 2b2e91627221..800053219054 100644
> --- a/arch/x86/kernel/head_64.S
> +++ b/arch/x86/kernel/head_64.S
> @@ -78,6 +78,14 @@ SYM_CODE_START_NOALIGN(startup_64)
>
On Mon, Aug 24, 2020 at 10:54:24AM +0200, Joerg Roedel wrote:
> From: Joerg Roedel
>
> The code to setup idt_data is needed for early exception handling, but
> set_intr_gate() can't be used that early because it has pv-ops in its
> code path, which don't work that early.
>
> Split out the
On Mon, Aug 24, 2020 at 10:54:15AM +0200, Joerg Roedel wrote:
Just minor style issues to be fixed by committer or in case you have to
send a new version:
Subject: Re: [PATCH v6 20/76] x86/boot/compressed/64: Call set_sev_encryption_mask earlier
set_sev_encryption_mask() <- it is a function.
>
On Tue, Aug 25, 2020 at 11:22:24AM +0200, Joerg Roedel wrote:
> I don't think so, if I look at the history of these checks their whole
> purpose seems to be to alert the developer/maintainer when their size
> changes and that they might not fit on the stack anymore. But that is
> taken care of in
On Mon, Aug 24, 2020 at 10:53:57AM +0200, Joerg Roedel wrote:
> static inline void __unused_size_checks(void)
> {
> - BUILD_BUG_ON(sizeof(struct vmcb_save_area) != 0x298);
> + BUILD_BUG_ON(sizeof(struct vmcb_save_area) != 1032);
> BUILD_BUG_ON(sizeof(struct vmcb_control_area) !=
On Tue, Jun 23, 2020 at 04:39:26PM +0100, Andrew Cooper wrote:
> P.S. did you also hear that with Rowhammer, userspace has a nonzero
> quantity of control over generating #MC, depending on how ECC is
> configured on the platform.
Where does that #MC point to? Can it control for which address to
On Tue, Jun 23, 2020 at 04:32:22PM +0100, Andrew Cooper wrote:
> MSR_MCG_STATUS.MCIP, and yes - any #MC before that point will
> immediately Shutdown. Any #MC between that point and IRET will clobber
> its IST stack and end up sad.
Well, at some point we should simply accept that we're living a
On Tue, Jun 23, 2020 at 12:51:03PM +0100, Andrew Cooper wrote:
> Crashing out hard if the hypervisor is misbehaving is acceptable. In a
> cloud, I as a customer would (threaten to?) take my credit card
> elsewhere, while for enterprise, I'd shout at my virtualisation vendor
> until a fix happened
On Thu, Jun 11, 2020 at 01:48:31PM +0200, Joerg Roedel wrote:
> The most important use-case is #VC->NMI->#VC. When an NMI hits while the
> #VC handler uses the GHCB and the NMI handler causes another #VC, then
> the contents of the GHCB needs to be backed up, so that it doesn't
> destroy the GHCB
On Thu, Jun 04, 2020 at 02:07:49PM +0200, Joerg Roedel wrote:
> These are IDT entry points and the names above follow the convention for
> them, like e.g. 'page_fault', 'nmi' or 'general_protection'. Should I
> still add the verbs or just add a comment explaining what those symbols
> are?
Hmmkay,
On Thu, Jun 04, 2020 at 01:54:13PM +0200, Joerg Roedel wrote:
> It is not only the trace-point, this would also eliminate exception
> handling in case the MSR access triggers a #GP. The "Unhandled MSR
> read/write" messages would turn into a "General Protection Fault"
> message.
But the early
On Thu, Jun 04, 2020 at 01:48:21PM +0200, Joerg Roedel wrote:
> Yeah, seems to work. Updated patch attached.
Looks nice, thanks!
On Tue, Apr 28, 2020 at 05:17:25PM +0200, Joerg Roedel wrote:
> From: Tom Lendacky
>
> Calling down to EFI runtime services can result in the firmware performing
> VMGEXIT calls. The firmware is likely to use the GHCB of the OS (e.g., for
> setting EFI variables), so each GHCB in the system
On Tue, Apr 28, 2020 at 05:17:24PM +0200, Joerg Roedel wrote:
> diff --git a/arch/x86/kernel/nmi.c b/arch/x86/kernel/nmi.c
> index 27d1016ec840..8898002e5600 100644
> --- a/arch/x86/kernel/nmi.c
> +++ b/arch/x86/kernel/nmi.c
> @@ -511,6 +511,13 @@ NOKPROBE_SYMBOL(is_debug_stack);
> dotraplinkage
On Tue, Apr 28, 2020 at 05:17:23PM +0200, Joerg Roedel wrote:
> From: Joerg Roedel
>
> Add a play_dead handler when running under SEV-ES. This is needed
> because the hypervisor can't deliver an SIPI request to restart the AP.
> Instead the kernel has to issue a VMGEXIT to halt the VCPU. When
On Tue, Apr 28, 2020 at 05:17:20PM +0200, Joerg Roedel wrote:
> From: Joerg Roedel
>
> The #VC exception will trigger very early in head_64.S, when the first
> CPUID instruction is executed. When secondary CPUs boot, they already
> load the real system IDT, which has the #VC handler configured
On Tue, Apr 28, 2020 at 05:17:19PM +0200, Joerg Roedel wrote:
> From: Tom Lendacky
>
> Setup the AP jump table to point to the SEV-ES trampoline code so that
> the APs can boot.
Tom, in his laconic way, doesn't want to explain to us why is this even
needed...
:)
/me reads the code
/me reads
On Tue, Apr 28, 2020 at 05:17:17PM +0200, Joerg Roedel wrote:
> From: Doug Covelli
>
> This change adds VMware specific handling for #VC faults caused by
s/This change adds/Add/
On Tue, May 19, 2020 at 10:16:37PM -0700, Sean Christopherson wrote:
> The whole cache on-demand approach seems like overkill. The number of CPUID
> leaves that are invoked after boot with any regularity can probably be counted
> on one hand. IIRC glibc invokes CPUID to gather TLB/cache info,
On Tue, Apr 28, 2020 at 05:17:04PM +0200, Joerg Roedel wrote:
> +static enum es_result vc_handle_dr7_write(struct ghcb *ghcb,
> + struct es_em_ctxt *ctxt)
> +{
> + struct sev_es_runtime_data *data = this_cpu_read(runtime_data);
> + long val, *reg =
On Tue, Apr 28, 2020 at 05:17:03PM +0200, Joerg Roedel wrote:
> From: Tom Lendacky
>
> Implement a handler for #VC exceptions caused by RDMSR/WRMSR
> instructions.
>
> Signed-off-by: Tom Lendacky
> [ jroe...@suse.de: Adapt to #VC handling infrastructure ]
> Co-developed-by: Joerg Roedel
>
On Tue, Apr 28, 2020 at 05:17:02PM +0200, Joerg Roedel wrote:
> From: Joerg Roedel
>
> Add handling for emulating the MOVS instruction on MMIO regions, as done
> by the memcpy_toio() and memcpy_fromio() functions.
>
> Signed-off-by: Joerg Roedel
> ---
> arch/x86/kernel/sev-es.c | 78
On Tue, Apr 28, 2020 at 05:17:01PM +0200, Joerg Roedel wrote:
> +static enum es_result vc_do_mmio(struct ghcb *ghcb, struct es_em_ctxt *ctxt,
> + unsigned int bytes, bool read)
> +{
> + u64 exit_code, exit_info_1, exit_info_2;
> + unsigned long ghcb_pa =
On Tue, Apr 28, 2020 at 05:16:59PM +0200, Joerg Roedel wrote:
> From: Joerg Roedel
>
> When a #VC exception is triggered by user-space the instruction decoder
> needs to read the instruction bytes from user addresses. Enhance
> vc_decode_insn() to safely fetch kernel and user instructions.
>
>
On Tue, Apr 28, 2020 at 05:16:57PM +0200, Joerg Roedel wrote:
> diff --git a/arch/x86/kernel/sev-es.c b/arch/x86/kernel/sev-es.c
> index a4fa7f351bf2..bc3a58427028 100644
> --- a/arch/x86/kernel/sev-es.c
> +++ b/arch/x86/kernel/sev-es.c
> @@ -10,6 +10,7 @@
> #include <linux/sched/debug.h>	/* For show_regs() */
On Tue, Apr 28, 2020 at 05:16:55PM +0200, Joerg Roedel wrote:
> diff --git a/arch/x86/include/asm/stacktrace.h b/arch/x86/include/asm/stacktrace.h
> index 14db05086bbf..2f3534ef4b5f 100644
> --- a/arch/x86/include/asm/stacktrace.h
> +++ b/arch/x86/include/asm/stacktrace.h
> @@ -21,6 +21,10 @@
Dropping thellst...@vmware.com from Cc from now on because of some
microsloth mail rule not delivering my mails.
On Tue, Apr 28, 2020 at 05:16:54PM +0200, Joerg Roedel wrote:
> From: Joerg Roedel
>
> Allocate and map enough stacks for the #VC handler to support sufficient
> levels of nesting
On Tue, Apr 28, 2020 at 05:16:53PM +0200, Joerg Roedel wrote:
> @@ -198,6 +210,48 @@ static bool __init sev_es_setup_ghcb(void)
> return true;
> }
>
> +static void __init sev_es_alloc_runtime_data(int cpu)
> +{
> + struct sev_es_runtime_data *data;
> +
> + data =
On Tue, Apr 28, 2020 at 05:16:52PM +0200, Joerg Roedel wrote:
> diff --git a/arch/x86/include/asm/sev-es.h b/arch/x86/include/asm/sev-es.h
> index b2cbcd40b52e..e1ed963a57ec 100644
> --- a/arch/x86/include/asm/sev-es.h
> +++ b/arch/x86/include/asm/sev-es.h
> @@ -74,5 +74,6 @@ static inline u64
On Tue, Apr 28, 2020 at 05:16:50PM +0200, Joerg Roedel wrote:
> +static inline u64 sev_es_rd_ghcb_msr(void)
> +{
> + return native_read_msr(MSR_AMD64_SEV_ES_GHCB);
> +}
> +
> +static inline void sev_es_wr_ghcb_msr(u64 val)
> +{
> + u32 low, high;
> +
> + low = (u32)(val);
> + high
On Tue, Apr 28, 2020 at 05:16:48PM +0200, Joerg Roedel wrote:
> +bool sev_es_active(void)
> +{
> + return !!(sev_status & MSR_AMD64_SEV_ES_ENABLED);
> +}
> +EXPORT_SYMBOL_GPL(sev_es_active);
I don't see this being used in modules anywhere in the patchset. Or am I
missing something?
On Tue, Apr 28, 2020 at 05:16:45PM +0200, Joerg Roedel wrote:
> From: Joerg Roedel
>
> The code inserted by the stack protector does not work in the early
> boot environment because it uses the GS segment, at least with memory
> encryption enabled.
Can you elaborate on why is that a problem?
On Tue, Apr 28, 2020 at 05:16:41PM +0200, Joerg Roedel wrote:
> @@ -480,6 +500,22 @@ SYM_DATA_LOCAL(early_gdt_descr_base, .quad INIT_PER_CPU_VAR(gdt_page))
> SYM_DATA(phys_base, .quad 0x0)
> EXPORT_SYMBOL(phys_base)
>
> +/* Boot GDT used when kernel addresses are not mapped yet */
>