On 18.08.2025 19:18, Andrew Cooper wrote:
> On 18/08/2025 11:02 am, Jan Beulich wrote:
>> On 09.08.2025 01:49, Andrew Cooper wrote:
>>> --- a/xen/arch/x86/hvm/vmx/vmx.c
>>> +++ b/xen/arch/x86/hvm/vmx/vmx.c
>>> @@ -4209,8 +4209,18 @@ void asmlinkage vmx_vmexit_handler(struct cpu_user_regs *regs)
>>>               ((intr_info & INTR_INFO_INTR_TYPE_MASK) ==
>>>                MASK_INSR(X86_ET_NMI, INTR_INFO_INTR_TYPE_MASK)) )
>>>          {
>>> -            do_nmi(regs);
>>> -            enable_nmis();
>>> +            /*
>>> +             * If we exited because of an NMI, NMIs are blocked in hardware,
>>> +             * but software is expected to invoke the handler.
>>> +             *
>>> +             * Use INT $2.  Combined with the current state, it is the correct
>>> +             * architectural state for the NMI handler,
>> Not quite, I would say: For profiling (and anything else which may want to
>> look at the outer context's register state from within the handler) we'd
>> always appear to have been in Xen when the NMI "occurred".
> 
> We are always inside Xen when the NMI "occurred".

How so? The perception is based on "regs", isn't it? They're representing
guest context here, just with ...

> In fact there's a latent bug I didn't spot before.  Nothing appears to,
> but if anything in do_nmi() were to look at regs->entry_vector, it
> will see stack rubble (release build) or poison (debug build).

... a few fields (apparently wrongly) not filled.
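For reference, the distinction being debated can be sketched roughly as
follows (illustrative kernel-context pseudocode in C syntax, not the
actual patch; do_nmi() and enable_nmis() are the existing Xen helpers,
the comments are my reading of the thread):

```c
/*
 * Before the patch: call the C handler directly in software, then
 * lift the hardware NMI blocking that a VM exit for an NMI leaves set.
 */
do_nmi(regs);       /* handler observes "regs", i.e. the exited guest state */
enable_nmis();      /* explicitly unblock NMIs again */

/*
 * After the patch: re-deliver the NMI through the IDT.  The CPU pushes
 * a fresh exception frame, so from the handler's point of view the NMI
 * "occurred" inside Xen, and the IRET terminating the handler unblocks
 * NMIs as a side effect -- no enable_nmis() needed.
 */
asm volatile ( "int $2" );
```

This also illustrates the latent bug mentioned above: with the direct
do_nmi(regs) call, fields such as regs->entry_vector were never filled
in on this path, whereas IDT delivery via INT $2 builds a proper frame.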

Jan
