On 11/04/18 13:01, Simon Gaiser wrote:
> Andrew Cooper:
>> On 11/04/18 12:48, Simon Gaiser wrote:
>>> When I use early microcode loading with the microcode update containing
>>> the BTI mitigations, resuming from suspend to RAM is broken.
>>> Based on added logging to enter_state() (from power.c) it doesn't
>>> survive the local_irq_restore(flags) call (at least a printk() after the
>>> call doesn't output anything on the serial console).
>>> I guess that some irq handler tries to use IBRS/IBPB. But the microcode
>>> is only loaded later.
>>> If I simply move the microcode_resume_cpu(0) directly before the
>>> local_irq_restore(flags) everything seems to work fine. But I'm not sure
>>> if this has unintended consequences.
>>> I tested the above with Xen 4.8.3 from Qubes, which includes the BTI and
>>> microcode patches from staging-4.8. AFAICS there are no commits that
>>> change the affected code, or other commits that sound relevant, so this
>>> probably also affects all the newer branches.
>> S3 support is a very unloved area of the hypervisor.
>> Yes - we definitely need to get microcode reloaded before interrupts are
>> re-enabled.
> Do you see any problems with simply moving microcode_resume_cpu(0)
> directly before the local_irq_restore(flags) call? (I'm not familiar
> with the code at all, and (early) resume handling sounds like something
> which is easy to break in non-obvious ways.)
Judging by what is going on, it wants to be between tboot_s3_error() and
the done label.
We only need to restore microcode if we successfully went into S3. The
done and enable_cpu labels are only used by paths which don't need to
restore it.
OTOH, you should check the return value and panic if restoration
failed. As you've seen, the system won't survive blindly continuing
without the updated microcode.
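For concreteness, something along these lines (an untested sketch
against enter_state() in xen/arch/x86/acpi/power.c; the surrounding
context lines are from memory and may differ slightly between branches):

```diff
--- a/xen/arch/x86/acpi/power.c
+++ b/xen/arch/x86/acpi/power.c
@@ enter_state() (hunk position illustrative)
     if ( (error = tboot_s3_resume()) != 0 )
         tboot_s3_error(error);
 
+    /* Reload microcode before interrupts come back on, so that no IRQ
+     * or exception path can touch MSRs (IBRS/IBPB) which the pre-update
+     * microcode lacks.  Refusing to continue is safer than faulting. */
+    if ( microcode_resume_cpu(0) )
+        panic("Missing previously available microcode\n");
+
  done:
     spin_debug_enable();
     local_irq_restore(flags);
```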
>> That said, I would have expected a backtrace complaining about
>> a GP fault if we had hit the use of IBRS/IBPB before the microcode was
> Yeah, not sure what's happening here. I don't get any output from after
> local_irq_restore(flags). If you have some ideas for more debug output I
> can easily test it.
In hindsight, I am. We take a #GP fault because of a bad MSR, and at
the head of the exception handler try to use the same bad MSR. It will
repeatedly fault until hitting a guard page (or other read-only page),
at which point we take a double fault, and suffer a #GP yet again.
Taking a #DF will reset the stack to a moderately sane value, and the
system will livelock taking faults.
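Schematically (simplified and from memory; the macro name is the one
used by the staging spec_ctrl_asm.h infrastructure, shown here purely
for illustration):

```
handle_exception:
        SPEC_CTRL_ENTRY_FROM_PV  /* wrmsr MSR_SPEC_CTRL: #GP on old ucode */
        ...
        /* The #GP handler re-enters at the top, hits the same wrmsr and
         * faults again, pushing a new frame each time, until the stack
         * guard page turns the recursion into a #DF, whose entry path
         * suffers the same #GP, and the cycle repeats. */
```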
This is an unfortunate consequence of having $MAGIC in the exception
entry path.