On 08/10/18 11:12, Jan Beulich wrote:
>>>> On 05.10.18 at 19:02, <[email protected]> wrote:
>> --- a/xen/arch/x86/hvm/svm/svm.c
>> +++ b/xen/arch/x86/hvm/svm/svm.c
>> @@ -649,13 +649,32 @@ void svm_update_guest_cr(struct vcpu *v, unsigned int cr, unsigned int flags)
>>  static void svm_update_guest_efer(struct vcpu *v)
>>  {
>>      struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
>> -    bool lma = v->arch.hvm.guest_efer & EFER_LMA;
>> -    uint64_t new_efer;
>> +    unsigned long guest_efer = v->arch.hvm.guest_efer,
>> +        xen_efer = read_efer();
>>  
>> -    new_efer = (v->arch.hvm.guest_efer | EFER_SVME) & ~EFER_LME;
>> -    if ( lma )
>> -        new_efer |= EFER_LME;
>> -    vmcb_set_efer(vmcb, new_efer);
>> +    if ( paging_mode_shadow(v->domain) )
>> +    {
>> +        /* EFER.NX is a Xen-owned bit and is not under guest control. */
>> +        guest_efer &= ~EFER_NX;
>> +        guest_efer |= xen_efer & EFER_NX;
>> +
>> +        /*
>> +         * CR0.PG is a Xen-owned bit, and remains set even when the guest has
>> +         * logically disabled paging.
>> +         *
>> +         * LMA was calculated using the guest CR0.PG setting, but LME needs
>> +         * clearing to avoid interacting with Xen's CR0.PG setting.  As writes
>> +         * to CR0 are intercepted, it is safe to leave LME clear at this
>> +         * point, and fix up both LME and LMA when CR0.PG is set.
>> +         */
>> +        if ( !(guest_efer & EFER_LMA) )
>> +            guest_efer &= ~EFER_LME;
>> +    }
> I think this wants an "else", either ASSERT()ing that what the removed
> code did is actually the case (arranged for by the callers), or
> retaining the original logic in some form.

No - the original logic does not want keeping.  It is a latent bug in
the HAP case, because if the guest could write to CR0 directly, setting
CR0.PG would fail to activate long mode (LME having been stripped from
the VMCB).

Nothing goes wrong in practice because SVM doesn't have a guest/host
mask which allows selective updating of CR0 bits, so every CR0 write is
intercepted and EFER gets recalculated at that point.
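
For reference, here is the logic this patch removes, with the failure
mode annotated.  (The comments are mine, for illustration - they are
not in the original code.)

    static void svm_update_guest_efer(struct vcpu *v)
    {
        struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
        bool lma = v->arch.hvm.guest_efer & EFER_LMA;
        uint64_t new_efer;

        /* LME is stripped whenever LMA is clear... */
        new_efer = (v->arch.hvm.guest_efer | EFER_SVME) & ~EFER_LME;
        if ( lma )
            new_efer |= EFER_LME;

        /*
         * ...so if a guest CR0 write were not intercepted, hardware
         * would observe EFER.LME == 0 at the point CR0.PG becomes set,
         * and long mode would fail to activate despite the guest
         * having set EFER.LME.
         */
        vmcb_set_efer(vmcb, new_efer);
    }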

> This looks particularly relevant
> when hvm_efer_valid() was called with -1 as its cr0_pg argument, as
> in that case there was not necessarily any correlation enforced
> between EFER.LMA and CR0.PG.

On the subject of passing -1, all of that logic is horrible, but it is
only ever used when LME/LMA are being recalculated around other state
changes.
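
To make the -1 convention concrete, here is a minimal standalone sketch
(mine, not a quote of hvm_efer_valid()) of the shape of the LMA
consistency check; the real function returns a message string rather
than a bool:

    #include <stdbool.h>
    #include <stdint.h>

    #define EFER_LME (1UL << 8)
    #define EFER_LMA (1UL << 10)

    /*
     * A negative cr0_pg means "CR0.PG is not being audited": for
     * cr0_pg == -1, !cr0_pg is false, so LMA merely requires LME,
     * and no LMA<->CR0.PG correlation is enforced - which is the
     * situation being discussed above.
     */
    static bool efer_lma_consistent(uint64_t value, signed int cr0_pg)
    {
        if ( (value & EFER_LMA) && (!(value & EFER_LME) || !cr0_pg) )
            return false; /* "LMA/LME/CR0.PG inconsistency" */

        return true;
    }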

This patch brings the SVM logic in line with the VT-x logic (c/s
fd32dcfe4) - SVM and VT-x are a whole lot less different than our code
suggests.  If there were problems with the common code, we'd have seen
them already.

Either both functions want changing, or neither, but I don't think it is
a useful check to make.

~Andrew
