Re: [Xen-devel] [PATCH v2] xen/x86: vpmu: Unmap per-vCPU PMU page when the domain is destroyed

2019-11-28 Thread Durrant, Paul
> -Original Message-
> From: Julien Grall 
> Sent: 27 November 2019 19:42
> To: Durrant, Paul ; xen-devel@lists.xenproject.org
> Cc: Jan Beulich ; Andrew Cooper
> ; Wei Liu ; Roger Pau Monné
> ; Jun Nakajima ; Kevin Tian
> 
> Subject: Re: [PATCH v2] xen/x86: vpmu: Unmap per-vCPU PMU page when the
> domain is destroyed
> 
> Hi Paul,
> 
> On 27/11/2019 12:00, Paul Durrant wrote:
> > From: Julien Grall 
> >
> > A guest will setup a shared page with the hypervisor for each vCPU via
> > XENPMU_init. The page will then get mapped in the hypervisor and only
> > released when XENPMU_finish is called.
> >
> > This means that if the guest fails to invoke XENPMU_finish, e.g. if it
> > is destroyed rather than cleanly shut down, the page will stay mapped
> > in the hypervisor. One of the consequences is the domain can never be
> > fully destroyed as a page reference is still held.
> >
> > As Xen should never rely on the guest to correctly clean-up any
> > allocation in the hypervisor, we should also unmap such pages during the
> > domain destruction if there are any left.
> >
> > We can re-use the same logic as in pvpmu_finish(). To avoid
> > duplication, move the logic in a new function that can also be called
> > from vpmu_destroy().
> >
> > NOTE: The call to vpmu_destroy() must also be moved from
> >    arch_vcpu_destroy() into domain_relinquish_resources() such that
> >    the reference on the mapped page does not prevent domain_destroy()
> >    (which calls arch_vcpu_destroy()) from being called.
> >    Also, whilst it appears that vpmu_arch_destroy() is idempotent it
> >    is by no means obvious. Hence move manipulation of the
> >    VPMU_CONTEXT_ALLOCATED flag out of implementation specific code
> >    and make sure it is cleared at the end of vpmu_arch_destroy().
> 
> If you resend the patch, it might be worth adding a line about the lack
> of an XSA. Something like:
> 
> There is no associated XSA because vPMU is not security supported (see
> XSA-163).

Sure, I'll add another note.

  Paul

> 
> Cheers,
___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v2] xen/x86: vpmu: Unmap per-vCPU PMU page when the domain is destroyed

2019-11-27 Thread Julien Grall

Hi Paul,

On 27/11/2019 12:00, Paul Durrant wrote:

From: Julien Grall 

A guest will setup a shared page with the hypervisor for each vCPU via
XENPMU_init. The page will then get mapped in the hypervisor and only
released when XENPMU_finish is called.

This means that if the guest fails to invoke XENPMU_finish, e.g. if it is
destroyed rather than cleanly shut down, the page will stay mapped in the
hypervisor. One of the consequences is the domain can never be fully
destroyed as a page reference is still held.

As Xen should never rely on the guest to correctly clean-up any
allocation in the hypervisor, we should also unmap such pages during the
domain destruction if there are any left.

We can re-use the same logic as in pvpmu_finish(). To avoid
duplication, move the logic in a new function that can also be called
from vpmu_destroy().

NOTE: The call to vpmu_destroy() must also be moved from
   arch_vcpu_destroy() into domain_relinquish_resources() such that the
   reference on the mapped page does not prevent domain_destroy() (which
   calls arch_vcpu_destroy()) from being called.
   Also, whilst it appears that vpmu_arch_destroy() is idempotent it is
   by no means obvious. Hence move manipulation of the
   VPMU_CONTEXT_ALLOCATED flag out of implementation specific code and
   make sure it is cleared at the end of vpmu_arch_destroy().


If you resend the patch, it might be worth adding a line about the lack
of an XSA. Something like:


There is no associated XSA because vPMU is not security supported (see
XSA-163).


Cheers,


Re: [Xen-devel] [PATCH v2] xen/x86: vpmu: Unmap per-vCPU PMU page when the domain is destroyed

2019-11-27 Thread Durrant, Paul
> -Original Message-
> From: Boris Ostrovsky 
> Sent: 27 November 2019 16:32
> To: Jan Beulich ; Durrant, Paul 
> Cc: Grall, Julien ; Andrew Cooper
> ; Roger Pau Monné ; Jun
> Nakajima ; Kevin Tian ; Wei
> Liu ; xen-devel@lists.xenproject.org
> Subject: Re: [PATCH v2] xen/x86: vpmu: Unmap per-vCPU PMU page when the
> domain is destroyed
> 
> On 11/27/19 10:44 AM, Jan Beulich wrote:
> > On 27.11.2019 13:00, Paul Durrant wrote:
> >> --- a/xen/arch/x86/cpu/vpmu.c
> >> +++ b/xen/arch/x86/cpu/vpmu.c
> >> @@ -479,6 +479,8 @@ static int vpmu_arch_initialise(struct vcpu *v)
> >>
> >>  if ( ret )
> >>  printk(XENLOG_G_WARNING "VPMU: Initialization failed for %pv\n", v);
> >> +else
> >> +vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
> 
> That won't work I think.
> 
> On Intel the context is allocated lazily for HVM/PVH guests during the
> first MSR access. For example:
> 
> core2_vpmu_do_wrmsr() ->
>     core2_vpmu_msr_common_check():
>         if ( unlikely(!vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED)) &&
>              !core2_vpmu_alloc_resource(current) )
>             return 0;
> 
> For PV guests the context *is* allocated from vmx_vpmu_initialise().
> 
> I don't remember why only PV does eager allocation but I think doing it
> for all guests would make code much simpler and then this patch will be
> correct.
> 

Ok. It's simpler if I leave setting the flag in the implementation code. I
think clearing it in vpmu_arch_destroy() would still be correct in all cases.

  Paul

> -boris
> 
> 
> >>
> >>  return ret;
> >>  }
> >> @@ -576,11 +578,36 @@ static void vpmu_arch_destroy(struct vcpu *v)
> >>
> >>   vpmu->arch_vpmu_ops->arch_vpmu_destroy(v);
> >>  }
> >> +
> >> +vpmu_reset(vpmu, VPMU_CONTEXT_ALLOCATED);
> >>  }
> > Boris,
> >
> > I'd like to ask that you comment on this part of the change at
> > least, as I seem to vaguely recall that things were intentionally
> > not done this way originally.
> >
> > Paul,
> >
> > everything else looks good to me now.
> >
> > Jan


Re: [Xen-devel] [PATCH v2] xen/x86: vpmu: Unmap per-vCPU PMU page when the domain is destroyed

2019-11-27 Thread Boris Ostrovsky
On 11/27/19 10:44 AM, Jan Beulich wrote:
> On 27.11.2019 13:00, Paul Durrant wrote:
>> --- a/xen/arch/x86/cpu/vpmu.c
>> +++ b/xen/arch/x86/cpu/vpmu.c
>> @@ -479,6 +479,8 @@ static int vpmu_arch_initialise(struct vcpu *v)
>>  
>>  if ( ret )
>>  printk(XENLOG_G_WARNING "VPMU: Initialization failed for %pv\n", v);
>> +else
>> +vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);

That won't work I think.

On Intel the context is allocated lazily for HVM/PVH guests during the
first MSR access. For example:

core2_vpmu_do_wrmsr() ->
    core2_vpmu_msr_common_check():
        if ( unlikely(!vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED)) &&
             !core2_vpmu_alloc_resource(current) )
            return 0;

For PV guests the context *is* allocated from vmx_vpmu_initialise().

I don't remember why only PV does eager allocation but I think doing it
for all guests would make code much simpler and then this patch will be
correct.

-boris


>>  
>>  return ret;
>>  }
>> @@ -576,11 +578,36 @@ static void vpmu_arch_destroy(struct vcpu *v)
>>  
>>   vpmu->arch_vpmu_ops->arch_vpmu_destroy(v);
>>  }
>> +
>> +vpmu_reset(vpmu, VPMU_CONTEXT_ALLOCATED);
>>  }
> Boris,
>
> I'd like to ask that you comment on this part of the change at
> least, as I seem to vaguely recall that things were intentionally
> not done this way originally.
>
> Paul,
>
> everything else looks good to me now.
>
> Jan



Re: [Xen-devel] [PATCH v2] xen/x86: vpmu: Unmap per-vCPU PMU page when the domain is destroyed

2019-11-27 Thread Jan Beulich
On 27.11.2019 13:00, Paul Durrant wrote:
> --- a/xen/arch/x86/cpu/vpmu.c
> +++ b/xen/arch/x86/cpu/vpmu.c
> @@ -479,6 +479,8 @@ static int vpmu_arch_initialise(struct vcpu *v)
>  
>  if ( ret )
>  printk(XENLOG_G_WARNING "VPMU: Initialization failed for %pv\n", v);
> +else
> +vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
>  
>  return ret;
>  }
> @@ -576,11 +578,36 @@ static void vpmu_arch_destroy(struct vcpu *v)
>  
>   vpmu->arch_vpmu_ops->arch_vpmu_destroy(v);
>  }
> +
> +vpmu_reset(vpmu, VPMU_CONTEXT_ALLOCATED);
>  }

Boris,

I'd like to ask that you comment on this part of the change at
least, as I seem to vaguely recall that things were intentionally
not done this way originally.

Paul,

everything else looks god to me now.

Jan
