Re: [Xen-devel] [PATCH] x86: don't deliver NMI to PVH Dom0

2014-12-11 Thread Andrew Cooper
On 11/12/14 10:47, Jan Beulich wrote:
 ... for the time being: The mechanism used depends on the domain's use
 of the IRET hypercall.

 Also drop two bogus code lines spotted while going through the involved
 code paths: Addresses of per-CPU variables can't possibly be NULL, and
 the setting of st->vcpu in send_guest_trap()'s MCE case is redundant
 with an earlier cmpxchgptr().

 Signed-off-by: Jan Beulich <jbeul...@suse.com>

Reviewed-by: Andrew Cooper <andrew.coop...@citrix.com>


 --- a/xen/arch/x86/traps.c
 +++ b/xen/arch/x86/traps.c
 @@ -3168,7 +3168,6 @@ static void nmi_mce_softirq(void)
  int cpu = smp_processor_id();
  struct softirq_trap *st = per_cpu(softirq_trap, cpu);
  
 -BUG_ON(st == NULL);
  BUG_ON(st->vcpu == NULL);
  
  /* Set the tmp value unconditionally, so that
 @@ -3233,7 +3232,7 @@ static void nmi_hwdom_report(unsigned in
  {
  struct domain *d = hardware_domain;
  
 -if ( (d == NULL) || (d->vcpu == NULL) || (d->vcpu[0] == NULL) )
 +if ( !d || !d->vcpu || !d->vcpu[0] || !is_pv_domain(d) /* PVH fixme */ )
  return;
  
  set_bit(reason_idx, nmi_reason(d));
 @@ -3674,7 +3673,6 @@ int send_guest_trap(struct domain *d, ui
  
  if ( !test_and_set_bool(v->mce_pending) ) {
  st->domain = d;
 -st->vcpu = v;
  st->processor = v->processor;
  
  /* not safe to wake up a vcpu here */
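
For readers skimming the diff, here is a tiny self-contained model (illustrative only, not Xen code; all names are made up) of what the nmi_hwdom_report() hunk changes: with the added guard, a hardware domain that is not plain PV (i.e. PVH, for now) is skipped instead of having an NMI reason bit latched for vcpu0.

#include <stdbool.h>
#include <stdio.h>

struct toy_domain {
    bool is_pv;                    /* stand-in for is_pv_domain(d) */
    unsigned long nmi_reason;      /* stand-in for nmi_reason(d)   */
};

/* Models the guarded nmi_hwdom_report() path. */
static void toy_nmi_hwdom_report(struct toy_domain *d, unsigned int reason_idx)
{
    if ( !d || !d->is_pv )         /* the new "PVH fixme" bail-out */
        return;
    d->nmi_reason |= 1UL << reason_idx;
}

int main(void)
{
    struct toy_domain pv_dom0  = { .is_pv = true  };
    struct toy_domain pvh_dom0 = { .is_pv = false };

    toy_nmi_hwdom_report(&pv_dom0, 0);
    toy_nmi_hwdom_report(&pvh_dom0, 0);
    printf("pv: %#lx, pvh: %#lx\n", pv_dom0.nmi_reason, pvh_dom0.nmi_reason);
    return 0;
}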





Re: [Xen-devel] [PATCH] x86: don't deliver NMI to PVH Dom0

2014-12-11 Thread Tim Deegan
At 10:47 +0100 on 11 Dec (1418291241), Jan Beulich wrote:
 ... for the time being: The mechanism used depends on the domain's use
 of the IRET hypercall.

Can you elaborate?  Looking at the existing code it seems like what it
does is set v->nmi_pending and make sure the vcpu gets scheduled
appropriately.

The HVM code will deliver an NMI if it sees v->nmi_pending (which is
what vlapic_accept_irq does, for example).  Is there some other piece
I'm missing?

Cheers,

Tim.
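
For context, a minimal self-contained model (illustrative only, not the Xen sources; the toy_* names are made up) of the two halves described above: a producer that latches nmi_pending and kicks the vcpu so it gets scheduled, roughly the pattern of vlapic_accept_irq's NMI case, and an injection path that consumes the flag when the vcpu next runs.

#include <stdbool.h>
#include <stdio.h>

struct toy_vcpu {
    bool nmi_pending;
    bool kicked;                   /* stands in for vcpu_kick()/scheduling */
};

/* Producer: latch the NMI and make sure the vcpu gets scheduled. */
static void toy_accept_nmi(struct toy_vcpu *v)
{
    if ( !v->nmi_pending )
    {
        v->nmi_pending = true;
        v->kicked = true;
    }
}

/* Consumer: the injection path checks the flag before (re)entering the guest. */
static bool toy_inject_pending_nmi(struct toy_vcpu *v)
{
    if ( !v->nmi_pending )
        return false;
    v->nmi_pending = false;
    return true;                   /* "NMI delivered" */
}

int main(void)
{
    struct toy_vcpu v = { 0 };

    toy_accept_nmi(&v);
    if ( v.kicked && toy_inject_pending_nmi(&v) )
        printf("NMI delivered to the vcpu\n");
    return 0;
}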



Re: [Xen-devel] [PATCH] x86: don't deliver NMI to PVH Dom0

2014-12-11 Thread Jan Beulich
 On 11.12.14 at 14:00, t...@xen.org wrote:
 At 10:47 +0100 on 11 Dec (1418291241), Jan Beulich wrote:
 ... for the time being: The mechanism used depends on the domain's use
 of the IRET hypercall.
 
 Can you elaborate?  Looking at the existing code it seems like what it
 does is set v->nmi_pending and make sure the vcpu gets scheduled
 appropriately.
 
 The HVM code will deliver an NMI if it sees v->nmi_pending (which is
 what vlapic_accept_irq does, for example).  Is there some other piece
 I'm missing?

Just so that others see the answer too: The problem is that the
temporary affinity adjustment gets undone in the HYPERVISOR_iret
handler, yet PVH can't call that hypercall.

Jan
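
A compact self-contained model of that flow (illustrative only, not Xen code; the toy_* names are made up): delivery temporarily pins the vcpu to the CPU that took the NMI, and the pin is only undone when the guest issues HYPERVISOR_iret, which a PVH guest never does.

#include <stdbool.h>
#include <stdio.h>

struct toy_vcpu {
    bool nmi_pending;
    int  affinity;                 /* current (temporarily pinned) CPU */
    int  saved_affinity;           /* affinity to restore on "iret"    */
};

/* Roughly what send_guest_trap(TRAP_nmi) plus the softirq handler do. */
static void toy_deliver_nmi(struct toy_vcpu *v, int cpu)
{
    if ( !v->nmi_pending )
    {
        v->nmi_pending = true;
        v->saved_affinity = v->affinity;
        v->affinity = cpu;         /* temporary pin */
    }
}

/* Roughly the cleanup done from the HYPERVISOR_iret handler. */
static void toy_iret(struct toy_vcpu *v)
{
    v->nmi_pending = false;
    v->affinity = v->saved_affinity;   /* undo the temporary pin */
}

int main(void)
{
    struct toy_vcpu v = { .affinity = 3, .saved_affinity = 3 };

    toy_deliver_nmi(&v, 0);
    printf("after delivery: pinned to CPU %d\n", v.affinity);

    /* A PV guest gets here via HYPERVISOR_iret; a PVH guest never would,
     * so the pin would stay in place -- hence the patch above. */
    toy_iret(&v);
    printf("after iret:     back on CPU %d\n", v.affinity);
    return 0;
}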




Re: [Xen-devel] [PATCH] x86: don't deliver NMI to PVH Dom0

2014-12-11 Thread Tim Deegan
At 13:09 +0100 on 11 Dec (1418299748), Jan Beulich wrote:
  On 11.12.14 at 14:00, t...@xen.org wrote:
  At 10:47 +0100 on 11 Dec (1418291241), Jan Beulich wrote:
  ... for the time being: The mechanism used depends on the domain's use
  of the IRET hypercall.
  
  Can you elaborate?  Looking at the existing code it seems like what it
  does is set v->nmi_pending and make sure the vcpu gets scheduled
  appropriately.
  
  The HVM code will deliver an NMI if it sees v->nmi_pending (which is
  what vlapic_accept_irq does, for example).  Is there some other piece
  I'm missing?
 
 Just so that others see the answer too: The problem is that the
 temporary affinity adjustment gets undone in the HYPERVISOR_iret
 handler, yet PVH can't call that hypercall.

Thanks.  So to make this work for PVH we'd want to do something like
calling async_exception_cleanup() from vlapic_EOI_set()?

Tim.



Re: [Xen-devel] [PATCH] x86: don't deliver NMI to PVH Dom0

2014-12-11 Thread Jan Beulich
 On 11.12.14 at 14:16, t...@xen.org wrote:
 At 13:09 +0100 on 11 Dec (1418299748), Jan Beulich wrote:
  On 11.12.14 at 14:00, t...@xen.org wrote:
   At 10:47 +0100 on 11 Dec (1418291241), Jan Beulich wrote:
  ... for the time being: The mechanism used depends on the domain's use
  of the IRET hypercall.
  
  Can you elaborate?  Looking at the existing code it seems like what it
   does is set v->nmi_pending and make sure the vcpu gets scheduled
  appropriately.
  
   The HVM code will deliver an NMI if it sees v->nmi_pending (which is
  what vlapic_accept_irq does, for example).  Is there some other piece
  I'm missing?
 
 Just so that others see the answer too: The problem is that the
 temporary affinity adjustment gets undone in the HYPERVISOR_iret
 handler, yet PVH can't call that hypercall.
 
 Thanks.  So to make this work for PVH we'd want to do something like
 calling async_exception_cleanup() from vlapic_EOI_set()?

No, that's the wrong place - there's no EOI involved in ACK-ing an NMI.
Instead there ought to be (optional?) VM exits upon clearing of the
NMI masked state, and their handling code should presumably call that
function.

Jan
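
Sketching the suggestion as a self-contained toy (purely illustrative: nothing here is existing Xen code, and all names are made up): the cleanup PV guests get from the HYPERVISOR_iret handler would instead run from a VM exit taken when a PVH vcpu leaves its NMI-blocked state, e.g. via VMX-style NMI-window exiting.

#include <stdbool.h>
#include <stdio.h>

struct toy_vcpu {
    bool nmi_masked;               /* guest still inside its NMI handler      */
    bool window_exiting;           /* request an exit once NMIs are unmasked  */
    int  affinity, saved_affinity;
};

/* What the hypervisor would do on the "NMI unmasked" VM exit:
 * the async_exception_cleanup()-like part restores the affinity. */
static void toy_nmi_window_exit(struct toy_vcpu *v)
{
    v->window_exiting = false;
    v->affinity = v->saved_affinity;
}

int main(void)
{
    struct toy_vcpu v = {
        .nmi_masked = true, .window_exiting = true,
        .affinity = 0, .saved_affinity = 3,
    };

    /* Guest executes its (hardware) iret, clearing NMI blocking ... */
    v.nmi_masked = false;
    /* ... which, with window exiting enabled, traps to the hypervisor. */
    if ( !v.nmi_masked && v.window_exiting )
        toy_nmi_window_exit(&v);

    printf("vcpu back on CPU %d\n", v.affinity);
    return 0;
}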

