> -----Original Message-----
> From: Jan Beulich [mailto:jbeul...@suse.com]
> Sent: 14 February 2019 13:19
> To: Paul Durrant <paul.durr...@citrix.com>
> Cc: Andrew Cooper <andrew.coop...@citrix.com>; Roger Pau Monne
> <roger....@citrix.com>; Wei Liu <wei.l...@citrix.com>; xen-devel
> <xen-de...@lists.xenproject.org>; Juergen Gross <jgr...@suse.com>
> Subject: Re: [PATCH v3] viridian: fix the HvFlushVirtualAddress/List
> hypercall implementation
> 
> >>> On 14.02.19 at 13:49, <paul.durr...@citrix.com> wrote:
> > --- a/xen/arch/x86/hvm/hvm.c
> > +++ b/xen/arch/x86/hvm/hvm.c
> > @@ -3964,26 +3964,28 @@ static void hvm_s3_resume(struct domain *d)
> >      }
> >  }
> >
> > -static int hvmop_flush_tlb_all(void)
> > +bool hvm_flush_vcpu_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
> > +                        void *ctxt)
> >  {
> > +    static DEFINE_PER_CPU(cpumask_t, flush_cpumask);
> > +    cpumask_t *mask = &this_cpu(flush_cpumask);
> >      struct domain *d = current->domain;
> >      struct vcpu *v;
> >
> > -    if ( !is_hvm_domain(d) )
> > -        return -EINVAL;
> > -
> >      /* Avoid deadlock if more than one vcpu tries this at the same time. */
> >      if ( !spin_trylock(&d->hypercall_deadlock_mutex) )
> > -        return -ERESTART;
> > +        return false;
> >
> >      /* Pause all other vcpus. */
> >      for_each_vcpu ( d, v )
> > -        if ( v != current )
> > +        if ( v != current && flush_vcpu(ctxt, v) )
> >              vcpu_pause_nosync(v);
> >
> > +    cpumask_clear(mask);
> 
> I'd prefer if this was pulled further down as well, in particular
> outside the locked region.

True, I should have done that in v2.
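
For reference, the end result would be along these lines (a sketch only;
the spin_unlock() context is not part of the quoted hunk, so its exact
position is taken from the surrounding function rather than the diff above):

    /* Pause all other vcpus. */
    for_each_vcpu ( d, v )
        if ( v != current && flush_vcpu(ctxt, v) )
            vcpu_pause_nosync(v);

    ...

    /* All selected vcpus are paused; safe to drop the lock now. */
    spin_unlock(&d->hypercall_deadlock_mutex);

    /*
     * The per-cpu scratch mask needs no serialization, so clearing it
     * can live outside the locked region.
     */
    cpumask_clear(mask);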

> With this, which is easy enough to do while committing,
> Reviewed-by: Jan Beulich <jbeul...@suse.com>
> 

Ok, thanks.
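
As an aside, the callback a caller passes in is just a per-vcpu
predicate, so the viridian side boils down to something like this
(illustrative sketch; the names below are not taken from the quoted
patch):

    static bool need_flush(void *ctxt, struct vcpu *v)
    {
        uint64_t vcpu_mask = *(uint64_t *)ctxt;

        /* Flush only the vcpus selected by the hypercall's input mask. */
        return vcpu_mask & (1ul << v->vcpu_id);
    }

    /* In the hypercall handler: back off and retry if the lock is taken. */
    if ( !hvm_flush_vcpu_tlb(need_flush, &input_params.vcpu_mask) )
        return HVM_HCALL_preempted;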

  Paul

> Cc-ing Jürgen in the hope of getting his R-a-b.
> 
> Jan
