Re: [Xen-devel] [PATCH v2 1/4] VMX: Properly handle pi when all the assigned devices are removed

2016-06-03 Thread Jan Beulich
>>> On 03.06.16 at 07:12,  wrote:
>> From: Jan Beulich [mailto:jbeul...@suse.com]
>> Sent: Tuesday, May 31, 2016 7:52 PM
>> >>> On 31.05.16 at 12:22,  wrote:
>> >> From: Jan Beulich [mailto:jbeul...@suse.com]
>> >> Sent: Friday, May 27, 2016 9:43 PM
>> >> >>> On 26.05.16 at 15:39,  wrote:
>> >> > --- a/xen/arch/x86/hvm/vmx/vmx.c
>> >> > +++ b/xen/arch/x86/hvm/vmx/vmx.c
>> >> > @@ -113,7 +113,19 @@ static void vmx_vcpu_block(struct vcpu *v)
>> >> >  spinlock_t *pi_blocking_list_lock =
>> >> >  &per_cpu(vmx_pi_blocking, v->processor).lock;
>> >> >  struct pi_desc *pi_desc = &v->arch.hvm_vmx.pi_desc;
>> >> >
>> >> > -spin_lock_irqsave(pi_blocking_list_lock, flags);
>> >> > +spin_lock_irqsave(&v->arch.hvm_vmx.pi_hotplug_lock, flags);
>> >> > +if ( unlikely(v->arch.hvm_vmx.pi_blocking_cleaned_up) )
>> >> > +{
>> >> > +/*
>> >> > + * The vcpu is to be destroyed and it has already been removed
>> >> > + * from the per-CPU list if it is blocking, we shouldn't add
>> >> > + * new vCPU to the list.
>> >> > + */
>> >> > +spin_unlock_irqrestore(&v->arch.hvm_vmx.pi_hotplug_lock, flags);
>> >> > +return;
>> >> > +}
>> >> > +
>> >> > +spin_lock(pi_blocking_list_lock);
>> >>
>> >> There doesn't appear to be an active problem with this, but
>> >> taking a global lock inside a per-vCPU one feels bad. Both here
>> >> and in vmx_pi_blocking_cleanup() I can't see why the global
>> >> lock alone won't do - you acquire it anyway.
>> >
>> > The purpose of 'v->arch.hvm_vmx.pi_hotplug_lock' is to make
>> > sure things go right when vmx_pi_blocking_cleanup() and
>> > vmx_vcpu_block() are running concurrently. However, the lock
>> > 'pi_blocking_list_lock' in vmx_pi_blocking_cleanup() refers to
>> > 'v->arch.hvm_vmx.pi_blocking.lock', while 'pi_blocking_list_lock'
>> > in vmx_vcpu_block() is '&per_cpu(vmx_pi_blocking, v->processor).lock'.
>> > These two can be different; consider the following scenario:
>> >
>> > vmx_pi_blocking_cleanup() gets called while the vCPU is in blocking
>> > state and 'v->arch.hvm_vmx.pi_blocking.lock' points to the per-cpu
>> > lock of pCPU0. When acquiring that lock in this function, another
>> > path wins (such as vmx_pi_do_resume() or pi_wakeup_interrupt()),
>> > so we spin in vmx_pi_blocking_cleanup(). After the vCPU is woken
>> > up, it runs on pCPU1; some time later it blocks again and enters
>> > vmx_vcpu_block() while the previous vmx_pi_blocking_cleanup() still
>> > hasn't finished its job. We then acquire a different lock (pCPU1's
>> > this time) in vmx_vcpu_block() than the one held in
>> > vmx_pi_blocking_cleanup(). This can break things, since we might
>> > still put the vCPU on the per-cpu blocking list after the vCPU is
>> > destroyed or the last assigned device is detached.
>> 
>> Makes sense. Then the next question is whether you really need
>> to hold both locks at the same time.
> 
> I think we need to hold both. The two spinlocks have different purposes:
> 'v->arch.hvm_vmx.pi_hotplug_lock' protects against races while a device
> is being hot-unplugged or the domain is going down, while the other one
> protects normal accesses to the blocking list.

I understand they have different purposes, but that doesn't mean
they need to be held at the same time. In particular (looking back
at the full patch), the newly added lock frames not only the newly
added code, but also the code inside the other locked region. The
natural thing to expect would be that you need one lock for that
new code, and the other for the pre-existing one. I.e. the function
would still acquire both locks, but not one inside the other.
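
Roughly this shape, as a standalone sketch (pthread mutexes stand in
for Xen's spinlocks, all names are made up, and whether the window
between the two critical sections is harmless is exactly what would
need auditing):

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t hotplug_lock = PTHREAD_MUTEX_INITIALIZER; /* per-vCPU */
static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;    /* per-pCPU */
static bool cleaned_up;

static void block_vcpu(void)
{
    bool bail;

    /* Newly added check: under the per-vCPU lock only. */
    pthread_mutex_lock(&hotplug_lock);
    bail = cleaned_up;
    pthread_mutex_unlock(&hotplug_lock);
    if ( bail )
        return;

    /* Pre-existing list manipulation: under the list lock only,
     * not nested inside the per-vCPU lock. */
    pthread_mutex_lock(&list_lock);
    /* ... add the vCPU to the per-cpu blocking list ... */
    pthread_mutex_unlock(&list_lock);
}

int main(void)
{
    block_vcpu();
    return 0;
}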

>> Please understand that I'd
>> really prefer for this to have a simpler solution - the original code
>> was already more complicated than one would really think it should
>> be, and now it's getting worse. While I can't immediately suggest
>> alternatives, it feels like the solution to the current problem may
>> rather be simplification instead of making things even more
>> complicated.
> 
> To be honest, comments like this make one frustrated. If you have
> any other better solutions, we can discuss it here. However, just
> saying the current solution is too complicated is not a good way
> to speed up the process.

I can understand your frustration. But please understand that I'm
also getting frustrated - by more and more difficult-to-maintain code
getting added. Please understand that addressing the immediate
problem is only one aspect; another is the long term maintainability
of whatever gets added for that purpose. But see below.

It is a more general observation of mine (also, just fyi, in my own
code) that if complicated things need even more complicated fixes,
then quite often something is wrong with the original complicated
arrangements / design.

>> >> >  void vmx_pi_hooks_deassign(struct domain *d)
>> >> >  {
>> >> > +struct vcpu *v;
>> 

Re: [Xen-devel] [PATCH v2 1/4] VMX: Properly handle pi when all the assigned devices are removed

2016-06-02 Thread Wu, Feng


> -Original Message-
> From: Jan Beulich [mailto:jbeul...@suse.com]
> Sent: Tuesday, May 31, 2016 7:52 PM
> To: Wu, Feng 
> Cc: andrew.coop...@citrix.com; dario.faggi...@citrix.com;
> george.dun...@eu.citrix.com; Tian, Kevin ; xen-
> de...@lists.xen.org; konrad.w...@oracle.com; k...@xen.org
> Subject: RE: [PATCH v2 1/4] VMX: Properly handle pi when all the assigned
> devices are removed
> 
> >>> On 31.05.16 at 12:22,  wrote:
> >> From: Jan Beulich [mailto:jbeul...@suse.com]
> >> Sent: Friday, May 27, 2016 9:43 PM
> >> >>> On 26.05.16 at 15:39,  wrote:
> >> > --- a/xen/arch/x86/hvm/vmx/vmx.c
> >> > +++ b/xen/arch/x86/hvm/vmx/vmx.c
> >> > @@ -113,7 +113,19 @@ static void vmx_vcpu_block(struct vcpu *v)
> >> >  spinlock_t *pi_blocking_list_lock =
> >> >  &per_cpu(vmx_pi_blocking, v->processor).lock;
> >> >  struct pi_desc *pi_desc = &v->arch.hvm_vmx.pi_desc;
> >> >
> >> > -spin_lock_irqsave(pi_blocking_list_lock, flags);
> >> > +spin_lock_irqsave(&v->arch.hvm_vmx.pi_hotplug_lock, flags);
> >> > +if ( unlikely(v->arch.hvm_vmx.pi_blocking_cleaned_up) )
> >> > +{
> >> > +/*
> >> > + * The vcpu is to be destroyed and it has already been removed
> >> > + * from the per-CPU list if it is blocking, we shouldn't add
> >> > + * new vCPU to the list.
> >> > + */
> >>
> >> This comment appears to be stale wrt the implementation further
> >> down. But see there for whether it's the comment or the code
> >> that need to change.
> >>
> >> > +spin_unlock_irqrestore(&v->arch.hvm_vmx.pi_hotplug_lock, flags);
> >> > +return;
> >> > +}
> >> > +
> >> > +spin_lock(pi_blocking_list_lock);
> >>
> >> There doesn't appear to be an active problem with this, but
> >> taking a global lock inside a per-vCPU one feels bad. Both here
> >> and in vmx_pi_blocking_cleanup() I can't see why the global
> >> lock alone won't do - you acquire it anyway.
> >
> > The purpose of 'v->arch.hvm_vmx.pi_hotplug_lock' is to make
> > sure things go right when vmx_pi_blocking_cleanup() and
> > vmx_vcpu_block() are running concurrently. However, the lock
> > 'pi_blocking_list_lock' in vmx_pi_blocking_cleanup() refers to
> > 'v->arch.hvm_vmx.pi_blocking.lock', while 'pi_blocking_list_lock'
> > in vmx_vcpu_block() is '&per_cpu(vmx_pi_blocking, v->processor).lock'.
> > These two can be different; consider the following scenario:
> >
> > vmx_pi_blocking_cleanup() gets called while the vCPU is in blocking
> > state and 'v->arch.hvm_vmx.pi_blocking.lock' points to the per-cpu
> > lock of pCPU0. When acquiring that lock in this function, another
> > path wins (such as vmx_pi_do_resume() or pi_wakeup_interrupt()),
> > so we spin in vmx_pi_blocking_cleanup(). After the vCPU is woken
> > up, it runs on pCPU1; some time later it blocks again and enters
> > vmx_vcpu_block() while the previous vmx_pi_blocking_cleanup() still
> > hasn't finished its job. We then acquire a different lock (pCPU1's
> > this time) in vmx_vcpu_block() than the one held in
> > vmx_pi_blocking_cleanup(). This can break things, since we might
> > still put the vCPU on the per-cpu blocking list after the vCPU is
> > destroyed or the last assigned device is detached.
> 
> Makes sense. Then the next question is whether you really need
> to hold both locks at the same time.

I think we need to hold both. The two spinlocks have different purposes:
'v->arch.hvm_vmx.pi_hotplug_lock' protects against races while a device
is being hot-unplugged or the domain is going down, while the other one
protects normal accesses to the blocking list.

> Please understand that I'd
> really prefer for this to have a simpler solution - the original code
> was already more complicated than one would really think it should
> be, and now it's getting worse. While I can't immediately suggest
> alternatives, it feels like the solution to the current problem may
> rather be simplification instead of making things even more
> complicated.

To be honest, comments like this make one frustrated. If you have
any other better solutions, we can discuss it here. However, just
saying the current solution is too complicated is not a good way
to speed up the process.

> 
> >> >  void vmx_pi_hooks_deassign(struct domain *d)
> >> >  {
> >> > +struct vcpu *v;
> >> > +
> >> >  if ( !iommu_intpost || !has_hvm_container_domain(d) )
> >> >  return;
> >> >
> >> > -ASSERT(d->arch.hvm_domain.vmx.vcpu_block);
> >> > -
> >> > -d->arch.hvm_domain.vmx.vcpu_block = NULL;
> >> > -d->arch.hvm_domain.vmx.pi_switch_from = NULL;
> >> > -d->arch.hvm_domain.vmx.pi_switch_to = NULL;
> >> > -d->arch.hvm_domain.vmx.pi_do_resume = NULL;
> >> > +for_each_vcpu ( d, v )
> >> > +vmx_pi_blocking_cleanup(v);
> >>
> >> If you keep the hooks in place, why is it relevant to do the cleanup
> >> here instead of just at domain death? As suggested before, if you
> 

Re: [Xen-devel] [PATCH v2 1/4] VMX: Properly handle pi when all the assigned devices are removed

2016-05-31 Thread Jan Beulich
>>> On 31.05.16 at 12:22,  wrote:
>> From: Jan Beulich [mailto:jbeul...@suse.com]
>> Sent: Friday, May 27, 2016 9:43 PM
>> >>> On 26.05.16 at 15:39,  wrote:
>> > --- a/xen/arch/x86/hvm/vmx/vmx.c
>> > +++ b/xen/arch/x86/hvm/vmx/vmx.c
>> > @@ -113,7 +113,19 @@ static void vmx_vcpu_block(struct vcpu *v)
>> >  spinlock_t *pi_blocking_list_lock =
>> >  &per_cpu(vmx_pi_blocking, v->processor).lock;
>> >  struct pi_desc *pi_desc = &v->arch.hvm_vmx.pi_desc;
>> >
>> > -spin_lock_irqsave(pi_blocking_list_lock, flags);
>> > +spin_lock_irqsave(&v->arch.hvm_vmx.pi_hotplug_lock, flags);
>> > +if ( unlikely(v->arch.hvm_vmx.pi_blocking_cleaned_up) )
>> > +{
>> > +/*
>> > + * The vcpu is to be destroyed and it has already been removed
>> > + * from the per-CPU list if it is blocking, we shouldn't add
>> > + * new vCPU to the list.
>> > + */
>> 
>> This comment appears to be stale wrt the implementation further
>> down. But see there for whether it's the comment or the code
>> that need to change.
>> 
>> > +spin_unlock_irqrestore(&v->arch.hvm_vmx.pi_hotplug_lock, flags);
>> > +return;
>> > +}
>> > +
>> > +spin_lock(pi_blocking_list_lock);
>> 
>> There doesn't appear to be an active problem with this, but
>> taking a global lock inside a per-vCPU one feels bad. Both here
>> and in vmx_pi_blocking_cleanup() I can't see why the global
>> lock alone won't do - you acquire it anyway.
> 
> The purpose of 'v->arch.hvm_vmx.pi_hotplug_lock' is to make
> sure things go right when vmx_pi_blocking_cleanup() and
> vmx_vcpu_block() are running concurrently. However, the lock
> 'pi_blocking_list_lock' in vmx_pi_blocking_cleanup() refers to
> 'v->arch.hvm_vmx.pi_blocking.lock', while 'pi_blocking_list_lock'
> in vmx_vcpu_block() is '&per_cpu(vmx_pi_blocking, v->processor).lock'.
> These two can be different; consider the following scenario:
> 
> vmx_pi_blocking_cleanup() gets called while the vCPU is in blocking
> state and 'v->arch.hvm_vmx.pi_blocking.lock' points to the per-cpu
> lock of pCPU0. When acquiring that lock in this function, another
> path wins (such as vmx_pi_do_resume() or pi_wakeup_interrupt()),
> so we spin in vmx_pi_blocking_cleanup(). After the vCPU is woken
> up, it runs on pCPU1; some time later it blocks again and enters
> vmx_vcpu_block() while the previous vmx_pi_blocking_cleanup() still
> hasn't finished its job. We then acquire a different lock (pCPU1's
> this time) in vmx_vcpu_block() than the one held in
> vmx_pi_blocking_cleanup(). This can break things, since we might
> still put the vCPU on the per-cpu blocking list after the vCPU is
> destroyed or the last assigned device is detached.

Makes sense. Then the next question is whether you really need
to hold both locks at the same time. Please understand that I'd
really prefer for this to have a simpler solution - the original code
was already more complicated than one would really think it should
be, and now it's getting worse. While I can't immediately suggest
alternatives, it feels like the solution to the current problem may
rather be simplification instead of making things even more
complicated.

>> >  void vmx_pi_hooks_deassign(struct domain *d)
>> >  {
>> > +struct vcpu *v;
>> > +
>> >  if ( !iommu_intpost || !has_hvm_container_domain(d) )
>> >  return;
>> >
>> > -ASSERT(d->arch.hvm_domain.vmx.vcpu_block);
>> > -
>> > -d->arch.hvm_domain.vmx.vcpu_block = NULL;
>> > -d->arch.hvm_domain.vmx.pi_switch_from = NULL;
>> > -d->arch.hvm_domain.vmx.pi_switch_to = NULL;
>> > -d->arch.hvm_domain.vmx.pi_do_resume = NULL;
>> > +for_each_vcpu ( d, v )
>> > +vmx_pi_blocking_cleanup(v);
>> 
>> If you keep the hooks in place, why is it relevant to do the cleanup
>> here instead of just at domain death? As suggested before, if you
>> did it there, you'd likely get away without adding the new per-vCPU
>> flag.
> 
> I don't quite understand this. Adding the cleanup here is to handle
> the corner case when the last assigned device is detached from the
> domain.

There's nothing to be cleaned up really if that de-assign isn't a
result of the domain being brought down.

> Why do you think we don't need the per-vCPU flag? We need to
> set it here to indicate that the vCPU is cleaned up, and in
> vmx_vcpu_block(), we cannot put vCPUs with this flag set on the
> per-cpu blocking list. Do I miss something?

When cleanup is done only at domain destruction time, there's
per-domain state already that can be checked instead of this
per-vCPU flag.
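
For instance, with per-domain state along these lines (a minimal
sketch with mock types; that an 'is_dying'-style field is the right
thing to key off is an assumption of the sketch, not something this
thread has settled):

#include <stdbool.h>

/* Mock types; the real ones live in the Xen headers. */
struct domain { bool is_dying; };
struct vcpu   { struct domain *domain; };

static void vcpu_block(struct vcpu *v)
{
    /*
     * Domain is being destroyed, so the one-time blocking-list
     * cleanup has run (or is about to): never re-add this vCPU
     * to the per-cpu blocking list.
     */
    if ( v->domain->is_dying )
        return;

    /* ... pre-existing blocking-list insertion ... */
}

int main(void)
{
    struct domain d = { .is_dying = true };
    struct vcpu v = { .domain = &d };

    vcpu_block(&v);    /* bails without touching any list */
    return 0;
}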

>> Furthermore, if things remain the way they are now, a per-domain
>> flag would suffice.
> 
> vmx_pi_blocking_cleanup() is called in vmx_cpu_destroy(), which can
> be called during domain destruction, or from vcpu_initialise() if it
> meets some errors. For the latter case, if we use a per-domain flag
> and set it in vmx_pi_blocking_cleanup(), that should be problematic,
> since it will
> 

Re: [Xen-devel] [PATCH v2 1/4] VMX: Properly handle pi when all the assigned devices are removed

2016-05-31 Thread Wu, Feng


> -Original Message-
> From: Jan Beulich [mailto:jbeul...@suse.com]
> Sent: Friday, May 27, 2016 9:43 PM
> To: Wu, Feng 
> Cc: andrew.coop...@citrix.com; dario.faggi...@citrix.com;
> george.dun...@eu.citrix.com; Tian, Kevin ; xen-
> de...@lists.xen.org; konrad.w...@oracle.com; k...@xen.org
> Subject: Re: [PATCH v2 1/4] VMX: Properly handle pi when all the assigned
> devices are removed
> 
> >>> On 26.05.16 at 15:39,  wrote:
> > --- a/xen/arch/x86/hvm/vmx/vmx.c
> > +++ b/xen/arch/x86/hvm/vmx/vmx.c
> > @@ -113,7 +113,19 @@ static void vmx_vcpu_block(struct vcpu *v)
> >  spinlock_t *pi_blocking_list_lock =
> >  &per_cpu(vmx_pi_blocking, v->processor).lock;
> >  struct pi_desc *pi_desc = &v->arch.hvm_vmx.pi_desc;
> >
> > -spin_lock_irqsave(pi_blocking_list_lock, flags);
> > +spin_lock_irqsave(&v->arch.hvm_vmx.pi_hotplug_lock, flags);
> > +if ( unlikely(v->arch.hvm_vmx.pi_blocking_cleaned_up) )
> > +{
> > +/*
> > + * The vcpu is to be destroyed and it has already been removed
> > + * from the per-CPU list if it is blocking, we shouldn't add
> > + * new vCPU to the list.
> > + */
> 
> This comment appears to be stale wrt the implementation further
> down. But see there for whether it's the comment or the code
> that need to change.
> 
> > +spin_unlock_irqrestore(&v->arch.hvm_vmx.pi_hotplug_lock, flags);
> > +return;
> > +}
> > +
> > +spin_lock(pi_blocking_list_lock);
> 
> There doesn't appear to be an active problem with this, but
> taking a global lock inside a per-vCPU one feels bad. Both here
> and in vmx_pi_blocking_cleanup() I can't see why the global
> lock alone won't do - you acquire it anyway.

The purpose of 'v->arch.hvm_vmx.pi_hotplug_lock' is to make
sure things go right when vmx_pi_blocking_cleanup() and
vmx_vcpu_block() are running concurrently. However, the lock
'pi_blocking_list_lock' in vmx_pi_blocking_cleanup() refers to
'v->arch.hvm_vmx.pi_blocking.lock', while 'pi_blocking_list_lock'
in vmx_vcpu_block() is '&per_cpu(vmx_pi_blocking, v->processor).lock'.
These two can be different; consider the following scenario:

vmx_pi_blocking_cleanup() gets called while the vCPU is in blocking
state and 'v->arch.hvm_vmx.pi_blocking.lock' points to the per-cpu
lock of pCPU0. When acquiring that lock in this function, another
path wins (such as vmx_pi_do_resume() or pi_wakeup_interrupt()),
so we spin in vmx_pi_blocking_cleanup(). After the vCPU is woken
up, it runs on pCPU1; some time later it blocks again and enters
vmx_vcpu_block() while the previous vmx_pi_blocking_cleanup() still
hasn't finished its job. We then acquire a different lock (pCPU1's
this time) in vmx_vcpu_block() than the one held in
vmx_pi_blocking_cleanup(). This can break things, since we might
still put the vCPU on the per-cpu blocking list after the vCPU is
destroyed or the last assigned device is detached.
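
To make the lock mismatch concrete, here is a standalone model
(plain C, not Xen code; the structures are mocked up purely for
illustration):

#include <stdio.h>

#define NR_CPUS 2

/* Mock per-cpu blocking-list locks; an int stands in for a spinlock. */
static int blocking_list_lock[NR_CPUS];

struct vcpu {
    unsigned int processor;   /* pCPU the vCPU currently runs on */
    int *recorded_lock;       /* mock of v->arch.hvm_vmx.pi_blocking.lock */
};

int main(void)
{
    struct vcpu v = { .processor = 0 };
    int *new_lock;

    /* vCPU blocks on pCPU0: vmx_vcpu_block() records pCPU0's lock. */
    v.recorded_lock = &blocking_list_lock[v.processor];

    /* Cleanup spins on the recorded (pCPU0) lock; meanwhile the vCPU
     * wakes up, migrates to pCPU1 and is about to block again. */
    v.processor = 1;

    /* vmx_vcpu_block() picks its lock by v->processor ... */
    new_lock = &blocking_list_lock[v.processor];

    printf("cleanup path lock %p, vmx_vcpu_block() lock %p: %s\n",
           (void *)v.recorded_lock, (void *)new_lock,
           v.recorded_lock == new_lock ? "same" : "different!");
    return 0;
}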

> 
> > +static void vmx_pi_blocking_cleanup(struct vcpu *v)
> > +{
> > +unsigned long flags;
> > +spinlock_t *pi_blocking_list_lock;
> > +
> > +if ( !iommu_intpost )
> > +return;
> 
> If the function is to remain to be called from just the body of a loop
> over all vCPU-s, please make that loop conditional upon iommu_intpost
> instead of checking it here (and bailing) for every vCPU.
> 
> > +spin_lock_irqsave(&v->arch.hvm_vmx.pi_hotplug_lock, flags);
> > +v->arch.hvm_vmx.pi_blocking_cleaned_up = 1;
> > +
> > +pi_blocking_list_lock = v->arch.hvm_vmx.pi_blocking.lock;
> > +if (pi_blocking_list_lock == NULL)
> 
> Coding style.
> 
> > +{
> > +spin_unlock_irqrestore(&v->arch.hvm_vmx.pi_hotplug_lock, flags);
> > +return;
> > +}
> > +
> > +spin_lock(pi_blocking_list_lock);
> > +if ( v->arch.hvm_vmx.pi_blocking.lock != NULL )
> > +{
> > +ASSERT(v->arch.hvm_vmx.pi_blocking.lock == pi_blocking_list_lock);
> > +list_del(&v->arch.hvm_vmx.pi_blocking.list);
> > +v->arch.hvm_vmx.pi_blocking.lock = NULL;
> > +}
> > +spin_unlock(pi_blocking_list_lock);
> > +spin_unlock_irqrestore(&v->arch.hvm_vmx.pi_hotplug_lock, flags);
> > +}
> > +
> >  /* This function is called when pcidevs_lock is held */
> >  void vmx_pi_hooks_assign(struct domain *d)
> >  {
> > +struct vcpu *v;
> > +
> >  if ( !iommu_intpost || !has_hvm_container_domain(d) )
> >  return;
> >
> > -ASSERT(!d->arch.hvm_domain.vmx.vcpu_block);
> > +for_each_vcpu ( d, v )
> > +v->arch.hvm_vmx.pi_blocking_cleaned_up = 0;
> >
> > -d->arch.hvm_domain.vmx.vcpu_block = vmx_vcpu_block;
> > -d->arch.hvm_domain.vmx.pi_switch_from = vmx_pi_switch_from;
> > -d->arch.hvm_domain.vmx.pi_switch_to = vmx_pi_switch_to;
> > -d->arch.hvm_domain.vmx.pi_do_resume = vmx_pi_do_resume;
> > +if ( !d->arch.hvm_domain.vmx.vcpu_block )
> > +{
> > +d->arch.hvm_domain.vmx.vcpu_block = vmx_vcpu_block;
> > +

Re: [Xen-devel] [PATCH v2 1/4] VMX: Properly handle pi when all the assigned devices are removed

2016-05-27 Thread Jan Beulich
>>> On 26.05.16 at 15:39,  wrote:
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -113,7 +113,19 @@ static void vmx_vcpu_block(struct vcpu *v)
>  spinlock_t *pi_blocking_list_lock =
>  &per_cpu(vmx_pi_blocking, v->processor).lock;
>  struct pi_desc *pi_desc = &v->arch.hvm_vmx.pi_desc;
>  
> -spin_lock_irqsave(pi_blocking_list_lock, flags);
> +spin_lock_irqsave(&v->arch.hvm_vmx.pi_hotplug_lock, flags);
> +if ( unlikely(v->arch.hvm_vmx.pi_blocking_cleaned_up) )
> +{
> +/*
> + * The vcpu is to be destroyed and it has already been removed
> + * from the per-CPU list if it is blocking, we shouldn't add
> + * new vCPU to the list.
> + */

This comment appears to be stale wrt the implementation further
down. But see there for whether it's the comment or the code
that need to change.

> +spin_unlock_irqrestore(&v->arch.hvm_vmx.pi_hotplug_lock, flags);
> +return;
> +}
> +
> +spin_lock(pi_blocking_list_lock);

There doesn't appear to be an active problem with this, but
taking a global lock inside a per-vCPU one feels bad. Both here
and in vmx_pi_blocking_cleanup() I can't see why the global
lock alone won't do - you acquire it anyway.

> +static void vmx_pi_blocking_cleanup(struct vcpu *v)
> +{
> +unsigned long flags;
> +spinlock_t *pi_blocking_list_lock;
> +
> +if ( !iommu_intpost )
> +return;

If the function is to remain to be called from just the body of a loop
over all vCPU-s, please make that loop conditional upon iommu_intpost
instead of checking it here (and bailing) for every vCPU.
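
I.e. something of this shape in the sole caller (only a fragment
sketching the suggestion, reusing the patch's own names):

    if ( iommu_intpost )
        for_each_vcpu ( d, v )
            vmx_pi_blocking_cleanup(v);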

> +spin_lock_irqsave(&v->arch.hvm_vmx.pi_hotplug_lock, flags);
> +v->arch.hvm_vmx.pi_blocking_cleaned_up = 1;
> +
> +pi_blocking_list_lock = v->arch.hvm_vmx.pi_blocking.lock;
> +if (pi_blocking_list_lock == NULL)

Coding style.

> +{
> +spin_unlock_irqrestore(&v->arch.hvm_vmx.pi_hotplug_lock, flags);
> +return;
> +}
> +
> +spin_lock(pi_blocking_list_lock);
> +if ( v->arch.hvm_vmx.pi_blocking.lock != NULL )
> +{
> +ASSERT(v->arch.hvm_vmx.pi_blocking.lock == pi_blocking_list_lock);
> +list_del(&v->arch.hvm_vmx.pi_blocking.list);
> +v->arch.hvm_vmx.pi_blocking.lock = NULL;
> +}
> +spin_unlock(pi_blocking_list_lock);
> +spin_unlock_irqrestore(&v->arch.hvm_vmx.pi_hotplug_lock, flags);
> +}
> +
>  /* This function is called when pcidevs_lock is held */
>  void vmx_pi_hooks_assign(struct domain *d)
>  {
> +struct vcpu *v;
> +
>  if ( !iommu_intpost || !has_hvm_container_domain(d) )
>  return;
>  
> -ASSERT(!d->arch.hvm_domain.vmx.vcpu_block);
> +for_each_vcpu ( d, v )
> +v->arch.hvm_vmx.pi_blocking_cleaned_up = 0;
>  
> -d->arch.hvm_domain.vmx.vcpu_block = vmx_vcpu_block;
> -d->arch.hvm_domain.vmx.pi_switch_from = vmx_pi_switch_from;
> -d->arch.hvm_domain.vmx.pi_switch_to = vmx_pi_switch_to;
> -d->arch.hvm_domain.vmx.pi_do_resume = vmx_pi_do_resume;
> +if ( !d->arch.hvm_domain.vmx.vcpu_block )
> +{
> +d->arch.hvm_domain.vmx.vcpu_block = vmx_vcpu_block;
> +d->arch.hvm_domain.vmx.pi_switch_from = vmx_pi_switch_from;
> +d->arch.hvm_domain.vmx.pi_switch_to = vmx_pi_switch_to;
> +d->arch.hvm_domain.vmx.pi_do_resume = vmx_pi_do_resume;
> +}
>  }
>  
>  /* This function is called when pcidevs_lock is held */
>  void vmx_pi_hooks_deassign(struct domain *d)
>  {
> +struct vcpu *v;
> +
>  if ( !iommu_intpost || !has_hvm_container_domain(d) )
>  return;
>  
> -ASSERT(d->arch.hvm_domain.vmx.vcpu_block);
> -
> -d->arch.hvm_domain.vmx.vcpu_block = NULL;
> -d->arch.hvm_domain.vmx.pi_switch_from = NULL;
> -d->arch.hvm_domain.vmx.pi_switch_to = NULL;
> -d->arch.hvm_domain.vmx.pi_do_resume = NULL;
> +for_each_vcpu ( d, v )
> +vmx_pi_blocking_cleanup(v);

If you keep the hooks in place, why is it relevant to do the cleanup
here instead of just at domain death? As suggested before, if you
did it there, you'd likely get away without adding the new per-vCPU
flag.

Furthermore, if things remain the way they are now, a per-domain
flag would suffice.

And finally - do you really need to retain all four hooks? Since if not,
one that gets zapped here could be used in place of such a per-
domain flag.
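
For example (a mock sketch; picking pi_do_resume as the hook whose
NULL-ness doubles as the indicator is purely illustrative):

#include <stdbool.h>
#include <stddef.h>

/* Mock of the per-domain hook block; the real fields live in vmcs.h. */
struct pi_hooks {
    void (*vcpu_block)(void);
    void (*pi_do_resume)(void);
};

/* A hook that vmx_pi_hooks_deassign() zaps serves as the "PI active"
 * indicator, instead of a separate per-domain flag. */
static bool pi_active(const struct pi_hooks *h)
{
    return h->pi_do_resume != NULL;
}

int main(void)
{
    struct pi_hooks hooks = { NULL, NULL };  /* state after deassign */
    return pi_active(&hooks) ? 1 : 0;
}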

> --- a/xen/include/asm-x86/hvm/vmx/vmcs.h
> +++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
> @@ -231,6 +231,9 @@ struct arch_vmx_struct {
>   * pCPU and wakeup the related vCPU.
>   */
>  struct pi_blocking_vcpu pi_blocking;
> +
> +spinlock_t            pi_hotplug_lock;

Having been through all of the patch, I have a problem seeing the
hotplug nature of this.

> +bool_t                pi_blocking_cleaned_up;

If this flag is to remain, it should move into its designated structure
(right above your addition).

Jan


[Xen-devel] [PATCH v2 1/4] VMX: Properly handle pi when all the assigned devices are removed

2016-05-26 Thread Feng Wu
This patch handles the corner case when the last assigned device
is removed from the domain. In this case we should carefully handle
the PI descriptor and the per-cpu blocking list, to make sure:
- all the PI descriptors are in the right state when a device is
assigned to the domain again later. This is achieved by always
keeping all the PI hooks in place, so the PI descriptor is updated
during scheduling, which keeps it always up-to-date.
- no vCPUs of the domain remain on the per-cpu blocking list.

Signed-off-by: Feng Wu 
---
 xen/arch/x86/hvm/vmx/vmx.c | 75 +++---
 xen/include/asm-x86/hvm/vmx/vmcs.h |  3 ++
 2 files changed, 65 insertions(+), 13 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index bc4410f..65f5288 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -113,7 +113,19 @@ static void vmx_vcpu_block(struct vcpu *v)
     spinlock_t *pi_blocking_list_lock =
         &per_cpu(vmx_pi_blocking, v->processor).lock;
     struct pi_desc *pi_desc = &v->arch.hvm_vmx.pi_desc;
 
-    spin_lock_irqsave(pi_blocking_list_lock, flags);
+    spin_lock_irqsave(&v->arch.hvm_vmx.pi_hotplug_lock, flags);
+    if ( unlikely(v->arch.hvm_vmx.pi_blocking_cleaned_up) )
+    {
+        /*
+         * The vcpu is to be destroyed and it has already been removed
+         * from the per-CPU list if it is blocking, we shouldn't add
+         * new vCPU to the list.
+         */
+        spin_unlock_irqrestore(&v->arch.hvm_vmx.pi_hotplug_lock, flags);
+        return;
+    }
+
+    spin_lock(pi_blocking_list_lock);
     old_lock = cmpxchg(&v->arch.hvm_vmx.pi_blocking.lock, NULL,
                        pi_blocking_list_lock);
 
@@ -126,7 +138,9 @@ static void vmx_vcpu_block(struct vcpu *v)
 
     list_add_tail(&v->arch.hvm_vmx.pi_blocking.list,
                   &per_cpu(vmx_pi_blocking, v->processor).list);
-    spin_unlock_irqrestore(pi_blocking_list_lock, flags);
+    spin_unlock(pi_blocking_list_lock);
+
+    spin_unlock_irqrestore(&v->arch.hvm_vmx.pi_hotplug_lock, flags);
 
     ASSERT(!pi_test_sn(pi_desc));
 
@@ -199,32 +213,65 @@ static void vmx_pi_do_resume(struct vcpu *v)
     spin_unlock_irqrestore(pi_blocking_list_lock, flags);
 }
 
+static void vmx_pi_blocking_cleanup(struct vcpu *v)
+{
+    unsigned long flags;
+    spinlock_t *pi_blocking_list_lock;
+
+    if ( !iommu_intpost )
+        return;
+
+    spin_lock_irqsave(&v->arch.hvm_vmx.pi_hotplug_lock, flags);
+    v->arch.hvm_vmx.pi_blocking_cleaned_up = 1;
+
+    pi_blocking_list_lock = v->arch.hvm_vmx.pi_blocking.lock;
+    if (pi_blocking_list_lock == NULL)
+    {
+        spin_unlock_irqrestore(&v->arch.hvm_vmx.pi_hotplug_lock, flags);
+        return;
+    }
+
+    spin_lock(pi_blocking_list_lock);
+    if ( v->arch.hvm_vmx.pi_blocking.lock != NULL )
+    {
+        ASSERT(v->arch.hvm_vmx.pi_blocking.lock == pi_blocking_list_lock);
+        list_del(&v->arch.hvm_vmx.pi_blocking.list);
+        v->arch.hvm_vmx.pi_blocking.lock = NULL;
+    }
+    spin_unlock(pi_blocking_list_lock);
+    spin_unlock_irqrestore(&v->arch.hvm_vmx.pi_hotplug_lock, flags);
+}
+
 /* This function is called when pcidevs_lock is held */
 void vmx_pi_hooks_assign(struct domain *d)
 {
+    struct vcpu *v;
+
     if ( !iommu_intpost || !has_hvm_container_domain(d) )
         return;
 
-    ASSERT(!d->arch.hvm_domain.vmx.vcpu_block);
+    for_each_vcpu ( d, v )
+        v->arch.hvm_vmx.pi_blocking_cleaned_up = 0;
 
-    d->arch.hvm_domain.vmx.vcpu_block = vmx_vcpu_block;
-    d->arch.hvm_domain.vmx.pi_switch_from = vmx_pi_switch_from;
-    d->arch.hvm_domain.vmx.pi_switch_to = vmx_pi_switch_to;
-    d->arch.hvm_domain.vmx.pi_do_resume = vmx_pi_do_resume;
+    if ( !d->arch.hvm_domain.vmx.vcpu_block )
+    {
+        d->arch.hvm_domain.vmx.vcpu_block = vmx_vcpu_block;
+        d->arch.hvm_domain.vmx.pi_switch_from = vmx_pi_switch_from;
+        d->arch.hvm_domain.vmx.pi_switch_to = vmx_pi_switch_to;
+        d->arch.hvm_domain.vmx.pi_do_resume = vmx_pi_do_resume;
+    }
 }
 
 /* This function is called when pcidevs_lock is held */
 void vmx_pi_hooks_deassign(struct domain *d)
 {
+    struct vcpu *v;
+
     if ( !iommu_intpost || !has_hvm_container_domain(d) )
         return;
 
-    ASSERT(d->arch.hvm_domain.vmx.vcpu_block);
-
-    d->arch.hvm_domain.vmx.vcpu_block = NULL;
-    d->arch.hvm_domain.vmx.pi_switch_from = NULL;
-    d->arch.hvm_domain.vmx.pi_switch_to = NULL;
-    d->arch.hvm_domain.vmx.pi_do_resume = NULL;
+    for_each_vcpu ( d, v )
+        vmx_pi_blocking_cleanup(v);
 }
 
 static int vmx_domain_initialise(struct domain *d)
@@ -256,6 +303,8 @@ static int vmx_vcpu_initialise(struct vcpu *v)
 
     INIT_LIST_HEAD(&v->arch.hvm_vmx.pi_blocking.list);
 
+    spin_lock_init(&v->arch.hvm_vmx.pi_hotplug_lock);
+
     v->arch.schedule_tail    = vmx_do_resume;
     v->arch.ctxt_switch_from = vmx_ctxt_switch_from;
     v->arch.ctxt_switch_to   = vmx_ctxt_switch_to;
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h