Re: [Xen-devel] [PATCHv1 2/2] passthrough: improve locking when iterating over interrupts bound to VMs

2015-10-28 Thread Konrad Rzeszutek Wilk
On Fri, Oct 23, 2015 at 12:05:22PM +0100, David Vrabel wrote:
> radix_tree_gang_lookup() only requires a RCU read lock, not the
> per-domain event_lock.

Don't you need to make some form of 'spin_lock_init' call?

> 
> Introduce a new RCU read lock and take the per-interrupt lock before
> calling the callback instead.
> 
> This eliminates all contention on the event_lock when injecting
> interrupts from passthrough devices.

This pt_pirq_iterate is called from:

hvm_migrate_pirqs (which only uses irq_desc lock) - so you could
also drop the event_lock there.

pt_irq_time_out (which is only called for legacy interrupts and
only every 8ms). Uses event_lock.

pci_clean_dpci_irqs (during domain shutdown, uses event_lock).

Also should this rcu lock be put in 'pirq_guest_unmask' (which oddly
enough has no lock?!)?

Reading radix-tree.h, it says: "
..
The notable exceptions to this rule are the following functions:
 ...
  radix_tree_gang_lookup

The first 7 functions are able to be called locklessly, using RCU. The
caller must ensure calls to these functions are made within rcu_read_lock()
regions."

Great, so that is what your patch does. Though it would imply that
everybody using radix_tree_gang_lookup on pirq_tree needs to use RCU?

> 
> Signed-off-by: David Vrabel 
> ---
>  xen/drivers/passthrough/io.c | 10 +++---
>  xen/include/xen/sched.h  |  1 +
>  2 files changed, 8 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/io.c b/xen/drivers/passthrough/io.c
> index 7c86c20..9b51ef0 100644
> --- a/xen/drivers/passthrough/io.c
> +++ b/xen/drivers/passthrough/io.c
> @@ -601,7 +601,7 @@ int pt_pirq_iterate(struct domain *d,
>      unsigned int pirq = 0, n, i;
>      struct pirq *pirqs[8];
>  
> -    ASSERT(spin_is_locked(&d->event_lock));
> +    rcu_read_lock(&d->pirq_rcu_lock);
>  
>      do {
>          n = radix_tree_gang_lookup(&d->pirq_tree, (void **)pirqs, pirq,
> @@ -610,12 +610,18 @@ int pt_pirq_iterate(struct domain *d,
>          {
>              struct hvm_pirq_dpci *pirq_dpci = pirq_dpci(pirqs[i]);
>  
> +            spin_lock(&pirq_dpci->lock);
> +
>              pirq = pirqs[i]->pirq;
>              if ( (pirq_dpci->flags & HVM_IRQ_DPCI_MAPPED) )
>                  rc = cb(d, pirq_dpci, arg);
> +
> +            spin_unlock(&pirq_dpci->lock);
>          }
>      } while ( !rc && ++pirq < d->nr_pirqs && n == ARRAY_SIZE(pirqs) );
>  
> +    rcu_read_unlock(&d->pirq_rcu_lock);
> +
>      return rc;
>  }
>  
> @@ -678,9 +684,7 @@ void hvm_dpci_msi_eoi(struct domain *d, int vector)
>      if ( !iommu_enabled || !d->arch.hvm_domain.irq.dpci )
>          return;
>  
> -    spin_lock(&d->event_lock);
>      pt_pirq_iterate(d, _hvm_dpci_msi_eoi, (void *)(long)vector);
> -    spin_unlock(&d->event_lock);
>  }
>  
>  static void hvm_dirq_assist(struct domain *d, struct hvm_pirq_dpci *pirq_dpci)
> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
> index 3729b0f..ae98c1e 100644
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -355,6 +355,7 @@ struct domain
>       */
>      struct radix_tree_root pirq_tree;
>      unsigned int nr_pirqs;
> +    rcu_read_lock_t pirq_rcu_lock;
>  
>      enum guest_type guest_type;
>  
> -- 
> 2.1.4
> 
> 

___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCHv1 2/2] passthrough: improve locking when iterating over interrupts bound to VMs

2015-10-27 Thread David Vrabel
On 27/10/15 12:44, Jan Beulich wrote:
>>>> On 23.10.15 at 13:05,  wrote:
>> radix_tree_gang_lookup() only requires a RCU read lock, not the
>> per-domain event_lock.
> 
> ... when not caring about a consistent snapshot.
> 
>> @@ -678,9 +684,7 @@ void hvm_dpci_msi_eoi(struct domain *d, int vector)
>>      if ( !iommu_enabled || !d->arch.hvm_domain.irq.dpci )
>>          return;
>>  
>> -    spin_lock(&d->event_lock);
>>      pt_pirq_iterate(d, _hvm_dpci_msi_eoi, (void *)(long)vector);
>> -    spin_unlock(&d->event_lock);
>>  }
> 
> How about other callers of pt_pirq_iterate()?

They weren't hot so I didn't look at them.

David



Re: [Xen-devel] [PATCHv1 2/2] passthrough: improve locking when iterating over interrupts bound to VMs

2015-10-27 Thread Jan Beulich
>>> On 23.10.15 at 13:05,  wrote:
> radix_tree_gang_lookup() only requires a RCU read lock, not the
> per-domain event_lock.

... when not caring about a consistent snapshot.

> @@ -678,9 +684,7 @@ void hvm_dpci_msi_eoi(struct domain *d, int vector)
>      if ( !iommu_enabled || !d->arch.hvm_domain.irq.dpci )
>          return;
>  
> -    spin_lock(&d->event_lock);
>      pt_pirq_iterate(d, _hvm_dpci_msi_eoi, (void *)(long)vector);
> -    spin_unlock(&d->event_lock);
>  }

How about other callers of pt_pirq_iterate()?

Jan




[Xen-devel] [PATCHv1 2/2] passthrough: improve locking when iterating over interrupts bound to VMs

2015-10-23 Thread David Vrabel
radix_tree_gang_lookup() only requires a RCU read lock, not the
per-domain event_lock.

Introduce a new RCU read lock and take the per-interrupt lock before
calling the callback instead.

This eliminates all contention on the event_lock when injecting
interrupts from passthrough devices.

Signed-off-by: David Vrabel 
---
 xen/drivers/passthrough/io.c | 10 +++---
 xen/include/xen/sched.h  |  1 +
 2 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/xen/drivers/passthrough/io.c b/xen/drivers/passthrough/io.c
index 7c86c20..9b51ef0 100644
--- a/xen/drivers/passthrough/io.c
+++ b/xen/drivers/passthrough/io.c
@@ -601,7 +601,7 @@ int pt_pirq_iterate(struct domain *d,
     unsigned int pirq = 0, n, i;
     struct pirq *pirqs[8];
 
-    ASSERT(spin_is_locked(&d->event_lock));
+    rcu_read_lock(&d->pirq_rcu_lock);
 
     do {
         n = radix_tree_gang_lookup(&d->pirq_tree, (void **)pirqs, pirq,
@@ -610,12 +610,18 @@ int pt_pirq_iterate(struct domain *d,
         {
             struct hvm_pirq_dpci *pirq_dpci = pirq_dpci(pirqs[i]);
 
+            spin_lock(&pirq_dpci->lock);
+
             pirq = pirqs[i]->pirq;
             if ( (pirq_dpci->flags & HVM_IRQ_DPCI_MAPPED) )
                 rc = cb(d, pirq_dpci, arg);
+
+            spin_unlock(&pirq_dpci->lock);
         }
     } while ( !rc && ++pirq < d->nr_pirqs && n == ARRAY_SIZE(pirqs) );
 
+    rcu_read_unlock(&d->pirq_rcu_lock);
+
     return rc;
 }
 
@@ -678,9 +684,7 @@ void hvm_dpci_msi_eoi(struct domain *d, int vector)
     if ( !iommu_enabled || !d->arch.hvm_domain.irq.dpci )
         return;
 
-    spin_lock(&d->event_lock);
     pt_pirq_iterate(d, _hvm_dpci_msi_eoi, (void *)(long)vector);
-    spin_unlock(&d->event_lock);
 }
 
 static void hvm_dirq_assist(struct domain *d, struct hvm_pirq_dpci *pirq_dpci)
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 3729b0f..ae98c1e 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -355,6 +355,7 @@ struct domain
      */
     struct radix_tree_root pirq_tree;
     unsigned int nr_pirqs;
+    rcu_read_lock_t pirq_rcu_lock;
 
     enum guest_type guest_type;
 
-- 
2.1.4

