Hi Andre,
On 21/07/17 21:00, Andre Przywara wrote:
> Instead of using an atomic access and hoping for the best, let's use
> the new pending_irq lock now to make sure we read a sane version of
> the target VCPU.
How is this going to bring a saner version?

You only read the vCPU ID under the lock, and nothing prevents it from
changing between the time you read it and the time the target vCPU is
locked in vgic_vcpu_inject_irq().
Cheers,
> That still doesn't solve the problem mentioned in the comment, but
> paves the way for future improvements.
>
> Signed-off-by: Andre Przywara <andre.przyw...@arm.com>
> ---
>  xen/arch/arm/gic-v3-lpi.c | 14 ++++++++------
>  1 file changed, 8 insertions(+), 6 deletions(-)
>
> diff --git a/xen/arch/arm/gic-v3-lpi.c b/xen/arch/arm/gic-v3-lpi.c
> index 2306b58..9db26ed 100644
> --- a/xen/arch/arm/gic-v3-lpi.c
> +++ b/xen/arch/arm/gic-v3-lpi.c
> @@ -140,20 +140,22 @@ void vgic_vcpu_inject_lpi(struct domain *d, unsigned int virq)
>  {
>      /*
>       * TODO: this assumes that the struct pending_irq stays valid all of
> -     * the time. We cannot properly protect this with the current locking
> -     * scheme, but the future per-IRQ lock will solve this problem.
> +     * the time. We cannot properly protect this with the current code,
> +     * but a future refcounting will solve this problem.
>       */
>      struct pending_irq *p = irq_to_pending(d->vcpu[0], virq);
> +    unsigned long flags;
>      unsigned int vcpu_id;
>
>      if ( !p )
>          return;
>
> -    vcpu_id = ACCESS_ONCE(p->vcpu_id);
> -    if ( vcpu_id >= d->max_vcpus )
> -        return;
> +    vgic_irq_lock(p, flags);
> +    vcpu_id = p->vcpu_id;
> +    vgic_irq_unlock(p, flags);
>
> -    vgic_vcpu_inject_irq(d->vcpu[vcpu_id], virq);
> +    if ( vcpu_id < d->max_vcpus )
> +        vgic_vcpu_inject_irq(d->vcpu[vcpu_id], virq);
>  }
>
>  /*
--
Julien Grall
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel