Re: [Xen-devel] [PATCH v2 19/45] ARM: new VGIC: Add IRQ sync/flush framework

2018-03-19 Thread Andre Przywara
Hi,

On 19/03/18 14:17, Julien Grall wrote:
> Hi,
> 
> On 03/15/2018 08:30 PM, Andre Przywara wrote:
>> Implement the framework for syncing IRQs between our emulation and the
>> list registers, which represent the guest's view of IRQs.
>> This is done in vgic_sync_from_lrs() and vgic_sync_to_lrs(), which
>> get called on guest entry and exit, respectively.
>> The code talking to the actual GICv2/v3 hardware is added in the
>> following patches.
>>
>> This is based on Linux commit 0919e84c0fc1, written by Marc Zyngier.
>>
>> Signed-off-by: Andre Przywara 
>> ---
>> Changelog v1 ... v2:
>> - make functions void
> 
> Hmmm, the functions were already void in the previous version. However,
> you switched from static inline to static. Did I miss anything?

Ah right, I just saw the diff hitting the prototype line ;-)
For the record: "static inline" in a .c file is a red herring:
https://www.kernel.org/doc/local/inline.html
Summary: use static inline in header files, just static in .c files.
inline is only a hint, and the compiler knows best what to do. Really.
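
A toy illustration (not from this series, foo_is_odd() is made up just to
show the distinction):

/* foo.h: small helpers may be "static inline" in a header */
static inline int foo_is_odd(unsigned int x)
{
    return x & 1;
}

/* foo.c: in a .c file plain "static" is enough; the compiler
 * decides on its own whether inlining is worthwhile. */
static int foo_is_odd(unsigned int x)
{
    return x & 1;
}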

Will adjust the change log.

Cheers,
Andre.

> 
> [...]
> 
>> +static void vgic_set_underflow(struct vcpu *vcpu)
>> +{
>> +    ASSERT(vcpu == current);
>> +
>> +    gic_hw_ops->update_hcr_status(GICH_HCR_UIE, 1);
> 
> The second is a boolean, so please use true.
> 
> Cheers,
> 


Re: [Xen-devel] [PATCH v2 19/45] ARM: new VGIC: Add IRQ sync/flush framework

2018-03-19 Thread Julien Grall

Hi,

On 03/15/2018 08:30 PM, Andre Przywara wrote:

Implement the framework for syncing IRQs between our emulation and the
list registers, which represent the guest's view of IRQs.
This is done in vgic_sync_from_lrs() and vgic_sync_to_lrs(), which
get called on guest entry and exit, respectively.
The code talking to the actual GICv2/v3 hardware is added in the
following patches.

This is based on Linux commit 0919e84c0fc1, written by Marc Zyngier.

Signed-off-by: Andre Przywara 
---
Changelog v1 ... v2:
- make functions void


Hmmm, the functions were already void in the previous version. However,
you switched from static inline to static. Did I miss anything?


[...]


+static void vgic_set_underflow(struct vcpu *vcpu)
+{
+    ASSERT(vcpu == current);
+
+    gic_hw_ops->update_hcr_status(GICH_HCR_UIE, 1);


The second is a boolean, so please use true.
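
I.e. something along the lines of (sketch of the one-liner only):

gic_hw_ops->update_hcr_status(GICH_HCR_UIE, true);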

Cheers,

--
Julien Grall


[Xen-devel] [PATCH v2 19/45] ARM: new VGIC: Add IRQ sync/flush framework

2018-03-15 Thread Andre Przywara
Implement the framework for syncing IRQs between our emulation and the
list registers, which represent the guest's view of IRQs.
This is done in vgic_sync_from_lrs() and vgic_sync_to_lrs(), which
get called on guest entry and exit, respectively.
The code talking to the actual GICv2/v3 hardware is added in the
following patches.

This is based on Linux commit 0919e84c0fc1, written by Marc Zyngier.

Signed-off-by: Andre Przywara 
---
Changelog v1 ... v2:
- make functions void
- do underflow setting directly (no v2/v3 indirection)
- fix multiple SGI injections (matching the later Linux bugfix)

 xen/arch/arm/vgic/vgic.c | 232 +++
 xen/arch/arm/vgic/vgic.h |   2 +
 2 files changed, 234 insertions(+)

diff --git a/xen/arch/arm/vgic/vgic.c b/xen/arch/arm/vgic/vgic.c
index 7306a80dd3..e82d498766 100644
--- a/xen/arch/arm/vgic/vgic.c
+++ b/xen/arch/arm/vgic/vgic.c
@@ -399,6 +399,238 @@ void vgic_inject_irq(struct domain *d, struct vcpu *vcpu, unsigned int intid,
     return;
 }
 
+/**
+ * vgic_prune_ap_list() - Remove non-relevant interrupts from the ap_list
+ *
+ * @vcpu:   The VCPU of which the ap_list should be pruned.
+ *
+ * Go over the list of interrupts on a VCPU's ap_list, and prune those that
+ * we won't have to consider in the near future.
+ * This removes interrupts that have been successfully handled by the guest,
+ * or that have otherwise become obsolete (not pending anymore).
+ * It also moves interrupts between VCPUs if their affinity has changed.
+ */
+static void vgic_prune_ap_list(struct vcpu *vcpu)
+{
+    struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic;
+    struct vgic_irq *irq, *tmp;
+    unsigned long flags;
+
+retry:
+    spin_lock_irqsave(&vgic_cpu->ap_list_lock, flags);
+
+    list_for_each_entry_safe( irq, tmp, &vgic_cpu->ap_list_head, ap_list )
+    {
+        struct vcpu *target_vcpu, *vcpuA, *vcpuB;
+
+        spin_lock(&irq->irq_lock);
+
+        BUG_ON(vcpu != irq->vcpu);
+
+        target_vcpu = vgic_target_oracle(irq);
+
+        if ( !target_vcpu )
+        {
+            /*
+             * We don't need to process this interrupt any
+             * further, move it off the list.
+             */
+            list_del(&irq->ap_list);
+            irq->vcpu = NULL;
+            spin_unlock(&irq->irq_lock);
+
+            /*
+             * This vgic_put_irq call matches the
+             * vgic_get_irq_kref in vgic_queue_irq_unlock,
+             * where we added the LPI to the ap_list. As
+             * we remove the irq from the list, we also
+             * drop the refcount.
+             */
+            vgic_put_irq(vcpu->domain, irq);
+            continue;
+        }
+
+        if ( target_vcpu == vcpu )
+        {
+            /* We're on the right CPU */
+            spin_unlock(&irq->irq_lock);
+            continue;
+        }
+
+        /* This interrupt looks like it has to be migrated. */
+
+        spin_unlock(&irq->irq_lock);
+        spin_unlock_irqrestore(&vgic_cpu->ap_list_lock, flags);
+
+        /*
+         * Ensure locking order by always locking the smallest
+         * ID first.
+         */
+        if ( vcpu->vcpu_id < target_vcpu->vcpu_id )
+        {
+            vcpuA = vcpu;
+            vcpuB = target_vcpu;
+        }
+        else
+        {
+            vcpuA = target_vcpu;
+            vcpuB = vcpu;
+        }
+
+        spin_lock_irqsave(&vcpuA->arch.vgic.ap_list_lock, flags);
+        spin_lock(&vcpuB->arch.vgic.ap_list_lock);
+        spin_lock(&irq->irq_lock);
+
+        /*
+         * If the affinity has been preserved, move the
+         * interrupt around. Otherwise, it means things have
+         * changed while the interrupt was unlocked, and we
+         * need to replay this.
+         *
+         * In all cases, we cannot trust the list not to have
+         * changed, so we restart from the beginning.
+         */
+        if ( target_vcpu == vgic_target_oracle(irq) )
+        {
+            struct vgic_cpu *new_cpu = &target_vcpu->arch.vgic;
+
+            list_del(&irq->ap_list);
+            irq->vcpu = target_vcpu;
+            list_add_tail(&irq->ap_list, &new_cpu->ap_list_head);
+        }
+
+        spin_unlock(&irq->irq_lock);
+        spin_unlock(&vcpuB->arch.vgic.ap_list_lock);
+        spin_unlock_irqrestore(&vcpuA->arch.vgic.ap_list_lock, flags);
+        goto retry;
+    }
+
+    spin_unlock_irqrestore(&vgic_cpu->ap_list_lock, flags);
+}
+
+static void vgic_fold_lr_state(struct vcpu *vcpu)
+{
+}
+
+/* Requires the irq_lock to be held. */
+static void vgic_populate_lr(struct vcpu *vcpu,
+                             struct vgic_irq *irq, int lr)
+{
+    ASSERT(spin_is_locked(&irq->irq_lock));
+}
+
+static void vgic_set_underflow(struct vcpu *vcpu)
+{
+    ASSERT(vcpu == current);
+
+    gic_hw_ops->update_hcr_status(GICH_HCR_UIE, 1);
+}
+
+/* Requires the ap_list_lock to be held. */
+static int compute_ap_list_depth(struct vcpu *vcpu)
+{
+    struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic;
+    struct vgic_irq *irq;
+    int count = 0;
+
+