Re: [Xen-devel] [RFC PATCH v2 09/26] ARM: GICv3: introduce separate pending_irq structs for LPIs

2017-01-12 Thread Andre Przywara
Hi,

On 12/01/17 19:15, Stefano Stabellini wrote:
> On Thu, 12 Jan 2017, Andre Przywara wrote:
>> Hi Stefano,
>>
>> On 05/01/17 21:36, Stefano Stabellini wrote:
>>> On Thu, 22 Dec 2016, Andre Przywara wrote:
 For the same reason that allocating a struct irq_desc for each
 possible LPI is not an option, having a struct pending_irq for each LPI
 is also not feasible. However we actually only need those when an
 interrupt is on a vCPU (or is about to be injected).
 Maintain a list of those structs that we can use for the lifecycle of
 a guest LPI. We allocate new entries if necessary, however reuse
 pre-owned entries whenever possible.
 I added some locking around this list here, however my gut feeling is
 that we don't need one because this a per-VCPU structure anyway.
 If someone could confirm this, I'd be grateful.
>>>
>>> I don't think the list should be per-VCPU, because the underlying LPIs
>>> are global. 
>>
>> But _pending_ IRQs (regardless of their type) are definitely a per-VCPU
>> thing. I consider struct pending_irq something like an LR precursor or a
>> software representation of it.
>> Also the pending bitmap is a per-redistributor table.
>>
>> The problem is that struct pending_irq is pretty big, 56 Bytes on arm64
>> if I did the math correctly. So the structs for the 32 private
>> interrupts per VCPU alone account for 1792 Byte. Actually I tried to add
>> another list head to it to be able to reuse the structure, but that
>> broke the build because struct vcpu got bigger than 4K.
>>
>> So the main reason I went for a dynamic pending_irq approach was that
>> the potentially big number of LPIs could lead to a lot of memory to be
>> allocated by Xen. And the ITS architecture does not provides any memory
>> table (provided by the guest) to be used for storing this information.
>> Also ...
> 
> Dynamic pending_irqs are good, but why one list per vcpu, rather than
> one list per domain, given that in our current design they can only be
> targeting one vcpu at any given time?

I believe the specs demands that: one LPI can only be pending at one
redistributor for any given point in time, but I will ask Marc tomorrow.
I believe it isn't spelled out in the spec, but can be deducted.
But as the pending table is per-redistributor, I was assuming that
modelling this per VCPU is the right thing.
I will think about it, your rationale of the LPIs being global makes
some sense ...

> 
>>> Similarly, the struct pending_irq array is per-domain, only
>>> the first 32 (PPIs) are per vcpu. Besides, it shouldn't be a list :-)
>>
>> In reality the number of interrupts which are on a VCPU at any given
>> point in time is expected to be very low, in some previous experiments I
>> found never more than four. This is even more true for LPIs, which, due
>> to the lack of an active state, fall out of the system as soon as the
>> VCPU reads the ICC_IAR register.
>> So the list will be very short, usually, which made it very appealing to
>> just go with a list, especially for an RFC.
>>
>> Also having potentially thousands of those structures lying around
>> mostly unused doesn't sound very clever to me. Actually I was thinking
>> about using the same approach for the other interrupts (SPI/PPI/SGI) as
>> well, but I guess that doesn't give us much, apart from breaking
>> everything ;-)
>>
>> But that being said:
>> If we can prove that the number of LPIs actually is limited (because it
>> is bounded by the number of devices, which Dom0 controls), I am willing
>> to investigate if we can switch over to using one struct pending_irq per
>> LPI.
>> Or do you want me to just use a more advanced data structure for that?
> 
> I think we misunderstood each other. I am definitely not suggesting to
> have one struct pending_irq per LPI. Your idea of dynamically allocating
> them is good.

Sorry for the misunderstanding - I am relieved that I don't have to
change that ;-)

> The things I am concerned about are:
> 
> 1) the choice of a list as a data structure, instead of an hashtable or
>a tree (see alpine.DEB.2.10.1610271725280.9978@sstabellini-ThinkPad-X260)
> 2) the choice of having one data structure (list or whatever) per vcpu,
>rather than one per domain
> 
> In both cases, it's not a matter of opinion but a matter of numbers and
> performance. I would like to see some numbers to prove our choices right
> or wrong.

Got it. I will see what I can do. In the moment my
number-and-performance gathering capabilities are severely limited due
to me running everything on the model.

Cheers,
Andre.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [RFC PATCH v2 09/26] ARM: GICv3: introduce separate pending_irq structs for LPIs

2017-01-12 Thread Stefano Stabellini
On Thu, 12 Jan 2017, Andre Przywara wrote:
> Hi Stefano,
> 
> On 05/01/17 21:36, Stefano Stabellini wrote:
> > On Thu, 22 Dec 2016, Andre Przywara wrote:
> >> For the same reason that allocating a struct irq_desc for each
> >> possible LPI is not an option, having a struct pending_irq for each LPI
> >> is also not feasible. However we actually only need those when an
> >> interrupt is on a vCPU (or is about to be injected).
> >> Maintain a list of those structs that we can use for the lifecycle of
> >> a guest LPI. We allocate new entries if necessary, however reuse
> >> pre-owned entries whenever possible.
> >> I added some locking around this list here, however my gut feeling is
> >> that we don't need one because this a per-VCPU structure anyway.
> >> If someone could confirm this, I'd be grateful.
> > 
> > I don't think the list should be per-VCPU, because the underlying LPIs
> > are global. 
> 
> But _pending_ IRQs (regardless of their type) are definitely a per-VCPU
> thing. I consider struct pending_irq something like an LR precursor or a
> software representation of it.
> Also the pending bitmap is a per-redistributor table.
> 
> The problem is that struct pending_irq is pretty big, 56 Bytes on arm64
> if I did the math correctly. So the structs for the 32 private
> interrupts per VCPU alone account for 1792 Byte. Actually I tried to add
> another list head to it to be able to reuse the structure, but that
> broke the build because struct vcpu got bigger than 4K.
> 
> So the main reason I went for a dynamic pending_irq approach was that
> the potentially big number of LPIs could lead to a lot of memory to be
> allocated by Xen. And the ITS architecture does not provides any memory
> table (provided by the guest) to be used for storing this information.
> Also ...

Dynamic pending_irqs are good, but why one list per vcpu, rather than
one list per domain, given that in our current design they can only be
targeting one vcpu at any given time?


> > Similarly, the struct pending_irq array is per-domain, only
> > the first 32 (PPIs) are per vcpu. Besides, it shouldn't be a list :-)
> 
> In reality the number of interrupts which are on a VCPU at any given
> point in time is expected to be very low, in some previous experiments I
> found never more than four. This is even more true for LPIs, which, due
> to the lack of an active state, fall out of the system as soon as the
> VCPU reads the ICC_IAR register.
> So the list will be very short, usually, which made it very appealing to
> just go with a list, especially for an RFC.
> 
> Also having potentially thousands of those structures lying around
> mostly unused doesn't sound very clever to me. Actually I was thinking
> about using the same approach for the other interrupts (SPI/PPI/SGI) as
> well, but I guess that doesn't give us much, apart from breaking
> everything ;-)
> 
> But that being said:
> If we can prove that the number of LPIs actually is limited (because it
> is bounded by the number of devices, which Dom0 controls), I am willing
> to investigate if we can switch over to using one struct pending_irq per
> LPI.
> Or do you want me to just use a more advanced data structure for that?

I think we misunderstood each other. I am definitely not suggesting to
have one struct pending_irq per LPI. Your idea of dynamically allocating
them is good.

The things I am concerned about are:

1) the choice of a list as a data structure, instead of an hashtable or
   a tree (see alpine.DEB.2.10.1610271725280.9978@sstabellini-ThinkPad-X260)
2) the choice of having one data structure (list or whatever) per vcpu,
   rather than one per domain

In both cases, it's not a matter of opinion but a matter of numbers and
performance. I would like to see some numbers to prove our choices right
or wrong.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [RFC PATCH v2 09/26] ARM: GICv3: introduce separate pending_irq structs for LPIs

2017-01-12 Thread Andre Przywara
Hi Stefano,

On 05/01/17 21:36, Stefano Stabellini wrote:
> On Thu, 22 Dec 2016, Andre Przywara wrote:
>> For the same reason that allocating a struct irq_desc for each
>> possible LPI is not an option, having a struct pending_irq for each LPI
>> is also not feasible. However we actually only need those when an
>> interrupt is on a vCPU (or is about to be injected).
>> Maintain a list of those structs that we can use for the lifecycle of
>> a guest LPI. We allocate new entries if necessary, however reuse
>> pre-owned entries whenever possible.
>> I added some locking around this list here, however my gut feeling is
>> that we don't need one because this a per-VCPU structure anyway.
>> If someone could confirm this, I'd be grateful.
> 
> I don't think the list should be per-VCPU, because the underlying LPIs
> are global. 

But _pending_ IRQs (regardless of their type) are definitely a per-VCPU
thing. I consider struct pending_irq something like an LR precursor or a
software representation of it.
Also the pending bitmap is a per-redistributor table.

The problem is that struct pending_irq is pretty big, 56 Bytes on arm64
if I did the math correctly. So the structs for the 32 private
interrupts per VCPU alone account for 1792 Byte. Actually I tried to add
another list head to it to be able to reuse the structure, but that
broke the build because struct vcpu got bigger than 4K.

So the main reason I went for a dynamic pending_irq approach was that
the potentially big number of LPIs could lead to a lot of memory to be
allocated by Xen. And the ITS architecture does not provides any memory
table (provided by the guest) to be used for storing this information.
Also ...

> Similarly, the struct pending_irq array is per-domain, only
> the first 32 (PPIs) are per vcpu. Besides, it shouldn't be a list :-)

In reality the number of interrupts which are on a VCPU at any given
point in time is expected to be very low, in some previous experiments I
found never more than four. This is even more true for LPIs, which, due
to the lack of an active state, fall out of the system as soon as the
VCPU reads the ICC_IAR register.
So the list will be very short, usually, which made it very appealing to
just go with a list, especially for an RFC.

Also having potentially thousands of those structures lying around
mostly unused doesn't sound very clever to me. Actually I was thinking
about using the same approach for the other interrupts (SPI/PPI/SGI) as
well, but I guess that doesn't give us much, apart from breaking
everything ;-)

But that being said:
If we can prove that the number of LPIs actually is limited (because it
is bounded by the number of devices, which Dom0 controls), I am willing
to investigate if we can switch over to using one struct pending_irq per
LPI.
Or do you want me to just use a more advanced data structure for that?

>> Teach the existing VGIC functions to find the right pointer when being
>> given a virtual LPI number.
>>
>> Signed-off-by: Andre Przywara 
> 
> Most of my comments on the previous version of the patch are still
> unaddressed.

Indeed, I just found that your reply wasn't tagged in my mailer, so I
missed it. Sorry about that! Will look at it now.

Cheers,
Andre.

> 
> 
>> ---
>>  xen/arch/arm/gic.c   |  3 +++
>>  xen/arch/arm/vgic-v3.c   | 11 
>>  xen/arch/arm/vgic.c  | 64 
>> +---
>>  xen/include/asm-arm/domain.h |  2 ++
>>  xen/include/asm-arm/vgic.h   | 10 +++
>>  5 files changed, 87 insertions(+), 3 deletions(-)
>>
>> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
>> index a5348f2..6f25501 100644
>> --- a/xen/arch/arm/gic.c
>> +++ b/xen/arch/arm/gic.c
>> @@ -509,6 +509,9 @@ static void gic_update_one_lr(struct vcpu *v, int i)
>>  struct vcpu *v_target = vgic_get_target_vcpu(v, irq);
>>  irq_set_affinity(p->desc, cpumask_of(v_target->processor));
>>  }
>> +/* If this was an LPI, mark this struct as available again. */
>> +if ( p->irq >= 8192 )
>> +p->irq = 0;
>>  }
>>  }
>>  }
>> diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
>> index d61479d..0ffde74 100644
>> --- a/xen/arch/arm/vgic-v3.c
>> +++ b/xen/arch/arm/vgic-v3.c
>> @@ -331,6 +331,14 @@ read_unknown:
>>  return 1;
>>  }
>>  
>> +int vgic_lpi_get_priority(struct domain *d, uint32_t vlpi)
>> +{
>> +if ( vlpi >= d->arch.vgic.nr_lpis )
>> +return GIC_PRI_IRQ;
>> +
>> +return d->arch.vgic.proptable[vlpi - 8192] & 0xfc;
>> +}
>> +
>>  static int __vgic_v3_rdistr_rd_mmio_write(struct vcpu *v, mmio_info_t *info,
>>uint32_t gicr_reg,
>>register_t r)
>> @@ -1426,6 +1434,9 @@ static int vgic_v3_vcpu_init(struct vcpu *v)
>>  if ( v->vcpu_id == last_cpu || (v->vcpu_id == (d->max_vcpus - 1)) )
>>  v->arch.vgic.flags |= 

Re: [Xen-devel] [RFC PATCH v2 09/26] ARM: GICv3: introduce separate pending_irq structs for LPIs

2017-01-05 Thread Stefano Stabellini
On Thu, 22 Dec 2016, Andre Przywara wrote:
> For the same reason that allocating a struct irq_desc for each
> possible LPI is not an option, having a struct pending_irq for each LPI
> is also not feasible. However we actually only need those when an
> interrupt is on a vCPU (or is about to be injected).
> Maintain a list of those structs that we can use for the lifecycle of
> a guest LPI. We allocate new entries if necessary, however reuse
> pre-owned entries whenever possible.
> I added some locking around this list here, however my gut feeling is
> that we don't need one because this a per-VCPU structure anyway.
> If someone could confirm this, I'd be grateful.

I don't think the list should be per-VCPU, because the underlying LPIs
are global. Similarly, the struct pending_irq array is per-domain, only
the first 32 (PPIs) are per vcpu. Besides, it shouldn't be a list :-)


> Teach the existing VGIC functions to find the right pointer when being
> given a virtual LPI number.
> 
> Signed-off-by: Andre Przywara 

Most of my comments on the previous version of the patch are still
unaddressed.


> ---
>  xen/arch/arm/gic.c   |  3 +++
>  xen/arch/arm/vgic-v3.c   | 11 
>  xen/arch/arm/vgic.c  | 64 
> +---
>  xen/include/asm-arm/domain.h |  2 ++
>  xen/include/asm-arm/vgic.h   | 10 +++
>  5 files changed, 87 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index a5348f2..6f25501 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -509,6 +509,9 @@ static void gic_update_one_lr(struct vcpu *v, int i)
>  struct vcpu *v_target = vgic_get_target_vcpu(v, irq);
>  irq_set_affinity(p->desc, cpumask_of(v_target->processor));
>  }
> +/* If this was an LPI, mark this struct as available again. */
> +if ( p->irq >= 8192 )
> +p->irq = 0;
>  }
>  }
>  }
> diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
> index d61479d..0ffde74 100644
> --- a/xen/arch/arm/vgic-v3.c
> +++ b/xen/arch/arm/vgic-v3.c
> @@ -331,6 +331,14 @@ read_unknown:
>  return 1;
>  }
>  
> +int vgic_lpi_get_priority(struct domain *d, uint32_t vlpi)
> +{
> +if ( vlpi >= d->arch.vgic.nr_lpis )
> +return GIC_PRI_IRQ;
> +
> +return d->arch.vgic.proptable[vlpi - 8192] & 0xfc;
> +}
> +
>  static int __vgic_v3_rdistr_rd_mmio_write(struct vcpu *v, mmio_info_t *info,
>uint32_t gicr_reg,
>register_t r)
> @@ -1426,6 +1434,9 @@ static int vgic_v3_vcpu_init(struct vcpu *v)
>  if ( v->vcpu_id == last_cpu || (v->vcpu_id == (d->max_vcpus - 1)) )
>  v->arch.vgic.flags |= VGIC_V3_RDIST_LAST;
>  
> +spin_lock_init(>arch.vgic.pending_lpi_list_lock);
> +INIT_LIST_HEAD(>arch.vgic.pending_lpi_list);
> +
>  return 0;
>  }
>  
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index de77aaa..f15eb3e 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -31,6 +31,8 @@
>  #include 
>  #include 
>  #include 
> +#include 
> +#include 
>  
>  static inline struct vgic_irq_rank *vgic_get_rank(struct vcpu *v, int rank)
>  {
> @@ -61,7 +63,7 @@ struct vgic_irq_rank *vgic_rank_irq(struct vcpu *v, 
> unsigned int irq)
>  return vgic_get_rank(v, rank);
>  }
>  
> -static void vgic_init_pending_irq(struct pending_irq *p, unsigned int virq)
> +void vgic_init_pending_irq(struct pending_irq *p, unsigned int virq)
>  {
>  INIT_LIST_HEAD(>inflight);
>  INIT_LIST_HEAD(>lr_queue);
> @@ -247,10 +249,14 @@ struct vcpu *vgic_get_target_vcpu(struct vcpu *v, 
> unsigned int virq)
>  
>  static int vgic_get_virq_priority(struct vcpu *v, unsigned int virq)
>  {
> -struct vgic_irq_rank *rank = vgic_rank_irq(v, virq);
> +struct vgic_irq_rank *rank;
>  unsigned long flags;
>  int priority;
>  
> +if ( virq >= 8192 )
> +return vgic_lpi_get_priority(v->domain, virq);
> +
> +rank = vgic_rank_irq(v, virq);
>  vgic_lock_rank(v, rank, flags);
>  priority = rank->priority[virq & INTERRUPT_RANK_MASK];
>  vgic_unlock_rank(v, rank, flags);
> @@ -449,13 +455,63 @@ bool vgic_to_sgi(struct vcpu *v, register_t sgir, enum 
> gic_sgi_mode irqmode,
>  return true;
>  }
>  
> +/*
> + * Holding struct pending_irq's for each possible virtual LPI in each domain
> + * requires too much Xen memory, also a malicious guest could potentially
> + * spam Xen with LPI map requests. We cannot cover those with (guest 
> allocated)
> + * ITS memory, so we use a dynamic scheme of allocating struct pending_irq's
> + * on demand.
> + */
> +struct pending_irq *lpi_to_pending(struct vcpu *v, unsigned int lpi,
> +   bool allocate)
> +{
> +struct lpi_pending_irq *lpi_irq, *empty = NULL;
> +
> +spin_lock(>arch.vgic.pending_lpi_list_lock);
> +

[Xen-devel] [RFC PATCH v2 09/26] ARM: GICv3: introduce separate pending_irq structs for LPIs

2016-12-22 Thread Andre Przywara
For the same reason that allocating a struct irq_desc for each
possible LPI is not an option, having a struct pending_irq for each LPI
is also not feasible. However we actually only need those when an
interrupt is on a vCPU (or is about to be injected).
Maintain a list of those structs that we can use for the lifecycle of
a guest LPI. We allocate new entries if necessary, however reuse
pre-owned entries whenever possible.
I added some locking around this list here, however my gut feeling is
that we don't need one because this a per-VCPU structure anyway.
If someone could confirm this, I'd be grateful.
Teach the existing VGIC functions to find the right pointer when being
given a virtual LPI number.

Signed-off-by: Andre Przywara 
---
 xen/arch/arm/gic.c   |  3 +++
 xen/arch/arm/vgic-v3.c   | 11 
 xen/arch/arm/vgic.c  | 64 +---
 xen/include/asm-arm/domain.h |  2 ++
 xen/include/asm-arm/vgic.h   | 10 +++
 5 files changed, 87 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index a5348f2..6f25501 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -509,6 +509,9 @@ static void gic_update_one_lr(struct vcpu *v, int i)
 struct vcpu *v_target = vgic_get_target_vcpu(v, irq);
 irq_set_affinity(p->desc, cpumask_of(v_target->processor));
 }
+/* If this was an LPI, mark this struct as available again. */
+if ( p->irq >= 8192 )
+p->irq = 0;
 }
 }
 }
diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
index d61479d..0ffde74 100644
--- a/xen/arch/arm/vgic-v3.c
+++ b/xen/arch/arm/vgic-v3.c
@@ -331,6 +331,14 @@ read_unknown:
 return 1;
 }
 
+int vgic_lpi_get_priority(struct domain *d, uint32_t vlpi)
+{
+if ( vlpi >= d->arch.vgic.nr_lpis )
+return GIC_PRI_IRQ;
+
+return d->arch.vgic.proptable[vlpi - 8192] & 0xfc;
+}
+
 static int __vgic_v3_rdistr_rd_mmio_write(struct vcpu *v, mmio_info_t *info,
   uint32_t gicr_reg,
   register_t r)
@@ -1426,6 +1434,9 @@ static int vgic_v3_vcpu_init(struct vcpu *v)
 if ( v->vcpu_id == last_cpu || (v->vcpu_id == (d->max_vcpus - 1)) )
 v->arch.vgic.flags |= VGIC_V3_RDIST_LAST;
 
+spin_lock_init(>arch.vgic.pending_lpi_list_lock);
+INIT_LIST_HEAD(>arch.vgic.pending_lpi_list);
+
 return 0;
 }
 
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index de77aaa..f15eb3e 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -31,6 +31,8 @@
 #include 
 #include 
 #include 
+#include 
+#include 
 
 static inline struct vgic_irq_rank *vgic_get_rank(struct vcpu *v, int rank)
 {
@@ -61,7 +63,7 @@ struct vgic_irq_rank *vgic_rank_irq(struct vcpu *v, unsigned 
int irq)
 return vgic_get_rank(v, rank);
 }
 
-static void vgic_init_pending_irq(struct pending_irq *p, unsigned int virq)
+void vgic_init_pending_irq(struct pending_irq *p, unsigned int virq)
 {
 INIT_LIST_HEAD(>inflight);
 INIT_LIST_HEAD(>lr_queue);
@@ -247,10 +249,14 @@ struct vcpu *vgic_get_target_vcpu(struct vcpu *v, 
unsigned int virq)
 
 static int vgic_get_virq_priority(struct vcpu *v, unsigned int virq)
 {
-struct vgic_irq_rank *rank = vgic_rank_irq(v, virq);
+struct vgic_irq_rank *rank;
 unsigned long flags;
 int priority;
 
+if ( virq >= 8192 )
+return vgic_lpi_get_priority(v->domain, virq);
+
+rank = vgic_rank_irq(v, virq);
 vgic_lock_rank(v, rank, flags);
 priority = rank->priority[virq & INTERRUPT_RANK_MASK];
 vgic_unlock_rank(v, rank, flags);
@@ -449,13 +455,63 @@ bool vgic_to_sgi(struct vcpu *v, register_t sgir, enum 
gic_sgi_mode irqmode,
 return true;
 }
 
+/*
+ * Holding struct pending_irq's for each possible virtual LPI in each domain
+ * requires too much Xen memory, also a malicious guest could potentially
+ * spam Xen with LPI map requests. We cannot cover those with (guest allocated)
+ * ITS memory, so we use a dynamic scheme of allocating struct pending_irq's
+ * on demand.
+ */
+struct pending_irq *lpi_to_pending(struct vcpu *v, unsigned int lpi,
+   bool allocate)
+{
+struct lpi_pending_irq *lpi_irq, *empty = NULL;
+
+spin_lock(>arch.vgic.pending_lpi_list_lock);
+list_for_each_entry(lpi_irq, >arch.vgic.pending_lpi_list, entry)
+{
+if ( lpi_irq->pirq.irq == lpi )
+{
+spin_unlock(>arch.vgic.pending_lpi_list_lock);
+return _irq->pirq;
+}
+
+if ( lpi_irq->pirq.irq == 0 && !empty )
+empty = lpi_irq;
+}
+
+if ( !allocate )
+{
+spin_unlock(>arch.vgic.pending_lpi_list_lock);
+return NULL;
+}
+
+if ( !empty )
+{
+empty = xzalloc(struct lpi_pending_irq);
+vgic_init_pending_irq(>pirq, lpi);
+list_add_tail(>entry,