IRQFD support with GICv3 ITS (WAS: RE: [PATCH 00/13] arm64: KVM: GICv3 ITS emulation)

2015-06-10 Thread Pavel Fedin
 Hello guys!

 Currently on ARM, irqfd supports routing a host eventfd towards a
 virtual SPI:
 eventfd -> vSPI = gsi + 32
 The parameters of irqfd are the eventfd and the gsi.

 Yes, but this works only with GICv2m, because it actually turns MSI data into an SPI number.
ITS works in a completely different way.

 2) now we have virtual msi injection, we could use msi routing to inject
 virtual LPI's. But is it what you need for your qemu integration?

 Actually this is what I wanted to discuss here...
 I have studied the IRQ routing mechanism a little bit... And it comes down to the question of what a 'GSI' is. As far as I could understand, on x86 a GSI is a completely virtual entity, which can be tied either to an irqchip pin (a physical IRQ) or to an MSI event. There is no correspondence at all between GSI numbers and guest IRQ numbers; GSIs are just allocated by userspace starting from 0 and up. Is my understanding correct?
 On ARM, I see, a completely different approach is used. For the KVM_IRQ_LINE ioctl the GSI is actually a raw GIC IRQ number plus some extra bits for target and type. For KVM_IRQFD with GICv2m the GSI is actually an SPI index (starting from zero, so that IRQ = GSI + 32).
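 The KVM_IRQ_LINE encoding mentioned above can be sketched as below. The field layout (bits [31:24] irq_type, [23:16] vcpu_index, [15:0] irq_id) follows my reading of the KVM API documentation; the helper name is made up for illustration:

```c
#include <stdint.h>

/* Sketch of the ARM KVM_IRQ_LINE 'irq' field encoding, per the KVM API
 * documentation: bits [31:24] irq_type, [23:16] vcpu_index (only used for
 * per-CPU interrupts), [15:0] irq_id. Type 1 = SPI, type 2 = PPI. */
enum { IRQ_TYPE_CPU = 0, IRQ_TYPE_SPI = 1, IRQ_TYPE_PPI = 2 };

static uint32_t arm_irq_line_encode(uint32_t type, uint32_t vcpu, uint32_t irq)
{
    return (type << 24) | (vcpu << 16) | irq;
}
```

 So for KVM_IRQ_LINE the "GSI" already carries the type and target bits, which is exactly the first of the two meanings described above.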
 First of all, I would say that we already have an inconsistency in the ARM API: the same thing called GSI has two different meanings for different functions.
 I think it would be a bad idea to introduce a third, separate meaning for MSIs. However, this is what we could do:

 Approach 1: the GICv2m way.
 We could add one more ioctl which would decode MSI data into an IRQ number (in our case an LPI). What it would return is LPI - 32, to keep in line with the existing convention.
 Pros: does not bring any more inconsistency into the KVM API.
 Cons: requires adding one more ioctl and one more MSI handling mechanism. Aren't there too many of them already?

 Approach 2: IRQ routing.
 We could implement MSI routing using virtual GSI numbers. In order to stay compatible with what we have, we could say that GSI numbers below 8192 are SPI GSIs, and everything starting from 8192 is MSI. Then we could use the KVM_SET_GSI_ROUTING ioctl to assign these GSIs to actual MSIs, which would then go full-cycle through the ITS.
 Pros: does not introduce any new APIs.
 Cons:
 - Introduces a third meaning for GSI on ARM.
 - Slower than approach 1, because there every interrupt is pre-translated, while here we engage the ITS every time.
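 A minimal sketch of what such a routing entry could look like is below. The structures are local mirrors of the kvm_irq_routing layout from linux/kvm.h (so the sketch compiles without kernel headers), and the 8192 base is the hypothetical split proposed above, not an existing API:

```c
#include <stdint.h>
#include <string.h>

/* Local mirror of the relevant kvm_irq_routing_entry fields (see
 * linux/kvm.h); KVM_IRQ_ROUTING_MSI-style entry only. */
struct route_msi { uint32_t address_lo, address_hi, data; };
struct route_entry {
    uint32_t gsi;
    uint32_t type;               /* routing type, MSI here */
    uint32_t flags;
    struct route_msi msi;
};

#define ROUTING_MSI  2
#define MSI_GSI_BASE 8192u       /* hypothetical: GSIs >= 8192 are MSIs */

/* Build one entry mapping a virtual GSI to an MSI doorbell write, as a
 * userspace would before calling KVM_SET_GSI_ROUTING. */
static struct route_entry make_msi_route(uint32_t gsi, uint64_t addr,
                                         uint32_t data)
{
    struct route_entry e;
    memset(&e, 0, sizeof(e));
    e.gsi = gsi;
    e.type = ROUTING_MSI;
    e.msi.address_lo = (uint32_t)addr;
    e.msi.address_hi = (uint32_t)(addr >> 32);
    e.msi.data = data;
    return e;
}
```

 Every MSI injection through such an entry would then be translated by the emulated ITS at delivery time, which is the performance cost noted above.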

 Personally I have already tried approach 1 and I can say that it works. There is no problem with target specification, because the current ITS code stores everything in a single bunch, so I have to locate the particular ITTE corresponding to an LPI anyway and get the collection ID from there. However, yes, I agree, this approach has the same performance drawback as my suggested approach 2.

 Any thoughts / ideas ?

Kind regards,
Pavel Fedin
Expert Engineer
Samsung Electronics Research center Russia


___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


Re: IRQFD support with GICv3 ITS (WAS: RE: [PATCH 00/13] arm64: KVM: GICv3 ITS emulation)

2015-06-10 Thread Eric Auger
Hi,
On 06/10/2015 10:31 AM, Pavel Fedin wrote:
  Hello guys!
 
 Currently on ARM, irqfd supports routing a host eventfd towards a
 virtual SPI:
 eventfd -> vSPI = gsi + 32
 The parameters of irqfd are the eventfd and the gsi.
 
  Yes, but this works only with GICv2m, because it actually turns MSI data into an SPI number.
 ITS works in a completely different way.
 
 2) now we have virtual msi injection, we could use msi routing to inject
 virtual LPI's. But is it what you need for your qemu integration?
 
  Actually this is what I wanted to discuss here...
  I have studied the IRQ routing mechanism a little bit... And it comes down to the question of what a 'GSI' is. As far as I could understand, on x86 a GSI is a completely virtual entity, which can be tied either to an irqchip pin (a physical IRQ) or to an MSI event. There is no correspondence at all between GSI numbers and guest IRQ numbers; GSIs are just allocated by userspace starting from 0 and up. Is my understanding correct?
Well I think as long as you use irqchip routing, the gsi is not random. When
looking at arch/x86/kvm/irq_comm.c, in kvm_set_routing_entry you can
see there is an offset applied to the gsi and irqchip/pin depending on the
type of irqchip (pic_master, pic_slave, ioapic), originally done by the
BIOS? This is the default routing. Now in qemu the irqchip routing
entries are built by kvm_irqchip_add_irq_route in hw/intc/openpic_kvm.c,
hw/i386/kvm/ioapic.c, ... and then committed.

When looking at MSI routing and qemu integration, in case
kvm_gsi_direct_mapping is NOT used, kvm_irqchip_get_virq indeed finds
a gsi belonging to the gsi range and not mapped with irqchip entries.

If kvm_gsi_direct_mapping is used, an irqchip-mapped gsi is used
instead. At least this is my understanding.

  On ARM, I see, a completely different approach is used. For the KVM_IRQ_LINE ioctl the GSI is actually a raw GIC IRQ number plus some extra bits for target and type. For KVM_IRQFD with GICv2m the GSI is actually an SPI index (starting from zero, so that IRQ = GSI + 32).
  First of all, I would say that we already have an inconsistency in the ARM API: the same thing called GSI has two different meanings for different functions.
Well that's true. This avoided having ARM arch-specific adaptations
for VFIO and the MMIO VHOST-NET proto at that time. Now with the advent of MSI,
those adaptations become needed anyway. Also we concluded it was not
meaningful to inject PPIs. The gsi irqfd argument directly matches the IRQ
number found in the guest device tree.

  I think it would be a bad idea to introduce a third, separate meaning for MSIs. However, this is what we could do:
 
  Approach 1: the GICv2m way.
  We could add one more ioctl which would decode MSI data into an IRQ number (in our case an LPI). What it would return is LPI - 32, to keep in line with the existing convention.
  Pros: does not bring any more inconsistency into the KVM API.
  Cons: requires adding one more ioctl and one more MSI handling mechanism. Aren't there too many of them already?

Indeed, in the newly added qemu kvm-all.c kvm_arch_msi_data_to_gsi we could
call a new ioctl that translates the data + deviceid? into an LPI and
program irqfd with that LPI. This is done once, when setting irqfd up.
This also means extending irqfd support to LPI injection, the gsi being the
LPI index if gsi >= 8192. In that case we continue using
kvm_gsi_direct_mapping and the gsi still is an IRQ index.
 
  Approach 2: IRQ routing.
  We could implement MSI routing using virtual GSI numbers. In order to stay compatible with what we have, we could say that GSI numbers below 8192 are SPI GSIs, and everything starting from 8192 is MSI.
I think the gsi can be considered as an index:
0 - 1020: SPI index
>= 8192: LPI index
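The index scheme above could be sketched as a small classifier; the exact bounds are taken from the ranges proposed here (everything between them left invalid), not from any existing API:

```c
#include <stdint.h>

/* Hypothetical classification of the gsi argument under the scheme above:
 * low numbers index SPIs (hardware SPI = gsi + 32), anything from 8192 up
 * indexes LPIs directly (LPI INTIDs start at 8192 in GICv3). */
enum gsi_kind { GSI_SPI, GSI_LPI, GSI_INVALID };

static enum gsi_kind classify_gsi(uint32_t gsi)
{
    if (gsi <= 1020)
        return GSI_SPI;     /* SPI index range */
    if (gsi >= 8192)
        return GSI_LPI;     /* LPI index range */
    return GSI_INVALID;     /* hole between the two ranges */
}
```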
 Then we could use the KVM_SET_GSI_ROUTING ioctl to assign these
 GSIs to actual MSIs, which would then go full-cycle through the ITS.
  Pros: does not introduce any new APIs.
  Cons:
 - Introduces a third meaning for GSI on ARM.
 - Slower than approach 1, because there every interrupt is pre-translated, while here we engage the ITS every time.

KVM GSI routing, even if only used for MSI routing, then mandates
building entries for non-MSI IRQs, using irqchip routing entries. Then you
run into the irqchip.c kvm_irq_routing_table
chip[KVM_NR_IRQCHIPS][KVM_IRQCHIP_NUM_PINS] static allocation issue. I
guess this code would need to be revisited to accommodate the large space and
variable pin number of the GIC.

Hope it helps

Best Regards

Eric

 
  Personally I have already tried approach 1 and I can say that it works. There is no problem with target specification, because the current ITS code stores everything in a single bunch, so I have to locate the particular ITTE corresponding to an LPI anyway and get the collection ID from there. However, yes, I agree, this approach has the same performance drawback as my suggested approach 2.
 
  Any thoughts / ideas ?
 
 Kind regards,
 Pavel Fedin
 Expert Engineer
 Samsung Electronics Research center Russia
 
 


RE: IRQFD support with GICv3 ITS (WAS: RE: [PATCH 00/13] arm64: KVM: GICv3 ITS emulation)

2015-06-10 Thread Pavel Fedin
 Hi!

 Indeed, in the newly added qemu kvm-all.c kvm_arch_msi_data_to_gsi we could
 call a new ioctl that translates the data + deviceid? into an LPI and
 program irqfd with that LPI. This is done once, when setting irqfd up.
 This also means extending irqfd support to LPI injection, the gsi being the
 LPI index if gsi >= 8192. In that case we continue using
 kvm_gsi_direct_mapping and the gsi still is an IRQ index.

 This is exactly what I have done in my kernel + qemu. I have added a new KVM capability, and then in qemu I do this:
--- cut ---
if (kvm_gsi_kernel_mapping()) {
    struct kvm_msi msi;

    msi.address_lo = (uint32_t)msg.address;
    msi.address_hi = msg.address >> 32;
    msi.data = le32_to_cpu(msg.data);
    memset(msi.pad, 0, sizeof(msi.pad));

    if (dev) {
        msi.devid = (pci_bus_num(dev->bus) << 8) | dev->devfn;
        msi.flags = KVM_MSI_VALID_DEVID;
    } else {
        msi.devid = 0;
        msi.flags = 0;
    }

    return kvm_vm_ioctl(s, KVM_TRANSLATE_MSI, &msi);
}
--- cut ---
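 For reference, the devid computed in the snippet above packs the standard PCI requester ID; a tiny standalone sketch of that encoding (helper name is mine, for illustration):

```c
#include <stdint.h>

/* PCI requester-ID encoding used as the MSI device ID:
 * bits [15:8] bus number, bits [7:0] devfn (devfn itself combines
 * the 5-bit device number and 3-bit function number). */
static uint32_t msi_devid(uint32_t bus, uint32_t devfn)
{
    return (bus << 8) | devfn;
}
```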
 KVM_TRANSLATE_MSI returns an LPI number. This seemed to be the simplest and fastest thing to do.
 If someone is interested, I could prepare an RFC patch series for this, which would apply on top of Andre's ITS implementation.

Kind regards,
Pavel Fedin
Expert Engineer
Samsung Electronics Research center Russia




RE: IRQFD support with GICv3 ITS (WAS: RE: [PATCH 00/13] arm64: KVM: GICv3 ITS emulation)

2015-06-10 Thread Pavel Fedin
 Hello!

 KVM GSI routing, even if only used for MSI routing then mandates to
 build entries for non MSI IRQs, using irqchip routing entries. Then you
 draw the irqchip.c kvm_irq_routing_table
 chip[KVM_NR_IRQCHIPS][KVM_IRQCHIP_NUM_PINS] static allocation issue.

 Sorry for this add-on, I needed time to look at the code.
 Actually, if we don't use this code at all, and implement our own kvm_set_irq_routing() and kvm_free_irq_routing(), we don't have to bother about all these limitations.
 The simplest thing to do there would be to store the GSI number in struct its_itte. In this case raising an MSI by GSI would not differ from what I currently do.

Kind regards,
Pavel Fedin
Expert Engineer
Samsung Electronics Research center Russia



Re: [PATCH 00/13] arm64: KVM: GICv3 ITS emulation

2015-06-08 Thread Marc Zyngier
On 08/06/15 11:54, Pavel Fedin wrote:
  Hi!
 
 I'm afraid this is not enough. A write to GITS_TRANSLATER (DID+EID)
 results in an (LPI, CPU) pair. Can you easily express the CPU part in
 irqfd (this is a genuine question, I'm not familiar enough with that
 part of the core)?
 
  But... As far as I could understand, an LPI is added to a collection as a part of setup. And a collection actually represents a destination CPU, doesn't it? And we can't have multiple LPIs sharing the same number and going to different CPUs. Or am I wrong? Unfortunately I don't have the GICv3 architecture reference manual.

This is true to some extent. But the point is that the result of the
translation is both an LPI and a CPU. My question was how you would
convey the notion of a target vcpu when using irqfd. As far as
I know this doesn't really fit, unless we start introducing the dreaded
GSI routing...

Do we really want to go down that road?

 Another concern
 would be the support of GICv4, which relies on the command queue
 handling to be handled in the kernel
 
  Wow, i didn't know about GICv4.

I wish I didn't know about it.

M.
-- 
Jazz is not dead. It just smells funny...


RE: [PATCH 00/13] arm64: KVM: GICv3 ITS emulation

2015-06-08 Thread Pavel Fedin
 Hello everybody!

 The GICv3 ITS (Interrupt Translation Service) is a part of the
 ARM GICv3 interrupt controller used for implementing MSIs.
 It specifies a new kind of interrupts (LPIs), which are mapped to
 establish a connection between a device, its MSI payload value and
 the target processor the IRQ is eventually delivered to.
 In order to allow using MSIs in an ARM64 KVM guest, we emulate this
 ITS widget in the kernel.

 I have tested the patch and got some more ideas for future extension...

 First of all, it would be nice to have the possibility to directly inject LPIs by number. This will be useful for irqfd support in qemu.
 Next, irqfd support currently poses a problem. We need to somehow know the IRQ number from the MSI-X data (device ID plus event ID). The ITS has all this information, so it would be nice to be able to query for the translation from within userspace. The question is: how to do it? Should we add some ioctl for this purpose? Currently I am experimenting with an extra KVM_TRANSLATE_MSI ioctl which, given MSI data, would return the LPI number.
 Actually, before your patch came out I had almost done the same thing. But instead I decided to implement the ITS in qemu while leaving LPI handling to the kernel. In this case my qemu would have everything needed.
 By the way, why did you decide to put everything into the kernel? Yes, in-kernel emulation is faster, but the ITS is not accessed frequently.

Kind regards,
Pavel Fedin
Expert Engineer
Samsung Electronics Research center Russia



RE: [PATCH 00/13] arm64: KVM: GICv3 ITS emulation

2015-06-08 Thread Pavel Fedin
 Hi!

 I'm afraid this is not enough. A write to GITS_TRANSLATER (DID+EID)
 results in an (LPI, CPU) pair. Can you easily express the CPU part in
 irqfd (this is a genuine question, I'm not familiar enough with that
 part of the core)?

 But... As far as I could understand, an LPI is added to a collection as a part of setup. And a collection actually represents a destination CPU, doesn't it? And we can't have multiple LPIs sharing the same number and going to different CPUs. Or am I wrong? Unfortunately I don't have the GICv3 architecture reference manual.

 Another concern
 would be the support of GICv4, which relies on the command queue
 handling to be handled in the kernel

 Wow, I didn't know about GICv4.

Kind regards,
Pavel Fedin
Expert Engineer
Samsung Electronics Research center Russia

