On 03/29/2012 09:14 PM, Jan Kiszka wrote:
> Currently, MSI messages can only be injected to in-kernel irqchips by
> defining a corresponding IRQ route for each message. This is not only
> unwieldy if the MSI messages are generated "on the fly" by user space;
> IRQ routes are also a limited resource that user space has to manage
> carefully.
>
> By providing a direct injection path, we can both avoid using up limited
> resources and simplify the necessary steps for user land.
>
> diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
> index 81ff39f..ed27d1b 100644
> --- a/Documentation/virtual/kvm/api.txt
> +++ b/Documentation/virtual/kvm/api.txt
> @@ -1482,6 +1482,27 @@ See KVM_ASSIGN_DEV_IRQ for the data structure. The target device is specified
> by assigned_dev_id. In the flags field, only KVM_DEV_ASSIGN_MASK_INTX is
> evaluated.
>
> +4.61 KVM_SIGNAL_MSI
> +
> +Capability: KVM_CAP_SIGNAL_MSI
> +Architectures: x86
> +Type: vm ioctl
> +Parameters: struct kvm_msi (in)
> +Returns: >0 on delivery, 0 if guest blocked the MSI, and -1 on error
> +
> +Directly inject an MSI message. Only valid with an in-kernel irqchip that
> +handles MSI messages.
> +
> +struct kvm_msi {
> +	__u32 address_lo;
> +	__u32 address_hi;
> +	__u32 data;
> +	__u32 flags;
> +	__u8  pad[16];
> +};
> +
> +No flags are defined so far. The corresponding field must be 0.
>
There are two ways in which this can be generalized:
struct kvm_general_irq {
    __u32 type; // line | MSI
    __u32 op;   // raise/lower/trigger
    union {
        ... line;
        struct kvm_msi msi;
    };
};
so we have a single ioctl for all interrupt handling. This allows
eventual removal of the line-oriented ioctls.
The other alternative is to have a DMA interface, similar to the kvm_run
mmio interface but with the kernel acting as the destination. The advantage
here is that we can handle DMA from a device to any kernel-emulated
device, not just the APIC MSI range. A downside is that we can't return
values related to interrupt coalescing.
A performance note: delivering an interrupt needs to search all vcpus
for an APIC ID match. The previous plan was to cache (or pre-calculate)
this lookup in the irq routing table. Now it looks like we'll need a
separate cache for this.
(yes, I said on the call I don't anticipate objections but preparing to
apply a patch always triggers more critical thinking)
--
error compiling committee.c: too many arguments to function