On 06/08/2012 07:01 PM, Michael S. Tsirkin wrote:
> We can deliver certain interrupts, notably MSIX,
> from atomic context. Add a new API kvm_set_irq_inatomic,
> that does exactly that, and use it to implement
> an irq handler for msi.
>
> This reduces the pressure on the scheduler in the case
> where a host irq and a guest irq share a host cpu.
>
Looks nice.
>
> +/*
> + * Deliver an IRQ in an atomic context if we can, or return a failure;
> + * the caller can retry in process context.
> + * Return value:
> + * -EWOULDBLOCK - Can't deliver in atomic context: retry in process context.
> + * Other values - No need to retry.
> + */
> + */
> +int kvm_set_irq_inatomic(struct kvm *kvm, int irq_source_id, u32 irq, int level)
> +{
> + struct kvm_kernel_irq_routing_entry *e;
> + int ret = -EINVAL;
> + struct kvm_irq_routing_table *irq_rt;
> + struct hlist_node *n;
> +
> + trace_kvm_set_irq(irq, level, irq_source_id);
> +
> + /*
> + * We know MSI are safe in interrupt context. They are also
> + * easy as there's a single routing entry for these GSIs.
> + * So only handle MSI in an atomic context, for now.
> + *
> + * This shares some code with kvm_set_irq: this
> + * version is optimized for a single entry MSI only case.
> + */
> + rcu_read_lock();
> + irq_rt = rcu_dereference(kvm->irq_routing);
> + if (irq < irq_rt->nr_rt_entries)
> + hlist_for_each_entry(e, n, &irq_rt->map[irq], link) {
> + if (likely(e->type == KVM_IRQ_ROUTING_MSI))
> + ret = kvm_set_msi(e, kvm, irq_source_id, level);
> + else
> + ret = -EWOULDBLOCK;
> + break;
> + }
> + rcu_read_unlock();
> + return ret;
> +}
> +
kvm_set_msi() contains a loop over all vcpus to match the APIC ID (or to
broadcast). I'd rather see a cache holding the resolved vcpu pointer and
vector number (assuming fixed delivery mode and none of the other junk):
if the cache is filled, fire away; otherwise take the slow path in a
thread and fill the cache if possible.
--
error compiling committee.c: too many arguments to function