On 07/29/2012 11:00 PM, Michael S. Tsirkin wrote:
> I've been looking at adding caching for IRQs so that we don't need to
> scan all VCPUs on each interrupt. One issue I had a problem with, is
> how the cache structure can be used from both a thread (to fill out the
> cache) and interrupt (to actually send if cache is valid).
>
> For now just added a lock field in the cache so we don't need to worry
> about this, and with such a lock in place we don't have to worry about
> RCU as cache can be invalidated simply under this lock.
>
> For now this just declares the structure and updates the APIs
> so it's not intended to be applied, but just to give you
> the idea.
>
> Comments, flames welcome.
I hope you aren't expecting any of the latter.
>
> +struct kvm_irq_cache {
> + spinlock_t lock;
> + bool valid;
> + struct kvm_vcpu *dest;
> + /* For now we only cache lapic irqs */
> + struct kvm_lapic_irq irq;
> + /* Protected by kvm->irq_cache_lock */
> + struct list_head list;
> +};
> +
Why an external structure?
Why not add something to kvm_kernel_irq_routing_entry? It is already
protected by rcu.
The atomic context code could look like this:
	if (kire->cache.valid) {
		kvm_apic_set_irq(kire->cache.vcpu, &kire->cache.irq);
		return;
	}
	queue_work(process_irq_slowpath);
The slow path tries to fill the cache, and processes the interrupt in
any case.
--
error compiling committee.c: too many arguments to function