On Tue, Nov 26, 2013 at 2:47 PM, Paolo Bonzini <pbonz...@redhat.com> wrote:
> On 26/11/2013 13:40, Zhanghaoyu (A) wrote:
>> When the guest sets an irq's smp_affinity, a VMEXIT occurs and the
>> vcpu thread's ioctl returns from the hypervisor to QEMU; the vcpu
>> thread then asks the hypervisor to update the irq routing table.
>> In kvm_set_irq_routing, synchronize_rcu is called, so the vcpu
>> thread blocks until an RCU grace period elapses, and during that
>> time the vcpu cannot service the VM.
>> Interrupts delivered to this vcpu therefore cannot be handled in
>> time, and the applications running on it cannot be serviced either.
>> This is unacceptable in some real-time scenarios, e.g. telecom.
>>
>> So I want to create a single workqueue for each VM that performs
>> the RCU synchronization for the irq routing table asynchronously,
>> letting the vcpu thread return and re-enter the VM to provide
>> service immediately, with no need to block waiting for the RCU
>> grace period.
>> I have implemented a rough patch and tested it in our telecom
>> environment; the problem described above disappeared.
>
> I don't think a workqueue is even needed.  You just need to use call_rcu
> to free "old" after releasing kvm->irq_lock.
>
> What do you think?
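For reference, the update path under discussion does roughly the
following; this is a simplified sketch, not the exact kvm code of any
particular kernel version:

/*
 * Simplified sketch of the routing-table update path.  The real
 * kvm_set_irq_routing() differs in detail between kernel versions.
 */
static int set_irq_routing_sketch(struct kvm *kvm,
                                  struct kvm_irq_routing_table *new)
{
        struct kvm_irq_routing_table *old;

        mutex_lock(&kvm->irq_lock);
        old = rcu_dereference_protected(kvm->irq_routing,
                                        lockdep_is_held(&kvm->irq_lock));
        rcu_assign_pointer(kvm->irq_routing, new);  /* publish new table */
        mutex_unlock(&kvm->irq_lock);

        /*
         * This is the blocking step: the vcpu thread doing the ioctl
         * sits here until a full RCU grace period has elapsed, i.e.
         * until no reader can still hold a reference to the old table.
         */
        synchronize_rcu();

        kfree(old);
        return 0;
}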

Can this cause an interrupt to be delivered to the wrong (old) vcpu?

Given the way Linux sets interrupt affinity, it cannot: changing the
affinity is (IIRC) done from within the interrupt handler, so the next
interrupt cannot already be in flight and thus cannot pick up the old
interrupt routing table.

However, it may be vulnerable in other ways.
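
Concretely, I read the call_rcu suggestion as something like the
sketch below; the rcu_head member and the function names are
assumptions for illustration, not existing kvm code:

/*
 * Sketch of the call_rcu() variant.  ASSUMPTION: an rcu_head is
 * added to the routing table so it can be queued for deferred
 * freeing, e.g.:
 *
 *      struct kvm_irq_routing_table {
 *              ...
 *              struct rcu_head rcu;
 *      };
 */
static void free_irq_routing_table(struct rcu_head *rcu)
{
        kfree(container_of(rcu, struct kvm_irq_routing_table, rcu));
}

static int set_irq_routing_nonblocking(struct kvm *kvm,
                                       struct kvm_irq_routing_table *new)
{
        struct kvm_irq_routing_table *old;

        mutex_lock(&kvm->irq_lock);
        old = rcu_dereference_protected(kvm->irq_routing,
                                        lockdep_is_held(&kvm->irq_lock));
        rcu_assign_pointer(kvm->irq_routing, new);
        mutex_unlock(&kvm->irq_lock);

        /*
         * call_rcu() returns immediately; the old table is freed by
         * the callback once the grace period ends, so the vcpu thread
         * can re-enter the guest without waiting.
         */
        call_rcu(&old->rcu, free_irq_routing_table);
        return 0;
}

Note the tradeoff: freeing is deferred rather than avoided, so a guest
that floods routing updates can accumulate stale tables until a grace
period completes.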
