On Tue, Nov 26, 2013 at 06:14:27PM +0200, Gleb Natapov wrote:
On Tue, Nov 26, 2013 at 06:05:37PM +0200, Michael S. Tsirkin wrote:
On Tue, Nov 26, 2013 at 02:56:10PM +0200, Gleb Natapov wrote:
On Tue, Nov 26, 2013 at 01:47:03PM +0100, Paolo Bonzini wrote:
On 26/11/2013 13:40, Zhanghaoyu
On 28/11/2013 07:27, Zhanghaoyu (A) wrote:
Without synchronize_rcu you could have
VCPU writes to routing table
e = entry from IRQ routing table
kvm_irq_routing_update(kvm, new);
VCPU resumes execution
No, this would be exactly the same code that is running now:
mutex_lock(&kvm->irq_lock);
old = kvm->irq_routing;
kvm_irq_routing_update(kvm, new);
mutex_unlock(&kvm->irq_lock);
synchronize_rcu();
kfree(old);
On Thu, Nov 28, 2013 at 09:55:42AM +0100, Paolo Bonzini wrote:
On 28/11/2013 07:27, Zhanghaoyu (A) wrote:
Without synchronize_rcu you could have
VCPU writes to routing table
e = entry from IRQ routing table
kvm_irq_routing_update(kvm,
On Thu, Nov 28, 2013 at 09:14:22AM +, Zhanghaoyu (A) wrote:
No, this would be exactly the same code that is running now:
mutex_lock(&kvm->irq_lock);
old = kvm->irq_routing;
kvm_irq_routing_update(kvm, new);
On 28/11/2013 10:19, Gleb Natapov wrote:
Not changing current behaviour is certainly safer, but I am still not 100%
convinced we have to ensure this.
Suppose the guest does:
1: change the msi interrupt by writing to a pci register
2: read the pci register back to flush the write
3: zero the idt
I
On Thu, Nov 28, 2013 at 10:29:36AM +0100, Paolo Bonzini wrote:
On 28/11/2013 10:19, Gleb Natapov wrote:
Not changing current behaviour is certainly safer, but I am still not 100%
convinced we have to ensure this.
Suppose the guest does:
1: change the msi interrupt by writing to a pci
On 11/28/2013 11:19 AM, Gleb Natapov wrote:
On Thu, Nov 28, 2013 at 09:55:42AM +0100, Paolo Bonzini wrote:
On 28/11/2013 07:27, Zhanghaoyu (A) wrote:
Without synchronize_rcu you could have
VCPU writes to routing table
e = entry from IRQ routing
On 28/11/2013 10:49, Avi Kivity wrote:
Linux is safe: it does interrupt migration from within the interrupt
handler. If you do that before the device-specific EOI, you won't get
another interrupt until programming the MSI is complete.
Is virtio safe? IIRC it can post multiple
On Thu, Nov 28, 2013 at 11:49:00AM +0200, Avi Kivity wrote:
On 11/28/2013 11:19 AM, Gleb Natapov wrote:
On Thu, Nov 28, 2013 at 09:55:42AM +0100, Paolo Bonzini wrote:
On 28/11/2013 07:27, Zhanghaoyu (A) wrote:
Without synchronize_rcu you could have
VCPU writes to routing table
On 11/28/2013 12:11 PM, Gleb Natapov wrote:
On Thu, Nov 28, 2013 at 11:49:00AM +0200, Avi Kivity wrote:
On 11/28/2013 11:19 AM, Gleb Natapov wrote:
On Thu, Nov 28, 2013 at 09:55:42AM +0100, Paolo Bonzini wrote:
On 28/11/2013 07:27, Zhanghaoyu (A) wrote:
Without synchronize_rcu you could
On 11/28/2013 11:53 AM, Paolo Bonzini wrote:
On 28/11/2013 10:49, Avi Kivity wrote:
Linux is safe: it does interrupt migration from within the interrupt
handler. If you do that before the device-specific EOI, you won't get
another interrupt until programming the MSI is complete.
Is
On 28/11/2013 11:16, Avi Kivity wrote:
The QRCU I linked would work great latency-wise (it has roughly the same
latency as an rwsem but readers are lock-free). However, the locked
operations in the read path would hurt because of cache misses, so it's
not good either.
I guess srcu
On 11/28/2013 12:40 PM, Paolo Bonzini wrote:
On 28/11/2013 11:16, Avi Kivity wrote:
The QRCU I linked would work great latency-wise (it has roughly the same
latency as an rwsem but readers are lock-free). However, the locked
operations in the read path would hurt because of cache misses,
On Thu, Nov 28, 2013 at 12:12:55PM +0200, Avi Kivity wrote:
On 11/28/2013 12:11 PM, Gleb Natapov wrote:
On Thu, Nov 28, 2013 at 11:49:00AM +0200, Avi Kivity wrote:
On 11/28/2013 11:19 AM, Gleb Natapov wrote:
On Thu, Nov 28, 2013 at 09:55:42AM +0100, Paolo Bonzini wrote:
On 28/11/2013 07:27,
On Thu, Nov 28, 2013 at 11:40:06AM +0100, Paolo Bonzini wrote:
On 28/11/2013 11:16, Avi Kivity wrote:
The QRCU I linked would work great latency-wise (it has roughly the same
latency as an rwsem but readers are lock-free). However, the locked
operations in the read path would hurt
On 28/11/2013 12:09, Gleb Natapov wrote:
- if there are no callbacks, but there are readers, synchronize_srcu
busy-loops for some time checking if the readers complete. After a
while (20 us for synchronize_srcu, 120 us for
synchronize_srcu_expedited) it gives up and starts using a
On 11/28/2013 01:10 PM, Paolo Bonzini wrote:
On 28/11/2013 12:09, Gleb Natapov wrote:
- if there are no callbacks, but there are readers, synchronize_srcu
busy-loops for some time checking if the readers complete. After a
while (20 us for synchronize_srcu, 120 us for
On 11/28/2013 01:02 PM, Gleb Natapov wrote:
On Thu, Nov 28, 2013 at 12:12:55PM +0200, Avi Kivity wrote:
On 11/28/2013 12:11 PM, Gleb Natapov wrote:
On Thu, Nov 28, 2013 at 11:49:00AM +0200, Avi Kivity wrote:
On 11/28/2013 11:19 AM, Gleb Natapov wrote:
On Thu, Nov 28, 2013 at 09:55:42AM
On Thu, Nov 28, 2013 at 01:18:54PM +0200, Avi Kivity wrote:
On 11/28/2013 01:02 PM, Gleb Natapov wrote:
On Thu, Nov 28, 2013 at 12:12:55PM +0200, Avi Kivity wrote:
On 11/28/2013 12:11 PM, Gleb Natapov wrote:
On Thu, Nov 28, 2013 at 11:49:00AM +0200, Avi Kivity wrote:
On 11/28/2013 11:19 AM,
On Thu, Nov 28, 2013 at 12:10:40PM +0100, Paolo Bonzini wrote:
On 28/11/2013 12:09, Gleb Natapov wrote:
- if there are no callbacks, but there are readers, synchronize_srcu
busy-loops for some time checking if the readers complete. After a
while (20 us for synchronize_srcu, 120 us
On Thu, Nov 28, 2013 at 01:22:45PM +0200, Gleb Natapov wrote:
On Thu, Nov 28, 2013 at 01:18:54PM +0200, Avi Kivity wrote:
On 11/28/2013 01:02 PM, Gleb Natapov wrote:
On Thu, Nov 28, 2013 at 12:12:55PM +0200, Avi Kivity wrote:
On 11/28/2013 12:11 PM, Gleb Natapov wrote:
On Thu, Nov 28,
On 11/28/2013 01:22 PM, Gleb Natapov wrote:
On Thu, Nov 28, 2013 at 01:18:54PM +0200, Avi Kivity wrote:
On 11/28/2013 01:02 PM, Gleb Natapov wrote:
On Thu, Nov 28, 2013 at 12:12:55PM +0200, Avi Kivity wrote:
On 11/28/2013 12:11 PM, Gleb Natapov wrote:
On Thu, Nov 28, 2013 at 11:49:00AM
On 28/11/2013 12:23, Gleb Natapov wrote:
Unless what? :) Unless the reader is scheduled out?
Yes. Or unless my brain is scheduled out in the middle of a sentence.
So we will have to disable preemption in a reader to prevent big latencies for
a writer, no?
I don't think that's
On Thu, Nov 28, 2013 at 01:33:48PM +0200, Michael S. Tsirkin wrote:
On Thu, Nov 28, 2013 at 01:22:45PM +0200, Gleb Natapov wrote:
On Thu, Nov 28, 2013 at 01:18:54PM +0200, Avi Kivity wrote:
On 11/28/2013 01:02 PM, Gleb Natapov wrote:
On Thu, Nov 28, 2013 at 12:12:55PM +0200, Avi Kivity
I don't think a workqueue is even needed. You just need to use
call_rcu to free old after releasing kvm->irq_lock.
What do you think?
It should be rate limited somehow. Since it is guest triggerable, a guest
may cause the host to allocate a lot of memory this way.
Why does use call_rcu to
I understood the proposal was also to eliminate the
synchronize_rcu(), so while new interrupts would see the new
routing table, interrupts already in flight could pick up the old one.
Isn't that always the case with RCU? (See my answer above: the
vcpus already see the new routing
Hi all,
When the guest sets irq smp_affinity, a VMEXIT occurs, and the vcpu thread
returns to QEMU from the hypervisor via the IOCTL; the vcpu thread then asks
the hypervisor to update the irq routing table.
In kvm_set_irq_routing, synchronize_rcu is called, and the current vcpu thread
is blocked for so much time to
On 26/11/2013 13:40, Zhanghaoyu (A) wrote:
When the guest sets irq smp_affinity, a VMEXIT occurs, and the vcpu thread
returns to QEMU from the hypervisor via the IOCTL; the vcpu thread then asks
the hypervisor to update the irq routing table.
In kvm_set_irq_routing, synchronize_rcu is called, and the current vcpu
On Tue, Nov 26, 2013 at 12:40:36PM +, Zhanghaoyu (A) wrote:
Hi all,
When the guest sets irq smp_affinity, a VMEXIT occurs, and the vcpu thread
returns to QEMU from the hypervisor via the IOCTL; the vcpu thread then asks
the hypervisor to update the irq routing table,
Why does the vcpu thread ask the hypervisor to
On Tue, Nov 26, 2013 at 02:48:10PM +0200, Gleb Natapov wrote:
On Tue, Nov 26, 2013 at 12:40:36PM +, Zhanghaoyu (A) wrote:
Hi all,
When the guest sets irq smp_affinity, a VMEXIT occurs, and the vcpu thread
returns to QEMU from the hypervisor via the IOCTL; the vcpu thread then asks
the hypervisor to
On Tue, Nov 26, 2013 at 01:47:03PM +0100, Paolo Bonzini wrote:
On 26/11/2013 13:40, Zhanghaoyu (A) wrote:
When the guest sets irq smp_affinity, a VMEXIT occurs, and the vcpu thread
returns to QEMU from the hypervisor via the IOCTL; the vcpu thread then asks
the hypervisor to update the irq routing
On 26/11/2013 13:56, Gleb Natapov wrote:
I don't think a workqueue is even needed. You just need to use call_rcu
to free old after releasing kvm->irq_lock.
What do you think?
It should be rate limited somehow. Since it is guest triggerable, a guest may
cause the host to allocate a lot of
On Tue, Nov 26, 2013 at 2:47 PM, Paolo Bonzini pbonz...@redhat.com wrote:
On 26/11/2013 13:40, Zhanghaoyu (A) wrote:
When the guest sets irq smp_affinity, a VMEXIT occurs, and the vcpu thread
returns to QEMU from the hypervisor via the IOCTL; the vcpu thread then asks
the hypervisor to update the irq routing
On 26/11/2013 14:18, Avi Kivity wrote:
I don't think a workqueue is even needed. You just need to use call_rcu
to free old after releasing kvm->irq_lock.
What do you think?
Can this cause an interrupt to be delivered to the wrong (old) vcpu?
No, this would be exactly the same code
On Tue, Nov 26, 2013 at 3:47 PM, Paolo Bonzini pbonz...@redhat.com wrote:
On 26/11/2013 14:18, Avi Kivity wrote:
I don't think a workqueue is even needed. You just need to use call_rcu
to free old after releasing kvm->irq_lock.
What do you think?
Can this cause an interrupt to
On 26/11/2013 15:36, Avi Kivity wrote:
No, this would be exactly the same code that is running now:
mutex_lock(&kvm->irq_lock);
old = kvm->irq_routing;
kvm_irq_routing_update(kvm, new);
mutex_unlock(&kvm->irq_lock);
On 11/26/2013 04:46 PM, Paolo Bonzini wrote:
On 26/11/2013 15:36, Avi Kivity wrote:
No, this would be exactly the same code that is running now:
mutex_lock(&kvm->irq_lock);
old = kvm->irq_routing;
kvm_irq_routing_update(kvm, new);
On 26/11/2013 16:03, Gleb Natapov wrote:
I understood the proposal was also to eliminate the synchronize_rcu(),
so while new interrupts would see the new routing table, interrupts
already in flight could pick up the old one.
Isn't that always the case with RCU? (See my answer above:
On 11/26/2013 05:03 PM, Gleb Natapov wrote:
On Tue, Nov 26, 2013 at 04:54:44PM +0200, Avi Kivity wrote:
On 11/26/2013 04:46 PM, Paolo Bonzini wrote:
On 26/11/2013 15:36, Avi Kivity wrote:
No, this would be exactly the same code that is running now:
On 11/26/2013 05:20 PM, Paolo Bonzini wrote:
On 26/11/2013 16:03, Gleb Natapov wrote:
I understood the proposal was also to eliminate the synchronize_rcu(),
so while new interrupts would see the new routing table, interrupts
already in flight could pick up the old one.
Isn't that always
On 26/11/2013 16:25, Avi Kivity wrote:
If we want to ensure this, we need to use a different mechanism for
synchronization than the global RCU. QRCU would work; readers are not
wait-free but only if there is a concurrent synchronize_qrcu, which
should be rare.
An alternative path is to
On 11/26/2013 05:28 PM, Paolo Bonzini wrote:
On 26/11/2013 16:25, Avi Kivity wrote:
If we want to ensure this, we need to use a different mechanism for
synchronization than the global RCU. QRCU would work; readers are not
wait-free but only if there is a concurrent synchronize_qrcu, which
On 26/11/2013 16:35, Avi Kivity wrote:
If we want to ensure this, we need to use a different mechanism for
synchronization than the global RCU. QRCU would work; readers are not
wait-free but only if there is a concurrent synchronize_qrcu, which
should be rare.
An alternative path is to
On Tue, Nov 26, 2013 at 02:56:10PM +0200, Gleb Natapov wrote:
On Tue, Nov 26, 2013 at 01:47:03PM +0100, Paolo Bonzini wrote:
On 26/11/2013 13:40, Zhanghaoyu (A) wrote:
When the guest sets irq smp_affinity, a VMEXIT occurs, and the vcpu thread
returns to QEMU from the hypervisor via the IOCTL,
On 11/26/2013 05:58 PM, Paolo Bonzini wrote:
On 26/11/2013 16:35, Avi Kivity wrote:
If we want to ensure this, we need to use a different mechanism for
synchronization than the global RCU. QRCU would work; readers are not
wait-free but only if there is a concurrent synchronize_qrcu, which
On Tue, Nov 26, 2013 at 06:06:26PM +0200, Avi Kivity wrote:
On 11/26/2013 05:58 PM, Paolo Bonzini wrote:
On 26/11/2013 16:35, Avi Kivity wrote:
If we want to ensure this, we need to use a different mechanism for
synchronization than the global RCU. QRCU would work; readers are not
wait-free
On Tue, Nov 26, 2013 at 06:14:27PM +0200, Gleb Natapov wrote:
On Tue, Nov 26, 2013 at 06:05:37PM +0200, Michael S. Tsirkin wrote:
On Tue, Nov 26, 2013 at 02:56:10PM +0200, Gleb Natapov wrote:
On Tue, Nov 26, 2013 at 01:47:03PM +0100, Paolo Bonzini wrote:
On 26/11/2013 13:40, Zhanghaoyu
On 11/26/2013 06:11 PM, Michael S. Tsirkin wrote:
On Tue, Nov 26, 2013 at 06:06:26PM +0200, Avi Kivity wrote:
On 11/26/2013 05:58 PM, Paolo Bonzini wrote:
On 26/11/2013 16:35, Avi Kivity wrote:
If we want to ensure this, we need to use a different mechanism for
synchronization than the global
On Tue, Nov 26, 2013 at 04:20:27PM +0100, Paolo Bonzini wrote:
On 26/11/2013 16:03, Gleb Natapov wrote:
I understood the proposal was also to eliminate the
synchronize_rcu(),
so while new interrupts would see the new routing table, interrupts
already in flight could pick up the
On 11/26/2013 06:24 PM, Gleb Natapov wrote:
On Tue, Nov 26, 2013 at 04:20:27PM +0100, Paolo Bonzini wrote:
On 26/11/2013 16:03, Gleb Natapov wrote:
I understood the proposal was also to eliminate the synchronize_rcu(),
so while new interrupts would see the new routing table, interrupts
On 11/26/2013 06:28 PM, Paolo Bonzini wrote:
On 26/11/2013 17:24, Gleb Natapov wrote:
VCPU writes to routing table
e = entry from IRQ routing table
kvm_irq_routing_update(kvm, new);
VCPU resumes execution
On Tue, Nov 26, 2013 at 04:58:53PM +0100, Paolo Bonzini wrote:
On 26/11/2013 16:35, Avi Kivity wrote:
If we want to ensure this, we need to use a different mechanism for
synchronization than the global RCU. QRCU would work; readers are not
wait-free but only if there is a concurrent
On Tue, Nov 26, 2013 at 05:28:23PM +0100, Paolo Bonzini wrote:
On 26/11/2013 17:24, Gleb Natapov wrote:
VCPU writes to routing table
e = entry from IRQ routing table
kvm_irq_routing_update(kvm, new);
VCPU resumes execution
On Tue, Nov 26, 2013 at 06:27:47PM +0200, Avi Kivity wrote:
On 11/26/2013 06:24 PM, Gleb Natapov wrote:
On Tue, Nov 26, 2013 at 04:20:27PM +0100, Paolo Bonzini wrote:
On 26/11/2013 16:03, Gleb Natapov wrote:
I understood the proposal was also to eliminate the
synchronize_rcu(),
so
On 26/11/2013 17:21, Avi Kivity wrote:
It's indeed safe, but I think there's a nice win to be had if we
drop the assumption.
I'm not arguing with that, but a minor comment below:
(BTW, PCI memory writes are posted, but configuration writes are not).
MSIs are configured via PCI memory
On Tue, Nov 26, 2013 at 06:05:37PM +0200, Michael S. Tsirkin wrote:
On Tue, Nov 26, 2013 at 02:56:10PM +0200, Gleb Natapov wrote:
On Tue, Nov 26, 2013 at 01:47:03PM +0100, Paolo Bonzini wrote:
On 26/11/2013 13:40, Zhanghaoyu (A) wrote:
When the guest sets irq smp_affinity, a VMEXIT occurs,