Avi Kivity wrote:
>> I will argue the necessity of this code movement. A VM
>> Exit/Resume itself will probably block system responsiveness
>> (can't deliver an irq) on the order of 1K cycles; interrupt
>> injection is clearly far below that granularity, probably less
>> than 100 cycles.
>>
>
> The reason for the code movement is that interrupt injection in real
> mode on Intel must be emulated, which means writing into guest memory,
> which means sleeping operations once swap is enabled. A side
> benefit is removing the lapic calculations from the critical section.

OK, that is the difference between Xen & KVM. An alternative is to lock
the pages when in rmode before we disable irqs. But let us see if we can
live with this movement.
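To be concrete about what that emulation has to do: roughly the sequence
below -- push FLAGS/CS/IP onto the guest stack, then vector through the
IVT, just as the CPU itself would deliver a real-mode interrupt. The
push into guest memory is the step that can sleep once swap is enabled.
This is only a minimal userspace sketch; names like rmode_vcpu, push16
and rmode_inject_irq are invented for illustration, not the actual kvm
helpers (the real path goes through the emulator's guest-memory writes).

#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <stdio.h>

/* Toy model of a real-mode guest: flat physical memory plus the
 * registers that interrupt delivery touches. */
struct rmode_vcpu {
	uint8_t  *phys_mem;  /* flat guest physical memory */
	uint16_t ss, sp;     /* stack segment:offset */
	uint16_t cs, ip;     /* code segment:offset  */
	uint16_t flags;
};

static void push16(struct rmode_vcpu *v, uint16_t val)
{
	v->sp -= 2;
	/* Real-mode stack address is SS*16 + SP. In kvm the equivalent
	 * write into guest memory may hit a swapped-out page and sleep,
	 * which is why it cannot run with interrupts disabled. */
	memcpy(v->phys_mem + (uint32_t)v->ss * 16 + v->sp, &val, 2);
}

static void rmode_inject_irq(struct rmode_vcpu *v, uint8_t vector)
{
	uint16_t new_ip, new_cs;

	push16(v, v->flags);
	push16(v, v->cs);
	push16(v, v->ip);

	/* The IVT entry for 'vector' sits at physical address
	 * vector * 4: two bytes of IP, then two bytes of CS. */
	memcpy(&new_ip, v->phys_mem + vector * 4,     2);
	memcpy(&new_cs, v->phys_mem + vector * 4 + 2, 2);
	v->ip = new_ip;
	v->cs = new_cs;
	v->flags &= (uint16_t)~0x0200;	/* clear IF, as delivery does */
}

int main(void)
{
	struct rmode_vcpu v = {
		.phys_mem = calloc(1 << 20, 1),	/* 1 MiB of "guest" RAM */
		.ss = 0x0000, .sp = 0x7c00,
		.cs = 0x1000, .ip = 0x0042,
		.flags = 0x0202,		/* IF set */
	};
	uint16_t ivt_ip = 0x1234, ivt_cs = 0xf000;

	memcpy(v.phys_mem + 0x08 * 4,     &ivt_ip, 2);	/* int 8 entry */
	memcpy(v.phys_mem + 0x08 * 4 + 2, &ivt_cs, 2);

	rmode_inject_irq(&v, 0x08);
	printf("now at %04x:%04x, sp=%04x\n", v.cs, v.ip, v.sp);
	free(v.phys_mem);
	return 0;
}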
> But I still don't understand your objection. The interrupt has to be
> committed sometime. We move the commit point earlier by the time it
> takes to ->prepare_guest_switch() (normally zero, but sometimes some
> thousand cycles) and kvm_load_guest_fpu() (usually zero as well). The
> timing is sometimes worse, but there was always a window where a high
> priority interrupt is ignored over low priority interrupts.

Yes, this can happen even on native hardware, though rarely, when 2 irqs
arrive at almost the same time. But now it happens frequently: the guest
will frequently see a higher-priority irq left pending. Let us keep an
eye on it to see if it hurts something. Then we would probably need this
patch?

diff --git a/drivers/kvm/kvm_main.c b/drivers/kvm/kvm_main.c
index aab465d..46ad3b7 100644
--- a/drivers/kvm/kvm_main.c
+++ b/drivers/kvm/kvm_main.c
@@ -2165,6 +2165,7 @@ again:
 	if (unlikely(r))
 		goto out;
 
+	vcpu->guest_mode = 1;
 	kvm_inject_pending_timer_irqs(vcpu);
 	clear_bit(KVM_REQ_INTR, &vcpu->requests);
 	if (irqchip_in_kernel(vcpu->kvm))
@@ -2183,6 +2184,7 @@ again:
 		local_irq_enable();
 		preempt_enable();
 		r = -EINTR;
+		vcpu->guest_mode = 0;
 		kvm_run->exit_reason = KVM_EXIT_INTR;
 		++vcpu->stat.signal_exits;
 		goto out;
@@ -2192,6 +2194,7 @@ again:
 	if (test_and_clear_bit(KVM_REQ_TLB_FLUSH, &vcpu->requests))
 		kvm_x86_ops->tlb_flush(vcpu);
 	if (test_bit(KVM_REQ_INTR, &vcpu->requests)) {
+		vcpu->guest_mode = 0;
 		local_irq_enable();
 		preempt_enable();
 		r = 1;
@@ -2199,7 +2202,6 @@ again:
 		}
 	}
 
-	vcpu->guest_mode = 1;
 	kvm_guest_enter();
 
 	kvm_x86_ops->run(vcpu, kvm_run);

> [we can try to improve it by using vm86 interrupt redirection, which
> may allow event injection using VT instead of writing to the guest
> stack].

:) Personally I love this much more, or locking the pages.

> If the vcpu is descheduled, then both interrupts are delayed, by much
> more than it takes to process a single one. If we're descheduled then
> interrupt priority is a very small problem with respect to timing.
>
>> Another serious issue in the current code is that due to the previous
>> injection the irq is already consumed (IRR->ISR). So if a higher
>> priority IRQ arrives after the previous injection, the guest will see
>> 2 IRQs consumed, i.e. 2 ISR bits set.
>>
>
> I don't understand why. If an event is pending in
> VM_ENTRY_INTR_INFO_FIELD, then we don't call
> kvm_cpu_get_interrupt() at all.

That is for protected mode. Real mode does not set this field, and thus
the irq is consumed twice. But this is fixable with additional logic.
My point is that we may see more potential issues now :-(

Eddie
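P.S. To make the "2 ISR bits set" scenario concrete, a toy model of the
pending/in-service accounting. struct toy_pic and toy_pic_ack are
invented names; this only shows the resulting state, not the real
i8259/lapic code:

#include <stdint.h>
#include <stdio.h>

/* Toy interrupt controller: irr holds pending lines, isr holds lines
 * currently in service; lower bit number = higher priority. */
struct toy_pic {
	uint16_t irr;
	uint16_t isr;
};

/* "Consume" the highest-priority pending irq: move its bit from IRR
 * to ISR, as injection effectively does. Returns -1 if none pending. */
static int toy_pic_ack(struct toy_pic *p)
{
	for (int irq = 0; irq < 16; irq++) {
		uint16_t bit = (uint16_t)(1u << irq);
		if (p->irr & bit) {
			p->irr &= (uint16_t)~bit;
			p->isr |= bit;
			return irq;
		}
	}
	return -1;
}

int main(void)
{
	struct toy_pic pic = { 0, 0 };

	/* Low-priority irq 9 is consumed at injection time... */
	pic.irr |= 1u << 9;
	int first = toy_pic_ack(&pic);

	/* ...then higher-priority irq 3 arrives in the widened window
	 * and is consumed as well, before the guest has run either
	 * handler. */
	pic.irr |= 1u << 3;
	int second = toy_pic_ack(&pic);

	/* Two ISR bits are now set at once. */
	printf("acked irq %d then irq %d, isr=%#06x\n",
	       first, second, pic.isr);
	return 0;
}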