On 09/20/2016 06:42 PM, Marc Zyngier wrote:
On 20/09/16 15:31, Alexander Graf wrote:
On 09/20/2016 02:37 PM, Marc Zyngier wrote:

We also need to know "timer line low + timer line masked", as otherwise
we might get spurious interrupts in the guest, no?
Yes. Though you can't really know about this one, and you'll have to
wait until the next natural exit to find out. As long as the spurious
interrupt is harmless, that's acceptable.
We can provoke a special exit for it, no?
How? The guest decides to disable its timer. That doesn't trigger any
exit whatsoever. You'll have to wait until the next exit from the guest
to notice it.
Before we inject a timer interrupt, we can check whether the pending
semantics of user space / kernel space match. If they don't match, we
can exit before we inject the interrupt and allow user space to disable
the pending state again.
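A minimal userspace-side sketch of that reconciliation, assuming hypothetical names (`sync_timer_state`, `gic_timer_pending` are illustrative, not the real KVM/QEMU identifiers): on each exit, compare the timer level reported by the kernel with the emulated GIC's pending state, and fix up the GIC before re-entering the guest.

```c
#include <stdbool.h>
#include <stdio.h>

/* Emulated GIC's view of the timer interrupt (illustrative). */
static bool gic_timer_pending;

/*
 * Reconcile the emulated GIC with the timer level the kernel
 * reported on exit, before re-entering the guest.
 */
static void sync_timer_state(bool kernel_timer_level)
{
	if (kernel_timer_level && !gic_timer_pending) {
		/* Timer fired: make the interrupt pending in the GIC. */
		gic_timer_pending = true;
		printf("GIC: timer interrupt now pending\n");
	} else if (!kernel_timer_level && gic_timer_pending) {
		/* Guest disabled its timer: retire the pending state. */
		gic_timer_pending = false;
		printf("GIC: timer pending state retired\n");
	}
}
```
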
Let's rewind a bit, because I've long lost track of what you're trying
to do to handle what.

You need two signals:

(1) TIMER_LEVEL: the output of the timer line, having accounted for the
IMASK bit. This is conveniently the value of timer->irq.level.

(2) TIMER_IRQ_MASK: an indication from userspace that a timer interrupt
is pending, and that the physical line should be masked.

You need a number of rules:

(a) On exit to userspace, the kernel always exposes the value of
TIMER_LEVEL.

(b) On kernel entry, userspace always exposes the required
TIMER_IRQ_MASK, depending on what has been exposed to it by TIMER_LEVEL.

(c) If on guest exit, TIMER_LEVEL==1 and TIMER_IRQ_MASK==0, perform a
userspace exit, because the emulated GIC needs to make the interrupt
pending.

This should be "before guest entry", because the timer might have expired in between.

(d) If on guest exit, TIMER_LEVEL==0 and TIMER_IRQ_MASK==1, perform a
userspace exit, because the guest has disabled its timer before taking
the interrupt, and the emulated GIC needs to retire the pending state.

and that's it. Nothing else. The kernel tells userspace the state of the
timer, and userspace drives the masking of the physical interrupt.
Conveniently, this matches what the current code does.
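The two signals and rules (c)/(d) above boil down to a single consistency check before guest entry (per the "before guest entry" correction, since the timer may have expired in between). A hedged sketch in C, with illustrative names rather than the actual kvm_timer code:

```c
#include <stdbool.h>

/*
 * Hypothetical snapshot of the two signals from the discussion
 * (field names are illustrative, not the real KVM identifiers).
 */
struct timer_state {
	bool timer_level;    /* (1) timer output, IMASK accounted for     */
	bool timer_irq_mask; /* (2) userspace's "pending, mask line" flag */
};

/*
 * Rules (c) and (d): exit to userspace whenever the two views
 * disagree; otherwise it is safe to enter the guest.
 */
static bool needs_user_exit(const struct timer_state *s)
{
	if (s->timer_level && !s->timer_irq_mask)
		return true;  /* (c): GIC must make the interrupt pending */
	if (!s->timer_level && s->timer_irq_mask)
		return true;  /* (d): GIC must retire the pending state   */
	return false;         /* views agree: enter the guest             */
}
```

In other words, the exit condition is just TIMER_LEVEL != TIMER_IRQ_MASK; the kernel reports, userspace masks, and any mismatch is resolved in userspace.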

Yup, it seems to work. It also feels slower than the previous code, but maybe that's just me. It's definitely far more correct.

I'll trace around a bit more to see whether I can spot any obvious low-hanging performance fruit, then prettify the patches and send them out :).


kvmarm mailing list