OK, I've studied the ARM KVM implementation in the kernel, and it's pretty
clear to me that it schedules a timer to keep track of physical time and
makes the virtual timer in the simulation track that physical time. When
reading the timer register values, even when the guest isn't running,
you'll get a live value that increments in real time and is simply offset
to match what the guest expects. I also see where the kernel checks whether
this live timer *should* have fired while the guest was away and, if so,
injects an interrupt into the VGIC for the guest to receive when it wakes
up again.
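
To make that concrete, here's roughly how the counter can be observed from
userspace; a minimal sketch (not gem5 code, and not the kernel's own
implementation) using the KVM_GET_ONE_REG ioctl, where vcpu_fd is a
placeholder for an already-created VCPU file descriptor:

/*
 * Sketch only: read the guest's view of the virtual counter while the
 * VCPU is stopped. KVM_REG_ARM_TIMER_CNT is the register id mentioned
 * further down the thread.
 */
#include <linux/kvm.h>
#include <sys/ioctl.h>

#include <cstdint>

uint64_t
readGuestVirtualCounter(int vcpu_fd)
{
    uint64_t count = 0;
    struct kvm_one_reg reg;
    reg.id = KVM_REG_ARM_TIMER_CNT;    // CNTVCT as the guest sees it
    reg.addr = (uintptr_t)&count;
    if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg) < 0)
        return 0;                      // error handling elided
    // Reading this twice with the VCPU descheduled should show the value
    // still advancing in real time, just offset from the host's counter.
    return count;
}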

There are a few ways we could address this problem. First, and least
practically, we could modify the kernel so that it stops guest timers when
the guest isn't running. That's not how you want timers to work if you're
trying to virtualize a guest that will primarily interact with the real
world (a web server, etc.), but it is what we want in order to get a well
behaved simulation with long periods of a descheduled guest.

Next, we could attempt to scrape the timer count out of the guest when it
stops, and then reinstall that count right before it starts again. That
would cover up a lot of the time spent in the host, but because the timer
is still live, it would not cover the time spent after leaving the guest
but before reaching the line in gem5 that records the value, or between
when gem5 restores that value and when the guest actually starts up again.
I don't know how the kernel schedules things, but I suspect those events
could be arbitrarily far apart from each other. It occurs to me that this
might be ok though, or at least consistent with assumptions we're already
making. I think we limit the time we spend in the VM by scheduling a
signal to fire in the future on the host, and that would have the same
issue. If we have access to a high accuracy host timer, we could also
compute and maintain our own virtual timer offset, which might mitigate
the problem.
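
For what that save/restore might look like, here's a rough sketch built on
KVM_GET_ONE_REG/KVM_SET_ONE_REG and KVM_REG_ARM_TIMER_CNT; the TimerSnapshot
type and the places it gets called from are made up for illustration, but
writing an absolute count back should make the kernel recompute its virtual
offset:

/*
 * Sketch of option two: snapshot the virtual counter when we stop the
 * VCPU and write the same value back just before resuming it, so most
 * of the descheduled host time disappears from the guest's view.
 */
#include <linux/kvm.h>
#include <sys/ioctl.h>

#include <cstdint>

struct TimerSnapshot
{
    uint64_t cnt = 0;

    void save(int vcpu_fd)      // call right after KVM_RUN returns
    {
        struct kvm_one_reg reg = { KVM_REG_ARM_TIMER_CNT, (uintptr_t)&cnt };
        ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);
    }

    void restore(int vcpu_fd)   // call right before the next KVM_RUN
    {
        struct kvm_one_reg reg = { KVM_REG_ARM_TIMER_CNT, (uintptr_t)&cnt };
        ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
    }
};

The windows between KVM_RUN returning and save(), and between restore() and
KVM_RUN actually entering the guest, are exactly the uncovered time described
above.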

Finally, we could stop using the in-kernel GIC. That's bad performance-wise,
but according to a comment in the kernel source:

/*
 * Enable the arch timers only if we have an in-kernel VGIC
 * and it has been properly initialized, since we cannot handle
 * interrupts from the virtual timer with a userspace gic.
 */

That would avoid the timer funny business and let us manually manage the
architectural timer. Realistically, if we're already stopping the VM every,
say, 1ms to maintain sync with gem5 hardware devices, etc., then I don't
*think* the extra VM exits for dealing with the GIC or the timer will be
that bad. Hopefully interrupts happen within an order of magnitude or so of
1 per ms. We would also need to provide a GIC implementation that satisfies
the kernel. I'm not sure how much work would be necessary to plug the
non-KVM gem5 GIC into KVM as a user space virtualized device. If the TC
interface it uses is fairly robust, maybe it would mostly just work?
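
For a sense of the plumbing on the injection side, here's a rough sketch of
what driving a core's interrupt line from a userspace GIC might look like,
using the KVM_IRQ_LINE ioctl and the arm encoding constants from the uapi
headers; vm_fd and vcpu_index are placeholders, and getting the timer output
levels back out of KVM is not shown:

/*
 * Sketch only: with no in-kernel VGIC, the userspace GIC model decides
 * when each VCPU's IRQ pin should be high and tells KVM directly.
 */
#include <linux/kvm.h>
#include <sys/ioctl.h>

#include <cstdint>

void
setCpuIrqLine(int vm_fd, uint32_t vcpu_index, bool high)
{
    struct kvm_irq_level irq_level;
    irq_level.irq =
        (KVM_ARM_IRQ_TYPE_CPU << KVM_ARM_IRQ_TYPE_SHIFT) |  // core pins, not SPI/PPI
        (vcpu_index << KVM_ARM_IRQ_VCPU_SHIFT) |            // which VCPU
        KVM_ARM_IRQ_CPU_IRQ;                                // IRQ (vs. FIQ)
    irq_level.level = high ? 1 : 0;
    ioctl(vm_fd, KVM_IRQ_LINE, &irq_level);                 // error handling elided
}

If something like gem5's existing GIC model could be taught to call into that
instead of poking its own CPU interface, that's the basic shape of it.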


Another thought experiment is to imagine how this might work if I were
trying to use qemu to run ARM on ARM, and whether it would work any
differently there. I don't think it would, since this seems to be a fairly
fundamental consequence of how KVM is implemented on ARM. The only thing I
can think of that might explain how qemu would work is that it might be a
lot lighter weight than gem5, and so wouldn't overwhelm the fairly meager
hardware I'm trying to use quite so badly, avoiding having the guest
descheduled for long periods of time.

Thoughts?

Gabe

On Thu, Apr 5, 2018 at 3:12 PM, Gabe Black <gabebl...@google.com> wrote:

>
>
> On Thu, Apr 5, 2018 at 8:14 AM, Andreas Sandberg <andreas.sandb...@arm.com
> > wrote:
>
>>
>>
>> On 05/04/2018 03:42, Gabe Black wrote:
>>
>>> Hi folks. I'm continuing to try to iron out problems with KVM on ARM, and
>>> the problem I'm working on specifically right now is that the mouse
>>> device
>>> gets spurious bad command bytes which panics gem5.
>>>
>>> What I've found so far is that the guest kernel will frequently time out
>>> while waiting for an ACK to a byte it sent to the mouse, even though the
>>> timeout looks like it should be 200ms, the simulation quantum I'm using
>>> is
>>> 1ms, and the delay between an event and the corresponding interrupt is
>>> configured to be 1us. I think this eventually throws the PS2 driver out
>>> of
>>> whack, and it ends up sending a data byte (or something else?) to the
>>> mouse
>>> which the mouse misinterprets as a command, causing the panic.
>>>
>>
>> Last time I looked at this, I suspected that the PS/2 model wasn't
>> clearing some interrupts. The GIC model in gem5 normally doesn't worry
>> about that and raises an interrupt every time someone calls the
>> sendInt(). The behaviour I have observed from the kernel is that it
>> doesn't post a new interrupt unless you first clear the old interrupt.
>> This caused some issues with other models in the past (IIRC, the UART).
>>
>> Make sure you test this in a single-threaded simulator as well to avoid
>> other weirdness due to thread synchronisation in gem5. I assume you're
>> already doing this though.
>
>
>
> That's an interesting point, and I'll look into how the interrupts are
> being handled. If the command is processed but the interrupt doesn't reach
> the kernel for some reason, it would time out then too. I've been
> purposefully looking at a 2 cpu system because, as you point out, that's
> where more weird problems crop up. I think this is likely one of them,
> although it's worth confirming that more carefully.
>
>
>
>>
>>
>> My current theory for why that's happening is that even when the VM is not
>>> running, the hardware supported virtual timer the CPU may have scheduled
>>> to
>>> keep track of its timeout may be "running" in the sense that the kernel
>>> will update it to reflect the descheduled time once the VM is running
>>> again. That could mean that 200ms of real time could pass, looking like
>>> 200ms of simulated time to the VCPU even if a smaller amount of actual
>>> execution time was supposed to happen. I'm not sure if that's a correct
>>> interpretation, but this ASPLOS paper *seems* to say something like that
>>> is
>>> possible.
>>>
>>> http://www.cs.columbia.edu/~cdall/pubs/asplos019-dall.pdf
>>>
>>
>> I have never been happy with the way we handle the timer on the Arm KVM
>> CPUs. It's possible to re-sync the virtual counter when entering into
>> KVM.  A simple way to test that would be to update KVM_REG_ARM_TIMER_CNT
>> / MISCREG_CNTVCT whenever entering into KVM. The Linux side should
>> update the virtual timer offset when you write an absolute time to this
>> register.
>>
>> This should work for Linux, but you might have issues with other OSes
>> that insist on using the physical timer instead of the virtual timer.
>
>
>
> I think I'll look at the interrupts a little first, but this is good
> information.
>
>
>
>>
>>
>> I've also seen very weird behavior as far as how many instructions KVM
>>> thinks are being executed per tick, so I wouldn't discount there being
>>> something off about how it's keeping track of time. I haven't been able
>>> to
>>> attach GDB to the KVM VCPUs for instance, even though it looks like all
>>> the
>>> pieces are there for that to work. It seems that KVM is supposed to exit
>>> after a given number of instructions, but it's just not for some reason.
>>>
>>
>> I have used GDB in the past, but the support is very flaky. To use GDB
>> with KVM, I had to force a thread context sync on every KVM entry/exit.
>> You can do this by setting the alwaysSyncTC param, but it will kill
>> your performance. The proper fix for this issue is to implement a custom
>> KVM thread context that lazily synchronises individual registers instead
>> of only synchronising on drain (and some other calls).
>>
>
>
> This sounds to me like you had problems with it giving you valid
> information or running commands properly. I had problems with it even
> breaking into gdb in the first place, with the vcpus just running free
> until gdb gave up. I saw messages about the event which was supposed to
> cause the CPUs to stop already being scheduled, so I think it was just
> never getting triggered by the kvm cpu for some reason. We're going to be
> getting a bigger and better machine to run KVM simulations on in the
> relatively near future, and my hope is that some of these weird issues
> magically go away on different hardware.
>
>
>
>>
>> Cheers,
>> Andreas
>>
>
>
_______________________________________________
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev
