> > In some cases we don't even do that, and just reschedule the event some
> > arbitrarily small amount of time later. This assumes the guest to do
> > useful work in that time. In a single threaded environment this is
> > probably true - qemu got enough CPU to inject the first interrupt, so
> > will probably manage to execute some guest code before the end of its
> > timeslice. In an environment where interrupt processing/delivery and
> > execution of the guest code happen in different threads this becomes
> > increasingly likely to fail.
>
> So any voodoo around timer events is doomed to fail in some cases.
Depends on the level of voodoo. My guess is that common guest operating
systems require hacks which result in demonstrably incorrect behavior.

> What's the amount of hacks that we want then? Is there any generic
> solution, like slowing down the guest system to the point where we can
> guarantee the interrupt rate vs. CPU execution speed?

The "-icount N" option gives deterministic virtual realtime behavior;
however, the guest is completely decoupled from real-world time. The
"-icount auto" option gives semi-deterministic behavior while maintaining
overall consistency with the real world. This may introduce some
small-scale time jitter, but will still satisfy all but the most
demanding hard-real-time assumptions.

Neither of these options works with KVM. It may be possible to implement
something similar using performance counters. I don't know how much
additional overhead this would involve.

Paul
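For what it's worth, the two -icount modes mentioned above are selected on the QEMU command line roughly as follows. The machine type and image path are placeholders, not part of the original discussion:

```shell
# Deterministic virtual time: each instruction accounts for 2^N ns of
# guest time, fully decoupled from host wall-clock time.
qemu-system-arm -M versatilepb -kernel guest.img -icount 4

# Semi-deterministic: QEMU adjusts the shift dynamically, trying to keep
# virtual time roughly in step with real time.
qemu-system-arm -M versatilepb -kernel guest.img -icount auto
```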