Jens Nerche wrote:


> Hm, accuracy in timing and virtualization are two different things.
> With only one OS on one piece of hardware there's no problem: the
> time the OS sees is the time the hardware delivers, and this is the
> time the clock on the wall shows. All clocks run uniformly. But what
> do we do with our guest? We can try to keep the "guest clock" running
> like the "hardware clock", which matches the wall clock. But then the
> guest will notice "leaps": for instance, one time the execution of an
> instruction takes 4 cycles, another time 6000 (or some similarly
> large number), because other OSes are running in the meantime,
> interrupts have to be handled, and so on. These leaps may confuse Linux.
> We can also present a continuous time to the guest to avoid these
> leaps, but this is not only involved (and costly), it leads to other
> problems, for instance when driving devices like CD-R burners or
> radio-controlled clocks.
> I guess you mean "accurate" with regard to the first case: let the
> guest's clock run as close as possible to the hardware clock and the
> wall clock, no? Then we have to trade off two things: letting the
> guest run as long as possible without interference, and keeping it
> as interruptible as possible. Not easy, yes.

For what I said about timing, forget about host timing altogether.
It means nothing to the guest.  I don't intend to derive timing
from the host in any way except to deliver timing events to
user code such as the screen refresh.  Those kinds of things
have to do with wall-clock time, and have nothing to do with
the VM.  (the VM doesn't care how often you see updates).

Let's see.  Timing seems to be the hardest thing to explain.
Well, the "bottom line" is that the VM needs to see timing
based on how many cycles of execution it has used.  It turns
out this is easy to measure.  We sample the TSC before returning
to guest execution.  We sample the TSC during an interrupt/exception.
Guest ran for ((T1 - T0) - overhead of IRET/interrupt).  Pass
this delta to the timing framework which distributes events to
devices like the PIT.

The accuracy of this time reference is very high, as it's based
on machine cycles which the guest has consumed.  We could add in
some additional cycles for instructions which were emulated.

Here's a diagram which depicts things:

HHHHHHHH                      HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH
        MMM        MMM     MMM
           GGGGGGGG   GGGGG

  H = Host
  M = Monitor
  G = Guest

We are sampling the TSC every M->G and G->M transition.  So we
have a very accurate reading on how long the guest actually
executed for.  We don't care how long the host ran for.  We
don't care how long the monitor ran for.  Our PIT timers
will get our time reference from how long the guest ran for.

So relative to the guest code, guest events will be delivered
where they normally occur.

Now, the real issue is what happens if we have an event that
needs to be delivered sometime during the execution of the
guest, but before the next host timer tick goes off.  How
do we interrupt the guest at that time?  I gave a list of
things we might do to generate that interrupt.

So there are 2 precisions here.  Our time reference can be
extremely accurate (based on machine cycles).  This
is very easy to get.  Our event delivery resolution is only as good
as the interrupt period of the host timer tick, unless we do one of
the things I talked about, which would boost it to the domain of
machine clock cycle accuracy or PIT clock cycle accuracy.

For now, let's forget about clock skewing, and emulating
hardware that is host clock time based like a sound card.

-Kevin
