Kevin Lawton wrote:
> Keep in mind that the TSC is a host time based reference.  We sample
> this to compute the duration of a segment of guest execution.  As
> you mention, we can skew the clock in the monitor between segments
> of execution of the guest code.  Now let's assume we need an event
> delivered in the middle of some guest code.  Whether or not the
> event is based simply on guest execution cycles, or skewed (compressed)
> timing, let's assume we know algebra and can figure out where to
> deliver it.  Thus, the real issue is to find a way to interrupt
> us at that point.

Yes, I do understand this!  But it doesn't solve the problem as
I see it (though it does reduce it).  Let me try to explain
again.

Consider this: you can always skew your timing, but you can never
skew the number of instructions the processor executes in a
given amount of time.  Now take two quanta of the VM which
(because of a load change, or for whatever other reason) have
different skewing parameters, so that in one the virtual clock
runs faster than in the other.  That means that a fixed number of
BogoMIPS loop iterations will correspond to *different* virtual
lengths of time!  The only way to avoid this (as I see it) is to
somehow make sure that the skewing parameters remain more or less
constant for the duration of the VM run.
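To make the point concrete, here is a small sketch (names and numbers are purely illustrative, not from any real VMM) of a guest that calibrates a delay loop in one quantum and then runs it in a quantum with a different skew parameter:

```python
# Illustrative sketch: per-quantum clock skew vs. a calibrated delay loop.
# Assumption: the guest runs a fixed number of loop iterations per host
# millisecond, while the monitor maps host time to virtual time with a
# per-quantum skew factor (virtual ticks per host tick).

LOOPS_PER_HOST_MS = 1000  # hypothetical guest loop rate (iterations / host ms)

def virtual_elapsed_ms(host_ms, skew):
    """Virtual time the guest's clock advances during host_ms
    under the given skew factor."""
    return host_ms * skew

# The same number of guest loop iterations, hence the same host time...
iterations = 5000
host_ms = iterations / LOOPS_PER_HOST_MS  # 5.0 host ms

# ...executed in two quanta with different skew parameters:
quantum_a = virtual_elapsed_ms(host_ms, skew=1.0)  # 5.0 virtual ms
quantum_b = virtual_elapsed_ms(host_ms, skew=0.5)  # 2.5 virtual ms

print(quantum_a, quantum_b)
```

A delay loop the guest calibrated during quantum A is simply wrong during quantum B: the identical iteration count now spans half the virtual time, which is exactly why the skew parameters would have to stay roughly constant.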

> As far as running "real-time" software like video/audio players,
> the way to best take care of this is to emulate a real or (even
> better yet) a pseudo device, ship the data to the host side,
> buffer it, and play it on the host side.  Seems like you could
> put in requested timeframe delimiters, so the host software
> knows where the time boundaries are, and can process the data
> according to these time references, based on real host time.

Of course this would be nice, but running old DOS software (for
instance) would still require pretty accurate timing, and you'd
want at least halfway decent performance even for guests you
haven't written custom drivers for (yet).

-- Ramon

