Ramon van Handel wrote:
> We've talked about this before, and I'll spew my point of view again:
> this is not a good idea. The guest OS will not "feel" right. On the little
> details side, for instance, we'll find that the little clock in the corner
> of the windows taskbar runs completely asynchronously (and wrong) wrt the
> linux clock --- which also means that if you have date-related programs (like
> your basic mailer application), all times will be wrong !! On the more
> important side, we'll have
> - Any even remotely real-time application will go wrong.
> For instance, movies would be played at too low a speed, sound would come
> in chunks, etc.
> - Driving of time-related hardware (CD burners) will go wrong, as you
> mention below.
> We do not want these effects !! To me, the idea that a multimedia
> application will not run inside the VM is completely unacceptable in our
> modern multimedia-driven computing world.
>
> The problem is that clock skewing, our proposed solution, will make timing
> unstable again -- the clock skew only works *per quantum* (we can speed up
> or slow down the virtual time per quantum, depending on the previous
> execution times, but not inside the quantum!) That means that delay loop
> calibration will probably go all wrong again (though it'll probably work
> better than it does now, and it would be pretty reliable on a constant
> load). I don't really have any idea how to get around this at the
> moment...
Here we go again. :^) OK, guess the short answer didn't cut it.
Some more spewage...
Keep in mind that the TSC is a host-time-based reference. We sample
this to compute the duration of a segment of guest execution. As
you mention, we can skew the clock in the monitor between segments
of execution of the guest code. Now let's assume we need an event
delivered in the middle of some guest code. Whether or not the
event is based simply on guest execution cycles, or skewed (compressed)
timing, let's assume we know algebra and can figure out where to
deliver it. Thus, the real issue is to find a way to interrupt
us at that point. I presented a list of possible
methods of doing this. If we can do this, then we can deliver
either skewed or non-skewed timing events to components of the
device emulation. This can be done by the same code which reads
the TSC. Thus we can provide uniform skewing, both between and during
guest execution segments, provided we can make that interrupt happen
when we want.
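To make the "we know algebra" part concrete, here's a minimal sketch
(my own notation, not code from any of our sources) of converting a
guest-time deadline into the host TSC value at which the monitor would
need to interrupt guest execution, assuming the skew factor stays
constant within the quantum:

```python
# Hypothetical helper: map a guest virtual-time deadline onto the host
# TSC, given a per-quantum skew factor. All names here are illustrative.

def host_tsc_for_guest_deadline(host_tsc_now, guest_tsc_now,
                                guest_deadline, skew):
    """skew = guest cycles advanced per host cycle (1.0 = no skew).

    Returns the host TSC value at which guest virtual time reaches
    guest_deadline, assuming skew is constant for this quantum.
    """
    guest_cycles_left = guest_deadline - guest_tsc_now
    return host_tsc_now + guest_cycles_left / skew
```

So if the guest clock is running at half host speed (skew 0.5) and the
event is 200 guest cycles away, we'd arm the interrupt 400 host cycles
from now.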
Given this, we might as well put in the framework to do timing like
this; skewing is then just a slight addition to that framework later,
essentially computing the lag time and a corresponding instantaneous
weighting factor for event delivery.
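One way that "lag time -> weighting factor" computation could look
(purely a sketch under my own assumptions, including the idea of
amortizing the lag over a few quanta rather than jumping):

```python
# Hypothetical sketch: if guest virtual time has fallen behind host time
# by some lag, choose a per-quantum rate that spreads the catch-up over
# several quanta so timing stays smooth. Names are illustrative.

def next_quantum_rate(lag_cycles, quantum_cycles, catchup_quanta=4):
    """Return guest-cycles-per-host-cycle for the coming quantum.

    lag_cycles: how far guest time trails host time (positive = behind).
    The lag is amortized over `catchup_quanta` quanta instead of being
    swallowed in one jump, which is gentler on delay-loop calibration.
    """
    catchup_per_quantum = lag_cycles / catchup_quanta
    return 1.0 + catchup_per_quantum / quantum_cycles
```

With no lag this returns 1.0 (run guest time at host speed); with a
4000-cycle lag and 1000-cycle quanta it runs the guest clock at 2x for
the next quantum.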
As far as running "real-time" software like video/audio players goes,
the best way to take care of this is to emulate a real or (better
yet) a pseudo device, ship the data to the host side,
buffer it, and play it on the host side. Seems like you could
put in requested timeframe delimiters, so the host software
knows where the time boundaries are, and can process the data
according to these time references, based on real host time.
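A rough sketch of that pseudo-device idea (all names and the rate-
conversion scheme are mine, just to illustrate the shape of it): the
guest tags each chunk with a guest-time delimiter, and the host side
re-times the chunks against real host time so playback stays smooth no
matter how the guest clock is being skewed.

```python
import collections

# Hypothetical host-side buffer for a pseudo audio/video device.
class HostPlaybackBuffer:
    def __init__(self, guest_time_base, host_time_base, guest_hz, host_hz):
        self.g0, self.h0 = guest_time_base, host_time_base
        self.guest_hz, self.host_hz = guest_hz, host_hz
        self.queue = collections.deque()

    def submit(self, guest_timestamp, chunk):
        # Map the guest-time delimiter onto the real host clock.
        host_due = self.h0 + (guest_timestamp - self.g0) \
                   * self.host_hz / self.guest_hz
        self.queue.append((host_due, chunk))

    def due_chunks(self, host_now):
        # The host side plays whatever has come due in real host time.
        out = []
        while self.queue and self.queue[0][0] <= host_now:
            out.append(self.queue.popleft()[1])
        return out
```

The point is that the playback pacing comes from host time, so guest
clock skew only affects when chunks arrive, not when they're played.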
-Kevin