>> Two reasons.  One is that I don't want the guest OS to run
>> slow in the VM.  Under the wrong conditions performance
>> could be sluggish, only because of bogus delay loops.
>
>Why?  If a single RDTSC takes long, then the loop is simply
>executed fewer times ...   The same could e.g. happen on
>Linux if an interrupt happens while looping; the total time
>spent in the loop will still be the same.
>
>Only timing *accuracy* might be affected.  I'm not sure
>this a problem, though.

No, it won't --- udelay() is, to my knowledge, only used to wait for I/O and
the like; and the plex hardware emulation isn't THAT accurate that it requires
this :).  I don't expect timing accuracy to be important in this case.

Also, as I mentioned previously, polling loops like the one above are only
used for very short delays --- most likely, the loop will execute only once
if RDTSC has a lot of overhead.

>> A second reason is that there is a host OS running, which
>> may be able to schedule a user task to do something useful,
>> if the udelay() value is high.  I don't want to bog down
>> the host OS either while wasting time virtualizing a
>> guest busy loop.
>
>If the udelay() value is high, this is a bug anyway, as it
>would stop the OS running natively for extended periods of
>time, which is not supposed to happen.

Exactly.

>> Given the RDTSC code in the Linux kernel, I think we'll
>> be cool without any changes except to let RDTSC execute
>> normally, and save/restore it during transitions between
>> host<-->monitor/guest.
>
>For Linux this is probably OK, but I'm not sure in general:
>normally, you have a correlation between RDTSC values and
>external timer interrupts, for example.  If we skew the
>interrupts, but don't skew the RDTSC values (or skew them
>differently: just cutting out the host execution time is
>also a form of skewing), some guest OSes might notice?

I wonder... I don't remember exactly how this works, but IIRC one TSC tick
does not have a fixed length across different processors (it depends on the
bus speed?).  We may be able to "hack" the bus speed setting to make it look
to the guest as if a TSC tick is shorter than it is on the host.  However,
we would get stuck under a varying system load, as most OSes check the bus
speed setting only once...

Though a scheme like this may be usable, I think it will cause a number of
problems.  How about we first emulate it, and then put "native RDTSC" on the
PERFORMANCE list and have a look at it later...

-- Ramon
