I agree.  I am pretty sure that pulling this type of trick on Win32, for
example, could wreak havoc.

Drew Northup, N1XIM


> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf
> Of Ulrich Weigand
> Sent: Tuesday, November 21, 2000 5:40 PM
> To: [EMAIL PROTECTED]
> Subject: Re: emulating RDTSC
>
>
>
> Kevin Lawton wrote:
>
> > Ulrich Weigand wrote:
> >
> > > Why are you worrying about the performance of code that
> > > does nothing but wait?  As long as we skew the values
> > > returned from RDTSC properly, that loop should run for
> > > exactly the same amount of (guest) wall-clock time, no
> > > matter whether it will take a million or a thousand
> > > passes through the loop ...
> >
> > Two reasons.  One is that I don't want the guest OS to run
> > slow in the VM.  Under the wrong conditions performance
> > could be sluggish, only because of bogus delay loops.
>
> Why?  If a single RDTSC takes long, then the loop is simply
> executed fewer times ...   The same could e.g. happen on
> Linux if an interrupt happens while looping; the total time
> spent in the loop will still be the same.
>
> Only timing *accuracy* might be affected.  I'm not sure
> this is a problem, though.
>
>
> > A second reason is that there is a host OS running, which
> > may be able to schedule a user task to do something useful,
> > if the udelay() value is high.  I don't want to bog down
> > the host OS either while wasting time virtualizing a
> > guest busy loop.
>
> If the udelay() value is high, this is a bug anyway, as it
> would stop the OS running natively for extended periods of
> time, which is not supposed to happen.
>
> In any case, I don't see why this is an argument for *not*
> virtualizing RDTSC.  If RDTSC executes natively, you cannot
> schedule in the host, because the host doesn't even get
> control ...
>
>
> > Given the RDTSC code in the Linux kernel, I think we'll
> > be cool without any changes except to let RDTSC execute
> > normally, and save/restore it during transitions between
> > host<-->monitor/guest.
>
> For Linux this is probably OK, but I'm not sure in general:
> normally, you have a correlation between RDTSC values and
> external timer interrupts, for example.  If we skew the
> interrupts, but don't skew the RDTSC values (or skew them
> differently: just cutting out the host execution time is
> also a form of skewing), some guest OSes might notice?
>
> Bye,
> Ulrich
>
>
> --
>   Dr. Ulrich Weigand
>   [EMAIL PROTECTED]
>

