Ok, so each time you add the offset, things become more and more imprecise...
http://msdn.microsoft.com/en-us/library/dd757629%28v=vs.85%29.aspx
"The default precision of the timeGetTime function can be five
milliseconds or more, depending on the machine. You can use the
timeBeginPeriod and timeEndPeriod functions to increase the precision of
timeGetTime. If you do so, the minimum difference between successive
values returned by timeGetTime can be as large as the minimum period
value set using timeBeginPeriod and timeEndPeriod."
Note that you cannot even assume what the precision is, because
timeBeginPeriod() changes the precision for EVERY program on the machine
(plus it has other side effects). Of course, the docs then state
"Use the QueryPerformanceCounter and QueryPerformanceFrequency functions
to measure short time intervals at a high resolution"
even though the relevant specs mention all sorts of problems when one
uses QueryPerformanceCounter(). This is typical MSDN, unfortunately.
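For what it's worth, using the two together looks roughly like this (an
untested sketch of mine, nothing from the VM; the 1 ms period and the
Sleep(15) are arbitrary choices, and error handling is omitted):

  #include <windows.h>
  #include <mmsystem.h>   /* timeGetTime/timeBeginPeriod; link with winmm.lib */
  #include <stdio.h>

  int main(void)
  {
      LARGE_INTEGER freq, c0, c1;
      DWORD t0, t1;

      /* Ask for 1 ms resolution.  This is a machine-wide change, which is
         exactly the side effect complained about above. */
      timeBeginPeriod(1);

      QueryPerformanceFrequency(&freq);
      t0 = timeGetTime();
      QueryPerformanceCounter(&c0);

      Sleep(15);

      QueryPerformanceCounter(&c1);
      t1 = timeGetTime();

      printf("timeGetTime delta: %lu ms\n", (unsigned long)(t1 - t0));
      printf("QPC delta:         %.3f ms\n",
             (double)(c1.QuadPart - c0.QuadPart) * 1000.0 / (double)freq.QuadPart);

      timeEndPeriod(1);   /* must be paired with each timeBeginPeriod(1) */
      return 0;
  }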
On 5/2/11 3:56, Henrik Sperre Johansen wrote:
On 02.05.2011 00:56, Andres Valloud wrote:
Does the Windows VM depend on QueryPerformanceCounter()?
The Cog VM (which it seemed Jimmie was using) uses a clock
(sqWin32Heartbeat.c) initialized with GetSystemTimeAsFileTime and
timeGetTime, then adds an offset based on the current timeGetTime
(reinitialization happens at rollovers).
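Roughly, the scheme is something like this (a from-memory sketch to
illustrate the idea, with names of my own choosing, not the actual
sqWin32Heartbeat.c code):

  #include <windows.h>
  #include <mmsystem.h>   /* timeGetTime; link with winmm.lib */

  static ULONGLONG baseUsecs;   /* UTC microseconds at (re)initialization */
  static DWORD     baseMsecs;   /* timeGetTime() at (re)initialization */

  static void initClock(void)
  {
      FILETIME ft;
      GetSystemTimeAsFileTime(&ft);            /* 100 ns ticks since 1601 */
      ULONGLONG ticks =
          ((ULONGLONG)ft.dwHighDateTime << 32) | ft.dwLowDateTime;
      baseUsecs = ticks / 10;                  /* 100 ns -> microseconds */
      baseMsecs = timeGetTime();
  }

  static ULONGLONG currentUsecs(void)
  {
      DWORD now = timeGetTime();
      if (now < baseMsecs) {                   /* rollover (~every 49.7 days) */
          initClock();                         /* re-sync base and offset */
          now = timeGetTime();
      }
      return baseUsecs + (ULONGLONG)(now - baseMsecs) * 1000;
  }

So any imprecision in timeGetTime between reinitializations goes straight
into the offset.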
I can't see any mention of drift in the Microsoft documentation of
timeGetTime, but I guess there must be some?
Cheers,
Henry