The way I read it, each _call_ to timeGetTime should be accurate to within 5 milliseconds.
The docs say 5 ms or more, depending on the machine, and they give no indication of what affects the precision on a given machine. But even if that's irrelevant, your argument assumes nobody else on the machine has called timeBeginPeriod with an argument of, say, 100. Basically, relying on timeGetTime is bound to cause problems because it forces you to assume (guess) what every other program is doing, which opens the door to wrong assumptions and weird, hard-to-reproduce behavior.
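For what it's worth, here is a minimal sketch of pinning the resolution yourself instead of guessing what other programs have requested (Win32, link against winmm.lib; error handling kept to a minimum, and asking for the minimum supported period is just an example, not anyone's actual code):

#include <stdio.h>
#include <windows.h>
#include <mmsystem.h>
#pragma comment(lib, "winmm.lib")

int main(void)
{
    TIMECAPS tc;
    if (timeGetDevCaps(&tc, sizeof(tc)) != TIMERR_NOERROR)
        return 1;

    /* Ask for the finest period the device supports (typically 1 ms),
       instead of depending on whatever other processes have set. */
    UINT period = tc.wPeriodMin;
    timeBeginPeriod(period);

    DWORD t0 = timeGetTime();
    Sleep(50);
    DWORD t1 = timeGetTime();
    printf("elapsed: %lu ms (requested period: %u ms)\n",
           (unsigned long)(t1 - t0), period);

    /* Every timeBeginPeriod call must be paired with timeEndPeriod. */
    timeEndPeriod(period);
    return 0;
}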
The current time is calculated as baseTime + (newTimeGetTime - baseTimeGetTime), so per-call precision errors do not accumulate and cannot explain a drift of 36 seconds...
How many times is timeGetTime() called during the accumulated drift?

Andres.
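P.S. For reference, a minimal sketch in C of the computation quoted above (the variable names follow the quote; the helper names and the millisecond units are assumptions on my part, not anyone's actual code):

/* currentTime = baseTime + (newTimeGetTime - baseTimeGetTime)
   The unsigned DWORD subtraction stays correct across timeGetTime()'s
   roughly 49.7-day rollover, provided recalibration happens more often
   than that; it is not offered as an explanation of the 36-second drift. */
#include <windows.h>

static ULONGLONG g_baseTime;         /* reference clock at calibration, in ms */
static DWORD     g_baseTimeGetTime;  /* timeGetTime() at calibration */

void Calibrate(ULONGLONG referenceTimeMs)
{
    g_baseTime        = referenceTimeMs;
    g_baseTimeGetTime = timeGetTime();
}

ULONGLONG CurrentTimeMs(void)
{
    DWORD newTimeGetTime = timeGetTime();
    return g_baseTime + (newTimeGetTime - g_baseTimeGetTime);
}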
