On 2005-12-08, Bengt Richter <[EMAIL PROTECTED]> wrote:

>>We're seeing floating point representation issues.  
>>
>>The resolution of the underlying call is exactly 1us.  Calling
>>gettimeofday() in a loop in C results in deltas of exactly 1 or
>>2 us.  Python uses a C double to represent time, and a double
>>doesn't have enough bits to accurately represent 1us resolution.
>>
> Is there a timer chip that is programmed to count at exactly
> 1us steps?

No, but the value returned by gettimeofday() is a long integer
that counts seconds along with a long integer that counts
microseconds.  The resolution of the data seen by Python's time
module is 1us.
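
For reference, gettimeofday() fills in a struct timeval holding
those two fields.  A minimal sketch (not a benchmark) of the
loop the original poster describes:

  /* Read gettimeofday() back to back and print the deltas in
   * microseconds.  The API hands back a seconds count and a
   * microseconds count, so nothing finer than 1us can appear
   * here. */
  #include <stdio.h>
  #include <sys/time.h>

  int main(void)
  {
      struct timeval prev, now;
      gettimeofday(&prev, NULL);
      for (int i = 0; i < 10; i++) {
          gettimeofday(&now, NULL);
          long delta_us = (now.tv_sec - prev.tv_sec) * 1000000L
                        + (now.tv_usec - prev.tv_usec);
          printf("delta = %ld us\n", delta_us);
          prev = now;
      }
      return 0;
  }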

The underlying hardware has a much finer resolution (as shown
by the clock_gettime() call), but the resolution of the system
call used by Python's time module on Unix is exactly 1us.
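
A quick way to see the difference (a sketch, assuming a
reasonably recent Linux/POSIX system; link with -lrt on older
libcs):

  /* clock_getres() reports the clock's resolution and
   * clock_gettime() returns seconds plus nanoseconds, so this
   * API is not limited to 1us the way gettimeofday() is. */
  #include <stdio.h>
  #include <time.h>

  int main(void)
  {
      struct timespec res, t1, t2;

      clock_getres(CLOCK_REALTIME, &res);
      printf("reported resolution: %ld ns\n", res.tv_nsec);

      clock_gettime(CLOCK_REALTIME, &t1);
      clock_gettime(CLOCK_REALTIME, &t2);
      long delta_ns = (long)(t2.tv_sec - t1.tv_sec) * 1000000000L
                    + (t2.tv_nsec - t1.tv_nsec);
      printf("back-to-back delta: %ld ns\n", delta_ns);
      return 0;
  }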

> If this is trying to be platform independent, I think it has
> to be faking it sometimes. E.g., I thought on windows you
> could sometimes get a time based on a pentium time stamp
> counter, which gets 64 bits with a RDTSC instruction and
> counts at full CPU clock rate 

I assume that's what the underlying Linux system call is doing
(I haven't looked).  It then rounds/truncates the result to the
nearest microsecond (because that's what the BSD/SysV/POSIX API
specifies) when it returns the answer that Python sees.
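
The float trouble the original poster mentions then happens on
the Python side, where (if I remember the source correctly) the
(seconds, microseconds) pair gets folded into a C double as
roughly tv_sec + tv_usec * 1e-6.  A sketch of just that
arithmetic, using a made-up timestamp from around now (compile
with -lm):

  /* Pack a 2005-era timestamp into a double the way Python's
   * time module (essentially) does, then look at the spacing
   * between neighbouring doubles at that magnitude.  Near 1.1e9
   * seconds the spacing is about 0.24us, so a 1us step isn't
   * represented exactly and deltas come back with rounding
   * jitter. */
  #include <math.h>
  #include <stdio.h>

  int main(void)
  {
      long sec  = 1134000000L;   /* roughly December 2005 */
      long usec = 123456L;       /* made-up microsecond count */

      double t    = (double)sec + (double)usec * 1e-6;
      double next = (double)sec + (double)(usec + 1) * 1e-6;

      printf("spacing of doubles here: %.3e s\n",
             nextafter(t, 2.0 * t) - t);
      printf("apparent 1us step:       %.3e s\n", next - t);
      return 0;
  }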

-- 
Grant Edwards                   grante             Yow!  Hmmm... a CRIPPLED
                                  at               ACCOUNTANT with a FALAFEL
                               visi.com            sandwich is HIT by a
                                                   TROLLEY-CAR...