Antoine Pitrou wrote:
On Wed, 15 Feb 2012 20:56:26 +0100
"Martin v. Löwis" <mar...@v.loewis.de> wrote:
With the quartz in Victor's machine, a single clock cycle takes about
0.36 ns, so roughly three of them make up a nanosecond. As the quartz may
not be entirely accurate (and as the CPU frequency may change), you have
to calibrate the clock rate against an external time source, but Linux
implements algorithms for that. On my system, dmesg shows

[    2.236894] Refined TSC clocksource calibration: 2793.000 MHz.
[    2.236900] Switching to clocksource tsc
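
(As a sanity check of those figures, here is the arithmetic in a couple
of lines of Python; the 2793.000 MHz value is simply the calibrated TSC
frequency from the dmesg output above.)

tsc_hz = 2793.000e6          # calibrated TSC frequency reported by dmesg
tick_ns = 1e9 / tsc_hz       # duration of one TSC tick, in nanoseconds
print("%.3f ns per tick" % tick_ns)   # prints 0.358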

But that's still not meaningful. By the time clock_gettime() returns, an
unpredictable number of nanoseconds has already elapsed, and even more go
by before control gets back to the Python evaluation loop.

So the nanosecond precision is just an illusion, and a float should
really be enough to represent durations for any task where Python is
suitable as a language.
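
One way to see this from Python itself is to take back-to-back readings
and look at the smallest non-zero gap between them. A rough sketch,
assuming the time.clock_gettime() wrapper added in Python 3.3; on a
typical CPython build the gap comes out at tens to hundreds of
nanoseconds, far coarser than the 1 ns the interface nominally offers:

import time

def min_observable_delta(samples=100000):
    # Smallest non-zero difference between two consecutive readings of
    # CLOCK_MONOTONIC, as observed from the Python level.
    best = float("inf")
    for _ in range(samples):
        t0 = time.clock_gettime(time.CLOCK_MONOTONIC)
        t1 = time.clock_gettime(time.CLOCK_MONOTONIC)
        delta = t1 - t0
        if 0 < delta < best:
            best = delta
    return best

print("smallest observed gap: %.0f ns" % (min_observable_delta() * 1e9))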

I reckon PyPy might be able to call clock_gettime() in a tight loop
almost as often as the C program (although not if it also has to pay the
overhead of converting the result to a decimal).
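
For what it's worth, the kind of tight loop being compared would look
something like the sketch below (again assuming the time.clock_gettime()
wrapper from 3.3). Under PyPy's JIT the per-call overhead should shrink
towards the cost of the underlying clock_gettime() call itself, while
CPython additionally pays interpreter and float-boxing overhead on every
call, and the decimal conversion mentioned above would add more on top:

import time

def calls_per_second(duration=1.0):
    # Count how many clock_gettime() calls complete in roughly
    # `duration` seconds of CLOCK_MONOTONIC time.
    clock = time.clock_gettime
    mono = time.CLOCK_MONOTONIC
    start = clock(mono)
    deadline = start + duration
    calls = 0
    now = start
    while now < deadline:
        now = clock(mono)
        calls += 1
    return calls / (now - start)

print("%.0f clock_gettime() calls per second" % calls_per_second())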

Cheers,
Mark.