[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

[Python-Dev] PEP 410 (Decimal timestamp): the implementation is ready for a review

On 15.02.2012 19:10, Antoine Pitrou wrote:
> On Wednesday 15 February 2012 at 18:58 +0100, Victor Stinner wrote:
>> It gives me differences smaller than 1000 ns on Ubuntu 11.10 and a
>> Intel Core i5 @ 3.33GHz:
>> $ ./a.out
>> 0 s, 781 ns
>> $ ./a.out
>> 0 s, 785 ns
>> $ ./a.out
>> 0 s, 798 ns
>> $ ./a.out
>> 0 s, 818 ns
>> $ ./a.out
>> 0 s, 270 ns
> What is it supposed to prove exactly? There is a difference between
> being able to *represent* nanoseconds and being able to *measure* them;
> let alone give a precise meaning to them.

Linux *actually* is able to measure time with nanosecond precision, even
though it is not able to keep its clock synchronized to UTC with
nanosecond accuracy.

The way Linux does that is to use the processor's time-stamp counter
(the rdtsc instruction), which (originally) counted one unit per CPU
clock cycle. I believe current processors count slightly differently
(e.g. through the APIC), but still: you get a resolution on the order
of the clock frequency of the CPU quartz.

With the quartz in Victor's machine, a single clock cycle takes about
0.3 ns, so three of them make a nanosecond. As the quartz may not be
entirely accurate (and also as the CPU frequency may change), you have
to measure the clock rate against an external time source, but Linux
has implemented algorithms for that. On my system, dmesg shows

[    2.236894] Refined TSC clocksource calibration: 2793.000 MHz.
[    2.236900] Switching to clocksource tsc