Greg Ewing writes:
> Guido van Rossum wrote:
> > if there is an *actual* causal link between file A and B, the
> > difference in timestamps should always be much larger than 100 ns.
>
> And if there isn't a causal link, simultaneity is relative anyway. To
> Fred sitting at his computer, file A might have been created before file
Guido van Rossum wrote:
> if there is an *actual*
> causal link between file A and B, the difference in timestamps should
> always be much larger than 100 ns.
And if there isn't a causal link, simultaneity is relative
anyway. To Fred sitting at his computer, file A might have
been created before file
On 17.02.2012 10:28, Steven D'Aprano wrote:
> Georg Brandl wrote:
>> On 16.02.2012 11:14, "Martin v. Löwis" wrote:
>>> On 16.02.2012 10:51, Victor Stinner wrote:
2012/2/16 "Martin v. Löwis" :
>> Maybe an alternative PEP could be written that supports the filesystem
>> copying use case only, using some specialized ns APIs? I really think
>> that all you need is st_{a,c,m}time_ns fields and os.utime_ns().
On 02/16/2012 02:14 AM, "Martin v. Löwis" wrote:
On 16.02.2012 10:51, Victor Stinner wrote:
2012/2/16 "Martin v. Löwis":
Maybe an alternative PEP could be written that supports the filesystem
copying use case only, using some specialized ns APIs? I really think
that all you need is st_{a,c,m}time_ns fields and os.utime_ns().
Victor Stinner wrote:
> Can't we improve the compatibility between Decimal and float, e.g. by
> allowing Decimal+float? Decimal (base 10) + float (base 2) may lose
> precision and this issue matters in some use cases. So we still need a
> way to warn the user on loss of precision.
I think this s
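For context on the question above, a minimal sketch of current CPython behaviour (not part of any proposal here): Decimal arithmetic deliberately refuses to mix with float, so precision loss cannot pass silently, and an explicit conversion states the programmer's intent.

```python
from decimal import Decimal

# Mixing Decimal and float in arithmetic raises TypeError by design,
# so a possible loss of precision can never pass silently.
try:
    Decimal("1.1") + 1.1
except TypeError as exc:
    print("refused:", exc)

# An explicit conversion states the intent; Decimal.from_float() keeps
# the float's exact base-2 value, so nothing is lost in the cast itself.
print(Decimal("1.1") + Decimal.from_float(1.1))
```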
On Fri, Feb 17, 2012 at 9:33 PM, Victor Stinner
wrote:
>> Maybe it's okay to wait a few years on this, until either 128-bit
>> floats are more common or cDecimal becomes the default floating point
>> type? In the mean time for clock freaks we can have a few specialized
>> APIs that return times in nanoseconds as a (long) integer.
> Maybe it's okay to wait a few years on this, until either 128-bit
> floats are more common or cDecimal becomes the default floating point
> type? In the mean time for clock freaks we can have a few specialized
> APIs that return times in nanoseconds as a (long) integer.
I don't think that the de
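For the record, the "specialized APIs that return times in nanoseconds as a (long) integer" floated above are essentially what CPython later shipped as the *_ns family (Python 3.7, well after this thread); a sketch:

```python
import time

# Integer nanoseconds, with no float rounding anywhere on the way out.
print(time.time_ns())       # ns since the epoch, as a plain int
print(time.monotonic_ns())  # ns from an unspecified starting point
```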
Georg Brandl wrote:
On 16.02.2012 11:14, "Martin v. Löwis" wrote:
On 16.02.2012 10:51, Victor Stinner wrote:
2012/2/16 "Martin v. Löwis" :
Maybe an alternative PEP could be written that supports the filesystem
copying use case only, using some specialized ns APIs? I really think
that all you need is st_{a,c,m}time_ns fields and os.utime_ns().
On 16.02.2012 11:14, "Martin v. Löwis" wrote:
> On 16.02.2012 10:51, Victor Stinner wrote:
>> 2012/2/16 "Martin v. Löwis" :
Maybe an alternative PEP could be written that supports the filesystem
copying use case only, using some specialized ns APIs? I really think
that all you need is st_{a,c,m}time_ns fields and os.utime_ns().
On Wed, Feb 15, 2012 at 11:39 AM, Guido van Rossum wrote:
> Maybe it's okay to wait a few years on this, until either 128-bit
> floats are more common or cDecimal becomes the default floating point
> type?
+1
___
Python-Dev mailing list
Python-Dev@python.org
So, make is unaffected. In my first post on this subject I already
noted that the only real use case is making a directory or filesystem
copy and then verifying that the copy is identical using native tools
that compare times with nsec precision. At least one of the bugs you
quote is about the curr
>> The problem is that shutil.copy2() sometimes produces an *older*
>> timestamp :-/ (...)
>
> Have you been able to reproduce this with an actual Makefile? What's
> the scenario?
Hum. I asked the Internet who uses shutil.copy2() and I found an "old"
issue (Decimal('43462967.173053') seconds ago):
Py
On Thu, Feb 16, 2012 at 2:48 PM, Victor Stinner
wrote:
> 2012/2/16 Guido van Rossum :
>> On Thu, Feb 16, 2012 at 2:04 PM, Victor Stinner
>> wrote:
>>> It doesn't change anything about the Makefile issue: if timestamps
>>> differ by a single nanosecond, they are seen as different by make
>>> (
2012/2/16 Guido van Rossum :
> On Thu, Feb 16, 2012 at 2:04 PM, Victor Stinner
> wrote:
>> It doesn't change anything about the Makefile issue: if timestamps
>> differ by a single nanosecond, they are seen as different by make
>> (by another program comparing the timestamp of two files using
>
On Thu, Feb 16, 2012 at 2:04 PM, Victor Stinner
wrote:
> It doesn't change anything about the Makefile issue: if timestamps
> differ by a single nanosecond, they are seen as different by make
> (by another program comparing the timestamp of two files using
> nanosecond precision).
But make do
>> > $ stat test | \grep Modify
>> > Modify: 2012-02-16 13:51:25.643597139 +0100
>> > $ stat test2 | \grep Modify
>> > Modify: 2012-02-16 13:51:25.643597126 +0100
>>
>> The loss of precision is not constant: it depends on the timestamp value.
>
> Well, I've tried several times and I can't reproduce
On 02/15/2012 08:12 PM, Guido van Rossum wrote:
> On Wed, Feb 15, 2012 at 7:28 PM, Larry Hastings wrote:
>> I fixed this in trunk last September
>> (issue 12904); os.utime now preserves all the precision that Python
>> currently conveys.
> So, essentially you fixed this particular issue without having to d
On Thursday, 16 February 2012 at 14:20 +0100, Victor Stinner wrote:
> > If I run your snippet and inspect modification times using `stat`, the
> > difference is much smaller (around 10 ns, not 1 ms):
> >
> > $ stat test | \grep Modify
> > Modify: 2012-02-16 13:51:25.643597139 +0100
> > $ stat test2 | \grep Modify
> If I run your snippet and inspect modification times using `stat`, the
> difference is much smaller (around 10 ns, not 1 ms):
>
> $ stat test | \grep Modify
> Modify: 2012-02-16 13:51:25.643597139 +0100
> $ stat test2 | \grep Modify
> Modify: 2012-02-16 13:51:25.643597126 +0100
The loss of precision is not constant: it depends on the timestamp value.
> The way Linux does that is to use the time-stamping counter of the
> processor (the rdtsc instructions), which (originally) counts one unit
> per CPU clock. I believe current processors use slightly different
> countings (e.g. through the APIC), but still: you get a resolution
> within the clock
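The advertised kernel-side resolution can be queried from Python with time.clock_getres() (Python 3.3+, POSIX only); as the quote stresses, this is a nominal resolution, not an accuracy guarantee. A sketch:

```python
import time

# clock_getres() reports the advertised resolution of a POSIX clock.
# Linux typically answers 1e-09 (1 ns) even though TSC/APIC calibration
# cannot be trusted to that accuracy.
for name in ("CLOCK_REALTIME", "CLOCK_MONOTONIC"):
    clock_id = getattr(time, name)
    print(name, time.clock_getres(clock_id))
```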
On Thu, 16 Feb 2012 13:46:18 +0100
Victor Stinner wrote:
>
> Let's try in a ext4 filesystem:
>
> $ ~/prog/python/timestamp/python
> Python 3.3.0a0 (default:35d6cc531800+, Feb 16 2012, 13:32:56)
> >>> import decimal, os, shutil, time
> >>> open("test", "x").close()
> >>> shutil.copy2("test", "test2")
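The interrupted session above can be re-run self-contained with the integer st_mtime_ns field that Python 3.3 eventually grew; this assumes a filesystem with nanosecond timestamps (e.g. ext4):

```python
import os
import shutil
import tempfile

# Copy a file with shutil.copy2() and compare modification times at
# full nanosecond precision (integer st_mtime_ns, Python 3.3+).
with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "test")
    dst = os.path.join(d, "test2")
    open(src, "x").close()
    shutil.copy2(src, dst)
    src_ns = os.stat(src).st_mtime_ns
    dst_ns = os.stat(dst).st_mtime_ns
    print(src_ns, dst_ns, src_ns == dst_ns)
```

With integer nanoseconds there is no float round-trip, so the copied timestamp can match the original exactly.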
> PEP author Victor asked
> (in http://mail.python.org/pipermail/python-dev/2012-February/116499.html):
>
>> Maybe I missed the answer, but how do you handle timestamp with an
>> unspecified starting point like os.times() or time.clock()? Should we
>> leave these function unchanged?
>
> If *all* yo
> A data point on this specific use case. The following code throws its
> assert ~90% of the time in Python 3.2.2 on a modern Linux machine (assuming
> "foo" exists and "bar" does not):
>
> import shutil
> import os
> shutil.copy2("foo", "bar")
> assert os.stat("foo").st_mtime == os.stat("bar").st_mtime
2012/2/15 Guido van Rossum :
> So using floats we can match 100ns precision, right?
Nope, not to store an Epoch timestamp newer than january 1987:
>>> x=2**29; (x+1e-7) != x # no loss of precision
True
>>> x=2**30; (x+1e-7) != x # lose precision
False
>>> print(datetime.timedelta(seconds=2**29))
6213 days, 18:48:32
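The cutoff in the demo above can be made concrete with the spacing between adjacent doubles (math.ulp() is Python 3.9+; the same number follows by hand from the 52-bit mantissa):

```python
import math

# A 2012-era Unix timestamp lies between 2**30 and 2**31 seconds, so
# adjacent C doubles there are 2**-22 s apart (~238 ns): fine for 1 µs
# timestamps, too coarse for 100 ns or 1 ns payloads.
t = 1329264000.0  # 2012-02-15 00:00:00 UTC
print(math.ulp(t))  # 2.384185791015625e-07
```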
On 16.02.2012 10:51, Victor Stinner wrote:
> 2012/2/16 "Martin v. Löwis" :
>>> Maybe an alternative PEP could be written that supports the filesystem
>>> copying use case only, using some specialized ns APIs? I really think
>>> that all you need is st_{a,c,m}time_ns fields and os.utime_ns().
>>
>
2012/2/16 "Martin v. Löwis" :
>> Maybe an alternative PEP could be written that supports the filesystem
>> copying use case only, using some specialized ns APIs? I really think
>> that all you need is st_{a,c,m}time_ns fields and os.utime_ns().
>
> I'm -1 on that, because it will make people write complicated code.
> Maybe an alternative PEP could be written that supports the filesystem
> copying use case only, using some specialized ns APIs? I really think
> that all you need is st_{a,c,m}time_ns fields and os.utime_ns().
I'm -1 on that, because it will make people write complicated code.
Regards,
Martin
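For reference against the os.utime_ns() idea quoted above: what Python 3.3 actually shipped is slightly different, integer st_*_ns fields on stat results plus an ns= keyword on the existing os.utime(), which round-trips losslessly:

```python
import os
import tempfile

# No separate os.utime_ns(); the ns= keyword takes integer nanoseconds.
with tempfile.NamedTemporaryFile() as f:
    st = os.stat(f.name)
    os.utime(f.name, ns=(st.st_atime_ns, st.st_mtime_ns))
    assert os.stat(f.name).st_mtime_ns == st.st_mtime_ns  # lossless
```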
On 15.02.2012 21:06, Antoine Pitrou wrote:
> On Wed, 15 Feb 2012 20:56:26 +0100
> "Martin v. Löwis" wrote:
>>
>> With the quartz in Victor's machine, a single clock takes 0.3ns, so
>> three of them make a nanosecond. As the quartz may not be entirely
>> accurate (and also as the CPU frequency may change) you have to measure
>> the clock rate again
On Wed, Feb 15, 2012 at 7:28 PM, Larry Hastings wrote:
>
> On 02/15/2012 09:43 AM, Guido van Rossum wrote:
>>
>> *Apart* from the specific use case of making an exact copy of a
>> directory tree that can be verified by other tools that simply compare
>> the nanosecond times for equality,
>
>
> A data point on this specific use case.
On 02/15/2012 09:43 AM, Guido van Rossum wrote:
*Apart* from the specific use case of making an exact copy of a
directory tree that can be verified by other tools that simply compare
the nanosecond times for equality,
A data point on this specific use case. The following code throws its
assert ~90% of the time in Python 3.2.2 on a modern Linux machine (assuming
"foo" exists and "bar" does not):
Guido van Rossum writes:
> On Wed, Feb 15, 2012 at 6:06 PM, Greg Ewing
> wrote:
> > It probably isn't worth the bother for things like file timestamps,
> > where the time taken to execute the system call that modifies the
> > file is likely to be several orders of magnitude larger.
>
> Ironical
On Wed, Feb 15, 2012 at 6:06 PM, Greg Ewing wrote:
> On 16/02/12 06:43, Guido van Rossum wrote:
>>
>> This does not explain why microseconds aren't good enough. It seems
>> none of the clocks involved can actually measure even relative time
>> intervals more accurate than 100ns, and I expect that kernels don't
On 16/02/12 06:43, Guido van Rossum wrote:
> This does not explain why microseconds aren't good enough. It seems
> none of the clocks involved can actually measure even relative time
> intervals more accurate than 100ns, and I expect that kernels don't
> actually keep their clock more accurate than milli
On Wed, Feb 15, 2012 at 11:38 AM, "Martin v. Löwis" wrote:
>> *Apart* from the specific use case of making an exact copy of a
>> directory tree that can be verified by other tools that simply compare
>> the nanosecond times for equality, I don't see any reason for
>> complicating so many APIs to preserve the fake precision. As far as
2012/2/15 Mark Shannon :
>
> I reckon PyPy might be able to call clock_gettime() in a tight loop
> almost as frequently as the C program (although not with the overhead
> of converting to a decimal).
The nanosecond resolution is just as meaningless in C.
--
Regards,
Benjamin
Antoine Pitrou wrote:
> On Wed, 15 Feb 2012 20:56:26 +0100
> "Martin v. Löwis" wrote:
>> With the quartz in Victor's machine, a single clock takes 0.3ns, so
>> three of them make a nanosecond. As the quartz may not be entirely
>> accurate (and also as the CPU frequency may change) you have to measure
>> the clock rate again
On Wed, 15 Feb 2012 20:56:26 +0100
"Martin v. Löwis" wrote:
>
> With the quartz in Victor's machine, a single clock takes 0.3ns, so
> three of them make a nanosecond. As the quartz may not be entirely
> accurate (and also as the CPU frequency may change) you have to measure
> the clock rate again
On 15.02.2012 19:10, Antoine Pitrou wrote:
>
> On Wednesday, 15 February 2012 at 18:58 +0100, Victor Stinner wrote:
>> It gives me differences smaller than 1000 ns on Ubuntu 11.10 and an
>> Intel Core i5 @ 3.33GHz:
>>
>> $ ./a.out
>> 0 s, 781 ns
>> $ ./a.out
>> 0 s, 785 ns
>> $ ./a.out
>> 0 s, 798 ns
> *Apart* from the specific use case of making an exact copy of a
> directory tree that can be verified by other tools that simply compare
> the nanosecond times for equality, I don't see any reason for
> complicating so many APIs to preserve the fake precision. As far as
> simply comparing whether
On Wednesday, 15 February 2012 at 18:58 +0100, Victor Stinner wrote:
> It gives me differences smaller than 1000 ns on Ubuntu 11.10 and an
> Intel Core i5 @ 3.33GHz:
>
> $ ./a.out
> 0 s, 781 ns
> $ ./a.out
> 0 s, 785 ns
> $ ./a.out
> 0 s, 798 ns
> $ ./a.out
> 0 s, 818 ns
> $ ./a.out
> 0 s, 270 ns
So using floats we can match 100ns precision, right?
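The C measurements quoted above can be approximated from modern Python with the integer monotonic clock (time.monotonic_ns, 3.7+): spin until the value changes and report the observed tick. The figure mixes call overhead with clock granularity, just as the C loop does.

```python
import time

# Busy-wait until CLOCK_MONOTONIC advances, then report the smallest
# observed increment (typically some hundreds of ns on Linux).
t0 = time.monotonic_ns()
t1 = t0
while t1 == t0:
    t1 = time.monotonic_ns()
print(t1 - t0, "ns")
```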
On Wed, Feb 15, 2012 at 9:58 AM, Victor Stinner
wrote:
>>> Linux supports nanosecond timestamps since Linux 2.6, Windows supports
>>> 100 ns resolution since Windows 2000 or maybe before. It doesn't mean
>>> that Windows system clock is accurate: in practice, it's hard to get
>> Linux supports nanosecond timestamps since Linux 2.6, Windows supports
>> 100 ns resolution since Windows 2000 or maybe before. It doesn't mean
>> that Windows system clock is accurate: in practice, it's hard to get
>> something better than 1 ms :-)
>
> Well, do you think the Linux system clock
On Wed, Feb 15, 2012 at 9:23 AM, Victor Stinner
wrote:
> 2012/2/15 Guido van Rossum :
>> I just came to this thread. Having read the good arguments on both
>> sides, I keep wondering why anybody would care about nanosecond
>> precision in timestamps.
>
> Python 3.3 exposes C functions that return timespec structure.
On Wed, 15 Feb 2012 18:23:55 +0100
Victor Stinner wrote:
>
> Linux supports nanosecond timestamps since Linux 2.6, Windows supports
> 100 ns resolution since Windows 2000 or maybe before. It doesn't mean
> that Windows system clock is accurate: in practice, it's hard to get
> something better than 1 ms :-)
2012/2/15 Guido van Rossum :
> I just came to this thread. Having read the good arguments on both
> sides, I keep wondering why anybody would care about nanosecond
> precision in timestamps.
Python 3.3 exposes C functions that return timespec structure. This
structure contains a timestamp with a resolution of 1 nanosecond.
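The timespec point can be shown directly: time.clock_gettime() (3.3+) collapses the C struct timespec (seconds + nanoseconds) into one float, while the later time.clock_gettime_ns() (3.7+) keeps the full integer. POSIX only:

```python
import time

as_float = time.clock_gettime(time.CLOCK_REALTIME)   # float seconds
as_int = time.clock_gettime_ns(time.CLOCK_REALTIME)  # exact int ns
print(as_float)  # ~238 ns granularity at current epoch values
print(as_int)
```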
On Wed, Feb 15, 2012 at 8:47 AM, Antoine Pitrou wrote:
> On Wed, 15 Feb 2012 08:39:45 -0800
> Guido van Rossum wrote:
>>
>> What purpose is there to recording timestamps in nanoseconds? For
>> clocks that start when the process starts running, float *is*
>> (basically) good enough. For measuring e.g. file access times, there
>> is no way that the actual time
On Wed, 15 Feb 2012 08:39:45 -0800
Guido van Rossum wrote:
>
> What purpose is there to recording timestamps in nanoseconds? For
> clocks that start when the process starts running, float *is*
> (basically) good enough. For measuring e.g. file access times, there
> is no way that the actual time
I just came to this thread. Having read the good arguments on both
sides, I keep wondering why anybody would care about nanosecond
precision in timestamps. Unless you're in charge of managing one of
the few atomic reference clocks in the world, your clock is not going
to tell time that accurate. (H
PEP author Victor asked
(in http://mail.python.org/pipermail/python-dev/2012-February/116499.html):
> Maybe I missed the answer, but how do you handle timestamp with an
> unspecified starting point like os.times() or time.clock()? Should we
> leave these function unchanged?
If *all* you know is
On Feb 15, 2012, at 10:11 AM, Martin v. Löwis wrote:
>I think improving datetime needs to go in two directions:
>a) arbitrary-precision second fractions. My motivation for
> proposing/supporting Decimal was that it can support arbitrary
> precision, unlike any of the alternatives (except for u
On Feb 15, 2012, at 10:23 AM, Nick Coghlan wrote:
>What should timedelta.total_seconds() return to avoid losing nanosecond
>precision? How should this be requested when calling the API?
See, I have no problem having this method return a Decimal for high precision
values. This preserves the valu
> I'd like to remind people what the original point of the PEP process
> was: to avoid going in cycles in discussions. To achieve this, the PEP
> author is supposed to record all objections in the PEP, even if he
> disagrees (and may state rebuttals for each objection that people
> brought up).
>
>
2012/2/15 "Martin v. Löwis" :
> I agree with Barry here (despite having voiced support for using Decimal
> before): datetime.datetime *is* the right data type to represent time
> stamps. If it means that it needs to be improved before it can be used
> in practice, then so be it - improve it.
Decim
> I agree with Barry here (despite having voiced support for using Decimal
> before): datetime.datetime *is* the right data type to represent time
> stamps. If it means that it needs to be improved before it can be used
> in practice, then so be it - improve it.
Maybe I missed the answer, but how do you handle timestamp with an
unspecified starting point like os.times() or time.clock()? Should we
leave these function unchanged?
On Wed, Feb 15, 2012 at 7:11 PM, "Martin v. Löwis" wrote:
> I agree with Barry here (despite having voiced support for using Decimal
> before): datetime.datetime *is* the right data type to represent time
> stamps. If it means that it needs to be improved before it can be used
> in practice, then so be it - improve it.
On Wed, Feb 15, 2012 at 10:11, "Martin v. Löwis" wrote:
>> My primary concern with the PEP is adding to users confusion when they have
>> to
>> handle (at least) 5 different types[*] that represent time in Python.
>
> I agree with Barry here (despite having voiced support for using Decimal
> before): datetime.datetime *is* the right data type to represent time
> stamps.
On 14.02.2012 23:29, Barry Warsaw wrote:
> I think I will just state my reasoning one last time and then leave it to the
> BDFL or BDFOP to make the final decision.
I'd like to remind people what the original point of the PEP process
was: to avoid going in cycles in discussions. To achieve this, the PEP
author is supposed to record all objections in the PEP, even if he
disagrees (and may state rebuttals for each objection that people
brought up).
On Tue, Feb 14, 2012 at 5:13 PM, Gregory P. Smith wrote:
> On Tue, Feb 14, 2012 at 4:23 PM, Nick Coghlan wrote:
>> On Wed, Feb 15, 2012 at 8:29 AM, Barry Warsaw wrote:
>>> My primary concern with the PEP is adding to users confusion when they have
>>> to
>>> handle (at least) 5 different types[*] that represent time in Python.
On Tue, Feb 14, 2012 at 4:23 PM, Nick Coghlan wrote:
> On Wed, Feb 15, 2012 at 8:29 AM, Barry Warsaw wrote:
>> My primary concern with the PEP is adding to users confusion when they have
>> to
>> handle (at least) 5 different types[*] that represent time in Python.
>
> My key question to those advocating the use of timedelta instead of Decimal:
On Tue, Feb 14, 2012 at 2:29 PM, Barry Warsaw wrote:
> I think I will just state my reasoning one last time and then leave it to the
> BDFL or BDFOP to make the final decision.
>
> Victor on IRC says that there is not much difference between Decimal and
> timedelta, and this may be true from an implementation point of view. From a
On Wed, Feb 15, 2012 at 8:29 AM, Barry Warsaw wrote:
> My primary concern with the PEP is adding to users confusion when they have to
> handle (at least) 5 different types[*] that represent time in Python.
My key question to those advocating the use of timedelta instead of Decimal:
What should timedelta.total_seconds() return to avoid losing nanosecond
precision? How should this be requested when calling the API?
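A concrete illustration of the question, using plain stdlib behaviour: timedelta bottoms out at one microsecond and total_seconds() returns a float, so a nanosecond cannot survive the round trip, whereas Decimal represents it exactly.

```python
from datetime import timedelta
from decimal import Decimal

# timedelta's resolution is 1 µs; a 1 ns quantity cannot even be stored.
assert timedelta.resolution == timedelta(microseconds=1)
print(timedelta(microseconds=1).total_seconds())  # 1e-06, a float

# Decimal carries 1 ns exactly, with no rounding anywhere.
print(Decimal("0.000000001") * 3)  # 3E-9, still exact
```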
I think I will just state my reasoning one last time and then leave it to the
BDFL or BDFOP to make the final decision.
Victor on IRC says that there is not much difference between Decimal and
timedelta, and this may be true from an implementation point of view. From a
cognitive point of view, I
(Oops, I sent my email by mistake, here is the end of my email)
> (...) Ah, timedelta case is different. But I already replied to Nick in this
> thread about timedelta. You can also
see arguments against timedelta in the PEP 410.
Victor
2012/2/14 Barry Warsaw :
> On Feb 13, 2012, at 07:33 PM, Victor Stinner wrote:
>
>>Oh, I forgot to mention my main concern about datetime: many functions
>>returning timestamp have an undefined starting point (and no timezone
>>information), and so cannot be converted to datetime:
>> - time.clock()
On Feb 13, 2012, at 07:33 PM, Victor Stinner wrote:
>Oh, I forgot to mention my main concern about datetime: many functions
>returning timestamp have an undefined starting point (and no timezone
>information), and so cannot be converted to datetime:
> - time.clock(), time.wallclock(), time.monotonic()
>> IMO supporting nanosecond in datetime and timedelta is an orthogonal issue.
>
> Not if you use it to cast them aside for this issue. ;)
Hum yes, I wanted to say that even if we don't keep datetime as a
supported type for time.time(), we can still patch the type to make it
support nanosecond resolution.
FWIW, I'm with Barry on this; doing more with the datetime types seems
preferable to introducing yet more different stuff to date/time
handling.
On Mon, Feb 13, 2012 at 19:33, Victor Stinner wrote:
> Oh, I forgot to mention my main concern about datetime: many functions
> returning timestamp have an undefined starting point (and no timezone
> A datetime module based approach would need to either use a mix of
> datetime.datetime() (when returning an absolute time) and
> datetime.timedelta() (when returning a time relative to an unknown
> starting point),
Returning a different type depending on the function would be
surprising and confusing.
On Tue, Feb 14, 2012 at 4:33 AM, Victor Stinner
wrote:
>> However, I am still -1 on the solution proposed by the PEP. I still think
>> that migrating to datetime use is a better way to go, rather than a
>> proliferation of the data types used to represent timestamps, along with an
>> API to specify the type of data returned.
Antoine Pitrou convinced me to simply drop the int type: float and
Decimal are just enough. Use an explicit cast using int() to get int.
os.stat_float_times() is still deprecated by the PEP.
Victor
> However, I am still -1 on the solution proposed by the PEP. I still think
> that migrating to datetime use is a better way to go, rather than a
> proliferation of the data types used to represent timestamps, along with an
> API to specify the type of data returned.
>
> Let's look at each item in
On Feb 13, 2012, at 01:28 AM, Victor Stinner wrote:
>I'm still waiting for Nick Coghlan and Guido van Rossum for their
>decision on the PEP.
Thanks for continuing to work on this Victor. I agree with the general
motivation behind the PEP, and appreciate your enthusiasm for improving Python
here.
On Mon, Feb 13, 2012 at 10:28 AM, Victor Stinner
wrote:
> Hi,
>
> I finished the implementation of the PEP 410 ("Use decimal.Decimal
> type for timestamps"). The PEP:
> http://www.python.org/dev/peps/pep-0410/
>
> The implementation:
> http://bugs.python.org/issue13882
>
> Rietveld code review tool for this issue:
> http://bugs.python.org/review/13882/show
Hi,
I finished the implementation of the PEP 410 ("Use decimal.Decimal
type for timestamps"). The PEP:
http://www.python.org/dev/peps/pep-0410/
The implementation:
http://bugs.python.org/issue13882
Rietveld code review tool for this issue:
http://bugs.python.org/review/13882/show
The patch is h