Hi all,
Please excuse me for getting a bit off-topic, but I would like
to point out that except for bean-counters who need to be
bug-compatible with accounting standards, decimal
floating point is generally a bad idea.
That is because the worst-case bound on the rounding error
grows linearly with
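To illustrate the kind of comparison involved (a rough sketch, assuming the point is about worst-case relative rounding error of a 16-digit decimal format such as decimal64 versus binary64, using Python's decimal module):

from decimal import Decimal, getcontext
from fractions import Fraction

getcontext().prec = 16                     # IEEE decimal64 carries 16 significant digits
exact = Fraction(10**16 + 5, 10**16)       # 1.0000000000000005, needs 17 digits

as_decimal = Decimal(exact.numerator) / Decimal(exact.denominator)  # rounded to 16 digits
as_binary = exact.numerator / exact.denominator                     # correctly rounded binary64

err_decimal = abs(Fraction(as_decimal) - exact) / exact
err_binary = abs(Fraction(as_binary) - exact) / exact
print(float(err_decimal))   # ~5e-16: decimal spacing just above 1.0 is 1e-15
print(float(err_binary))    # ~6e-17: binary spacing just above 1.0 is ~2.2e-16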
Antoine Pitrou:
> Given the implementation costs, hardware decimal128 will only become
> mainstream if there's a strong incentive for it, which I'm not sure
> exists or will ever exist ;-)
Stefan Behnel:
> Then we shouldn't implement the new nanosecond API at all, in order to keep
> pressure on
Antoine Pitrou wrote on 16.10.2017 at 10:20:
> On Sun, 15 Oct 2017 22:00:10 -0700
> Guido van Rossum wrote:
>> On Sun, Oct 15, 2017 at 8:40 PM, Nick Coghlan wrote:
>>
>>> Hopefully by the time we decide it's worth worrying about picoseconds in
>>> "regular" code, compiler support for decimal128
Replying to myself again here, as nobody else said anything:
On Mon, Oct 16, 2017 at 5:42 PM, Koos Zevenhoven wrote:
>
>
> Indeed. And some more on where the precision loss comes from:
>
> When you measure time starting from one point, like 1970, the timer
> reaches large
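A small illustration of that loss at current epoch-based magnitudes (values around 1.5e9 seconds in 2017):

import time

t = time.time()          # seconds since 1970, roughly 1.5e9 in 2017
print(t + 1e-9 == t)     # True: one nanosecond is below the float spacing here
print(2**-52 * t)        # rough spacing of adjacent floats near t, a few hundred ns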
On 2017-10-16 16:42, MRAB wrote:
> On 2017-10-16 13:30, Greg Ewing wrote:
>> Stephan Houben wrote:
>>
>>> Interestingly, that 2.2e-16 pretty much aligns with the accuracy of the
>>> cesium atomic clocks which are currently used to *define* the second.
>>> So if we move to this new API, we should
On Mon, Oct 16, 2017 at 4:10 PM, Victor Stinner wrote:
> 2017-10-16 9:46 GMT+02:00 Stephan Houben:
> > Hi all,
> >
> > I realize this is a bit of a pet peeve of mine, since
> > in my day job I sometimes get people complaining that
> > numerical data
On 2017-10-16 13:30, Greg Ewing wrote:
Stephan Houben wrote:
Interestingly, that 2.2e-16 pretty much aligns with the accuracy of the
cesium atomic clocks which are currently used to *define* the second.
So if we move to this new API, we should provide our own definition
of the second, since those
2017-10-16 9:46 GMT+02:00 Stephan Houben:
> Hi all,
>
> I realize this is a bit of a pet peeve of mine, since
> in my day job I sometimes get people complaining that
> numerical data is "off" in the sixteenth significant digit
> (i.e. it was stored as a double).
> (...)
Oh.
Stephan Houben wrote:
Interestingly, that 2.2e-16 pretty much aligns with the accuracy of the
cesium atomic clocks which are currently used to *define* the second.
So if we move to this new API, we should provide our own definition
of the second, since those rough SI seconds are just too imprecise
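For reference, the 2.2e-16 figure is just the machine epsilon of a binary64 float:

import sys

# Relative spacing of binary64 floats just above 1.0 ("machine epsilon")
print(sys.float_info.epsilon)   # 2.220446049250313e-16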
Hi,
FYI I proposed PEP 564 directly on python-dev.
The paragraph about "picosecond":
https://www.python.org/dev/peps/pep-0564/#sub-nanosecond-resolution
Let's move the discussion to python-dev ;-)
Victor
W.r.t. relativistic effects and continental drift - not really. The speed is
about 1 cm/yr, or v = 1e-18 c. The relativistic effect would go like 0.5 *
(v/c)**2, so more like 5e-37 in the relative rate of proper time. You can just
barely capture a few minutes of that even with int128 resolution. As for
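For what it's worth, the numbers above are easy to reproduce (taking 1 cm/yr as the assumed drift speed):

c = 299792458.0                 # speed of light, m/s
v = 0.01 / (365.25 * 86400)     # 1 cm per year, in m/s
beta = v / c
print(beta)                     # ~1.1e-18
print(0.5 * beta**2)            # ~5.6e-37 relative slowdown of proper time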
Stephan Houben wrote:
Do we realize that at this level of accuracy, relativistic time
dilatation due to continental drift starts to matter?
Probably also want an accurate GPS position for your computer
so that variations in the local gravitational field can be
taken into account.
--
Greg
On Sun, 15 Oct 2017 22:00:10 -0700
Guido van Rossum wrote:
> On Sun, Oct 15, 2017 at 8:40 PM, Nick Coghlan wrote:
>
> > Hopefully by the time we decide it's worth worrying about picoseconds in
> > "regular" code, compiler support for decimal128 will be
On Mon, Oct 16, 2017 at 08:40:33AM +0200, Stephan Houben wrote:
> "The problem is that Python returns time as a floatting point number
> > which is usually a 64-bit binary floatting number (in the IEEE 754
> > format). This type starts to loose nanoseconds after 104 days."
> >
> >
> Do we realize
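A quick check of the "104 days" figure quoted above: it is the point where an integer count of nanoseconds no longer fits in the 53-bit significand of a binary64 float.

# 2**53 ns is the largest nanosecond count a binary64 float holds exactly;
# beyond that, the spacing between representable values exceeds 1 ns.
limit_seconds = 2**53 * 1e-9      # about 9.0e6 seconds
print(limit_seconds / 86400)      # about 104.25 days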
2017-10-15 22:08 GMT+02:00 Eric V. Smith:
From Victor's original message, describing the current functions using
64-bit binary floating point numbers (aka double). They lose precision:
"The problem is that Python returns time as a floatting point number
> which is usually a
On Sun, Oct 15, 2017 at 8:40 PM, Nick Coghlan wrote:
> Hopefully by the time we decide it's worth worrying about picoseconds in
> "regular" code, compiler support for decimal128 will be sufficiently
> ubiquitous that we'll be able to rely on that as our 3rd generation time
>
On 16 October 2017 at 04:28, Victor Stinner wrote:
> I proposed to use nanoseconds because UNIX has 1 ns resolution in
> timespec, the most recent API, and Windows has 100 ns.
>
> Using picoseconds would confuse users who may expect sub-nanosecond
> resolution, whereas
On 10/15/2017 3:13 PM, Stephan Houben wrote:
Hi all,
I propose multiples of the Planck time, 5.39 × 10^−44 s.
It's unlikely computers can be more accurate than that anytime soon.
On a more serious note, I am not sure what problem was solved by
moving from
double to a fixed-precision format.
I
Hi all,
I propose multiples of the Planck time, 5.39 × 10^−44 s.
It's unlikely computers can be more accurate than that anytime soon.
On a more serious note, I am not sure what problem was solved by moving from
double to a fixed-precision format.
I do know that it now introduced the issue of finding
I proposed to use nanoseconds because UNIX has 1 ns resolution in timespec,
the most recent API, and Windows has 100 ns.
Using picoseconds would confuse users who may expect sub-nanosecond
resolution, whereas no OS supports them currently.
Moreover, nanoseconds as int already landed in os.stat
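For example, the integer-nanosecond fields on os.stat() results (available since Python 3.3) sit alongside the float ones:

import os

st = os.stat(".")        # any existing path works
print(st.st_mtime)       # float seconds; cannot hold full ns precision for current dates
print(st.st_mtime_ns)    # int nanoseconds since the epoch, as reported by the OS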
On 2017-10-15 19:02, Koos Zevenhoven wrote:
On Sun, Oct 15, 2017 at 8:17 PM, Antoine Pitrou wrote:
Since new APIs are expensive and we'd like to be future-proof, why not
move to picoseconds? That would be safe until clocks reach the
On Sun, Oct 15, 2017 at 8:17 PM, Antoine Pitrou wrote:
>
> Since new APIs are expensive and we'd like to be future-proof, why not
> move to picoseconds? That would be safe until clocks reach the THz
> barrier, which is quite far away from us.
>
>
I somewhat like the
Since new APIs are expensive and we'd like to be future-proof, why not
move to picoseconds? That would be safe until clocks reach the THz
barrier, which is quite far away from us.
Regards
Antoine.
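For scale, an epoch-based picosecond count today is only about 71 bits wide, which Python's arbitrary-precision int handles without any trouble (an illustrative calculation, not part of any proposal):

# A 1 THz clock ticks once per picosecond; seconds since 1970 (~1.5e9 in 2017)
# expressed in picoseconds is about 1.5e21.
now_ps = 1500000000 * 10**12
print(now_ps)                # 1500000000000000000000
print(now_ps.bit_length())   # 71: too wide for int64, fine for a Python int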
On Fri, 13 Oct 2017 16:12:39 +0200
Victor Stinner wrote:
> Hi,
>
>