Erik Nordström <erik.nordst...@gmail.com> writes:
> I stumbled upon a precision issue with the to_timestamp() function that
> causes it to return unexpected timestamp values. For instance, the query
> SELECT to_timestamp(1486480176.236538) returns the timestamp "2017-02-07
> 16:09:36.236537+01", which is off by one microsecond. Looking at the source
> code, the issue seems to be that the conversion is unnecessarily done using
> imprecise floating-point arithmetic. Since the target timestamp has
> microsecond precision and is internally represented by a 64-bit integer
> (on modern platforms), it would be better to first convert the given
> floating-point value to an integer number of microseconds and then do the
> epoch conversion, rather than performing the conversion in floating point
> and casting to an integer/timestamp at the end.

This change would introduce overflow failures near the end of the range of
valid inputs.  Maybe it's worth doing anyway and we should just tighten
the range bound tests right above what you patched, but I'm a bit
skeptical.  Float inputs are going to be inherently imprecise anyhow.

I wonder if we could make things better just by using rint() rather than
a naive cast-to-integer.  The cast will truncate, not round, and I think
that might be what's mostly biting you.  Does this help in your case?

-               result = seconds * USECS_PER_SEC;
+               result = rint(seconds * USECS_PER_SEC);

                        regards, tom lane

Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)