I just realized another issue today. It may have been obvious from one
of Tom's earlier messages, but I'm just now putting the pieces
together.
On Fri, Feb 18, 2022 at 11:44 PM Tom Lane <t...@sss.pgh.pa.us> wrote:
> Also, I notice that there's an overflow hazard upstream of here,
> in interval2tm:
>
> regression=# select interval '214748364 hours' * 11;
> ERROR: interval out of range
> regression=# \errverbose
> ERROR: 22008: interval out of range
> LOCATION: interval2tm, timestamp.c:1982
>
> There's no good excuse for not being able to print a value that
> we computed successfully.

In a scenario like this, the interval is decoded successfully on input,
but encoding it for output errors out. As a consequence, you can insert
the value into a table, but any query whose result includes the
"bad interval" value will fail. Here's an example:

postgres=# CREATE TABLE tbl (i INTERVAL);
CREATE TABLE
postgres=# INSERT INTO tbl VALUES ('1 day'), ('3 months'), ('2 years');
INSERT 0 3
postgres=# SELECT * FROM tbl;
    i
---------
 1 day
 3 mons
 2 years
(3 rows)

postgres=# INSERT INTO tbl VALUES ('2147483647 hours 60 minutes');
INSERT 0 1
postgres=# SELECT * FROM tbl;
ERROR: interval out of range
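
To spell out the mechanics (a minimal standalone sketch of my
understanding, not the actual timestamp.c code): the interval is stored
as a 64-bit microsecond count, which this value fits into comfortably,
but on output interval2tm derives an hour count from it that no longer
fits in struct pg_tm's plain-int tm_hour field, which is where the error
comes from. Something along these lines:

#include <stdint.h>
#include <stdio.h>
#include <limits.h>

#define USECS_PER_HOUR   INT64_C(3600000000)
#define USECS_PER_MINUTE INT64_C(60000000)

int
main(void)
{
	/* '2147483647 hours 60 minutes', as stored: an int64 microsecond
	 * count, well below INT64_MAX, so input succeeds. */
	int64_t time = INT64_C(2147483647) * USECS_PER_HOUR +
				   60 * USECS_PER_MINUTE;

	/* On output the derived hour count gets assigned to a plain int
	 * (tm_hour), and 2147483648 no longer fits. */
	int64_t hours = time / USECS_PER_HOUR;

	printf("stored usecs:  %lld\n", (long long) time);
	printf("derived hours: %lld (%s INT_MAX)\n", (long long) hours,
		   hours > INT_MAX ? "exceeds" : "within");
	return 0;
}

So the round trip into storage is fine; it's only the text-output path
that runs into the narrower field.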

This seriously reduces the usability of any table that contains one
of these "bad interval" values.

My patch actually fixes this issue, but I just wanted to call it out
because it might be relevant when reviewing.

