Aravind Kumar <aravind...@gmail.com> writes:
> when I do
>  select 1.0e+1001::numeric;
> I get
> invalid input syntax for type numeric: "1.0e+1001"

> PostgreSQL documentation tells me that the numeric data type has an upper
> limit of 131072 digits before the decimal point.

You can successfully do
        select pow(10::numeric, 131071);
or
        select ('1' || repeat('0', 131071))::numeric;
so it seems pretty arbitrary that there's such a small limit on the
allowed exponent value.  This is coming from this bit in numeric.c's
set_var_from_str():

        if (exponent > NUMERIC_MAX_PRECISION ||
            exponent < -NUMERIC_MAX_PRECISION)
            ereport(ERROR,
                    (errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),
                     errmsg("invalid input syntax for type numeric: \"%s\"",
                            str)));

which I think is just brain fade.  We should probably change that so that
it only restricts the exponent enough to prevent integer overflow in the
weight calculations, and leave it to make_result() to decide whether the
value is out of storage range.  (I notice it's not bothering to check for
an overflow signal from strtol(), either :-(.  Also it seems like this
should be an "overflow" errcode not an "invalid syntax" one.)

Poking around for any other possible misuses of NUMERIC_MAX_PRECISION,
I came across this bit in numeric_recv():

    len = (uint16) pq_getmsgint(buf, sizeof(uint16));
    if (len < 0 || len > NUMERIC_MAX_PRECISION + NUMERIC_MAX_RESULT_SCALE)
        ereport(ERROR,
                (errcode(ERRCODE_INVALID_BINARY_REPRESENTATION),
                 errmsg("invalid length in external \"numeric\" value")));

which again represents a totally misplaced assumption that
NUMERIC_MAX_PRECISION has something to do with the max number of digits to
the left of the decimal point.  This is actually pretty bad because it
creates a dump/reload failure hazard when using binary COPY:

regression=# create table bign (f1 numeric);
CREATE TABLE
regression=# insert into bign select pow(9.9::numeric, 100000);
INSERT 0 1
regression=# \copy bign to 'bign.data' (format binary);
COPY 1
regression=# \copy bign from 'bign.data' (format binary);
ERROR:  invalid length in external "numeric" value
CONTEXT:  COPY bign, line 1, column f1

I think we can probably just remove the length test there altogether:
anything that fits in uint16 is probably OK.
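
That is, something along these lines (just a sketch; it relies on the
per-digit validation loop further down in numeric_recv() to reject
garbage digit values):

    /*
     * len is read from a uint16 field, so it can't exceed 65535 digits;
     * that's within what alloc_var() and the rest of the function can
     * handle, so the old NUMERIC_MAX_PRECISION-based test simply goes away.
     */
    len = (uint16) pq_getmsgint(buf, sizeof(uint16));

    alloc_var(&value, len);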

Because of the dump/reload hazard, I think the numeric_recv() issue
is definitely a back-patchable bug fix.  You could argue that relaxing
the E-notation limit in numeric_in() is more of a feature addition,
but considering that the documentation has stated very specifically
since 9.1 that NUMERIC accepts large values (cf commit cabf5d84b),
I'm inclined to think that's a bug fix too.

                        regards, tom lane

