On 05/29/2014 04:59 AM, Peter Geoghegan wrote:
Consider this example:

[local]/postgres=# SELECT to_char(1e9::float8,'999999999999999999999D9');
          to_char
--------------------------
              1000000000.0
(1 row)

[local]/postgres=# SELECT to_char(1e20::float8,'999999999999999999999D9');
         to_char
------------------------
   100000000000000000000
(1 row)

[local]/postgres=# SELECT to_char(1e20,'999999999999999999999D9');
          to_char
--------------------------
   100000000000000000000.0
(1 row)


There is access to uninitialized memory here. Building with
DEBUG_TO_FROM_CHAR defined shows this to be the case (garbage is printed
to stdout).

Valgrind brought this to my attention. I propose the attached patch as
a fix for this issue.

The direct cause appears to be that float8_to_char() feels entitled to
truncate the Number description's post-digits, while that doesn't happen
within numeric_to_char(), say. This is very old code, from the
original to_char() patch back in 2000, and the interactions here are
not obvious. I think it should be okay not to do this
truncation, since the value of MAXDOUBLEWIDTH was greatly increased in
2007 as part of a round of fixes to bugs of similar vintage. There is
still a defensive measure against over-sized input (we bail), but
strictly speaking that ought to be unnecessary.

postgres=# select to_char('0.1'::float8, '9D' || repeat('9', 1000));
ERROR:  double precision for character conversion is too wide
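
To make the failure mode concrete, here is a stand-alone sketch -- not the
formatting.c code itself; the struct and the widths are made up to match the
1e20 example above -- of what goes wrong when the post-digits are truncated
but the output pass still follows the original format picture:

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* stand-in for the "Number description": digits before/after the point */
typedef struct NumDesc
{
    int     pre;
    int     post;
} NumDesc;

/*
 * Build the digit string, dropping the requested fractional digits once
 * the integer part is "wide enough" -- the truncation described above.
 */
static char *
build_digits(double value, NumDesc *num)
{
    char   *orgnum = malloc(64);
    int     len;

    len = snprintf(orgnum, 64, "%.0f", fabs(value));
    if (len >= 15)
        num->post = 0;
    snprintf(orgnum, 64, "%.*f", num->post, fabs(value));
    return orgnum;
}

int
main(void)
{
    NumDesc num = {21, 1};      /* picture: 21 digits, 'D', one digit */
    char   *digits = build_digits(1e20, &num);
    char    out[32];
    int     i, n = 0;

    /*
     * The output pass still walks the original picture: 21 integer digits,
     * a decimal point, one fractional digit.  The fractional digit was
     * never produced, so the last read lands past the string's terminator,
     * on a byte malloc() never initialized -- which is what Valgrind
     * reports, and why garbage can show up in the debug output.
     */
    for (i = 0; i < 21; i++)
        out[n++] = digits[i];
    out[n++] = '.';
    out[n++] = digits[22];      /* uninitialized byte */
    out[n] = '\0';

    printf("%s\n", out);
    free(digits);
    return 0;
}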

Yeah, the code is pretty crufty; I'm not sure what the proper fix would be. I wonder if psprintf() would be useful.
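
Just to sketch the idea -- a fragment only, not a patch, and not the exact
code in formatting.c:

/* roughly the current shape: a fixed scratch buffer, so the width has
 * to be policed and the post-digits trimmed to fit */
char        buf[MAXDOUBLEWIDTH + 1];

snprintf(buf, sizeof(buf), "%.*f", Num.post, fabs(value));

/* with psprintf() the result is palloc'd at whatever length the picture
 * actually needs, so nothing has to be truncated and the "too wide"
 * bailout could go away */
char       *orgnum = psprintf("%.*f", Num.post, fabs(value));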

- Heikki

