On Wed, Feb 15, 2017 at 09:15:48AM +0100, Richard Biener wrote:
> > As the following testcase shows, we store decimal REAL_CSTs always in
> > _Decimal128 internal form and perform all the arithmetics on that,
> > but while for arithmetics we then ensure rounding to the actual type
> > (_Decimal{32,64} or for _Decimal128 no further rounding), e.g.
> > const_binop calls
> >   inexact = real_arithmetic (&value, code, &d1, &d2);
> >   real_convert (&result, mode, &value);
> > when converting integers to _Decimal{32,64} we do nothing like that.
> > We do that only for non-decimal conversions from INTEGER_CSTs to
> > REAL_CSTs.
> >
> > The following patch fixes that.  Bootstrapped/regtested on
> > x86_64-linux (i686-linux fails to bootstrap for another reason), and
> > on the 6.x branch on x86_64-linux and i686-linux.  Dominik has kindly
> > tested it on s390x (where the bug has been originally reported on the
> > float-cast-overflow-10.c test).
> >
> > Ok for trunk?
>
> OK.
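[Editorial sketch, not part of the original mail: the missing rounding step the patch adds can be illustrated with Python's decimal module, under the assumption that a 34-digit context models _Decimal128 and a 7-digit context models _Decimal32; GCC itself does this via real_convert on the REAL_CST.]

```python
from decimal import Context, ROUND_HALF_EVEN

# Model _Decimal128 (34 significant digits) and _Decimal32 (7 digits).
ctx128 = Context(prec=34, rounding=ROUND_HALF_EVEN)
ctx32 = Context(prec=7, rounding=ROUND_HALF_EVEN)

# Parsing the 8-digit integer 99999995 in the wide format keeps it exact...
wide = ctx128.create_decimal(99999995)
print(wide)                        # 99999995

# ...so an explicit rounding to the 7-digit target format is still needed.
print(ctx32.create_decimal(wide))  # 1.000000E+8
```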
Note I think it is not 100% correct, because it then suffers from double
rounding, but that is only the case for conversions from
__int128/unsigned __int128, and those are apparently not supported anyway
(things compile, but unless the conversion is optimized during
compilation, programs don't link, as the libraries don't support it).
E.g. I believe

int main ()
{
  // 99999994999999999999999999999999999
  _Decimal32 a = (((__int128_t) 0x134261629f6653ULL) << 64)
		 | 0x0c750eb777ffffffULL;
  _Decimal32 b = 99999995LL;
  _Decimal32 c = 99999995000000LL;
  // 99999994499999999999999999999999999
  _Decimal32 d = (((__int128_t) 0x1342616101cf34ULL) << 64)
		 | 0xbc8cce9903ffffffULL;
  _Decimal32 e = 99999994LL;
  if (a != 9.999999E+34DF
      || b != 1.000000E+8DF
      || c != 1.000000E+14DF
      || d != 9.999999E+34DF
      || e != 9.999999E+7DF)
    __builtin_abort ();
  return 0;
}

should pass, but it doesn't: with or without my patch, a is different.
As only
  99999990000000000000000000000000000
  100000000000000000000000000000000000
are exactly representable in _Decimal32 and we round to even, I believe
both numbers should be rounded down, but when first rounding to
_Decimal128, the first number is rounded up to
  99999995000000000000000000000000000
and then that is rounded up again to
  100000000000000000000000000000000000.

I've tried to tweak this, but haven't succeeded.  And I believe all of
_Decimal{32,64} arithmetics is similarly broken if evaluated by the
compiler, as the decimal_real_arithmetic helpers perform all the
computations in _Decimal128 precision (so they round in that precision
too) and then everything is rounded again to _Decimal{32,64}.

Anyway, I think my patch is a significant improvement, and except for the
not really supported __int128 -> _Decimal{32,64} conversions it should
work fine, so I'm committing the patch now.

What I've been trying is below (incremental patch), but it didn't seem to
work well; I really don't know how decNumber works.
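[Editorial sketch, not part of the original mail: the double rounding described above can be reproduced with Python's decimal module, again assuming 34- and 7-digit contexts model _Decimal128 and _Decimal32. Rounding once to the narrow format gives the correctly rounded result; going through the wide format first rounds up twice.]

```python
from decimal import Context, ROUND_HALF_EVEN

ctx128 = Context(prec=34, rounding=ROUND_HALF_EVEN)  # models _Decimal128
ctx32 = Context(prec=7, rounding=ROUND_HALF_EVEN)    # models _Decimal32

n = 99999994999999999999999999999999999  # the value of a above

direct = ctx32.create_decimal(n)                         # one rounding
via128 = ctx32.create_decimal(ctx128.create_decimal(n))  # double rounding

print(direct)   # 9.999999E+34  (correct)
print(via128)   # 1.000000E+35  (rounded up twice)
```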
--- gcc/real.c	2017-02-14 21:35:35.868906203 +0100
+++ gcc/real.c	2017-02-15 12:40:22.551817716 +0100
@@ -101,7 +101,8 @@
 static void do_fix_trunc (REAL_VALUE_TYPE *, const REAL_VALUE_TYPE *);
 
 static unsigned long rtd_divmod (REAL_VALUE_TYPE *, REAL_VALUE_TYPE *);
-static void decimal_from_integer (REAL_VALUE_TYPE *);
+static void decimal_from_integer (REAL_VALUE_TYPE *,
+				  const struct real_format *);
 static void decimal_integer_string (char *, const REAL_VALUE_TYPE *,
 				    size_t);
 
@@ -2265,8 +2266,8 @@
     }
 
   if (fmt.decimal_p ())
-    decimal_from_integer (r);
-  if (fmt)
+    decimal_from_integer (r, fmt);
+  else if (fmt)
     real_convert (r, fmt, r);
 }
 
@@ -2320,12 +2321,12 @@
 /* Convert a real with an integral value to decimal float.  */
 
 static void
-decimal_from_integer (REAL_VALUE_TYPE *r)
+decimal_from_integer (REAL_VALUE_TYPE *r, const struct real_format *fmt)
 {
   char str[256];
 
   decimal_integer_string (str, r, sizeof (str) - 1);
-  decimal_real_from_string (r, str);
+  decimal_real_from_string (r, str, fmt);
 }
 
 /* Returns 10**2**N.  */
--- gcc/dfp.c.jj	2017-01-01 12:45:38.000000000 +0100
+++ gcc/dfp.c	2017-02-15 12:38:49.627086419 +0100
@@ -67,11 +67,19 @@ decimal_from_decnumber (REAL_VALUE_TYPE
 
 /* Create decimal encoded R from string S.  */
 
 void
-decimal_real_from_string (REAL_VALUE_TYPE *r, const char *s)
+decimal_real_from_string (REAL_VALUE_TYPE *r, const char *s,
+			  const real_format *fmt)
 {
   decNumber dn;
   decContext set;
-  decContextDefault (&set, DEC_INIT_DECIMAL128);
+  if (fmt == &decimal_quad_format)
+    decContextDefault (&set, DEC_INIT_DECIMAL128);
+  else if (fmt == &decimal_double_format)
+    decContextDefault (&set, DEC_INIT_DECIMAL64);
+  else if (fmt == &decimal_single_format)
+    decContextDefault (&set, DEC_INIT_DECIMAL64);
+  else
+    gcc_unreachable ();
   set.traps = 0;
 
   decNumberFromString (&dn, s, &set);
@@ -82,6 +90,12 @@ decimal_real_from_string (REAL_VALUE_TYP
   decimal_from_decnumber (r, &dn, &set);
 }
 
+void
+decimal_real_from_string (REAL_VALUE_TYPE *r, const char *s)
+{
+  decimal_real_from_string (r, s, &decimal_quad_format);
+}
+
 /* Initialize a decNumber from a REAL_VALUE_TYPE.  */
 
 static void
--- gcc/dfp.h.jj	2017-01-01 12:45:38.000000000 +0100
+++ gcc/dfp.h	2017-02-15 12:36:17.270166550 +0100
@@ -33,6 +33,8 @@ void encode_decimal128 (const struct rea
 /* Arithmetic and conversion functions.  */
 int decimal_do_compare (const REAL_VALUE_TYPE *, const REAL_VALUE_TYPE *, int);
 void decimal_real_from_string (REAL_VALUE_TYPE *, const char *);
+void decimal_real_from_string (REAL_VALUE_TYPE *, const char *,
+			       const real_format *fmt);
 void decimal_round_for_format (const struct real_format *, REAL_VALUE_TYPE *);
 void decimal_real_convert (REAL_VALUE_TYPE *, const real_format *,
 			   const REAL_VALUE_TYPE *);

	Jakub