https://gcc.gnu.org/bugzilla/show_bug.cgi?id=70198

            Bug ID: 70198
           Summary: simple test floating point sequence gives incorrect
                    values-- optimizer changes them
           Product: gcc
           Version: 5.1.0
            Status: UNCONFIRMED
          Severity: normal
          Priority: P3
         Component: fortran
          Assignee: unassigned at gcc dot gnu.org
          Reporter: s.b.dorsey at larc dot nasa.gov
  Target Milestone: ---

Created attachment 37942
  --> https://gcc.gnu.org/bugzilla/attachment.cgi?id=37942&action=edit
test source code

The enclosed simple test code goes through a repeating loop, adding small terms
into a long series. It gives correct output (converging on 7.0) with gfortran
4.4.7 and 4.6.3 on NetBSD and Linux and with 4.7.1 on Mac OS X, but appears to
give incorrect values with 4.9, 5.1 and 5.3 on Mac OS X.
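
The attached source is not quoted here; purely to illustrate the general
pattern (a made-up sketch, not the actual test), a small program of this kind
might look like the following, with a hypothetical series and loop count:

! converge_sketch.f90 -- hypothetical, not the attached test: a long loop
! folding small single-precision terms into a running sum whose limit is
! known in advance.
program converge_sketch
  implicit none
  real :: a, b, s, term
  integer :: i

  ! Two parameters read from stdin, standing in for the "3.5 17.664" pair.
  read (*, *) a, b

  s = 0.0
  do i = 1, 1000000
     ! Each term is tiny relative to the running sum, so every addition
     ! rounds; the order and intermediate precision of the additions set
     ! the low-order digits of the result.
     term = a / (b * real(i)**2)
     s = s + term
     if (mod(i, 100000) == 0) print *, i, s
  end do
end program converge_sketch

Comparing the printed sums from gfortran -O0 and -O2 builds of the real test,
fed the same input, is a quick way to see whether the differences track the
optimization level or the working precision.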

Apologies for the horrible code, but this has been a basic test we have been
using for some time to check floating-point accuracy on systems.  It's clear
that what is going on is not a precision problem, although changing to double
precision gives differing results.  Changing optimizer levels gives different
results too, which I find very confusing.
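
To illustrate why precision and summation order matter at all in a loop like
this (again a standalone sketch, not the attached code): repeatedly adding a
decimal increment that has no exact binary representation accumulates
different rounding error in REAL and in DOUBLE PRECISION, and the result can
move when the compiler changes the order or the intermediate precision of the
additions.

! repr_sketch.f90 -- hypothetical demo, unrelated to the attached test.
program repr_sketch
  implicit none
  real :: s4
  double precision :: s8
  integer :: i

  s4 = 0.0
  s8 = 0.0d0
  do i = 1, 1000000
     ! 0.1 is not exactly representable in binary, so each addition rounds.
     s4 = s4 + 0.1
     s8 = s8 + 0.1d0
  end do

  ! The exact mathematical answer is 100000.0; both sums miss it, by
  ! different amounts.
  print *, 'single precision sum:', s4
  print *, 'double precision sum:', s8
end program repr_sketch

In double precision the per-addition rounding error is much smaller, which is
why switching precisions changes the printed digits even when the underlying
algorithm is the same.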

Running the enclosed program with the input
3.5 17.664

should produce the enclosed sample output values.
