http://gcc.gnu.org/bugzilla/show_bug.cgi?id=47297
Summary: Inconsistent floating-point to integer conversion results depending on -O flag
Product: gcc
Version: 4.4.3
Status: UNCONFIRMED
Severity: normal
Priority: P3
Component: c
AssignedTo: unassig...@gcc.gnu.org
ReportedBy: mate...@loskot.net

The following program converts a double-precision floating-point value, with
truncation, to a signed 16-bit integer:

$ cat trunc-min.c
#include <stdio.h>
#include <stdint.h>

int main()
{
    double a;
    int16_t b;
    a = -32769;
    b = (int16_t)a;
    printf("%d\n", b);
}

According to the C standard, the behaviour of this program is undefined,
since -32769 is outside the range of int16_t. Even so, I would expect the
result to be consistent across optimisation levels (as it appears to be for
many other values I have tried), but it is not. Different optimisation
levels yield different results:

$ gcc -O0 trunc-min.c
$ ./a.out
32767
$ gcc -O2 trunc-min.c
$ ./a.out
-32768

A friend of mine suggested an explanation: in one case the cvttsd2si
instruction performs the conversion at run time, while in the other GCC
performs the conversion internally at compile time. However, I do not have
the experience to confirm this.

I did not find any reports about similar issues in the database; the
closest may be Bug 27682.

Environment:
OS: x86_64 GNU/Linux (Ubuntu)
GCC: gcc (Ubuntu 4.4.3-4ubuntu5) 4.4.3