[Bug c/36299] spurious and undocumented warning with -Waddress for a == 0 when a is an array

2011-03-02 Thread vincent at vinc17 dot org
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36299

--- Comment #9 from Vincent Lefèvre vincent at vinc17 dot org 2011-03-02 
15:17:33 UTC ---
(In reply to comment #8)
> Every warning warns about something valid in C, otherwise it would be an
> error, not a warning.

No, for instance:

int main(void)
{
  int i;
  return i;
}

This is undefined behavior, and it is detected by GCC, but one gets only a warning:

tst.c: In function ‘main’:
tst.c:4: warning: ‘i’ is used uninitialized in this function

Compare to a == 0 in the above testcase, which has a well-defined behavior.


[Bug c/36299] spurious and undocumented warning with -Waddress for a == 0 when a is an array

2011-03-01 Thread vincent at vinc17 dot org
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36299

--- Comment #5 from Vincent Lefèvre vincent at vinc17 dot org 2011-03-01 
15:05:19 UTC ---
Under Debian, I can no longer reproduce the problem with GCC 4.5.2:

$ gcc-4.5 -Wall warn-nulladdress.c
$ gcc-4.5 -Waddress warn-nulladdress.c
$ gcc-4.4 -Wall warn-nulladdress.c
warn-nulladdress.c: In function ‘main’:
warn-nulladdress.c:14: warning: the address of ‘a’ will never be NULL
$ gcc-4.4 -Waddress warn-nulladdress.c
warn-nulladdress.c: In function ‘main’:
warn-nulladdress.c:14: warning: the address of ‘a’ will never be NULL

So, I assume that it has been fixed. Can you confirm?


[Bug c/36299] spurious and undocumented warning with -Waddress for a == 0 when a is an array

2011-03-01 Thread vincent at vinc17 dot org
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36299

--- Comment #7 from Vincent Lefèvre vincent at vinc17 dot org 2011-03-02 
01:15:23 UTC ---
(In reply to comment #6)
> I think the intention is to warn, at least for a == (void *)0, since the
> address of a cannot be zero or null. So I would say that this is a regression.

But this is valid in C, and in practice, such a test can occur in macro
expansions: a macro can check whether some pointer is null before doing
something with it. There shouldn't be a warning in such a case.
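
As an illustration, here is a minimal sketch (not from the original report;
the macro name IS_NONNULL is hypothetical) of a macro of this kind, which
triggers the warning when its argument happens to be an array:

#include <stdio.h>

/* Hypothetical macro: use the pointer only when it is non-null.  With
   -Waddress, GCC warns on the null test when the macro is expanded on
   an array, even though the code is valid C. */
#define IS_NONNULL(p) ((p) != 0)

int main (void)
{
  int a[4];
  if (IS_NONNULL (a))   /* a decays to a pointer; the test is always true */
    puts ("non-null");
  return 0;
}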


[Bug bootstrap/45248] Stage 3 bootstrap comparison failure (powerpc-darwin8)

2011-02-07 Thread vincent at vinc17 dot org
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=45248

--- Comment #12 from Vincent Lefèvre vincent at vinc17 dot org 2011-02-08 
03:42:08 UTC ---
(In reply to comment #11)
> Any updates on this? re-confirmation?  I would like to continue testing
> gcc-4.5.x on powerpc-darwin8, but can't b/c of this.

The --with-dwarf2 option (added for MacPorts) fixed the problem on my machine.


[Bug bootstrap/44455] GCC fails to build if MPFR 3.0.0 (Release Candidate) is used

2010-12-12 Thread vincent at vinc17 dot org
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=44455

--- Comment #19 from Vincent Lefèvre vincent at vinc17 dot org 2010-12-12 
23:02:58 UTC ---
FYI, the problem has been handled in the MPFR trunk r7291 for MPFR 3.1.0.
MPFR's configure script now retrieves the location of the GMP source from GMP's
Makefile and adds the necessary -I... flags to CPPFLAGS.

Note also that the behavior will be different from the one with MPFR 2.x. A
side effect is that library versioning is not supported in this case (by that,
I mean that a GMP upgrade without recompiling MPFR against the new GMP version
may break things) because providing --with-gmp-build makes the MPFR build use
GMP's internals, which may change without notice.


[Bug c/46180] CSE across calls to fesetround()

2010-10-26 Thread vincent at vinc17 dot org
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=46180

Vincent Lefèvre vincent at vinc17 dot org changed:

   What|Removed |Added

 CC||vincent at vinc17 dot org

--- Comment #1 from Vincent Lefèvre vincent at vinc17 dot org 2010-10-26 
10:05:33 UTC ---
Dup of bug 34678.


[Bug target/46080] [4.4/4.5/4.6 Regression] incorrect precision of sqrtf builtin for x87 arithmetic (-mfpmath=387)

2010-10-20 Thread vincent at vinc17 dot org
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=46080

--- Comment #7 from Vincent Lefèvre vincent at vinc17 dot org 2010-10-20 
23:43:33 UTC ---
But there's something strange in the generated code: sometimes the fsqrt
instruction is used, sometimes a call to sqrtf is used (for the same sqrtf()
call in the C source). This is inconsistent.


[Bug target/46080] New: incorrect precision of sqrtf builtin for x87 arithmetic (-mfpmath=387)

2010-10-19 Thread vincent at vinc17 dot org
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=46080

   Summary: incorrect precision of sqrtf builtin for x87
arithmetic (-mfpmath=387)
   Product: gcc
   Version: 4.4.5
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: target
AssignedTo: unassigned at gcc dot gnu dot org
ReportedBy: vincent at vinc17 dot org


With -mfpmath=387 (tested on an x86_64 platform), the first sqrtf is computed
in double precision instead of single precision. With the following program:

#include <stdio.h>
#include <math.h>

float x = (float) M_PI;

int main(void)
{
  printf ("%.60f\n", sqrtf(x));
  printf ("%.60f\n", sqrtf(x));
  printf ("%.60f\n", sqrtf(x));
  return 0;
}

I get with various gcc versions (including gcc version 4.6.0 20101009
(experimental) [trunk revision 165234] (Debian 20101009-1)):

$ gcc -Wall -mfpmath=387 bug.c -o bug -lm -O0; ./bug
1.7724538755670267153874419818748719990253448486328125
1.772453904151916503906250
1.772453904151916503906250

The bug is also present with -O1 when the sqrtf calls are grouped in a single
printf and disappears if the builtin is disabled with -fno-builtin-sqrtf.

For the first sqrtf, the asm code shows an fsqrt followed by a call to sqrtf
under some condition, but the condition is not satisfied. The other
occurrences just have a call to sqrtf, which is correct.


[Bug target/46080] [4.4/4.5/4.6 Regression] incorrect precision of sqrtf builtin for x87 arithmetic (-mfpmath=387)

2010-10-19 Thread vincent at vinc17 dot org
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=46080

--- Comment #2 from Vincent Lefèvre vincent at vinc17 dot org 2010-10-20 
01:51:56 UTC ---
Created attachment 22089
  -- http://gcc.gnu.org/bugzilla/attachment.cgi?id=22089
sh script to test sqrtf

Similar problems can also be found with:

  printf ("%.60f\n%.60f\n%.60f\n", sqrtf(x), sqrtf(x), sqrtf(x));

I've found that every GCC version I could test was showing some incorrect
behavior (but GCC 4.2.4 was the most consistent one). With the attached script,
I get:

   -DSEP  -O0   -O1   -O2
GCC 3.4.6   SSS   SSS   SDD   SDD
GCC 4.1.3   SSS   SSS   DSS   DDS
GCC 4.2.4   SSS   SSS   DDD   DDD   (x86)
GCC 4.3.5   SSS   SSS   DSS   DDD   (ditto with GCC 4.3.2 on x86)
GCC 4.4.5   DSS   SSD   DSS   DDD

where S means that one gets the result in single precision (as expected) and D
means that one gets the result in double precision.
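
For reference, here is a minimal sketch (not the attached script; it assumes
that sqrt() itself is correctly rounded in double precision) of how a result
can be classified as S or D:

#include <stdio.h>
#include <math.h>

float x = (float) M_PI;

int main (void)
{
  /* Reference: sqrt computed in double, then correctly rounded to single. */
  volatile float single_ref = (float) sqrt ((double) x);
  double r = sqrtf (x);   /* may carry excess (double) precision */

  /* If r is exactly the single-precision value, report S; otherwise D. */
  puts (r == (double) single_ref ? "S" : "D");
  return 0;
}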


[Bug bootstrap/45248] Stage 3 bootstrap comparison failure (powerpc-darwin8)

2010-09-26 Thread vincent at vinc17 dot org
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=45248

Vincent Lefèvre vincent at vinc17 dot org changed:

   What|Removed |Added

 CC||vincent at vinc17 dot org

--- Comment #10 from Vincent Lefèvre vincent at vinc17 dot org 2010-09-26 
19:42:07 UTC ---
Same problem when building GCC 4.5.1 via MacPorts. FYI, the MacPorts bug
report:
  https://trac.macports.org/ticket/26378

GCC 4.5.0 build was fine. So, this is a 4.5.1 regression. The summary should
probably be changed to:

   [4.5.1 Regression] Stage 3 bootstrap comparison failure (powerpc-darwin8)


[Bug c/44842] New: gcc should not issue warnings for code that will never be executed

2010-07-06 Thread vincent at vinc17 dot org
GCC issues warnings like "division by zero" or "right shift count >= width of
type" even though the corresponding code will never be executed (it is under
a condition that is always false); it shouldn't do this, at least by default.
For instance:

int tst (void)
{
  int x;
  x = 0 ? 1 / 0 : 0;
  return x;
  if (0)
    {
      x = 1 / 0;
      x = 1 >> 128;
    }
  return x;
}

$ gcc-snapshot -std=c99 -O2 -c tst.c
tst.c: In function 'tst':
tst.c:8:13: warning: division by zero [-Wdiv-by-zero]
tst.c:9:7: warning: right shift count >= width of type [enabled by default]

One can see that GCC detects neither the first "return x;" nor the
always-false condition, and issues spurious warnings for the lines:

  x = 1 / 0;
  x = 1 >> 128;

On the other hand, GCC could successfully detect that the 1 / 0 in

  x = 0 ? 1 / 0 : 0;

would never be executed.

Note: always-false conditions occur in practice in platform-dependent code,
e.g. with a test on integer types.
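
A minimal sketch of such platform-dependent code (hypothetical, not from this
report): the branch is dead on platforms where long is 32 bits wide, yet the
shift in it would still draw a warning there:

#include <limits.h>

unsigned long high_bits (unsigned long x)
{
  /* When long is 32 bits, this branch is never taken, but GCC may
     still warn: right shift count >= width of type. */
  if (sizeof (unsigned long) * CHAR_BIT >= 64)
    return x >> 40;
  return 0;
}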


-- 
   Summary: gcc should not issue warnings for code that will never
be executed
   Product: gcc
   Version: unknown
Status: UNCONFIRMED
  Severity: enhancement
  Priority: P3
 Component: c
AssignedTo: unassigned at gcc dot gnu dot org
ReportedBy: vincent at vinc17 dot org


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=44842



[Bug c/39034] Decimal floating-point math done wrong

2010-04-07 Thread vincent at vinc17 dot org


--- Comment #7 from vincent at vinc17 dot org  2010-04-07 09:29 ---
This bug is still open, though it appears to be fixed. Is there any reason?


-- 

vincent at vinc17 dot org changed:

   What|Removed |Added

 CC||vincent at vinc17 dot org


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=39034



[Bug c/43673] New: Incorrect warning: use of 'D' length modifier with 'a' type character

2010-04-07 Thread vincent at vinc17 dot org
With:

#define __STDC_WANT_DEC_FP__
#include <stdio.h>

int main (void)
{
  double d = 0.1;
  _Decimal64 e = 0.1dd;
  printf ("%.20f\n", d);
  printf ("%Da\n", e);
  printf ("%De\n", e);
  printf ("%Df\n", e);
  printf ("%Dg\n", e);
  return 0;
}

$ gcc-snapshot -Wall tst.c
tst.c: In function 'main':
tst.c:9:3: warning: use of 'D' length modifier with 'a' type character

while WG14/N1312 says:

D   Specifies that a following a, A, e, E, f, F, g, or G conversion specifier
applies to a _Decimal64 argument.


-- 
   Summary: Incorrect warning: use of 'D' length modifier with 'a'
type character
   Product: gcc
   Version: unknown
Status: UNCONFIRMED
  Severity: minor
  Priority: P3
 Component: c
AssignedTo: unassigned at gcc dot gnu dot org
ReportedBy: vincent at vinc17 dot org


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=43673



[Bug c/39037] FLOAT_CONST_DECIMAL64 pragma not supported

2010-04-07 Thread vincent at vinc17 dot org


--- Comment #5 from vincent at vinc17 dot org  2010-04-07 10:58 ---
This bug should probably be resolved as fixed as well.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=39037



[Bug middle-end/43419] New: gcc replaces pow(x, 0.5) by sqrt(x), invalid when x is -0

2010-03-18 Thread vincent at vinc17 dot org
gcc replaces pow(x, 0.5) by sqrt(x). This is invalid when x is -0. Indeed,
according to ISO C99 (N1256), F.9.4.4:

  pow(±0, y) returns +0 for y > 0 and not an odd integer.

So, pow(-0.0, 0.5) should return +0. But sqrt(-0.0) should return -0 according
to the IEEE 754 standard (and F.9.4.5 from ISO C99).

Testcase:

#include <stdio.h>
#include <math.h>

int main (void)
{
  volatile double x = -0.0;

  printf ("sqrt(-0)    = %g\n", sqrt (x));
  printf ("pow(-0,0.5) = %g\n", pow (x, 0.5));
  return 0;
}


-- 
   Summary: gcc replaces pow(x, 0.5) by sqrt(x), invalid when x is -0
   Product: gcc
   Version: unknown
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: middle-end
AssignedTo: unassigned at gcc dot gnu dot org
ReportedBy: vincent at vinc17 dot org


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=43419



[Bug middle-end/43419] gcc replaces pow(x, 0.5) by sqrt(x), invalid when x is -0

2010-03-18 Thread vincent at vinc17 dot org


--- Comment #1 from vincent at vinc17 dot org  2010-03-18 14:33 ---
If I understand correctly, the bug appears with:

r119248 | rguenth | 2006-11-27 12:38:42 +0100 (Mon, 27 Nov 2006) | 10 lines

2006-11-27  Richard Guenther  <rguenther@suse.de>

PR middle-end/25620
* builtins.c (expand_builtin_pow): Optimize non integer valued
constant exponents using sqrt or cbrt if possible.  Always fall back
to expanding via optabs.

* gcc.target/i386/pow-1.c: New testcase.
* gcc.dg/builtins-58.c: Likewise.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=43419



[Bug fortran/25620] Missed optimization with power

2010-03-18 Thread vincent at vinc17 dot org


--- Comment #18 from vincent at vinc17 dot org  2010-03-18 14:37 ---
The patch affected C, where the transformation of pow(x, 0.5) into sqrt(x) is
incorrect. See PR 43419.


-- 

vincent at vinc17 dot org changed:

   What|Removed |Added

 CC||vincent at vinc17 dot org


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=25620



[Bug target/30484] Miscompilation of remainder expressions on CPUs of the i386 family

2010-02-19 Thread vincent at vinc17 dot org


--- Comment #11 from vincent at vinc17 dot org  2010-02-19 13:08 ---
(In reply to comment #10)
> This issue was discussed on the WG14 reflector in October 2008, and the
> general view was that the standard should not make INT_MIN % -1 well
> defined (as this would impose a significant performance cost on many
> programs to benefit very few) and probably didn't intend to.

My opinion is that introducing undefined behavior in a particular case like
this one is a bad idea: if the case can occur in some application, then the
programmer has to do a test anyway (and this is even more costly, since the
test is then needed on all implementations instead of being generated by the
compiler only when needed), or the software may behave erratically (which is
worse). If the case cannot occur, then the programmer should have a way to
tell that to the compiler.
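
For illustration, a minimal sketch (hypothetical helper, not from the
discussion) of the guard a programmer is forced to write when INT_MIN % -1
is undefined:

#include <limits.h>

/* Remainder that is defined for all inputs with b != 0: the only
   problematic case is INT_MIN % -1, whose mathematical result is 0,
   but which may trap or be undefined when evaluated directly. */
int safe_rem (int a, int b)
{
  return (a == INT_MIN && b == -1) ? 0 : a % b;
}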


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30484



[Bug c/42179] Incorrect optimization (-O2) yields wrong code (regression)

2009-11-26 Thread vincent at vinc17 dot org


--- Comment #4 from vincent at vinc17 dot org  2009-11-26 15:53 ---
(In reply to comment #1)
> Aliasing rules are indeed broken because you access a union of anonymous
> type through a pointer to a union of type ieee_double_extract.

OK, the real code in MPFR is a double accessed through a pointer to a union
of type ieee_double_extract, but I suppose this is the same problem.

BTW, could my testcase be used to improve GCC's -Wstrict-aliasing=3 (i.e. to
have fewer false negatives without introducing more false positives), or is it
too difficult / not possible?


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=42179



[Bug c/42179] New: Incorrect optimization (-O2) yields wrong code (regression)

2009-11-25 Thread vincent at vinc17 dot org
With gcc-snapshot (Debian 20091118-1) 4.5.0 20091119 (experimental) [trunk
revision 154312] on an x86_64 GNU/Linux machine and the following code (partly
based on GMP):

#include <stdio.h>

union ieee_double_extract
{
  struct
  {
    unsigned int manl:32;
    unsigned int manh:20;
    unsigned int exp:11;
    unsigned int sig:1;
  } s;
  double d;
};

int main (void)
{
  union { double d; unsigned long long i; } x;
  x.d = 0.0 / 0.0;
  printf ("d = %g [%llx]\n", x.d, x.i);
  printf ("exp = %x\n", (unsigned int)
          ((union ieee_double_extract *) &x.d)->s.exp);
  return 0;
}

$ gcc-snapshot -Wall -Wextra testd.c -o testd
$ ./testd
d = nan [fff8000000000000]
exp = 7ff

This is OK, but with -O2:

$ gcc-snapshot -Wall -Wextra testd.c -o testd -O2
$ ./testd
d = nan [fff8000000000000]
exp = 0

I don't know whether aliasing rules are broken, but note that there are no
warnings.

GCC 4.4.2 doesn't have this problem.


-- 
   Summary: Incorrect optimization (-O2) yields wrong code
(regression)
   Product: gcc
   Version: 4.5.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: c
AssignedTo: unassigned at gcc dot gnu dot org
ReportedBy: vincent at vinc17 dot org


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=42179



[Bug c/40442] Option -I and POSIX conformance (c99 utility)

2009-11-22 Thread vincent at vinc17 dot org


--- Comment #7 from vincent at vinc17 dot org  2009-11-23 04:51 ---
(In reply to comment #6)
> Not a GCC bug, the POSIX list generally agreed the effects of reordering
> system directories should be unspecified or undefined.

What the POSIX list says does not matter if this doesn't go further. What's
important is what the POSIX standard says. So, I've opened the following bug so
that the POSIX standard can be changed:

  http://austingroupbugs.net/view.php?id=187


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=40442



[Bug c/40960] New: POSIX requires that option -D have a lower precedence than -U

2009-08-04 Thread vincent at vinc17 dot org
[This concerns the POSIX c99 utility, but gcc should probably behave in the
same way, as on some platforms, c99 is gcc.]

In http://www.opengroup.org/onlinepubs/9699919799/utilities/c99.html POSIX
specifies:

  -D  name[=value]
Define name as if by a C-language #define directive. If no = value
is given, a value of 1 shall be used. The -D option has lower
precedence than the -U option. That is, if name is used in both a
-U and a -D option, name shall be undefined regardless of the
order of the options.

However, gcc doesn't take the precedence rule into account:

$ cat tst.c
int main(void)
{
#ifdef FOO
  return 1;
#else
  return 0;
#endif
}
$ c99 tst.c -UFOO -DFOO=1
$ ./a.out
zsh: exit 1 ./a.out

whereas FOO should be undefined and the return value should be 0, not 1.

I could reproduce this with various GCC versions, including:
gcc-snapshot (Debian 20090718-1) 4.5.0 20090718 (experimental) [trunk revision
149777]


-- 
   Summary: POSIX requires that option -D have a lower precedence
than -U
   Product: gcc
   Version: unknown
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: c
AssignedTo: unassigned at gcc dot gnu dot org
ReportedBy: vincent at vinc17 dot org


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=40960



[Bug c/40960] POSIX requires that option -D have a lower precedence than -U

2009-08-04 Thread vincent at vinc17 dot org


--- Comment #2 from vincent at vinc17 dot org  2009-08-04 13:29 ---
There would the possibility to have a POSIX mode implied by c99, but I don't
think having different behaviors would be a good idea. IMHO, Makefiles should
be fixed to stick to POSIX.

Also, portable Makefiles, i.e. those that work with other compilers, should
not be affected.

Note that Sun cc 5.0 is correct. And gcc 2.95.3 was also correct! (That's old,
but this is on an old Solaris machine where I still have an account.)


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=40960



[Bug rtl-optimization/323] optimized code gives strange floating point results

2009-07-22 Thread vincent at vinc17 dot org


--- Comment #131 from vincent at vinc17 dot org  2009-07-22 17:33 ---
(In reply to comment #130)
> #define axiom_order(a,b)  !(a < b && b < a)
> #define axiom_eq(a)       a == a
> #define third ((double)atoi("1")/atoi("3"))
[...]

> in C99 (+TC1,TC2,TC3) different precision is not allowed

It is allowed, except for...

> 5.1.2.3 p12:
>   ... In particular, casts and assignments are required to perform their
> specified conversion

But a division is not a cast, nor an assignment.

> 5.2.4.2.2 p8:
>   Except for assignment and cast (which remove all extra range and
> precision), the values of operations with floating operands and values
> subject to the usual arithmetic conversions and of floating constants are
> evaluated to a format whose range and precision may be greater than
> required by the type.

A greater precision is OK for division, in particular.
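
A minimal sketch (my own, assuming FLT_EVAL_METHOD == 2, e.g. x87 evaluation)
of the distinction: the division itself may be evaluated in extended
precision, but the assignment must round the value to double:

#include <stdio.h>

int main (void)
{
  volatile double a = 1.0, b = 3.0;
  double q = a / b;   /* assignment: must be rounded to double (5.1.2.3) */

  /* The comparison below may evaluate a / b in extended precision, so
     it can legitimately differ from q when FLT_EVAL_METHOD == 2. */
  printf (a / b == q ? "no excess precision\n" : "excess precision\n");
  return 0;
}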


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=323



[Bug c/40442] Option -I and POSIX conformance (c99 utility)

2009-06-15 Thread vincent at vinc17 dot org


--- Comment #4 from vincent at vinc17 dot org  2009-06-15 11:59 ---
(In reply to comment #3)
> If you have modified the implementation (by putting headers/libraries in
> standard directories where those headers/libraries were not provided by
> the implementation in those versions in those directories, for example),
> you are very definitely outside the scope of POSIX.

I'm not sure I understand what you mean. But the existing practice is that
additional headers/libraries (i.e. not those defined by the C standard)
provided by the vendor are stored under /usr/{include,lib}. And I don't think
this goes against POSIX. Concerning /usr/local, the FHS says:

  The /usr/local hierarchy is for use by the system administrator when
  installing software locally.

So, it should be safe to add libraries there. And again, this is the existing
practice.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=40442



[Bug c/40442] New: Option -I and POSIX conformance (c99 utility)

2009-06-14 Thread vincent at vinc17 dot org
GCC doesn't seem to provide a c99 utility, but some vendors provide one based
on gcc. And the GCC behavior can make POSIX conformance difficult to obtain.
Here's the difference.

POSIX.1-2008 says[*]:

  -I  directory
      Change the algorithm for searching for headers whose names are not
      absolute pathnames to look in the directory named by the directory
      pathname before looking in the usual places. Thus, headers whose
      names are enclosed in double-quotes ( "" ) shall be searched for
      first in the directory of the file with the #include line, then in
      directories named in -I options, and last in the usual places. For
      headers whose names are enclosed in angle brackets ( <> ), the
      header shall be searched for only in directories named in -I
      options and then in the usual places. Directories named in -I
      options shall be searched in the order specified. Implementations
      shall support at least ten instances of this option in a single
      c99 command invocation.

[*] http://www.opengroup.org/onlinepubs/9699919799/utilities/c99.html

So, the directories specified by -I should have the precedence over the usual
places. However, this is not the behavior of gcc; from the gcc 4.3.2 man page:

  -I dir
      Add the directory dir to the list of directories to be searched for
      header files.  Directories named by -I are searched before the
      standard system include directories.  If the directory dir is a
      standard system include directory, the option is ignored to ensure
      that the default search order for system directories and the
      special treatment of system headers are not defeated.  If dir
      begins with "=", then the "=" will be replaced by the sysroot
      prefix; see --sysroot and -isysroot.

As you can see, there is a difference for standard system include directories,
for which the option is ignored.

I suggest that GCC add a new option to switch to the POSIX specifications.
FYI, I've reported the bug against Debian here:

http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=533124


-- 
   Summary: Option -I and POSIX conformance (c99 utility)
   Product: gcc
   Version: unknown
Status: UNCONFIRMED
  Severity: enhancement
  Priority: P3
 Component: c
AssignedTo: unassigned at gcc dot gnu dot org
ReportedBy: vincent at vinc17 dot org


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=40442



[Bug c/40442] Option -I and POSIX conformance (c99 utility)

2009-06-14 Thread vincent at vinc17 dot org


--- Comment #2 from vincent at vinc17 dot org  2009-06-15 02:08 ---
This may be true for standard headers, but system directories don't contain
only standard headers: in practice, they generally also contain additional
libraries. And for instance, a -I/usr/include can be useful to override
headers/libraries installed in /usr/local/{include,lib}.

Then perhaps gcc (and POSIX) should distinguish between standard headers and
other headers.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=40442



[Bug c++/40186] floating point comparison is wrong ( !(a < b) && (b < a) is true )

2009-05-18 Thread vincent at vinc17 dot org


--- Comment #8 from vincent at vinc17 dot org  2009-05-18 14:56 ---
Are you sure that this comes from the extended precision? This would mean that
GCC does implicit extended -> double conversions in an asymmetric way, and
IIRC, I've never seen that.

I can't reproduce the problem with g++-4.4 -mfpmath=387 -Os on an x86_64
machine.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=40186



[Bug c/39867] New: [4.4 Regression] Wrong result of conditional operator exp < 2 ? 2U : (unsigned int) exp

2009-04-23 Thread vincent at vinc17 dot org
With GCC 4.4.0, the following program outputs 4294967295 instead of 2:

#include <stdio.h>

int main (void)
{
  int exp = -1;
  printf ("%u\n", exp < 2 ? 2U : (unsigned int) exp);
  return 0;
}

Note: I've tried with gcc-snapshot on a Debian/unstable x86_64 Linux machine,
but the same bug was reported to me with gcc-4.4.0 (it was found by Philippe
Theveny, who works on MPFR, and the MPFR tests fail because of that).

GCC 4.3.3 does not have this problem.


-- 
   Summary: [4.4 Regression] Wrong result of conditional operator
exp < 2 ? 2U : (unsigned int) exp
   Product: gcc
   Version: 4.4.0
Status: UNCONFIRMED
  Severity: major
  Priority: P3
 Component: c
AssignedTo: unassigned at gcc dot gnu dot org
ReportedBy: vincent at vinc17 dot org


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=39867



[Bug c/39867] [4.4 Regression] Wrong result of conditional operator exp < 2 ? 2U : (unsigned int) exp

2009-04-23 Thread vincent at vinc17 dot org


--- Comment #1 from vincent at vinc17 dot org  2009-04-23 13:44 ---
I forgot to say: the bug occurs whether one compiles with optimizations or not.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=39867



[Bug rtl-optimization/323] optimized code gives strange floating point results

2008-11-11 Thread vincent at vinc17 dot org


--- Comment #125 from vincent at vinc17 dot org  2008-11-11 10:13 ---
(In reply to comment #124)
> It seems like the C99 standard prohibits double rounding,

only if Annex F is claimed to be supported (note: Annex F is not just IEEE 754,
it also contains specific bindings). IEEE 754 doesn't prohibit double rounding
either (this depends on the bindings), but with C99 + Annex F, double rounding
is prohibited.

Now, bug 323 is not about double rounding specifically. There are two potential
problems:

1. A double variable (or result of a cast) contains a long double value
(not exactly representable in a double). This is prohibited by C99
(5.1.2.3#12, 6.3.1.5#2 and 6.3.1.8#2[52]). This problem seems to be fixed by
Joseph Myers' patch mentioned in comment #123 (but I haven't tried).

2. Computations on double expressions are carried out in extended precision.
This is allowed by C99 (except for casts and assignments), e.g. when
FLT_EVAL_METHOD=2. But if the implementation (i.e. here compiler + library +
...) claims to support Annex F, then this is prohibited. This point is rather
tricky because the compiler (GCC) and library (e.g. GNU libc) settings must be
consistent, so their developers need to talk with each other. FYI, I reported
the following bug concerning glibc:

  http://sourceware.org/bugzilla/show_bug.cgi?id=6981

because it sets __STDC_IEC_559__ to 1 unconditionally.

> The short answer is that no compiler, be it gcc, will be modified so that
> complex sequences of operations are used for floating-point operations in
> lieu of directly using x87 instructions! At least for two reasons:
> * x87 is now fading away (its use is deprecated on x86-64, it's not used by
> default on Intel Macintosh...)
> * Most people don't want to pay the performance hit.

That's why in Joseph's patch, it's just an option (disabled by default, but
enabled by -std=c99): one should assume that if a user asks for C99, then he
really wants it; and if he is able to add an option, then he is also able to
add another one to disable this fix when he knows it is useless for his
application (this is also true for -ffast-math).

GCC already supports SSE, but this patch is for processors that don't.

Also, the performance hit depends very much on the application: it is reduced
in applications that do not use floating point intensively, or in mostly
interactive applications.

> In addition, I think there are more urgent things to fix in gcc's
> floating-point system, such as support for #pragma STDC FENV_ACCESS

FYI, this is bug 34678. And I submitted bug 37845 concerning the FP_CONTRACT
pragma.

> * It is possible to force the x87 to use reduced precision for the mantissa
> (with inline asm or even now with gcc options).

Unfortunately, this means that long double wouldn't behave as expected, and
the behavior is not controllable enough (e.g. due to libraries, plugins...).
Such a change should be system-wide. Now, this is needed in software where
double rounding is prohibited (e.g. an XSLT processor).
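
For reference, a minimal sketch of this control-word approach (assuming
glibc's <fpu_control.h> on x86; note that it reduces the significand
precision but not the exponent range):

#include <fpu_control.h>

/* Switch the x87 FPU to 53-bit (double) significand precision.  The
   exponent range remains that of extended precision, so double rounding
   on underflow is still possible. */
static void set_x87_double_precision (void)
{
  fpu_control_t cw;
  _FPU_GETCW (cw);
  cw = (cw & ~_FPU_EXTENDED) | _FPU_DOUBLE;
  _FPU_SETCW (cw);
}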


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=323



[Bug middle-end/37845] New: gcc ignores FP_CONTRACT pragma set to OFF

2008-10-16 Thread vincent at vinc17 dot org
To conform to the ISO C standard (when FLT_EVAL_METHOD is 0, 1 or 2, which is
the case for gcc), gcc should either take FP_CONTRACT pragmas into account or
(in the meantime) assume they are set to OFF, i.e. disallow the contraction of
floating expressions. This means in particular that -mno-fused-madd should be
the default (on processors for which this option is supported, e.g. PowerPC).

I've tested with gcc-4.4-20081010 on ia64 that this bug is still present.
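
For reference, a minimal sketch (my own, in the spirit of the tst-ieee754.c
test mentioned in the comment #3 entry below) of how a contracted
multiply-add can be detected; the constants are chosen so that the fused and
unfused results differ, assuming double evaluation (e.g. on PowerPC):

#include <stdio.h>

int main (void)
{
  volatile double a = 1.0 + 0x1p-27; /* (1 + 2^-27)^2 = 1 + 2^-26 + 2^-54 */
  volatile double p = a * a;         /* rounded product: 1 + 2^-26 */
  double r = a * a - p;  /* 0 if not contracted; 2^-54 with a fused madd */

  printf (r == 0 ? "not contracted\n" : "contracted (fused)\n");
  return 0;
}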


-- 
   Summary: gcc ignores FP_CONTRACT pragma set to OFF
   Product: gcc
   Version: 4.4.0
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: middle-end
AssignedTo: unassigned at gcc dot gnu dot org
ReportedBy: vincent at vinc17 dot org


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=37845



[Bug middle-end/37846] New: Option -mno-fused-madd should be supported on IA-64

2008-10-16 Thread vincent at vinc17 dot org
Option -mno-fused-madd is currently not supported on IA-64. This means that the
expression x*y+z is always fused (IA-64's fma instruction) and this cannot be
disabled, unlike on PowerPC.

BTW, I wonder why this option exists for PowerPC (and other CPU types) but not
for IA-64. Isn't it similar, or is there some difficulty?


-- 
   Summary: Option -mno-fused-madd should be supported on IA-64
   Product: gcc
   Version: 4.4.0
Status: UNCONFIRMED
  Severity: enhancement
  Priority: P3
 Component: middle-end
AssignedTo: unassigned at gcc dot gnu dot org
ReportedBy: vincent at vinc17 dot org


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=37846



[Bug target/37845] gcc ignores FP_CONTRACT pragma set to OFF

2008-10-16 Thread vincent at vinc17 dot org


--- Comment #3 from vincent at vinc17 dot org  2008-10-16 13:54 ---
(In reply to comment #1)
> Confirmed.  The FP_CONTRACT macro is not implemented, but the default
> behavior of GCC is to behave like it was set to OFF.

The problem is that on PowerPC, x*y+z is fused (contracted) by default (which
is forbidden when FP_CONTRACT is OFF). I could test only with Apple's gcc
4.0.1, but the man page of the gcc snapshot implies that the problem remains
with the current versions:

   IBM RS/6000 and PowerPC Options

   -mfused-madd
   -mno-fused-madd
   Generate code that uses (does not use) the floating point multiply
   and accumulate instructions.  These instructions are generated by
   default if hardware floating point is used.

But the correct behavior would be that these instructions should not be
generated by default.

On http://www.vinc17.org/software/tst-ieee754.c compiled with

  gcc -Wall -O2 -std=c99 tst-ieee754.c -o tst-ieee754 -lm

I get:

$ ./tst-ieee754 | grep fused
x * y + z with FP_CONTRACT OFF is fused.

I need to add -mno-fused-madd to get the correct behavior:

$ ./tst-ieee754 | grep fused
x * y + z with FP_CONTRACT OFF is not fused.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=37845



[Bug middle-end/34678] Optimization generates incorrect code with -frounding-math option (#pragma STDC FENV_ACCESS not implemented)

2008-10-16 Thread vincent at vinc17 dot org


--- Comment #14 from vincent at vinc17 dot org  2008-10-16 14:20 ---
(In reply to comment #12)
> Turning -frounding-math on by default would be a disservice to (most of)
> our users which is why the decision was made (long ago) to not enable this
> by default.

The compiler should generate correct code by default, and options like
-funsafe-math-optimizations are there to allow the users to run the compiler in
a non-conforming mode. So, it would be wise to have -frounding-math by default
and add -fno-rounding-math to the options enabled by
-funsafe-math-optimizations.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=34678



[Bug middle-end/34678] Optimization generates incorrect code with -frounding-math option (#pragma STDC FENV_ACCESS not implemented)

2008-10-16 Thread vincent at vinc17 dot org


--- Comment #16 from vincent at vinc17 dot org  2008-10-16 17:39 ---
I was suggesting to improve the behavior by having -frounding-math by default
(at least when the user compiles with -std=c99 -- if he does this, then this
means that he shows some interest in a conforming implementation). This is not
perfect, but would be better than the current behavior.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=34678



[Bug middle-end/37838] New: gcc ignores FENV_ACCESS pragma set to ON

2008-10-15 Thread vincent at vinc17 dot org
gcc currently ignores the FENV_ACCESS pragma set to ON, and generates
incorrect code in such cases. Until gcc recognizes this pragma, the
-frounding-math option should probably be on by default. For instance,
consider the following code.

#include <stdio.h>
#include <float.h>
#include <math.h>
#include <fenv.h>
#pragma STDC FENV_ACCESS ON

static void tstall (void)
{
  volatile double x = DBL_MIN;

  printf ("%.20g = %.20g\n", 1.0 + DBL_MIN, 1.0 + x);
  printf ("%.20g = %.20g\n", 1.0 - DBL_MIN, 1.0 - x);
}

int main (void)
{
#ifdef FE_TONEAREST
  printf ("Rounding to nearest\n");
  if (fesetround (FE_TONEAREST))
    printf ("Error\n");
  else
    tstall ();
#endif

#ifdef FE_TOWARDZERO
  printf ("Rounding toward 0\n");
  if (fesetround (FE_TOWARDZERO))
    printf ("Error\n");
  else
    tstall ();
#endif

#ifdef FE_DOWNWARD
  printf ("Rounding toward -inf\n");
  if (fesetround (FE_DOWNWARD))
    printf ("Error\n");
  else
    tstall ();
#endif

#ifdef FE_UPWARD
  printf ("Rounding toward +inf\n");
  if (fesetround (FE_UPWARD))
    printf ("Error\n");
  else
    tstall ();
#endif

  return 0;
}

By default, I get incorrect results:

Rounding to nearest
1 = 1
1 = 1
Rounding toward 0
1 = 1
1 = 0.99999999999999988898
Rounding toward -inf
1 = 1
1 = 0.99999999999999988898
Rounding toward +inf
1 = 1.0000000000000002220
1 = 1

If I add the -frounding-math option, I get correct results:

Rounding to nearest
1 = 1
1 = 1
Rounding toward 0
1 = 1
0.99999999999999988898 = 0.99999999999999988898
Rounding toward -inf
1 = 1
0.99999999999999988898 = 0.99999999999999988898
Rounding toward +inf
1.0000000000000002220 = 1.0000000000000002220
1 = 1


-- 
   Summary: gcc ignores FENV_ACCESS pragma set to ON
   Product: gcc
   Version: 4.3.2
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: middle-end
AssignedTo: unassigned at gcc dot gnu dot org
ReportedBy: vincent at vinc17 dot org


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=37838



[Bug middle-end/34678] Optimization generates incorrect code with -frounding-math option (#pragma STDC FENV_ACCESS not implemented)

2008-10-15 Thread vincent at vinc17 dot org


--- Comment #9 from vincent at vinc17 dot org  2008-10-15 21:29 ---
What was said in bug 37838 but not here is that -frounding-math sometimes fixes
the problem. So, I was suggesting that -frounding-math should be enabled by
default.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=34678



[Bug middle-end/34678] Optimization generates incorrect code with -frounding-math option (#pragma STDC FENV_ACCESS not implemented)

2008-10-15 Thread vincent at vinc17 dot org


--- Comment #11 from vincent at vinc17 dot org  2008-10-15 22:33 ---
(In reply to comment #10)
> The default of -fno-rounding-math is chosen with the reason that this is
> what a compiler can assume if #pragma STDC FENV_ACCESS is not turned on.

The C standard doesn't require a compiler to recognize the FENV_ACCESS pragma,
but if the compiler does not recognize it, then it must assume that this pragma
is ON (otherwise the generated code can be incorrect). That's why I suggested
that -frounding-math should be the default.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=34678



[Bug target/37390] wrong-code on i486-linux-gnu with -O[12], -O0 works

2008-09-06 Thread vincent at vinc17 dot org


--- Comment #8 from vincent at vinc17 dot org  2008-09-06 18:42 ---
(In reply to comment #7)
> Does increasing bits cause floating point errors. How could 64 bit precison
> give correct result where as 80 bit give incorrect one.

You can have rounding errors whether you increase the precision or not. In
particular, in practice, the pow() function is not correctly rounded, and
worse, the error may be quite large. So, I'd say that whatever the choice
made by Linux, your code may be regarded as wrong (and I think this bug is
just invalid, as you could have similar problems with SSE2).

> [EMAIL PROTECTED]:~/prog/tju$ gcc -O2 -mfpmath=sse bug_short.c -lm
> bug_short.c:1: warning: SSE instruction set disabled, using 387 arithmetics

You probably need another compilation flag, like -march=pentium4.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=37390



[Bug target/37390] wrong-code on i486-linux-gnu with -O[12], -O0 works

2008-09-06 Thread vincent at vinc17 dot org


--- Comment #11 from vincent at vinc17 dot org  2008-09-06 22:19 ---
(In reply to comment #10)
> The funny thing is that this only happens with -O2 or -O1 but not with -O0
> ie no optimization it is all correct , when we optimize the results start
> varying.

Because with -O0, some values are stored to memory and re-read from memory,
hence rounded to double precision (53-bit significand), a bit like with
-ffloat-store.
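
A minimal sketch of this effect (my own, assuming x87 evaluation, i.e.
FLT_EVAL_METHOD == 2; the volatile store plays the role of the -O0 spill to
memory):

#include <stdio.h>

int main (void)
{
  volatile double a = 1.0, b = 1e-17;
  double s = a + b;       /* may be kept in an 80-bit x87 register */
  volatile double m = s;  /* forced store: rounds to 53-bit double */

  /* With excess precision, (a + b) - m can be nonzero. */
  printf ("%g\n", (a + b) - m);
  return 0;
}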


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=37390



[Bug c/36299] spurious and undocumented warning with -Waddress for a == 0 when a is an array

2008-08-23 Thread vincent at vinc17 dot org


--- Comment #3 from vincent at vinc17 dot org  2008-08-23 20:00 ---
(In reply to comment #2)
> this warning was added on purpose, because probably someone requested it. I
> don't see that it is very different from the documented case of using the
> address of a function in a conditional.

The documentation should be improved anyway (the word "suspicious" is very
subjective).

> You should be able to work-around the macro case by casting the array to
> (char *) or perhaps casting to (void *) ?

Yes, this makes sense. Perhaps this should be documented.
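
For reference, a minimal sketch of this cast workaround (the macro name
IS_NULL is hypothetical):

#include <stdio.h>

/* Casting to (void *) before the null test avoids the -Waddress warning
   when the macro argument happens to be an array. */
#define IS_NULL(p) ((void *) (p) == 0)

int main (void)
{
  int a[4];
  printf ("%d\n", IS_NULL (a));  /* no warning; prints 0 */
  return 0;
}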

> That said, we would like to not warn within macros for a wide range of
> warnings but we don't have the infrastructure to do that yet.

How about something like __extension__? E.g. __no_warnings__ would disable
the warnings for the following statement or expression; for an expression,
one could still use __no_warnings__ with ({ ... }). Keywords for individual
warnings or warning groups would be even better. At the same time, it would
be nice to have some macro defined, declaring that such a keyword is
available (that would be much better than testing the GCC version).


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36299



[Bug middle-end/36296] wrong warning about potential uninitialized variable

2008-08-18 Thread vincent at vinc17 dot org


--- Comment #9 from vincent at vinc17 dot org  2008-08-18 22:58 ---
(In reply to comment #8)
> Please provide a preprocessed reduced testcase as similar to the original
> as possible.

Here's a similar testcase.

$ cat tst.c
void *foo (void);
void bar (void *);

void f (void)
{
  int init = 0;
  void *p;

  while (1)
{
  if (init == 0)
{
  p = foo ();
  init = 2;
}
  bar (p);
}
}

$ gcc -Wall -O2 tst.c -c
tst.c: In function 'f':
tst.c:7: warning: 'p' may be used uninitialized in this function

This is quite strange: if I replace the value 2 by 1 or if I replace foo() by
0, the warning is no longer displayed.

Note: in reality (in MPFR), the variable I called init here is the size of
the array (0 when the array hasn't been allocated yet).


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36296



[Bug middle-end/36296] bogus uninitialized warning (loop representation)

2008-08-18 Thread vincent at vinc17 dot org


--- Comment #11 from vincent at vinc17 dot org  2008-08-19 01:31 ---
(In reply to comment #10)
> If I replace the value 2 by 1 I still get the warning in GCC 4.4, so that
> really sounds strange. Are you sure about that?

Yes, and here Debian's GCC 4.4 snapshot has the same behavior as GCC 4.3.1
(also from Debian). Also, the optimized trees are not the same for 1 and 2.

vin% cat tst.c
void *foo (void);
void bar (void *);

void f (void)
{
  int init = 0;
  void *p;

  while (1)
{
  if (init == 0)
{
  p = foo ();
  init = INIT;
}
  bar (p);
}
}
vin% gcc --version
gcc.real (Debian 4.3.1-9) 4.3.1
Copyright (C) 2008 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

vin% gcc -Wall -O2 tst.c -c -fdump-tree-optimized -DINIT=1
vin% cat tst.c.126t.optimized

;; Function f (f)

Analyzing Edge Insertions.
f ()
{
  void * p;

<bb 2>:
  p = foo ();

<bb 3>:
  bar (p);
  goto <bb 3>;

}


vin% gcc -Wall -O2 tst.c -c -fdump-tree-optimized -DINIT=2
tst.c: In function 'f':
tst.c:7: warning: 'p' may be used uninitialized in this function
vin% cat tst.c.126t.optimized

;; Function f (f)

Analyzing Edge Insertions.
f ()
{
  void * p;
  int init;

<bb 2>:
  init = 0;

<bb 3>:
  if (init == 0)
    goto <bb 4>;
  else
    goto <bb 5>;

<bb 4>:
  p = foo ();
  init = 2;

<bb 5>:
  bar (p);
  goto <bb 3>;

}


vin% /usr/lib/gcc-snapshot/bin/gcc --version
gcc (Debian 20080802-1) 4.4.0 20080802 (experimental) [trunk revision 138551]
Copyright (C) 2008 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

vin% /usr/lib/gcc-snapshot/bin/gcc -Wall -O2 tst.c -c -DINIT=1
vin% /usr/lib/gcc-snapshot/bin/gcc -Wall -O2 tst.c -c -DINIT=2
tst.c: In function 'f':
tst.c:7: warning: 'p' may be used uninitialized in this function
vin% 


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36296



[Bug rtl-optimization/323] optimized code gives strange floating point results

2008-07-17 Thread vincent at vinc17 dot org


--- Comment #120 from vincent at vinc17 dot org  2008-07-17 12:41 ---
(In reply to comment #119)
> REAL RESULT:
> 5.313991e+33
> 5.313991e+33
> 0.000000e+00
> 0.000000e+00

Only without optimizations. But since the ISO C standard allows expressions to
be evaluated in a higher precision, there's no bug here (unless you show a
contradiction with the value of FLT_EVAL_METHOD, but the FP_CONTRACT pragma
should also be set to OFF -- though this currently has no effect on gcc).


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=323



[Bug driver/36731] New: gcc -v should include default arch/tune values

2008-07-04 Thread vincent at vinc17 dot org
gcc -v output should include default values corresponding to the -march and
-mtune options.

As a reference: http://gcc.gnu.org/ml/gcc-help/2008-07/msg00062.html


-- 
   Summary: gcc -v should include default arch/tune values
   Product: gcc
   Version: 4.3.1
Status: UNCONFIRMED
  Severity: enhancement
  Priority: P3
 Component: driver
AssignedTo: unassigned at gcc dot gnu dot org
ReportedBy: vincent at vinc17 dot org


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36731



[Bug driver/36731] gcc -v should include default arch/tune values

2008-07-04 Thread vincent at vinc17 dot org


--- Comment #3 from vincent at vinc17 dot org  2008-07-04 19:34 ---
(In reply to comment #1)
> Works if you provide an (empty) input file:
[...]

There's mtune, but not march. Also, most users probably don't know that.

(In reply to comment #2)
> They do already via the configure options.

Not here. Or perhaps one can deduce the value (how?), but that's not obvious
and not documented either.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36731



[Bug rtl-optimization/323] optimized code gives strange floating point results

2008-06-24 Thread vincent at vinc17 dot org


--- Comment #118 from vincent at vinc17 dot org  2008-06-24 20:45 ---
(In reply to comment #117)
> By a lucky hit, I have found this in the GCC documentation:
>
> -mpc32
> -mpc64
> -mpc80

OK, this is new in gcc 4.3. I haven't tried, but if gcc just changes the
precision without changing the values of the float.h macros to make them
correct, this is just a workaround (better than nothing, though). Also, this
is a problem for library code if it requires double precision instead of
extended precision, as these options will probably not be taken into account
at that point. (Unfortunately, it's probably too late to have a clean ABI.)


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=323



[Bug rtl-optimization/323] optimized code gives strange floating point results

2008-06-22 Thread vincent at vinc17 dot org


--- Comment #116 from vincent at vinc17 dot org  2008-06-22 21:14 ---
(In reply to comment #114)
> Yes, but this requires quite a complicated workaround (solution (4) in my
> comment #109).

The problem is on the compiler side, which could store every result of a cast
or an assignment to memory (this is inefficient, but that's what you get with
the x87, and the ISO C language could be blamed too for *requiring* something
like that instead of being more flexible).

> So you could say that the IEEE754 double precision type is available even
> on a processor without any FPU because this can be emulated using integers.

Yes, but a conforming implementation would be the processor + a library, not
just the processor with its instruction set.

> Moreover, if we assess things pedantically, the workaround (4) still
> doesn't fully obey the IEEE single/double precision type(s), because there
> remains the problem of double rounding of denormals.

As I said, in this particular case (underflow/overflow), double rounding is
allowed by the IEEE standard. It may not be allowed by some languages (e.g.
XPath, and Java in some mode) for good or bad reasons, but this is another
problem.

> I quote, too:
> Applies To
>    Microsoft® Visual C++®

Now I assume that it follows the MS-Windows API (though nothing is certain with
Microsoft). And the other compilers under MS-Windows could (or should) do the
same thing.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=323



[Bug c/36588] New: Request warnings about the precedence of the unary - over the binary * and /

2008-06-21 Thread vincent at vinc17 dot org
The problem: it is too easy to write incorrect code with the unary -, e.g.
  - i / 2 (which means (- i) / 2)
when one really wants - (i / 2). The reasons are:

1. Whereas the binary - has a lower precedence than the binary * (multiply)
and / (divide), the unary - has a higher precedence. It is easy to forget
this difference, in particular because the same symbol is used with two
different precedences. For instance, the expressions 0 - i / 2 and - i / 2
look the same, but the former corresponds to 0 - (i / 2) and the latter
corresponds to (- i) / 2.

2. Mathematically (in a ring), the precedence of the unary - (for the
opposite) vs * (multiply) does not matter, since the result does not depend
on the precedence; so it is never made explicit in practice, and what the
writer of a math expression really means depends on the context.

The following code shows such a problem that can occur in practice: the unary
- in k = - i / 2 yields an integer overflow, hence undefined behavior. The
user may not be aware of it. And the fact that this bug is hidden with -O2
optimizations makes it worse. For instance, when compiling this code without
optimizations, I get:
  j = 1073741824
  k = -1073741824
(which is not the intended result). With -O2 optimizations, I get:
  j = 1073741824
  k = 1073741824

Adding the following warnings could avoid bugs like this one:
  warning: suggest parentheses around unary - in operand of *
  warning: suggest parentheses around unary - in operand of /

Note: such warnings should also apply to floating-point expressions as they can
be affected in the rounding directions FE_DOWNWARD and FE_UPWARD.

#include <stdio.h>
#include <limits.h>

int main (void)
{
  int i, j, k;

  i = INT_MIN;
  j = 0 - i / 2;
  k = - i / 2;
  printf ("j = %d\n", j);
  printf ("k = %d\n", k);
  return 0;
}


-- 
   Summary: Request warnings about the precedence of the unary -
over the binary * and /
   Product: gcc
   Version: 4.3.1
Status: UNCONFIRMED
  Severity: enhancement
  Priority: P3
 Component: c
AssignedTo: unassigned at gcc dot gnu dot org
ReportedBy: vincent at vinc17 dot org


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36588



[Bug rtl-optimization/323] optimized code gives strange floating point results

2008-06-21 Thread vincent at vinc17 dot org


--- Comment #113 from vincent at vinc17 dot org  2008-06-22 00:52 ---
(In reply to comment #112)
> It's true that double *precision* is available on x87. But not the
> *IEEE-754 double precision type*.

It is available when storing a result to memory.

> Beside the precision of mantissa, this includes also the range of exponent.
> On the x87, it is possible to set the precision of mantissa but not the
> range of exponent.

The IEEE754-1985 standard allows this. Section 4.3: "Normally, a result is
rounded to the precision of its destination. However, some systems deliver
results only to double or extended destinations. On such a system the user,
which may be a high-level language compiler, shall be able to specify that a
result be rounded instead to single precision, though it may be stored in the
double or extended format with its wider exponent range. [...]"

> That's why I believe it doesn't obey the IEEE. (I haven't ever seen the
> IEEE-754 standard but I base on the work of David Monniaux.)

See above. Also beware of subtleties in the wording used by David Monniaux.
FYI, the IEEE754-1985 standard (with minor corrections) is available from the
following page:
  http://www.validlab.com/754R/
(look at the end). AFAIK, the IEEE754-1985 standard was designed from the x87
implementation, so it would have been very surprising if the x87 didn't
conform to IEEE754-1985.

> Do you mean that on Windows, long double has (by default) no more precision
> than double? I don't think so (it's confirmed by my experience).

I don't remember my original reference, but here's a new one:
  http://msdn.microsoft.com/en-us/library/aa289157(vs.71).aspx
In fact, this depends on the architecture. I quote: x86. Intermediate
expressions are computed at the default 53-bit precision with an extended range
provided by a 16-bit exponent. When these 53:16 values are spilled to memory
(as can happen during a function call), the extended exponent range will be
narrowed to 11-bits. That is, spilled values are cast to the standard double
precision format with only an 11-bit exponent.
A user may switch to extended 64-bit precision for intermediate rounding by
altering the floating-point control word using _controlfp and by enabling FPU
environment access (see The fpenv_access Pragma). However, when extended
precision register-values are spilled to memory, the intermediate results will
still be rounded to double precision.
This particular semantic is subject to change.

Note that the behavior has changed in some version of Windows (it was using the
extended precision, then it switched to double precision for x86). Now, this
may also depend on the compiler.

> According to the paper of David Monniaux, only FreeBSD 4 sets double
> precision by default (but I know almost nothing about BSD).

I've noted that amongst the BSD's, NetBSD does this too (I don't remember if
I've tried or got it from some document, and this might also depend on the
NetBSD version and/or the processor).


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=323



[Bug rtl-optimization/323] optimized code gives strange floating point results

2008-06-20 Thread vincent at vinc17 dot org


--- Comment #111 from vincent at vinc17 dot org  2008-06-20 16:09 ---
(In reply to comment #109)
> WHERE'S THE BUG
> This is really not a GCC bug. The bug is actually in the x87 FPU because it
> doesn't obey the IEEE standard.

Concerning the standards: the x87 FPU does obey the IEEE754-1985 standard,
which *allows* extended precision, and double precision is *available*. In
fact, one could say that GCC even obeys the IEEE standard (which doesn't
define bindings: the definition of "destination" on page 4 of the
IEEE754-1985 standard is rather vague and lets the language define it
exactly), but it doesn't obey the ISO C99 standard on some points.

Concerning the x87 FPU: one can say, however, that the x87 is badly designed
because it is not possible to statically specify the precision. Nevertheless,
the OS/language implementations should take care of this problem.

Note: the solution chosen by some OSes (*BSD, MS-Windows...) is to configure
the processor to IEEE double precision by default (thus long double is also
in double precision, but this is OK as far as the C language is concerned;
there's still a problem with float, but in practice nobody cares, AFAIK).

> If you wish to compile for processors which don't have SSE, you have a few
> possibilities:
> (1) A very simple solution: Use long double everywhere.

This avoids the bug, but this is not possible for software that requires double
precision exactly, e.g. XML tools that use XPath. See other examples here:

  http://www.vinc17.org/research/extended.en.html

Also this makes maintenance of software more difficult because long double can
be much slower on some platforms, which support this type in software to
provide more precision (e.g. PowerPC Linux and Mac OS X implement a
double-double arithmetic, Solaris and HPUX implement quadruple precision).

> (But be careful when transfering binary data in long double format between
> computers because this format is not standardized and so the concrete bit
> representations vary between different CPU architectures.)

Well, this is not specific to long double anyway: there exist 3 possible
endiannesses for the double format (x86, PowerPC, ARM).

> (2) A partial but simple solution: Do comparisons on volatile variables only.

Yes (but this is also a problem concerning the maintenance of portable
programs).

> (4) A complex solution: [...]

Yes, this is the workaround I use in practice.
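
For reference, a minimal sketch of one common form of such a workaround (my
assumption about its shape, not a copy of the elided solution (4)): force a
store to memory so that the value is rounded to double precision:

/* Force a double to be rounded to 53-bit precision by storing it to
   memory; volatile prevents the compiler from keeping the value in an
   80-bit x87 register.  (This does not address the double rounding of
   denormals mentioned above.) */
static double force_double (double x)
{
  volatile double y = x;
  return y;
}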

> RECOMMENDATIONS
> I think this problem is really serious and general. Therefore, programmers
> should be warned soon enough.

Yes, but note that this is not the only problem with compilers. See e.g.

  http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36578

for a bug related to casts to long double on x86_64 and ia64. This one is now
tested by: http://www.vinc17.org/software/tst-ieee754.c (which has also tested
bug 323 for a long time).


-- 

vincent at vinc17 dot org changed:

   What|Removed |Added

 CC||vincent at vinc17 dot org


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=323



[Bug middle-end/36578] New: cast to long double not taken into account when result stored to a double

2008-06-19 Thread vincent at vinc17 dot org
The following program shows that casts to long double are not taken into
account when the result is stored to a variable of type double. This bug
occurs with gcc from 3.4 to 4.3.1, but did not occur with gcc 3.3.

The bug can be reproduced on a Linux/x86_64 machine (i.e. where the double
type corresponds to the IEEE-754 double precision and the long double type
corresponds to the traditional x86 extended precision) with the arguments:
  4294967219 4294967429

(these arguments allow the double rounding effect to be visible).
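
For reference, here is the arithmetic behind these values (my own
computation, assuming a 53-bit double significand and a 64-bit extended
significand):

  a = 2^32 - 77, b = 2^32 + 133
  exact product: a*b = 2^64 + 56*2^32 - 10241 = 18446744314227709951
  rounded once to double (53 bits):  18446744314227707904
  rounded to extended (64 bits):     18446744314227709952
    then rounded to double:          18446744314227712000  (double rounding)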

#include <stdio.h>
#include <stdlib.h>

int main (int argc, char **argv)
{
  double a, b, c, d, e;
  long double al, bl, dl, el;

  if (argc != 3)
    exit (1);

  a = atof (argv[1]);
  b = atof (argv[2]);
  al = a;
  bl = b;

  c = a * b;
  d = (long double) a * (long double) b;
  e = al * bl;
  dl = (long double) a * (long double) b;
  el = al * bl;

  printf ("a  =  %.0f\n", a);
  printf ("b  =  %.0f\n", b);
  printf ("c  =  %.0f\n", c);
  printf ("d  =  %.0f\n", d);
  printf ("e  =  %.0f\n", e);
  printf ("dl =  %.0Lf\n", dl);
  printf ("el =  %.0Lf\n", el);

  return 0;
}

Incorrect result (with gcc 3.4 to gcc 4.3.1):
a  =  4294967219
b  =  4294967429
c  =  18446744314227707904
d  =  18446744314227707904
e  =  18446744314227712000
dl =  18446744314227709952
el =  18446744314227709952

Correct result (as given by gcc 3.3) is the same except:
d  =  18446744314227712000

Note: I compiled with the options -std=c99 -Wall -pedantic.


-- 
   Summary: cast to long double not taken into account when result
stored to a double
   Product: gcc
   Version: 4.3.1
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: middle-end
AssignedTo: unassigned at gcc dot gnu dot org
ReportedBy: vincent at vinc17 dot org


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36578



[Bug middle-end/36578] cast to long double not taken into account when result stored to a double

2008-06-19 Thread vincent at vinc17 dot org


--- Comment #1 from vincent at vinc17 dot org  2008-06-19 14:37 ---
To make things clear, perhaps I should have added:

#if __STDC__ == 1 && __STDC_VERSION__ >= 199901 && defined(__STDC_IEC_559__)
#pragma STDC FP_CONTRACT OFF
  printf ("__STDC_IEC_559__ defined:\n"
          "The implementation shall conform to the IEEE-754 standard.\n"
          "FLT_EVAL_METHOD is %d (see ISO/IEC 9899, 5.2.4.2.2#7).\n\n",
          (int) FLT_EVAL_METHOD);
#endif

which outputs:

__STDC_IEC_559__ defined:
The implementation shall conform to the IEEE-754 standard.
FLT_EVAL_METHOD is 0 (see ISO/IEC 9899, 5.2.4.2.2#7).

So, one can't even say that the value of d is correct merely because it is the
same as the exact result rounded (once) to double precision: under these
settings, the double rounding is required, and the fact that it doesn't occur
here is a bug.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36578



[Bug target/36484] New: g++ generates code with illegal instruction on Pentium D / x86_64

2008-06-10 Thread vincent at vinc17 dot org
To reproduce the bug, get MPFR trunk, compile with CC=g++ (with or without
optimizations) and make check. The crash occurs on tprintf (and tsprintf and
tfprintf). I could reproduce it with both
  g++-4.2 (GCC) 4.2.4 (Debian 4.2.4-2)
  g++.real (Debian 4.3.1-1) 4.3.1

No problem with gcc.

(gdb) run
Starting program: /home/vlefevre/software/mpfr/tests/.libs/lt-tprintf

Program received signal SIGILL, Illegal instruction.
0x7f65b27c5556 in __gmpfr_vasprintf (ptr=0x7fffbac424b0,
fmt=0x404de4 "A, b. %Fe, c. %i%zn\n", ap=0x7fffbac424f0)
at vasprintf.c:1845
1845  ++fmt;
Current language:  auto; currently c++
(gdb) bt
#0  0x7f65b27c5556 in __gmpfr_vasprintf (ptr=0x7fffbac424b0,
fmt=0x404de4 "A, b. %Fe, c. %i%zn\n", ap=0x7fffbac424f0)
at vasprintf.c:1845
#1  0x7f65b27c1671 in __gmpfr_vprintf (
fmt=0x404dde "a. %R*A, b. %Fe, c. %i%zn\n", ap=0x7fffbac424f0)
at printf.c:85
#2  0x004028f4 in check_vprintf (
fmt=0x404dde "a. %R*A, b. %Fe, c. %i%zn\n") at tprintf.c:67
#3  0x00402c27 in check_mixed () at tprintf.c:192
#4  0x0040312b in main (argc=1, argv=0x7fffbac427d8) at tprintf.c:361

And with valgrind:

vin:...ftware/mpfr/tests ./tprintf.vg
==19537== Memcheck, a memory error detector.
==19537== Copyright (C) 2002-2007, and GNU GPL'd, by Julian Seward et al.
==19537== Using LibVEX rev 1804, a library for dynamic binary translation.
==19537== Copyright (C) 2004-2007, and GNU GPL'd, by OpenWorks LLP.
==19537== Using valgrind-3.3.0-Debian, a dynamic binary instrumentation framework.
==19537== Copyright (C) 2000-2007, and GNU GPL'd, by Julian Seward et al.
==19537== For more details, rerun with: -v
==19537== 
vex amd64-IR: unhandled instruction bytes: 0xF 0xB 0x48 0x83
==19537== valgrind: Unrecognised instruction at address 0x4E46556.
==19537== Your program just tried to execute an instruction that Valgrind
==19537== did not recognise.  There are two possible reasons for this.
==19537== 1. Your program has a bug and erroneously jumped to a non-code
==19537==    location.  If you are running Memcheck and you just saw a
==19537==    warning about a bad jump, it's probably your program's fault.
==19537== 2. The instruction is legitimate but Valgrind doesn't handle it,
==19537==    i.e. it's Valgrind's fault.  If you think this is the case or
==19537==    you are not sure, please let us know and we'll try to fix it.
==19537== Either way, Valgrind will now raise a SIGILL signal which will
==19537== probably kill your program.
==19537== 
==19537== Process terminating with default action of signal 4 (SIGILL): dumping core
==19537==  Illegal opcode at address 0x4E46556
==19537==at 0x4E46556: __gmpfr_vasprintf (vasprintf.c:1845)
==19537==by 0x4E42670: __gmpfr_vprintf (printf.c:85)
==19537==by 0x4028F3: check_vprintf(char*, ...) (tprintf.c:67)
==19537==by 0x402C26: check_mixed() (tprintf.c:192)
==19537==by 0x40312A: main (tprintf.c:361)
==19537== 
==19537== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 8 from 1)
==19537== malloc/free: in use at exit: 6,832 bytes in 14 blocks.
==19537== malloc/free: 646 allocs, 632 frees, 390,447 bytes allocated.
==19537== For counts of detected errors, rerun with: -v
==19537== searching for pointers to 14 not-freed blocks.
==19537== checked 209,424 bytes.
==19537== 
==19537== LEAK SUMMARY:
==19537==    definitely lost: 0 bytes in 0 blocks.
==19537==      possibly lost: 0 bytes in 0 blocks.
==19537==    still reachable: 6,832 bytes in 14 blocks.
==19537==         suppressed: 0 bytes in 0 blocks.
==19537== Rerun with --leak-check=full to see details of leaked memory.
zsh: illegal hardware instruction  ./tprintf.vg


-- 
   Summary: g++ generates code with illegal instruction on Pentium D
/ x86_64
   Product: gcc
   Version: 4.2.4
Status: UNCONFIRMED
  Severity: major
  Priority: P3
 Component: target
AssignedTo: unassigned at gcc dot gnu dot org
ReportedBy: vincent at vinc17 dot org


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36484



[Bug target/36484] g++ generates code with illegal instruction on Pentium D / x86_64

2008-06-10 Thread vincent at vinc17 dot org


--- Comment #2 from vincent at vinc17 dot org  2008-06-10 09:09 ---
(In reply to comment #1)
 You should try out 4.3.1.

As I said, I could reproduce the problem with this version too (but there's a
bug in gmp.h, so I was not sure).


-- 

vincent at vinc17 dot org changed:

   What|Removed |Added

   Severity|normal  |major


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36484



[Bug target/36484] g++ generates code with illegal instruction on Pentium D / x86_64

2008-06-10 Thread vincent at vinc17 dot org


--- Comment #4 from vincent at vinc17 dot org  2008-06-10 11:26 ---
	cmpb	$42, -481(%rbp)
	je	.L458
	jmp	.L456
.L463:
	cmpb	$85, -481(%rbp)
	je	.L461
	cmpb	$90, -481(%rbp)
	je	.L462
	jmp	.L456
.L458:
	.loc 1 1845 0
	addq	$1, -336(%rbp)
	.loc 1 1846 0
	.value	0x0b0f
.L459:
	.loc 1 1849 0
	addq	$1, -336(%rbp)
	.loc 1 1850 0
	movl	$3, -224(%rbp)
	jmp	.L455

I've just noticed the following warning:

vasprintf.c:1846: warning: 'mpfr_rnd_t' is promoted to 'int' when passed
through '...'
vasprintf.c:1846: note: (so you should pass 'int' not 'mpfr_rnd_t' to 'va_arg')
vasprintf.c:1846: note: if this code is reached, the program will abort

I don't know if this is related, because the program dies with a SIGILL, not a
SIGABRT.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36484



[Bug target/36484] g++ generates code with illegal instruction on Pentium D / x86_64

2008-06-10 Thread vincent at vinc17 dot org


--- Comment #6 from vincent at vinc17 dot org  2008-06-10 12:37 ---
OK, but shouldn't g++ generate a SIGABRT instead of a illegal instruction? I've
never had thought that a compiler should generate an illegal instruction on
purpose, so making me think that the problem comes from the compiler.

Also, is it a problem specific to g++ and is it invalid C++? gcc -std=c99
-Wc++-compat -pedantic -Wextra doesn't emit any warning about this code (but
it was said that -Wc++-compat was incomplete). If this is specific to g++, then
the SIGILL is not acceptable. Otherwise -Wc++-compat needs to be improved.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36484



[Bug target/36484] g++ generates code with illegal instruction on Pentium D / x86_64

2008-06-10 Thread vincent at vinc17 dot org


--- Comment #8 from vincent at vinc17 dot org  2008-06-10 14:02 ---
I agree about SIGSEGV. But what about abort()? Wouldn't this be cleaner? This
builtin trap is quite similar to a failed assertion (often used to avoid
undefined behavior), isn't it?


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36484



[Bug target/36484] g++ generates code with illegal instruction on Pentium D / x86_64

2008-06-10 Thread vincent at vinc17 dot org


--- Comment #10 from vincent at vinc17 dot org  2008-06-10 14:52 ---
(In reply to comment #9)
 Calling abort() doesn't work with free-standing environments.

OK, but how about using an illegal instruction with free-standing environments
and abort() with hosted ones? After all, the abort() way is documented in the
GCC manual (under __builtin_trap) and IMHO, abort() would provide a better QoI
for hosted environments.

Now, concerning the warning note "if this code is reached, the program will
abort": could "with __builtin_trap" be added so that the user could look at
the right place in the manual?
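
For reference, the .value 0x0b0f seen in the assembly above is the x86 ud2
opcode (bytes 0f 0b, matching valgrind's "unhandled instruction bytes: 0xF
0xB"), i.e. what __builtin_trap expands to on this target. A minimal sketch:

/* On x86/x86_64 with gcc/g++, this compiles to the ud2 instruction and
   raises SIGILL at run time, like the unreachable va_arg case above. */
void reached_invalid_code (void)
{
  __builtin_trap ();
}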


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36484



[Bug target/36484] g++ generates code with illegal instruction on Pentium D / x86_64

2008-06-10 Thread vincent at vinc17 dot org


--- Comment #11 from vincent at vinc17 dot org  2008-06-10 15:21 ---
Here's the testcase (I've never used va_list and so on myself, so I hope it is
correct; at least it shows the missing warning problem). With gcc -Wall
-std=c99 -Wc++-compat -pedantic -Wextra, I don't get any warning concerning the
incompatibility with C++.

#include <stdlib.h>
#include <stdarg.h>

typedef enum { ELEM = 17 } en_t;

void vafoo (int i, va_list ap1)
{
  en_t x;

  x = va_arg (ap1, en_t);
  if (x != ELEM)
exit (1);
}

void foo (int i, ...)
{
  va_list ap;

  va_start (ap, i);
  vafoo (i, ap);
  va_end (ap);
}

int main ()
{
  foo (0, ELEM);
  return 0;
}
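
For comparison, a sketch of the C++-compatible variant (my rewording, not part
of the report): since en_t is promoted to int when passed through the
ellipsis, it must be read back as int:

#include <stdlib.h>
#include <stdarg.h>

typedef enum { ELEM = 17 } en_t;

void vafoo (int i, va_list ap1)
{
  /* The enum argument was promoted to int by the variadic call,
     so retrieve it as int and convert back. */
  en_t x = (en_t) va_arg (ap1, int);
  if (x != ELEM)
    exit (1);
}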


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36484



[Bug middle-end/36296] wrong warning about potential uninitialized variable

2008-05-28 Thread vincent at vinc17 dot org


--- Comment #7 from vincent at vinc17 dot org  2008-05-28 08:18 ---
(In reply to comment #6)
 (In reply to comment #5)
  BTW, the i = i trick
 
 it only works in the initializer and not as a statement after the fact.

But in such a case, as i is not initialized yet, this may be undefined behavior
with some C implementations.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36296



[Bug middle-end/36296] wrong warning about potential uninitialized variable

2008-05-22 Thread vincent at vinc17 dot org


--- Comment #2 from vincent at vinc17 dot org  2008-05-22 08:34 ---
The severity should probably be changed to enhancement because gcc behaves as
documented (well, almost).

What can be done IMHO is:
1. Split the -Wuninitialized into two different warnings: one for which gcc
knows that the variable is uninitialized and one for which it cannot decide.
-Wuninitialized currently does both.
2. Provide an extension so that the user can tell gcc not to emit a warning for
some particular variable. This would sometimes be better than adding a dummy
initialization (which has its own drawbacks).

In the meantime, make the documentation better concerning -Wuninitialized:
change the first sentence "Warn if an automatic variable is used without first
being initialized [...]" to "Warn if an automatic variable *may be* used
without first being initialized" (though the behavior is detailed later).


-- 

vincent at vinc17 dot org changed:

   What|Removed |Added

 CC||vincent at vinc17 dot org


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36296



[Bug c/36299] New: spurious and undocumented warning with -Waddress for a == 0 when a is an array

2008-05-22 Thread vincent at vinc17 dot org
With -Waddress (implied by -Wall), I get the following warning when I use the
test a == 0 where a is an array: "the address of 'a' will never be NULL". This
behavior is undocumented and inconsistent (see below). Here's a testcase:

int main (void)
{
  char a[1], *b;
  b = a;
  if (a == 0)
return 1;
  else if (a == (void *) 0)
return 2;
  else if (b == 0)
return 3;
  else if (b == (void *) 0)
return 4;
  return 0;
}

gcc warns only for a == 0 (and it is OK to use 0 instead of (void *) 0
because it is a valid form of null pointer constant).

Moreover, this is very similar to code like
  if (1) ...
or the code given in bug 12963, for which gcc no longer emits warnings: indeed,
such correct and useful code typically occurs in macro expansions.
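
For instance, a hypothetical macro of this kind (illustrative only, not from
the report) triggers the warning as soon as its argument happens to be an
array:

#include <string.h>

/* Guard against a null pointer before using it. */
#define SAFE_LEN(s) ((s) == 0 ? 0 : strlen (s))

int main (void)
{
  char buf[8] = "abc";
  /* Valid, well-defined code, yet -Waddress warns:
     "the address of 'buf' will never be NULL" */
  return SAFE_LEN (buf) == 3 ? 0 : 1;
}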


-- 
   Summary: spurious and undocumented warning with -Waddress for a
== 0 when a is an array
   Product: gcc
   Version: 4.3.1
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: c
AssignedTo: unassigned at gcc dot gnu dot org
ReportedBy: vincent at vinc17 dot org


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36299



[Bug middle-end/36296] wrong warning about potential uninitialized variable

2008-05-22 Thread vincent at vinc17 dot org


--- Comment #4 from vincent at vinc17 dot org  2008-05-22 11:01 ---
(In reply to comment #3)
 A way to tell gcc a variable is not uninitialized is to perform
 self-initialization like
 
  int i = i;

This doesn't seem to be valid C code.

 this will cause no code generation but inhibits the warning.  Other compilers
 may warn about this construct of course.

Or worse, generate non-working code.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36296



[Bug middle-end/36296] wrong warning about potential uninitialized variable

2008-05-22 Thread vincent at vinc17 dot org


--- Comment #5 from vincent at vinc17 dot org  2008-05-22 11:23 ---
BTW, the i = i trick, which is guaranteed to be valid and no-op only *after* i
has been initialized doesn't avoid the warning in such a case. I don't know if
this would be a good feature (the main drawback I can see would be to miss
warnings when this is a result of macro expansion). For instance:

#include <assert.h>
int foo (int x)
{
  int y;
  assert (x == 0 || x == 1);
  if (x == 0)
y = 1;
  else if (x == 1)
y = 2;
  y = y;  /* to tell the compiler that y has been initialized */
  return y;
}


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=36296



[Bug c/28575] misleading __builtin_choose_expr documentation error

2008-04-24 Thread vincent at vinc17 dot org


--- Comment #3 from vincent at vinc17 dot org  2008-04-24 15:04 ---
Is there any reason why this hasn't been fixed yet? (The trunk still has the
error. And I'm asking this because there's only one word to change.)


-- 

vincent at vinc17 dot org changed:

   What|Removed |Added

 CC||vincent at vinc17 dot org


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=28575



[Bug preprocessor/31186] -I/usr/include not taken into account

2007-05-22 Thread vincent at vinc17 dot org


--- Comment #4 from vincent at vinc17 dot org  2007-05-22 22:50 ---
(In reply to comment #3)
 My recollection is that the special -I behavior is there because
 the system headers have special non-warning properties.
 This situation doesn't apply to -L.

But this introduces an inconsistency, with the effect that the version of the
header and the version of the library do not match.

 Generally speaking this is not a good idea.  Usually people *want* their
 environment to influence configure, and usually if configure overrides this
 it means difficult to fix problems on weirder systems.

But configure does override the user's environment for non-system directories
(and even system directories concerning -L). That's not logical.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=31186



[Bug preprocessor/31186] New: -I/usr/include not taken into account

2007-03-15 Thread vincent at vinc17 dot org
When C_INCLUDE_PATH is defined and -I/usr/include is used, /usr/include should
take precedence, but the examples below show that it is not taken into account.

vin:~ cat test.c
#include <mpfr.h>
vin:~ C_INCLUDE_PATH=/home/vlefevre/include gcc -E -I/usr/include test.c |
grep mpfr.h
# 1 "/home/vlefevre/include/mpfr.h" 1 3
[...]
vin:~ gcc -E -I/usr/include test.c | grep mpfr.h
# 1 "/usr/include/mpfr.h" 1 3 4
[...]
vin:~ C_INCLUDE_PATH=/home/vlefevre/include gcc -E
-I/usr/milip-local/stow/mpfr-2.2.0/mpfr/include test.c | grep mpfr.h
# 1 "/usr/milip-local/stow/mpfr-2.2.0/mpfr/include/mpfr.h" 1
[...]

The gcc man page says:

   -isystem dir
   Search dir for header files, after all directories specified
   by -I but before the standard system directories.  Mark it
   as a system directory, so that it gets the same special
   treatment as is applied to the standard system directories.

so that in the first case, the search path should be:

  /usr/include /home/vlefevre/include /usr/include

equivalent to:

  /usr/include /home/vlefevre/include

Note: this introduces an inconsistency when both -I/usr/include and -L/usr/lib
are used, since the header file is taken from C_INCLUDE_PATH and the library
file is taken from /usr/lib.


-- 
   Summary: -I/usr/include not taken into account
   Product: gcc
   Version: 4.1.2
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: preprocessor
AssignedTo: unassigned at gcc dot gnu dot org
ReportedBy: vincent at vinc17 dot org


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=31186



[Bug preprocessor/31186] -I/usr/include not taken into account

2007-03-15 Thread vincent at vinc17 dot org


--- Comment #2 from vincent at vinc17 dot org  2007-03-15 16:51 ---
(In reply to comment #1)
 I don't think this is a bug, you need to read the other part of the document
 which says if you supply -I DEFAULT_DIR, it is ignored.

OK, but this isn't very clear, as the description under -isystem says "*all*
directories specified by -I". I'd replace "all" by "non-ignored". The behavior
w.r.t. symbolic links to such directories should also be specified.

Now, this behavior, if it is intentional, leads to 2 questions:

1. Shouldn't -L have a similar behavior to ensure consistency between library
search paths and include search paths?

2. Software is often compiled with configure, make, make install. How can one
force the compiler to look in /usr/include and /usr/lib first (i.e. override
the user's environment)?


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=31186



[Bug target/30484] Miscompilation of remainder expressions on CPUs of the i386 family

2007-01-16 Thread vincent at vinc17 dot org


--- Comment #1 from vincent at vinc17 dot org  2007-01-16 22:03 ---
Is this specific to x86? On PowerPC (gcc 4.0.1 from Mac OS X), I get:

-2147483648 % -1 -> -2147483648

Ditto with:

#include <limits.h>
#include <stdio.h>

int main (void)
{
  volatile int i = INT_MIN, j = -1;
  printf ("%d\n", i % j);
  return 0;
}


-- 

vincent at vinc17 dot org changed:

   What|Removed |Added

 CC||vincent at vinc17 dot org


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30484



[Bug target/30484] Miscompilation of remainder expressions on CPUs of the i386 family

2007-01-16 Thread vincent at vinc17 dot org


--- Comment #2 from vincent at vinc17 dot org  2007-01-16 22:10 ---
-2147483648, this was on a G5, with gcc 4.0.1 under Mac OS X. On a G4 under
Linux, with gcc 4.1.2 prerelease (Debian), I get 2147483647.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30484



[Bug middle-end/29335] transcendental functions with constant arguments should be resolved at compile-time

2006-11-05 Thread vincent at vinc17 dot org


--- Comment #33 from vincent at vinc17 dot org  2006-11-05 23:27 ---
(In reply to comment #32)
 (In reply to comment #31)
  (In reply to comment #30)
  So, I don't think a mpfr_signgam alone would really be useful. So, I think 
  that
  choice 2 would be better.
 
 Okay, sounds fine.  Would this make it into 2.2.1 or 2.3?

For compatibility reasons (i.e. the 2.2.x versions must have the same
interface), this can only be in 2.3.0.

 And do you have any very rough timeframe for each release so I can plan
 accordingly for gcc?

A pre-release of 2.2.1 should be there soon; there are still bugs being fixed
(they will be ported to the 2.2 branch once this is complete).

I don't know about 2.3.0; probably in a few months, because there currently
aren't many differences from the 2.2 branch.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29335



[Bug middle-end/29335] transcendental functions with constant arguments should be resolved at compile-time

2006-11-02 Thread vincent at vinc17 dot org


--- Comment #31 from vincent at vinc17 dot org  2006-11-02 15:57 ---
(In reply to comment #30)

So, I don't think a mpfr_signgam alone would really be useful. So, I think that
choice 2 would be better.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29335



[Bug middle-end/29335] transcendental functions with constant arguments should be resolved at compile-time

2006-10-31 Thread vincent at vinc17 dot org


--- Comment #26 from vincent at vinc17 dot org  2006-10-31 09:54 ---
(In reply to comment #25)
 As I think about it more, I'm leaning toward having a new function 
 mpfr_lgamma.
  This is because if we want this mpfr function to mimic the behavior of 
 lgamma,
 we need some mechanism to retrieve the value of signgam.  So maybe the
 interface you suggested at the bottom of this link would be best where we
 retrieve an int* from mpfr_lgamma to determine signgam:
 http://sympa.loria.fr/wwsympa/arc/mpfr/2006-10/msg00033.html

Yes, it's true that it is useful to have this value. But determining it
separately is quite easy, without taking noticeable additional time on
average.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29335



[Bug middle-end/29335] transcendental functions with constant arguments should be resolved at compile-time

2006-10-31 Thread vincent at vinc17 dot org


--- Comment #28 from vincent at vinc17 dot org  2006-10-31 22:15 ---
(In reply to comment #27)
 It's likely that I'll end up doing it, so would you please tell me how?

According to the C rationale (I haven't checked), the sign of gamma(x) is -1
iff x < 0 && remainder(floor(x), 2) != 0. But if x is a non-positive integer,
the sign of gamma(x) isn't defined. Handle these cases first.

The test x < 0 is easy to do. In MPFR, you can compute floor(x) (or trunc(x))
with the precision min(PREC(x),max(EXP(x),MPFR_PREC_MIN)), but then there's no
direct function to decide whether the result is even or odd (I thought we added
this, but this isn't the case). The solution can be to divide x by 2 (this is
exact, except in case of underflow) and call mpfr_frac directly. If the result
is between -0.5 and 0, then gamma(x) is negative. If the result is between -1
and -0.5, then gamma(x) is positive. So, a 2-bit precision for mpfr_frac should
be sufficient (as -0.5 is representable in this precision), but choose a
directed rounding (not GMP_RNDN) for that. Then you can just do a comparison
with -0.5; the case of equality with -0.5 depends on the chosen rounding (if
you obtain -0.5, then it is an inexact result since x is not an integer). For
instance, if you choose GMP_RNDZ, then a result > -0.5 means that gamma(x) is
negative, and a result <= -0.5 means that gamma(x) is positive.
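
A minimal sketch of this recipe (my own code against the MPFR 2.x API, not
MPFR's actual implementation; it assumes x < 0 and x not an integer):

#include <mpfr.h>

/* Return -1 if gamma(x) < 0, +1 if gamma(x) > 0. */
static int gamma_sign (mpfr_t x)
{
  mpfr_t h, f;
  int sign;

  mpfr_init2 (h, mpfr_get_prec (x));
  mpfr_div_2ui (h, x, 1, GMP_RNDZ);  /* h = x/2, exact (same precision) */
  mpfr_init2 (f, 2);                 /* 2 bits: -0.5 is representable */
  mpfr_frac (f, h, GMP_RNDZ);        /* frac(x/2), rounded toward zero */
  sign = mpfr_cmp_d (f, -0.5) > 0 ? -1 : +1;  /* > -0.5 => gamma(x) < 0 */
  mpfr_clear (h);
  mpfr_clear (f);
  return sign;
}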


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29335



[Bug middle-end/29335] transcendental functions with constant arguments should be resolved at compile-time

2006-10-28 Thread vincent at vinc17 dot org


--- Comment #18 from vincent at vinc17 dot org  2006-10-28 09:07 ---
(In reply to comment #17)
 Yes, I can reproduce the NaN.  In fact, any negative value 
 gives a NaN.

Not any negative value, but in lngamma.c:

  /* if x < 0 and -2k-1 <= x <= -2k, then lngamma(x) = NaN */

probably because the gamma value is negative. This is because MPFR defines
lngamma as log(gamma(x)) while the C standard defines it as log|gamma(x)|. I
wonder if this should be regarded as a bug or if a new function (say,
mpfr_lgamma) should be defined in MPFR (in which case, not before 2.3.0). Do
other standards (other languages) define such a function, either as
log(gamma(x)) or as log|gamma(x)|?

Also, warning! mpfr_erfc is incomplete for x >= 4096: there is an infinite
loop in the 2.2 branch. This problem is now detected in the trunk, and until
this is completely fixed, a NaN is returned with the MPFR erange flag set. This
should be in the 2.2 branch in a few days (and a preversion of MPFR 2.2.1 will
come several days after that).


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29335



[Bug middle-end/29335] transcendental functions with constant arguments should be resolved at compile-time

2006-10-28 Thread vincent at vinc17 dot org


--- Comment #20 from vincent at vinc17 dot org  2006-10-28 14:05 ---
(In reply to comment #19)
 The documentation in MPFR says:
  -- Function: int mpfr_lngamma (mpfr_t ROP, mpfr_t OP, mp_rnd_t RND)
  Set ROP to the value of the Gamma function on OP, and its
  logarithm respectively, rounded in the direction RND.  When OP is
  a negative integer, NaN is returned.
 
 It only talked about negative integers,

AFAIK, this was mainly about gamma(negative integer), but it is also true for
lngamma(negative integer). The point is that if gamma(x) is negative, then
lngamma(x) is NaN, since the logarithm of a negative value is NaN. That's why
the C standard defines lgamma as log|gamma(x)| instead of log(gamma(x)).

 and I glossed over the fact that it
 left out the absolute value that C does.  So it was pilot error, but I think a
 clarification would help.  Many times in the docs MPFR takes pains to follow
 the C99 standard, e.g. the inputs to atan2 or pow.  Where you deviate from it
 should also be noted.

I agree. And I think that none of the MPFR developers were aware of this
problem (I didn't notice the difference when I was looking for C functions that
were missing in MPFR). I posted a mail about that on the MPFR mailing-list.

 Or you could consider it a bug and fix it. :-)

I think this is the best solution, in particular because this would change only
NaN values.

 Anyway, I think I can hand wrap mpfr_log(mpfr_abs(mpfr_gamma)) myself right?

Probably not a good idea, because I think that mpfr_gamma may overflow, though
the final result may be in the double-precision range.

 Glad to hear a new version is coming out.  If you make a prerelease tarball
 available somewhere I'd like to try it with mainline GCC.

OK, I hope I won't forget to announce it in the gcc dev mailing-list.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29335



[Bug middle-end/29335] transcendental functions with constant arguments should be resolved at compile-time

2006-10-28 Thread vincent at vinc17 dot org


--- Comment #22 from vincent at vinc17 dot org  2006-10-28 16:58 ---
(In reply to comment #21)
 Since you mentioned C functions missing in MPFR, what are your plans for the
 Bessel functions?  I'd like to hook up builtins j0/j1/jn/y0/y1/yn.  Thanks.

They're in the TODO, but there are no plans yet to implement them.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29335



[Bug preprocessor/29588] /usr/local/include should not be in the default include path

2006-10-25 Thread vincent at vinc17 dot org


--- Comment #2 from vincent at vinc17 dot org  2006-10-25 14:00 ---
(In reply to comment #1)
 So this sounds like a bug in your installation.

This cannot be a problem with my installation in particular, as the bug
occurred on various Linux machines (only one of them is mine). However, it
could be due to bad defaults in Linux distributions.

FYI, I've opened a bug here:

  http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=395177

against libc6 (there could be a fix there), but perhaps ld should be fixed too,
as the bug occurs whether -static is given or not.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29588



[Bug preprocessor/29588] New: /usr/local/include should not be in the default include path

2006-10-24 Thread vincent at vinc17 dot org
Because /usr/local/include is in the default include path, the include path and
the library path are not consistent. The consequence is that (unless the user
has modified the search paths with environment variables or switches) when some
version of a library is installed in /usr (e.g., provided by the system) and
another version of the library is installed in /usr/local (e.g., installed by
the admin with configure, make, make install), the header file will be taken
from /usr/local/include and the library will be taken from /usr/lib, but they
do not correspond to the same version.

For instance, this problem can be seen when GMP 4.1.4 is installed in /usr (as
in Debian/stable) and the user installs GMP 4.2.1 (the latest version) in
/usr/local (with configure, make, make install), as the gmp.h from GMP 4.2.1 is
not compatible with the GMP library version 4.1.4.

In short, gcc should make sure that include and library search paths are
consistent *by default*. If the user wants a different search path, he can
still modify C_INCLUDE_PATH, LIBRARY_PATH and LD_LIBRARY_PATH (for instance)
altogether.


-- 
   Summary: /usr/local/include should not be in the default include
path
   Product: gcc
   Version: 4.1.2
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: preprocessor
AssignedTo: unassigned at gcc dot gnu dot org
ReportedBy: vincent at vinc17 dot org
  GCC host triplet: i686-pc-linux-gnu
GCC target triplet: i486-linux-gnu


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29588



[Bug other/29405] GCC should include latest GMP/MPFR sources and always build libgmp.a/libmpfr.a

2006-10-10 Thread vincent at vinc17 dot org


--- Comment #3 from vincent at vinc17 dot org  2006-10-10 13:53 ---
(In reply to comment #2)
 What's worrying me a bit is the versioning of MPFR.

Note that GMP is similar.

 Vincent, would it be possible that some version number is increased every
 time a patch is posted, so that the current version would be 2.2.16 or
 something like that?

There has been a very short discussion about that last year:
  http://sympa.loria.fr/wwsympa/arc/mpfr/2005-12/msg00049.html

The problem is that it is not that simple. First, for various reasons, not all
patches committed to the 2.2 branch are put on the 2.2.0 web page, so that the
future 2.2.1 version will not just be 2.2.0 + the patches provided on the web
page. We could provide another way to identify the patches, but as said in the
cited URL, this could be done only as of MPFR 2.3.0 (possibly except if one
decides just to add a macro to mpfr.h for this purpose). The main problem is
that one may want to apply some patches, but not others, or identify builds
from the Subversion repository... For instance, the macro could contain a group
of tags (e.g. the name of the patches and possibly some other information). But
how would this macro be used by gcc and other software? Would a group of tags
be useful, or too complex?
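
As a hedged illustration of how a version check can already be done at build
time with the MPFR_VERSION macros that mpfr.h provides (a patch-tag macro as
discussed above remains hypothetical):

#include <mpfr.h>

#if MPFR_VERSION < MPFR_VERSION_NUM(2, 2, 1)
# error "MPFR 2.2.1 or later is required"
#endif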


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=29405



[Bug c/28800] New: Incorrect warning ISO C forbids an empty source file

2006-08-22 Thread vincent at vinc17 dot org
I get the following warning:

dixsept:~ cat > tst.c
#define FOO
dixsept:~ gcc -pedantic -c tst.c
tst.c:1: warning: ISO C forbids an empty source file

But the source isn't empty (and AFAIK, ISO C doesn't forbid empty source
files). Perhaps gcc confuses the source file with the translation unit (which
is indeed empty after preprocessing), in which case the wording should be
changed.


-- 
   Summary: Incorrect warning ISO C forbids an empty source file
   Product: gcc
   Version: 4.1.2
Status: UNCONFIRMED
  Severity: normal
  Priority: P3
 Component: c
AssignedTo: unassigned at gcc dot gnu dot org
ReportedBy: vincent at vinc17 dot org


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=28800



[Bug middle-end/27116] [4.2 Regression] Incorrect integer division (wrong sign).

2006-06-08 Thread vincent at vinc17 dot org


--- Comment #17 from vincent at vinc17 dot org  2006-06-08 07:18 ---
The patch looks strange to me too: is there any reason why the optimization
would be correct under wrapping? i.e. I don't understand why -fwrapv can fix
the problem (as said in comment #1).


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=27116



[Bug middle-end/21067] Excessive optimization of floating point expression

2006-05-21 Thread vincent at vinc17 dot org


--- Comment #5 from vincent at vinc17 dot org  2006-05-22 01:08 ---
IMHO, -frounding-math should be the default, unless -ffast-math is given.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21067



[Bug c/27116] [4.2 Regression] Incorrect integer division (wrong sign).

2006-04-11 Thread vincent at vinc17 dot org


--- Comment #3 from vincent at vinc17 dot org  2006-04-11 15:16 ---
(In reply to comment #2)
 which is incorrect since the input domain is not symmetric wrt 0.

I disagree. Could you give an explicit example?


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=27116



[Bug c/27116] [4.2 Regression] Incorrect integer division (wrong sign).

2006-04-11 Thread vincent at vinc17 dot org


--- Comment #5 from vincent at vinc17 dot org  2006-04-11 15:46 ---
(In reply to comment #4)
 I mean the middle-end probably does some interesting foldings of 
 -2147483647L
 - 1L as the result -0x80000000 has the overflow flag set.

The bug also occurs with: (long) -2147483648LL.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=27116



[Bug c/27116] [4.2 Regression] Incorrect integer division (wrong sign).

2006-04-11 Thread vincent at vinc17 dot org


--- Comment #6 from vincent at vinc17 dot org  2006-04-11 15:50 ---
BTW, concerning the overflow flag, I think it comes from the sign cancellation:
the long constant -2147483648 is replaced its opposite, but the opposite is not
representable in a long, hence the overflow.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=27116



[Bug libgcj/16122] gij - Incorrect result due to computations in extended precision on x86

2006-02-14 Thread vincent at vinc17 dot org


--- Comment #5 from vincent at vinc17 dot org  2006-02-14 17:03 ---
(In reply to comment #4)
 Note however, that the true accurate value for d, calculated at infinite
 precision, is 1-(2^-16).  So, the absolute error for gcj is 1+(2^-16) and the
 absolute error with correct rounding is 1-(2^-16).  (I'm not surprised this
 hasn't been reported as a problem with any real applications!)

Note that some algorithms may be sensitive to this difference. I give an
example in http://www.vinc17.org/research/publi.en.html#Lef2005b ("The
Euclidean division implemented with a floating-point division and a floor");
the effect of extended precision is dealt with in Section 5.

A second problem is the reproducibility of the results across architectures.
Under probabilistic hypotheses, something like 1 case out of 2048 should be
rounded incorrectly (in the real world, this happens much less often).

 It might be worth setting the floating-point precision of gcj to double, but
 that would only fix the double-precision case, and I presume we'd still have
 the same double rounding problem for floats.  

Yes, however doubles are nowadays used much more often than floats, IMHO. I
think that fixing the problem for the doubles would be sufficient (as it is
probably too difficult to do better), though not perfect.

 And in any case, I do not know if libcalls would be affected by being entered
 with the FPU in round-to-double mode.  We might end up breaking things.

The only glibc function for which problems have been noticed is pow in corner
cases. See http://sources.redhat.com/bugzilla/show_bug.cgi?id=706. And it is
also inaccurate when the processor is configured in extended precision; so in
any case, users shouldn't rely on it. I'd be interested in other cases, if any.

More information here: http://www.vinc17.org/research/extended.en.html


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=16122



[Bug middle-end/21067] Excessive optimization of floating point expression

2005-06-15 Thread vincent at vinc17 dot org

--- Additional Comments From vincent at vinc17 dot org  2005-06-15 16:42 ---
Even without fenv.h, the function could be part of a library that is called in
a directed rounding mode.

And one can change the rounding mode via a standard glibc function; there is
no need for a pragma.

-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21067


[Bug middle-end/21067] Excessive optimization of floating point expression

2005-06-15 Thread vincent at vinc17 dot org


-- 
   What|Removed |Added

 CC||vincent at vinc17 dot org


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21067


[Bug middle-end/21032] GCC 3.4.3 wrongly reorders floating-point operations

2005-06-15 Thread vincent at vinc17 dot org

--- Additional Comments From vincent at vinc17 dot org  2005-06-15 16:49 ---
I think that this is just bug 323 (which is a real bug, not invalid). Version
3.4 added other regressions related to this bug (e.g. when one has function
calls), and this is not specific to the negate operation.

-- 
   What|Removed |Added

 CC||vincent at vinc17 dot org


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21032


[Bug middle-end/21032] GCC 3.4.3 wrongly reorders floating-point operations

2005-06-15 Thread vincent at vinc17 dot org

--- Additional Comments From vincent at vinc17 dot org  2005-06-15 17:08 ---
Oops, forget my comment. There is a bug, but 5.1.2.3#13 / 6.3.1.5#2 / 6.3.1.8#2
is not related to it if gcc does reduce the precision (due to the volatile,
which in fact prevents bug 323 from occurring here, right?).

Well, if gcc assumes more or less that all the types have the same range and
precision when doing optimizations, then this could indeed be seen as bug 323.
It would be interesting to know how gcc deduced (wrongly) that it could do the
transformation concerning the negation.

-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21032


[Bug other/14708] description of -ffloat-store in gcc man page incorrect/inaccurate

2004-12-08 Thread vincent at vinc17 dot org

--- Additional Comments From vincent at vinc17 dot org  2004-12-08 15:13 ---
I'm wrong. gcc 3.4 (from Debian) still has this problem. So, -ffloat-store is
still needed for C compliance.

-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=14708


[Bug other/14708] description of -ffloat-store in gcc man page incorrect/inaccurate

2004-11-29 Thread vincent at vinc17 dot org

--- Additional Comments From vincent at vinc17 dot org  2004-11-29 15:35 ---
In Comment 5, I wrote:
 The real problem is that intermediate results in extended precision are not
converted back to double after a cast or an assignment; this is required by the
C standard, whether __STDC_IEC_559__ is defined or not.

This problem has been fixed in gcc 3.4. So, now I think that the -ffloat-store
option should no longer be used: if the user wants the result to be converted
to double precision, he can add a cast to double, which is more portable than
relying on -ffloat-store. Also, note that neither the cast nor the
-ffloat-store option solves the problem of double rounding as described here:

  http://www.vinc17.org/research/extended.en.html

IMHO the manual should discourage the use of -ffloat-store.
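
A minimal sketch of the suggested alternative (assuming gcc >= 3.4, where
assignments and casts correctly narrow away the excess precision):

/* Force the sum to be rounded to double precision even when the
   intermediate computation is done in x87 extended precision. */
double sum_as_double (double a, double b)
{
  return (double) (a + b);  /* the cast narrows to double (gcc >= 3.4) */
}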

-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=14708