Re: buggy optimization levels...

2003-08-03 Thread Jens Rehsack
On 01.08.2003 05:00, Erik Trulsson wrote:

On Thu, Jul 31, 2003 at 10:30:57PM -0400, Chuck Swiger wrote:
[...]

problem of compiling the system with cc -O2 resulting in a buggy kernel.  
If you determine that compiling with cc -O -fgcse results in failures, 
[...]

There is an open GCC bug report related to GCSE:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=11741
You could file another report with your own findings,
or ask the gcc people to address your problem when they
fix optimization/11741.
Jens



Re: buggy optimization levels...

2003-08-03 Thread Chuck Swiger
Erik Trulsson wrote:
On Sat, Aug 02, 2003 at 03:52:25PM -0400, Chuck Swiger wrote:
[ ... ]
That wasn't my real point anyway. I was trying to refute your statement
that "Even if the code contains a bug, cc -O and cc -O -fgcse
should produce the same results."
I claim that if the code has a bug that results in undefined behaviour
then the compiler is allowed to produce different results when invoked
with different optimization flags.
If the code being compiled has a bug that results in undefined behavior, the 
compiler is allowed to produce different results when invoked with different 
optimization flags.

While true, that doesn't refute my statement: what the compiler is allowed to 
do and what the compiler should do are required not to diverge for code that 
does not involve undefined behavior.

[ ... ]
Page 586 of _Compilers: Principles, Techniques, and Tools_ states:

First, a transformation must preserve the meaning of programs.  That is, an 
optimization must not change the output produced by a program for a given 
input, or cause an error, such as a division by zero, that was not present 
in the original program.  The influence of this criterion pervades this 
chapter; at all times we take the safe approach of missing an opportunity 
to apply a transformation rather than risk changing what the program does.

	-- Aho, Sethi & Ullman
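
For illustration (hypothetical code, not from this thread): hoisting the 
division out of the loop below, as if it were a loop invariant, would be 
exactly the kind of unsafe transformation the quote forbids, since it could 
introduce a division by zero the original program never executes:

    int sum_quotients(int x, int d, int n)
    {
        int i, sum = 0;

        /* When n == 0 the body never runs, so even d == 0 causes no
         * division by zero; hoisting (x / d) above the loop would cause
         * an error "not present in the original program". */
        for (i = 0; i < n; i++)
            sum += x / d;
        return sum;
    }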

Like your divide-by-zero example above, or this paragraph about semantics 
vs meaning, I'm not going to disagree if you want to state that the 
running time of a program is part of the behavior.  However, using the 
terms in such a fashion precludes you from understanding the intended 
meaning in this particular context.
I understand the intended meaning. I just don't agree completely with
your reasoning.   I would say that the compiler is not allowed to
*introduce* errors, or to change the meaning of *correct* programs.
So far, so good.

("Correct" here essentially meaning programs that do not invoke undefined
behaviour.)  For programs that do not have a defined 'meaning' the
compiler is free to do anything.
This is wrong.  The fact that integer divide-by-zero is not defined by ANSI C 
(C89, et al.) means the following program does not have well-defined behavior:

int main() {
  return 120 / 0;
}
...however, what happens if you remove the semicolon after the zero?

That results in a syntax error, and the compiler is _not_ free to do anything 
it likes in such a case: it is required to identify the syntax error in code 
which fails to parse by issuing an error message:

d.c: In function `main':
d.c:3: syntax error before `}'
There is more to the point above than just identifying syntax errors.  If 
you have 5 input source files (call them translation units), of which four are 
valid code, and the fifth contains an error resulting in undefined behavior, the 
compiler is required to compile four of the five source files in a well-defined 
fashion.

One can repeat that analysis for code within the fifth input source file: if you 
moved only the function containing the bug/undefined code to a sixth file, one 
would discover that the compiler will handle the rest of the code in that file 
in a well-defined fashion.

One could repeat the analysis yet again, using units known as basic blocks, 
which may either be a single intermediate code instruction (that's intermediate 
code within the AST, not the input source code), or a sequence of instructions 
terminated by a flow-of-control instruction such as a branch, a return from 
subroutine, a memory protection/barrier command, etc.
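
A minimal sketch of that isolation argument, using hypothetical file names 
(neither file is from this thread):

    /* good.c -- no undefined behavior here; the compiler must handle this
     * translation unit in a well-defined fashion regardless of what
     * bug.c contains. */
    int is_even(int n)
    {
        return n % 2 == 0;
    }

    /* bug.c -- the undefined behavior is confined to f(); any execution
     * path that never calls f() remains well-defined. */
    int f(void)
    {
        return 120 / 0;   /* undefined behavior: integer division by zero */
    }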

If a compiler was not allowed to change the result of running a program
having undefined behaviour when compiled with different optimization
flags, then this would preclude doing just about any optimization.
No, it would not.  Code optimization consists of transformations to the AST 
based on algebraic invariants, liveness analysis used by CSE and dead-code 
removal, loop invariants, and other techniques which are universal 
(platform-independent), as well as register allocation, peephole analysis, and 
other transformations to the target code which are platform-specific.
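
For instance (hypothetical code): both transformations below preserve the 
meaning of any well-defined program:

    int g(int a, int b)
    {
        /* common subexpression: (a * b) is written twice, but CSE lets
         * the compiler compute it once and reuse the value */
        int x = (a * b) + 1;
        int y = (a * b) + 2;

        /* dead code: 'unused' is never read, so dead-code removal may
         * delete the store and the addition entirely */
        int unused = a + b;

        return x + y;
    }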

What happens to source code with undefined semantics isn't particularly useful 
to the person writing the compiler: valid optimization techniques are required to 
not change the meaning of any possible well-defined input source code, so being 
able to behave differently in the case of undefined semantics doesn't help.

I am fairly certain that for *all* the specific optimizations available
in gcc it is possible to find *some* program that will give different
results depending on whether that optimization was used or not.
Are you claiming that every specific optimization available in gcc contains 
bugs?  :-)

If one were to accept your claim that the compiler should not perform
any optimizations that could change the behaviour of any program, then it
would not be able to do any optimization at all, which is clearly not a
desirable situation.
It is entirely possible to 

Re: buggy optimization levels...

2003-08-02 Thread Chuck Swiger
Erik Trulsson wrote:
On Thu, Jul 31, 2003 at 10:30:57PM -0400, Chuck Swiger wrote:
[ ... ]
I understand that figuring out why the kernel died can be hard, 
particularly if the failures aren't concise and completely reproducible, 
and thus tracing the problem back to making the right change to gcc to fix 
the optimization that caused the observed failure is also hard.
Note that it is not necessarily gcc which is at fault for such
failures. It may be a bug in gcc, but it may also be a bug in the code
being compiled that only shows up under higher
optimization levels.
The latter is probably somewhat more common actually.
Does the last comment mean that you can provide at least one example of code 
which behaves differently when compiled with cc -O versus cc -O2?

Otherwise, what does "more common" mean in the context of zero examples?

[ ... ]
...and makes it so that -O2, -O3, etc does not enable GCSE optimization.
But if the bug is not in gcc but in the code being compiled (and
only happens to show up when compiled with GCSE optimization) 
Even if the code contains a bug, cc -O and cc -O -fgcse should produce the 
same results.  Excluding certain well-defined exceptions (-ffast-math comes to 
mind), compiler optimizations like -fgcse are not allowed to change the meaning 
of compiled code, do we agree?

 ...such a patch would disable this optimization for correct code also
 even though it is not necessary there.
Such a patch would disable the optimization for all cases.

If there exists any lexically correct input source code (i.e., which parses 
validly) where compiling with -fgcse results in different behavior, that 
optimization is unsafe and should not be enabled by -O2 in any circumstance.

--
-Chuck


Re: buggy optimization levels...

2003-08-02 Thread Chuck Swiger
Kris Kennaway wrote:
[ ... ]
This is the trivial part (you don't even need to modify gcc, because
all the optimizations turned on by -Ofoo are also available as
individual -fblah options).
Indeed.  If you've forgotten, I quoted the section of the gcc source code which 
indicates which individual -fblah options are enabled at -O1, -O2, -O3.

As I've already said, once you have a
self-contained test-case that demonstrates that a particular gcc
optimization level generates broken code, the gcc people will fix it.
Yes, I hope and believe they would.  If you've also forgotten the origin of this 
thread, it was:

| The known bugs section of the GCC info documentation lists 5 issues; man
| gcc lists none.  Can someone provide a test case for a bug involving cc -O
| versus cc -O3 under FreeBSD 4-STABLE for the x86 architecture?
One might (reasonably and correctly) conclude that I was asking for examples of 
such test-cases.

--
-Chuck


Re: buggy optimization levels...

2003-08-02 Thread Erik Trulsson
On Sat, Aug 02, 2003 at 12:19:06PM -0400, Chuck Swiger wrote:
 Erik Trulsson wrote:
 On Thu, Jul 31, 2003 at 10:30:57PM -0400, Chuck Swiger wrote:
 [ ... ]
 I understand that figuring out why the kernel died can be hard, 
 particularly if the failures aren't concise and completely reproducible, 
 and thus tracing the problem back to making the right change to gcc to 
 fix the optimization that caused the observed failure is also hard.
 
 Note that it is not necessarily gcc which is at fault for such
 failures. It may be a bug in gcc, but it may also be a bug in the code
 being compiled that only shows up under higher
 optimization levels.
 The latter is probably somewhat more common actually.
 
 Does the last comment mean that you can provide at least one example of 
 code which behaves differently when compiled with cc -O versus cc -O2?
 
 Otherwise, what does "more common" mean in the context of zero examples?

If you want real world examples you can trawl through the mailing list
archives. I am sure you can find at least a few examples if you look
hard enough. (Searching through list archives is not so fun that I will
do it myself unless there is something that *I* want to know.)

A somewhat contrived example that behaves differently when compiled
with -O3 or when compiled with -O2 or lower optimization follows:

static int f(int a)
{
    return a/0;
}

int main(void)
{
    int x;

    x = f(5);
    return 0;
}

Compiling this with -O2 or lower and running the program will result
in the program crashing. Compiled with -O3 the program just exits
cleanly.  (FreeBSD 4.8-stable; gcc 2.95.4)
(The code compiles just fine without warnings in either case, even when
compiled with -ansi -pedantic -Wall.)

Since there is a bug (division by zero) in the program that invokes
undefined behaviour, either result is perfectly acceptable, and the
difference is not due to a bug in gcc, but due to a bug in my program.

(The reason for the different behaviour is that -O3 turns on inlining of
functions, and when the call to f() has been inlined gcc is able to
determine that the call has no side-effects that need to be preserved,
and since the result of the call is never used after being assigned to
x the whole line 'x = f(5);' can safely be removed.
When compiled with -O2 or lower the compiler is not able to determine
that the call to f() can be omitted and therefore f() will be called.)
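
In effect (an illustrative sketch, not actual gcc output), at -O3 main()
behaves as if it had been written:

    int main(void)
    {
        /* after f() is inlined, the division's result is unused and has
         * no side effect gcc must preserve, so 'x = f(5);' is removed
         * entirely and no division by zero is ever executed */
        return 0;
    }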

 
 [ ... ]
 ...and makes it so that -O2, -O3, etc does not enable GCSE optimization.
 
 But if the bug is not in gcc but in the code being compiled (and which
 only happens to show up when compiled with GCSE optimization) 
 
 Even if the code contains a bug, cc -O and cc -O -fgcse should produce 
 the same results.  Excluding certain well-defined exceptions (-ffast-math 
 comes to mind), compiler optimizations like -fgcse are not allowed to 
 change the meaning of compiled code, do we agree?

Not quite.  Compiler optimization flags (with a few exceptions like
-ffast-math) are not allowed to change the semantics of the compiled
code.  For buggy code that invokes undefined behaviour (division by
zero, accessing unallocated memory, etc.) there is no semantics to
preserve and therefore the compiled code may well produce different
results when compiled with different flags.
(Undefined behaviour in the context of the C standard means the
compiler is allowed to do whatever it damn well pleases, including, but
not limited to, doing what you wanted and expected it to do, formatting
your hard disk or making demons fly out of your nose.)


 
  ...such a patch would disable this optimization for correct code also
  even though it is not necessary there.
 
 Such a patch would disable the optimization for all cases.
 
 If there exists any lexically correct input source code (i.e., which parses 
 validly) where compiling with -fgcse results in different behavior, that 
 optimization is unsafe and should not be enabled by -O2 in any circumstance.

With that definition just about *all* optimizations would be unsafe.

(And optimization is actually *supposed* to give different behaviour. 
The running time of a program is part of its behaviour and
optimization is generally supposed to reduce the running time, thereby
giving different behaviour.)


-- 
Insert your favourite quote here.
Erik Trulsson
[EMAIL PROTECTED]


Re: buggy optimization levels...

2003-08-02 Thread Chuck Swiger
Erik Trulsson wrote:
[ ... ]
A somewhat contrived example that behaves differently when compiled
with -O3 or when compiled with -O2 or lower optimization follows:
static int f(int a)
{
    return a/0;
}

int main(void)
{
    int x;

    x = f(5);
    return 0;
}
Contrived, but interesting and useful nonetheless; thanks for the response.

[ ... ]
Even if the code contains a bug, cc -O and cc -O -fgcse should produce 
the same results.  Excluding certain well-defined exceptions (-ffast-math 
comes to mind), compiler optimizations like -fgcse are not allowed to 
change the meaning of compiled code, do we agree?
Not quite.  Compiler optimization flags (with a few exceptions like
-ffast-math) are not allowed to change the semantics of the compiled
code.
I really don't want to debate the use of meaning versus semantics.  :-)

For buggy code that invokes undefined behaviour (division by
zero, accessing unallocated memory, etc.) there is no semantics to
preserve and therefore the compiled code may well produce different
results when compiled with different flags.
C code that permits a divide-by-zero condition will result in a runtime error, 
but I disagree that this has no semantics to preserve.  If the code were using 
floating point, IEEE 754 defines semantics for divide-by-zero: one could have 
an 'Infinity' result.  But that's not available with ints; your code results 
in a SIGFPE being generated.
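
For instance, a minimal sketch of the contrast (hypothetical code, assuming 
IEEE 754 floating-point arithmetic):

    #include <stdio.h>

    int main(void)
    {
        double d = 1.0 / 0.0;   /* defined under IEEE 754: yields +Infinity */
        printf("%f\n", d);      /* prints "inf" on conforming systems */

        /* return 1 / 0; */     /* integer division by zero, by contrast,
                                 * is undefined behavior; on FreeBSD/x86
                                 * it typically raises SIGFPE */
        return 0;
    }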

(Undefined behaviour in the context of the C standard means the
compiler is allowed to do whatever it damn well pleases, including, but
not limited to, doing what you wanted and expected it to do, formatting
your hard disk or making demons fly out of your nose.)
Sure.  I see and acknowledge the validity of your point: it's possible for a 
programmer to write code which has different behavior depending on (say) 
-finline-functions.

However, that being said, the fact that the C standard says such-and-such gives 
undefined behavior does not preclude the implementation from defining the 
behavior for such-and-such.

If there exists any lexically correct input source code (i.e., which parses 
validly) where compiling with -fgcse results in different behavior, that 
optimization is unsafe and should not be enabled by -O2 in any circumstance.
With that definition just about *all* optimizations would be unsafe.

(And optimization is actually *supposed* to give different behaviour. 
The running time of a program is part of its behaviour and
optimization is generally supposed to reduce the running time, thereby
giving different behaviour.)
What you say is so obviously correct that one should instead conclude that my 
use of the term 'behavior' with regard to compiler optimizations has a more 
specific meaning.

Page 586 of _Compilers: Principles, Techniques, and Tools_ states:

First, a transformation must preserve the meaning of programs.  That is, an 
optimization must not change the output produced by a program for a given 
input, or cause an error, such as a division by zero, that was not present in 
the original program.  The influence of this criterion pervades this chapter; at 
all times we take the safe approach of missing an opportunity to apply a 
transformation rather than risk changing what the program does.

	-- Aho, Sethi & Ullman

Like your divide-by-zero example above, or this paragraph about semantics vs 
meaning, I'm not going to disagree if you want to state that the running time 
of a program is part of the behavior.  However, using the terms in such a 
fashion precludes you from understanding the intended meaning in this particular 
context.

--
-Chuck


Re: buggy optimization levels...

2003-08-02 Thread Erik Trulsson
On Sat, Aug 02, 2003 at 03:52:25PM -0400, Chuck Swiger wrote:
 Erik Trulsson wrote:
 [ ... ]
 A somewhat contrived example that behaves differently when compiled
 with -O3 or when compiled with -O2 or lower optimization follows:
 
 static int f(int a)
 {
     return a/0;
 }
 
 int main(void)
 {
     int x;
 
     x = f(5);
     return 0;
 }
 
 Contrived, but interesting and useful nonetheless; thanks for the response.

Real world examples tend to be a bit more complicated and difficult to
detect, but this was the best I could come up with on short notice, and
it should not be too different in kind from bugs that can actually occur.


 
 [ ... ]
 Even if the code contains a bug, cc -O and cc -O -fgcse should 
 produce the same results.  Excluding certain well-defined exceptions 
 (-ffast-math comes to mind), compiler optimizations like -fgcse are not 
 allowed to change the meaning of compiled code, do we agree?
 
 Not quite.  Compiler optimization flags (with a few exceptions like
 -ffast-math) are not allowed to change the semantics of the compiled
 code.
 
 I really don't want to debate the use of meaning versus semantics.  :-)

That wasn't my real point anyway. I was trying to refute your statement
that "Even if the code contains a bug, cc -O and cc -O -fgcse
should produce the same results."

I claim that if the code has a bug that results in undefined behaviour
then the compiler is allowed to produce different results when invoked
with different optimization flags.


 
 For buggy code that invokes undefined behaviour (division by
 zero, accessing unallocated memory, etc.) there is no semantics to
 preserve and therefore the compiled code may well produce different
 results when compiled with different flags.
 
 C code that permits a divide-by-zero condition will result in a runtime 
 error, but I disagree that this has no semantics to preserve.  If the 

It has no semantics to preserve. There is no guarantee that a
division-by-zero will result in a runtime error. The C standard defines
what the semantics (or meaning, if you prefer) of a C program is. For
code that executes an integer division-by-zero it is not defined what
the program should do. Therefore *any* result that occurs from running
the code is conforming to the standard.

If you believe differently please tell me what that program *should* do
when run. (Without making any assumptions about which compiler, OS or
hardware is being used.)

 code were using floating point, IEEE 754 defines semantics for 
 divide-by-zero: one could have an 'Infinity' result.  But that's not 
 available with ints; your code results in a SIGFPE being generated.

My code results in a SIGFPE when compiled with gcc on FreeBSD using a
certain set of flags. I am sure that if you try it with other
compilers, or on other operating systems you will see different
results.
(If you were to run it under AmigaOS, for example, you would probably
get a system crash with an error code indicating an integer
division-by-zero.  Other systems might well just ignore such a division.)

Programs that have undefined behaviour will very often behave
differently when compiled with different compilers, so the fact that
one compiler invoked with different flags will give different results
is not surprising.

 
 (Undefined behaviour in the context of the C standard means the
 compiler is allowed to do whatever it damn well pleases, including, but
 not limited to, doing what you wanted and expected it to do, formatting
 your hard disk or making demons fly out of your nose.)
 
 Sure.  I see and acknowledge the validity of your point: it's possible for 
 a programmer to write code which has different behavior depending on (say) 
 -finline-functions.
 
 However, that being said, the fact that the C standard says such-and-such 
 gives undefined behavior does not preclude the implementation from 
 defining the behavior for such-and-such.

Of course it doesn't preclude that, but few implementations actually
do define what the behaviour will be for such cases, and code
depending on such implementation-specific behaviour is highly
non-portable anyway.

 
 If there exists any lexically correct input source code (i.e., which parses 
 validly) where compiling with -fgcse results in different behavior, that 
 optimization is unsafe and should not be enabled by -O2 in any 
 circumstance.
 
 With that definition just about *all* optimizations would be unsafe.
 
 (And optimization is actually *supposed* to give different behaviour. 
 The running time of a program is part of its behaviour and
 optimization is generally supposed to reduce the running time, thereby
 giving different behaviour.)
 
 What you say is so obviously correct that one should instead conclude that 
 my use of the term 'behavior' with regard to compiler optimizations has a 
 more specific meaning.

Generally yes. (Of course people involved with hard real-time systems
will probably be very interested in how optimizations 

Re: buggy optimization levels...

2003-08-01 Thread LLeweLLyn Reese
Chuck Swiger [EMAIL PROTECTED] writes:

 Hi, all--
 
 The known bugs section of the GCC info documentation lists 5 issues;
 man gcc lists none.


You are looking in the 'wrong' place for 'known bugs'.  (Or the GCC
people aren't putting the info in the 'right' place. :-)

http://gcc.gnu.org/bugzilla/

is where to look for known bugs.




Re: buggy optimization levels...

2003-07-31 Thread Kris Kennaway
On Mon, Jul 14, 2003 at 05:37:45PM -0400, Chuck Swiger wrote:
 Hi, all--
 
 The known bugs section of the GCC info documentation lists 5 issues; man 
 gcc lists none.  Can someone provide a test case for a bug involving cc 
 -O versus cc -O3 under FreeBSD 4-STABLE for the x86 architecture?

Probably not, or it would have already been fixed.

The warning against building FreeBSD with optimization settings higher
than -O1 (== -O) exists because doing so often causes bugs that are
difficult to track down (e.g. some aspect of the kernel just doesn't
work properly).

Kris




Re: buggy optimization levels...

2003-07-31 Thread Chuck Swiger
Kris Kennaway wrote:
On Mon, Jul 14, 2003 at 05:37:45PM -0400, Chuck Swiger wrote:
The known bugs section of the GCC info documentation lists 5 issues; man 
gcc lists none.  Can someone provide a test case for a bug involving cc 
-O versus cc -O3 under FreeBSD 4-STABLE for the x86 architecture?
Probably not, or it would have already been fixed.
Hopefully so, as the compiler toolchain is important.  :-)

The warning against building FreeBSD with optimization settings higher
than -O1 (== -O) exists because doing so often causes bugs that are
difficult to track down (e.g. some aspect of the kernel just doesn't work properly).
OK.  Can the existence of such problems be confirmed reliably, say by regression 
testing?  /usr/src/contrib/gcc/toplev.c is clear enough about which specific 
optimizations are enabled at the different numeric levels:

  if (optimize >= 1)
{
  flag_defer_pop = 1;
  flag_thread_jumps = 1;
#ifdef DELAY_SLOTS
  flag_delayed_branch = 1;
#endif
#ifdef CAN_DEBUG_WITHOUT_FP
  flag_omit_frame_pointer = 1;
#endif
}
  if (optimize >= 2)
{
  flag_cse_follow_jumps = 1;
  flag_cse_skip_blocks = 1;
  flag_gcse = 1;
  flag_expensive_optimizations = 1;
  flag_strength_reduce = 1;
  flag_rerun_cse_after_loop = 1;
  flag_rerun_loop_opt = 1;
  flag_caller_saves = 1;
  flag_force_mem = 1;
#ifdef INSN_SCHEDULING
  flag_schedule_insns = 1;
  flag_schedule_insns_after_reload = 1;
#endif
  flag_regmove = 1;
}
  if (optimize >= 3)
{
  flag_inline_functions = 1;
}
Couldn't one compile with cc -O -finline-functions, and then iterate through 
-fcse-follow-jumps, -fgcse, etc., and see which optimizations are safe?
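
One could even script that iteration; here is a rough sketch in C (the flag 
list is abbreviated to a few of the -O2 flags above, and testprog.c is a 
hypothetical stand-in for whatever reproduces the failure):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* try one -f flag from the -O2 set at a time, on top of plain -O */
        const char *flags[] = {
            "-fcse-follow-jumps", "-fcse-skip-blocks", "-fgcse",
            "-fstrength-reduce", "-fcaller-saves", "-fregmove",
        };
        char cmd[256];
        size_t i;

        for (i = 0; i < sizeof flags / sizeof flags[0]; i++) {
            /* rebuild the test program with exactly one extra flag, run it */
            snprintf(cmd, sizeof cmd,
                     "cc -O %s -o testprog testprog.c && ./testprog",
                     flags[i]);
            printf("trying: %s\n", cmd);
            if (system(cmd) != 0)
                printf("  -> FAILED with %s\n", flags[i]);
        }
        return 0;
    }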

--
-Chuck


Re: buggy optimization levels...

2003-07-31 Thread Kris Kennaway
On Thu, Jul 31, 2003 at 09:34:17PM -0400, Chuck Swiger wrote:

 OK.  Can the existence of such problems be confirmed reliably, say by 
 regression testing?

The problem is in identifying precisely which piece of code is
failing.  A regression test is only useful if it concisely exercises a
specific set of tests.  As I said, in theory any optimization problems
can be tracked down to the failure of a certain piece of code, but in
something as large as the kernel it is not usually easy.

Kris




Re: buggy optimization levels...

2003-07-31 Thread Chuck Swiger
Kris Kennaway wrote:
On Thu, Jul 31, 2003 at 09:34:17PM -0400, Chuck Swiger wrote:
OK.  Can the existence of such problems be confirmed reliably, say by 
regression testing?
The problem is in identifying precisely which piece of code is
failing.  A regression test is only useful if it concisely exercises a
specific set of tests.  As I said, in theory any optimization problems
can be tracked down to the failure of a certain piece of code, but in
something as large as the kernel it is not usually easy.
Ah, my apologies-- I believe I see where the confusion lies.

I understand that figuring out why the kernel died can be hard, particularly if 
the failures aren't concise and completely reproducible, and thus tracing the 
problem back to making the right change to gcc to fix the optimization that 
caused the observed failure is also hard.

Fine.  However, you don't _need_ to identify the reason why the kernel died, or 
solve the bug in global common subexpression elimination to solve the problem of 
compiling the system with cc -O2 resulting in a buggy kernel.  If you 
determine that compiling with cc -O -fgcse results in failures, one does:

--- toplev.c_old	Thu Jul 31 22:23:22 2003
+++ toplev.c	Thu Jul 31 22:24:01 2003
@@ -4916,7 +4916,6 @@
 {
   flag_cse_follow_jumps = 1;
   flag_cse_skip_blocks = 1;
-  flag_gcse = 1;
   flag_expensive_optimizations = 1;
   flag_strength_reduce = 1;
   flag_rerun_cse_after_loop = 1;
...and makes it so that -O2, -O3, etc does not enable GCSE optimization.

--
-Chuck


Re: buggy optimization levels...

2003-07-31 Thread Erik Trulsson
On Thu, Jul 31, 2003 at 10:30:57PM -0400, Chuck Swiger wrote:
 Kris Kennaway wrote:
 On Thu, Jul 31, 2003 at 09:34:17PM -0400, Chuck Swiger wrote:
 OK.  Can the existence of such problems be confirmed reliably, say by 
 regression testing?
 
 The problem is in identifying precisely which piece of code is
 failing.  A regression test is only useful if it concisely exercises a
 specific set of tests.  As I said, in theory any optimization problems
 can be tracked down to the failure of a certain piece of code, but in
 something as large as the kernel it is not usually easy.
 
 Ah, my apologies-- I believe I see where the confusion lies.
 
 I understand that figuring out why the kernel died can be hard, 
 particularly if the failures aren't concise and completely reproducible, 
 and thus tracing the problem back to making the right change to gcc to fix 
 the optimization that caused the observed failure is also hard.

Note that it is not necessarily gcc which is at fault for such
failures. It may be a bug in gcc, but it may also be a bug in the code
being compiled that only shows up under higher
optimization levels.
The latter is probably somewhat more common actually.


 
 Fine.  However, you don't _need_ to identify the reason why the kernel 
 died, or solve the bug in global common subexpression elimination to solve the 
 problem of compiling the system with cc -O2 resulting in a buggy kernel.  
 If you determine that compiling with cc -O -fgcse results in failures, 
 one does:
 
 --- toplev.c_old	Thu Jul 31 22:23:22 2003
 +++ toplev.c	Thu Jul 31 22:24:01 2003
 @@ -4916,7 +4916,6 @@
  {
flag_cse_follow_jumps = 1;
flag_cse_skip_blocks = 1;
 -  flag_gcse = 1;
flag_expensive_optimizations = 1;
flag_strength_reduce = 1;
flag_rerun_cse_after_loop = 1;
 
 ...and makes it so that -O2, -O3, etc does not enable GCSE optimization.

But if the bug is not in gcc but in the code being compiled (and only
happens to show up when compiled with GCSE optimization), such a
patch would disable this optimization for correct code as well, even though
it is not necessary there.



-- 
Insert your favourite quote here.
Erik Trulsson
[EMAIL PROTECTED]


Re: buggy optimization levels...

2003-07-31 Thread Kris Kennaway
On Thu, Jul 31, 2003 at 10:30:57PM -0400, Chuck Swiger wrote:

 Fine.  However, you don't _need_ to identify the reason why the kernel 
 died, or solve the bug in global common expression elimination to solve the 
 problem of compiling the system with cc -O2 resulting in a buggy kernel.  
 If you determine that compiling with cc -O -fgcse results in failures, 
 one does:

This is the trivial part (you don't even need to modify gcc, because
all the optimizations turned on by -Ofoo are also available as
individual -fblah options).  As I've already said, once you have a
self-contained test-case that demonstrates that a particular gcc
optimization level generates broken code, the gcc people will fix it.

Kris




Re: buggy optimization levels...

2003-07-14 Thread LLeweLLyn Reese
Chuck Swiger [EMAIL PROTECTED] writes:

 Hi, all--
 
 The known bugs section of the GCC info documentation lists 5 issues;
 man gcc lists none.  Can someone provide a test case for a bug
 involving cc -O versus cc -O3 under FreeBSD 4-STABLE for the x86
 architecture?

You could probably find a few by searching the bug database at:
http://gcc.gnu.org/bugzilla/

 
 What is the preferred solution?  The Dragon book and other compiler
 references have a definition of safe versus unsafe optimizations; is
 the problem that -O3 enables something unsafe?

I believe that none of -O[0-3s] are intended to enable unsafe
optimizations. (There are some optimization flags, which are *not*
enabled by any -O opt, like -ffast-math, which are documented to
be unsafe in some fashion or another; see
http://gcc.gnu.org/onlinedocs/gcc-3.2.3/gcc/Optimize-Options.html#Optimize%20Options)

 Who is responsible
 (FreeBSD, GNU compiler team, others?) for changing the compiler
 defaults so that -Ox will not produce known-invalid results, for any x?
[snip]

If gcc produces invalid results or bad code at any optimization level,
I think you should report it as a bug according to the
instructions at http://gcc.gnu.org/bugs.html