On 4/8/2013 5:46 PM, Kenneth Zadeck wrote:
In some sense you have to think in terms of three worlds:
1) what you call "compile-time static expressions" is one world, which in
gcc is almost always handled by the front ends.
2) the second world is what the optimizers can do. These are not
compile-time static expressions, because the front end has already dealt
with those.
3) there is run time.

My view on this is that optimization is just doing what is normally done
at run time, but doing it early. From that point of view, we are, if not
required, morally obligated to do things in the same way that the
hardware would have done them. This is why I am so against richi on
wanting to do infinite precision. By the time the middle or the back
end sees the representation, all of the things that are allowed to be
done in infinite precision have already been done. What we are left
with is a (mostly) strongly typed language that pretty much says exactly
what must be done. Anything that we do in the middle end or back ends in
infinite precision will only surprise the programmer and make them want
to use llvm.
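
To make the quoted point concrete, here is a minimal, self-contained C
sketch (illustration only, not GCC code; fold_add_u32 is a made-up helper)
of the difference between folding an operation in the target's precision
and evaluating it in infinite precision:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical fold of an unsigned 32-bit addition.  Doing the
   arithmetic in the target's precision matches what the hardware's
   32-bit add would produce at run time. */
static uint32_t fold_add_u32(uint32_t a, uint32_t b)
{
    return a + b;   /* wraps modulo 2^32, like the hardware */
}

int main(void)
{
    uint32_t a = 0xffffffffu, b = 1u;

    /* Target precision: 0xffffffff + 1 wraps to 0. */
    printf("target-precision fold:    %" PRIu32 "\n", fold_add_u32(a, b));

    /* Infinite precision: the exact sum is 0x100000000, a value no
       32-bit operation at run time would ever produce. */
    printf("infinite-precision value: %llu\n", (unsigned long long)a + b);

    return 0;
}

A middle end that folded the addition to 0x100000000 would be computing a
value the generated code could never produce, which is the surprise being
described above.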

That may be so in C; in Ada it would be perfectly reasonable to use
infinite precision for intermediate results in some cases, since the
language standard specifically encourages this approach.
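
For the Ada case, a C analogue (illustration only, not Ada code and not
how GNAT actually implements it) of evaluating an intermediate result in
a wider precision than the operand type:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int32_t a = 100000, b = 100000, c = 100000;

    /* A 32-bit intermediate for a * b would overflow (undefined
       behaviour in C, Constraint_Error territory in Ada), even though
       the final result of a * b / c is back in range. */

    /* With a wider intermediate the exact product fits, and the
       mathematically expected result comes out. */
    int64_t wide = (int64_t)a * b / c;
    printf("wide intermediate result: %lld\n", (long long)wide);  /* 100000 */

    return 0;
}

This is the kind of latitude for intermediate results referred to above,
which, on the view quoted earlier, is work for the front end rather than
the middle end.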
