Hi,

while lacking the deep insight into GCC internals most of you have, I'd
nevertheless like to ask you to be very prudent concerning the removal of
seemingly unnecessary RTL optimizations.
In contrast to 32-bit targets, on 8- and 16-bit targets the RTL
representation may look completely different from the corresponding tree
representation of the code:
In my opinion, now that the new tree optimizations exist, it might finally be
a good approach to let all the optimizations that can be done on the original
DI/SI/HI mode expressions happen at the tree level already. One could then
expand the DI/SI/HI mode expressions at RTL generation so that they refer
only to the processor's native word-length modes. The resulting RTL would
bear almost no resemblance to the corresponding tree representation.
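
To make this concrete, here is a minimal sketch (assuming an 8-bit
little-endian target; add32_lowered and the byte-array calling convention
are made up for illustration only). A single SImode addition in the source,

    unsigned long add32 (unsigned long a, unsigned long b)
    {
      return a + b;
    }

would after such an expansion effectively become four QImode additions with
explicit carry propagation:

    void add32_lowered (const unsigned char a[4], const unsigned char b[4],
                        unsigned char r[4])
    {
      unsigned int carry = 0;
      int i;

      for (i = 0; i < 4; i++)           /* byte 0 = least significant */
        {
          unsigned int sum = a[i] + b[i] + carry;
          r[i] = (unsigned char) sum;   /* keep the low 8 bits */
          carry = sum >> 8;             /* propagate the carry */
        }
    }

Hardly anything in this byte-wise form maps one-to-one back to the original
tree expression.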

For 8/16-bit targets and gcc3 it seems that it used to be necessary to do the
expansion/splitting extremely late, since optimizations on SI/HI mode
expressions required keeping SI/HI mode objects around at least until reload.
Now one could hopefully consider doing the splitting much earlier, and with
the help of the existing RTL optimizers one might be able to find many
additional optimization opportunities.
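
As a (again hypothetical) example of such an additional possibility:

    unsigned char low_byte_sum (unsigned long a, unsigned long b)
    {
      return (unsigned char) (a + b);
    }

If the addition is kept as one opaque SImode operation until reload, all four
byte additions end up being emitted. If it is instead split early into four
QImode adds connected by a carry chain, the three adds producing the upper
bytes feed nothing, and the existing RTL dead-code elimination could delete
them, leaving a single byte addition.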

I think it would be a pity if these optimizations could no longer be found
because the corresponding RTL optimizers had been removed.

Yours,

Björn
