On Mar 20, 2012, at 5:26 AM, Richard Sandiford wrote:
> So what I was trying to say was that if we remove the assert
> altogether, and allow CONST_DOUBLEs to be wider than 2 HWIs,
> we need to define what the "implicit" high-order HWIs of a
> CONST_DOUBLE are, just like we already do for CONST_INT.

The problem with your position is you are wrong.  Let me give you a concrete example:

typedef __attribute__((mode(OI))) signed int256_t;

int256_t foo() {
  return (int256_t)0x7000000000000000 << 10;
}

This fails the assert you want to move down.  This is _wrong_, wrong, wrong, 
wrong.  This is getting tiring.

This is perfectly well defined, with no change in any definition.  Would you 
like to try again?

> If we remove the assert altogether, it very much matters what is done by that 
> last "*vp" line.

> If Mike or anyone is up to doing that, then great.

I categorically object to moving it down, it is wrong, for all the same reasons 
the original is wrong.

> But if instead it's just a case of handling zero correctly, moving rather than
> removing the assert seems safer.

Sure, that fixes one testcase, and is monotonically better.  The problem is 
that it fixes less than 1% of the values for which we currently assert wrongly.  
What is the point of bug pushing?  Bug pushing is wrong.

> I'm obviously not explaining this well :-)

No, you are explaining your position well, it is just that your position is 
wrong.  Your position is you want to bug push, that position is wrong.  You 
can't win with me by having that position.  The only solution is to change your 
position.  You've done it once now, and you are moving in the right direction.  
You are now willing to remove the assert for 0, which you didn't want to do 
before.  Your next stop will be to remove the assert for 
(int256_t)0x7000000000000000 << 10.  I will keep pushing until you relent on 
that point.  Once you do, your position will have changed twice; mine, 
unchanged.  I will not relent after that step; I will merely select the next 
value and make you agree to it, then the next, and the next, and the next, 
until I succeed in moving you to my original position.

Now, you might be thinking that this check will protect against wrong code 
generation in the rest of the compiler, and that may be somewhat true.  A 
counterpoint would be:

  (int256_t)1 << 254

which would have been my next interesting value to get you to agree with me on, 
but the assert you point at doesn't prevent great badness from happening with 
this case.  Other parts of the compiler just suck, because people didn't care 
enough.  This I can change, but it will take time.  In my first patch, it will 
not be complete nor perfect.  If you want to reject all the patches until every 
single last test case works, well, that isn't usually what we do.  We usually 
approve each one as long as it is monotonically better; that's the standard by 
which to judge patches.

Is your position that, because the rest of the compiler isn't perfect, no 
progress can be made in fixing it until the rest of it is perfect?  If so, this 
is a very bad position to take.  I have time now to fix and submit work to make things 
bad position to take.  I have time now to fix and submit work to make things 
better.  After I have things working perfectly, I may have 0 time with which to 
do this.  I'll give you a concrete example which exactly parallels this work.  
There was a time when gcc didn't work on 64-bit targets.  I did the first 
64-bit port. It was done as a series of patches, one at a time, that pushed the 
compiler kicking and screaming in the right direction.  That work is the basis 
for all the 64-bit ports today; they are now less rare than they were when I 
first started.  gcc is reasonably good at supporting 64-bit ports.  What is 
different today?  Absolutely nothing, just N: instead of 64, it is 256, and life 
goes on.  All I can say is that if you have the mistaken notion that work to 
make OI work should be blocked because OI doesn't work, you are wrong.  So, my 
question is: do you have this position?
