David Honig wrote:
> I was thinking more in terms of arrays
>
> memset( arr, 0, sizeof(arr) ); // zero
> unsigned int v=1;
> for (int i=0; i< arr_size; i++) v += arr[i]; // check
> if ( v>0 && v<2 ) // test
> sanity();
> else
> insanity();
>
> But I suppose that if compilers can be arbitrarily 'clever'
> (eg about memset() and the additive properties of zero)
> you'll have to check the assembly code...
>
> Perhaps
>
> for (int i=0; i< arr_size; i++) arr[i]=i; // "zero"
> unsigned int v=0;
> for (int i=0; i< arr_size; i++) v += arr[i]; // check
> if ( v != expected_value( arr_size ) ) insanity();
> else sanity();
>
> is better?  (In the sense that this code will be treated
> as worth-keeping.)
In the abstract, probably not - there is always a chance that *any* given
construct will be optimised away by some future compiler that is arbitrarily
"smart" about the code, yet not smart enough to realise you are going to
extreme effort to stop the optimiser removing a clear.
About the only thing I can think of that could not be optimised away is a
dynamic library call. If a library routine such as "int clearthismem(void
*pointer, long lengthinbytes);" is used, the optimising compiler can't look
inside its code to see exactly what it does with the passed pointer, so it
can't optimise the call away - and the compiler that built the "blank the
passed memory to zero" routine can't know that the memory is never read again
once the subroutine exits. (In fact, such a call could also serve as a
convenient way to force-initialise an array or string to zeros when execution
time isn't an issue.)
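As a rough sketch of that idea - clearthismem() and its signature are just the
hypothetical names used above, not any real library's API - the clearing code
might live in its own shared object, so the caller's compiler never sees its
body:

    /* clearthismem.c - build as a separate shared library, e.g.
     *   cc -O2 -fPIC -shared -o libclearthismem.so clearthismem.c
     * The caller is compiled without this body in view, so its
     * optimiser cannot prove the stores are dead and drop them. */
    int clearthismem(void *pointer, long lengthinbytes)
    {
        volatile unsigned char *p = pointer; /* volatile writes as extra insurance */
        long i;

        for (i = 0; i < lengthinbytes; i++)
            p[i] = 0;
        return 0;
    }

    /* caller.c - link against libclearthismem.so */
    extern int clearthismem(void *pointer, long lengthinbytes);

    void use_and_wipe_key(void)
    {
        char key[32];
        /* ... use key for something sensitive ... */
        clearthismem(key, (long) sizeof key); /* not elidable without
                                                 whole-program knowledge */
    }

A sufficiently aggressive toolchain doing link-time optimisation could still
peer across the library boundary, which is why the volatile qualifier is kept
as a belt-and-braces measure.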
