Hi,

On 2017-12-22 11:00:54 -0500, Tom Lane wrote:
> Andres Freund <and...@anarazel.de> writes:
> > Use new overflow aware integer operations.
> 
> I just tried to compile manually on prairiedog for the first time since
> this went in, and noticed that it now produces a boatload of warnings
> like
> 
> int.c: In function 'int2pl':
> int.c:735: warning: 'result' may be used uninitialized in this function
> 
> I think it's entirely within its rights to complain, because
> the non-builtin code paths in int.h look like
> 
>       int32           res = (int32) a + (int32) b;
> 
>       if (res > PG_INT16_MAX || res < PG_INT16_MIN)
>               return true;
>       *result = (int16) res;
>       return false;
> 
> and so if an overflow occurs then *result won't get set.
> 
> I do not think it is reasonable for these functions to not set the
> output variable at all in the overflow case; it is not their job
> to opine on whether the caller may use the result.

I don't agree. Please note that the function's documentation
explicitly says "The content of *result is implementation defined in
case of overflow.".


> Furthermore, this is an incorrect emulation of the built-in functions;
> the gcc manual clearly says

>       These built-in functions promote the first two operands into
>       infinite precision signed type and perform addition on those
>       promoted operands. The result is then cast to the type the third
>       pointer argument points to and stored there. If the stored result
>       is equal to the infinite precision result, the built-in functions
>       return false, otherwise they return true.
> 
> So as coded, there is a difference of behavior depending on whether
> you have the built-in, and that cannot be anything but bad.

The problem is that other intrinsic implementations we might want to use
in the future, like Microsoft's, do *not* behave exactly that way. So I
think defining this as implementation defined is the right thing to do.
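
To illustrate the divergence: a wrapper roughly along these lines can
only promise a well-defined *result when it returns false, because the
two underlying code paths disagree about the overflow case (the GCC/Clang
builtin stores the truncated value, the fallback stores nothing). The
wrapper name here is hypothetical, not the actual int.h symbol:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical portable wrapper: both paths agree on the return value
 * and on *result when there is no overflow; on overflow, the builtin
 * stores the wrapped value while the fallback leaves *result alone.
 * Hence the overflow-case content must be implementation defined. */
static bool
add_s16_overflow(int16_t a, int16_t b, int16_t *result)
{
#if defined(__GNUC__) || defined(__clang__)
	return __builtin_add_overflow(a, b, result);
#else
	int32_t		res = (int32_t) a + (int32_t) b;

	if (res > INT16_MAX || res < INT16_MIN)
		return true;
	*result = (int16_t) res;
	return false;
#endif
}
```

Either path satisfies the documented contract; only code that peeks at
*result after a true return can observe the difference.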

Greetings,

Andres Freund
