On Mar 15, 2016 7:33 PM, "Keith Medcalf" <kmedcalf at dessus.com> wrote:
>
>
> On Tuesday, 15 March, 2016 07:46, James K Lowden wrote
> > To my way of thinking, SQLite's handling of giant integers per se
> > is an edge case.  Because such huge numbers don't normally arise, the
> > non-error path (inputs in bounds) almost always produces correct
> > results. The reason to have CAST raise a range error when the output
> > would be invalid is to guard against erroneous inputs creating spurious
> > outputs that, when used in further computation, produce inexplicable
> > results. It's better for the user and programmer to detect the error
> > early -- nearest the cause, where it can be diagnosed -- than to
> > produce a bogus answer and be forced into a manual recapitulation of the
> > processing.
>
> This would be met by returning NULL.  Any operation performed on NULL
> other than IS [NOT] NULL results in NULL, so this would carry through to
> further computations.
>
> For example, even the operations "select cast(pow(2,65) as integer)" and
> "select cast(-pow(2,65) as integer)" should return NULL rather than MAXINT
> and MININT respectively.
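
The clamping behavior under discussion is easy to reproduce. Here is a short sketch (mine, not from the thread) using Python's stdlib sqlite3 module; it substitutes a float literal larger than 2**63 for pow(2,65), since the pow() SQL function is not available in all SQLite builds:

```python
# Sketch: show SQLite's current out-of-range CAST behavior (clamping to
# MAXINT/MININT) and, for contrast, how NULL propagates through arithmetic.
import sqlite3

con = sqlite3.connect(":memory:")

# A REAL too large for a 64-bit signed integer is clamped, not NULLed:
hi, lo = con.execute(
    "SELECT CAST(1e19 AS INTEGER), CAST(-1e19 AS INTEGER)"
).fetchone()
print(hi)  # 9223372036854775807  (MAXINT)
print(lo)  # -9223372036854775808 (MININT)

# NULL, by contrast, propagates through further computation, which is the
# property that would flag the error downstream if CAST returned NULL:
(tainted,) = con.execute("SELECT NULL + 12345").fetchone()
print(tainted)  # None
```

Under the proposed change, the first query would yield two NULLs instead of the clamped extremes, and any expression built on those results would itself be NULL.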

The $64-bit question ;) is how much existing code might break if such a
change were made. One can argue that the existing implementation is broken,
but a lot of software has been written against its current behavior. What
happens to that software if such an improvement is made?

Please note I'm not arguing that nothing should change; I'm just asking the
question.

SDR
