On 16 September 2015 at 09:32, Tom Lane <t...@sss.pgh.pa.us> wrote:

> Dean Rasheed <dean.a.rash...@gmail.com> writes:
> > ... For example, exp() works for inputs up to 6000. However, if you
> > compute exp(5999.999) the answer is truly huge -- probably only of
> > academic interest to anyone. With HEAD, exp(5999.999) produces a
> > number with 2609 significant digits in just 1.5ms (on my ageing
> > desktop box). However, only the first 9 digits returned are correct.
> > The other 2600 digits are pure noise. With my patch, all 2609 digits
> > are correct (confirmed using bc), but it takes 27ms to compute, making
> > it 18x slower.
>
> > AFAICT, this kind of slowdown only happens in cases like this where a
> > very large number of digits are being returned. It's not obvious what
> > we should be doing in cases like this. Is a performance reduction like
> > that acceptable to generate the correct answer? Or should we try to
> > produce a more approximate result more quickly, and where do we draw
> > the line?
>
> FWIW, in that particular example I'd happily take the 27ms time to get
> the more accurate answer.  If it were 270ms, maybe not.  I think my
> initial reaction to this patch is "are there any cases where it makes
> things 100x slower ... especially for non-outrageous inputs?"  If not,
> sure, let's go for more accuracy.
>
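The accuracy problem Dean describes above is easy to reproduce outside Postgres. As a minimal sketch (not Dean's patch, just an illustration of the same idea of cross-checking digits at different working precisions), Python's stdlib `decimal` module computes a correctly rounded exp() at a chosen precision, so a low-precision result can be validated against a higher-precision one much as Dean validated against bc:

```python
from decimal import Decimal, getcontext, localcontext

def exp_digits(x: str, prec: int) -> str:
    """Compute exp(x) at the given decimal working precision.

    decimal's exp() is documented to be correctly rounded, so the
    result is trustworthy to all `prec` digits (unlike a naive
    fixed-precision Taylor evaluation, which loses trailing digits
    for large arguments).
    """
    with localcontext() as ctx:
        ctx.prec = prec
        return str(Decimal(x).exp())

# Same input as in the discussion: exp(5999.999), a number of
# roughly 2606 digits before the decimal point.
lo = exp_digits('5999.999', 30)
hi = exp_digits('5999.999', 50)

# The 30-digit result agrees with the 50-digit one to ~30 digits;
# any scheme that returns more digits than its working precision
# supports would fail a check like this.
print(lo)
print(hi)
```

The relative-error check here is the same kind of test one would apply to the server's numeric exp(): request the value twice at different precisions and confirm the shorter result is a rounding of the longer one.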

Agreed.

Hopefully things can also be made faster in the common case, when fewer significant digits are requested.

I figure this is important enough to trigger a maintenance release, but since we already agreed when the next one will be, I don't see that we need to do it any quicker, do we?

Well done, Dean, for the excellent research.

-- 
Simon Riggs                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services