Dean Rasheed <dean.a.rash...@gmail.com> writes:
> ... For example, exp() works for inputs up to 6000. However, if you
> compute exp(5999.999) the answer is truly huge -- probably only of
> academic interest to anyone. With HEAD, exp(5999.999) produces a
> number with 2609 significant digits in just 1.5ms (on my ageing
> desktop box). However, only the first 9 digits returned are correct.
> The other 2600 digits are pure noise. With my patch, all 2609 digits
> are correct (confirmed using bc), but it takes 27ms to compute, making
> it 18x slower.
> AFAICT, this kind of slowdown only happens in cases like this where a
> very large number of digits are being returned. It's not obvious what
> we should be doing in cases like this. Is a performance reduction like
> that acceptable to generate the correct answer? Or should we try to
> produce a more approximate result more quickly, and where do we draw
> the line?

FWIW, in that particular example I'd happily take the 27ms time to get
the more accurate answer. If it were 270ms, maybe not.

I think my initial reaction to this patch is "are there any cases where
it makes things 100x slower ... especially for non-outrageous inputs?"
If not, sure, let's go for more accuracy.

			regards, tom lane
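
(For anyone wanting to reproduce the comparison Dean describes, a rough
sketch -- the input 5999.999 and the timings above are from his report,
while the exact psql and bc invocations here are only illustrative, not
taken from his patch:

    -- in psql, once against HEAD and once with the patch applied
    regression=# \timing on
    regression=# SELECT exp(5999.999::numeric);

    # cross-check using e(x) from bc's -l math library
    $ echo "scale=2700; e(5999.999)" | bc -l

Comparing the leading digits of the two outputs shows how many of the
returned digits are actually correct.)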