On 06/12/2013 07:51 PM, Andres Freund wrote:
> On 2013-06-12 19:47:46 +0800, Craig Ringer wrote:
>> On 06/12/2013 05:55 PM, Greg Stark wrote:
>>> On Wed, Jun 12, 2013 at 12:56 AM, Craig Ringer <cr...@2ndquadrant.com> 
>>> wrote:
>>>> The main thing I'm wondering is how/if to handle backward compatibility 
>>>> with
>>>> the existing NUMERIC and its DECIMAL alias
>>> If it were 100% functionally equivalent you could just hide the
>>> implementation internally. Have a bit that indicates which
>>> representation was stored and call the right function depending.
>> That's what I was originally wondering about, but as Tom pointed out it
>> won't work. We'd still need to handle scale and precision greater than
>> that offered by _Decimal128 and wouldn't know in advance how much
>> scale/precision they wanted to preserve. So we'd land up upcasting
>> everything to NUMERIC whenever we did anything with it anyway, only to
>> then convert it back into the appropriate fixed size decimal type for
>> storage.
> Well, you can limit the "upcasting" to the cases where we would exceed
> the precision.
How do you determine that for, say, DECIMAL '4'/ DECIMAL '3'? Or
sqrt(DECIMAL '2') ?

... actually, in all those cases Pg currently limits the result to an
arbitrary 17 significant digits anyway. Interesting. That's not true for
multiplication, though:

regress=> select (NUMERIC '4' / NUMERIC '3') * NUMERIC
'3.141592653589793238462643383279502884197169';
                           ?column?                          
--------------------------------------------------------------
 4.1887902047863908798971027247128958968414458906832371934277
(1 row)


so simple operations like:

SELECT (DECIMAL '4' / DECIMAL '3') * (DECIMAL '1.11');

would exceed the precision currently provided and be upcast. We'd
quickly land up getting to full "NUMERIC" internally no matter what type
we started with.
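To illustrate the point outside of Pg: Python's decimal module implements
the same IEEE 754-2008 decimal arithmetic with a configurable precision, so
it's a convenient way to sketch when a fixed-width decimal result would have
to round, i.e. when we'd be forced to upcast to arbitrary-precision NUMERIC.
(Illustration only, nothing to do with the eventual C implementation.)

```python
from decimal import Decimal, Context, Inexact

# IEEE decimal64 carries 16 significant digits, decimal128 carries 34.
d128 = Context(prec=34)

# 4/3 is non-terminating, so even 34 digits must round.
four_thirds = d128.divide(Decimal(4), Decimal(3))
print(four_thirds)  # 1.3333... truncated/rounded to 34 significant digits

# Trapping Inexact is one way to detect "result doesn't fit", which is
# exactly the condition under which we'd have to upcast to NUMERIC.
strict64 = Context(prec=16, traps=[Inexact])
try:
    strict64.divide(Decimal(4), Decimal(3))
except Inexact:
    print("4/3 does not fit in decimal64 without rounding")
```

The Inexact trap fires for 4/3, sqrt(2), and the multiplication above alike,
which is why I suspect any mixed scheme degenerates to NUMERIC quickly.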

I think a good starting point would be to adapt Pavel's extension to
implement basic DECIMAL32/64/128 support using the Intel and IBM libraries,
to see whether they perform better than the gcc builtins he tested.

If the performance isn't interesting it may still be worth adding for
compliance reasons, but if we can only add IEEE-compliant decimal FP by
using non-SQL-standard type names I don't think that's super useful. If
there are significant performance/space gains to be had, we could
consider introducing DECIMAL32/64/128 types with the same names used by
DB2, so people could explicitly choose to use them where appropriate.
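For reference, the three IEEE interchange formats behind those names have
fixed coefficient precisions of 7, 16 and 34 significant digits. A quick
sketch of what each would do to a long literal, again just using Python's
decimal module as a stand-in for the IEEE semantics:

```python
from decimal import Decimal, Context

# Coefficient precision of the IEEE 754-2008 decimal interchange formats,
# which DB2 exposes as DECIMAL32/64/128 (names used here for illustration).
FORMATS = {"DECIMAL32": 7, "DECIMAL64": 16, "DECIMAL128": 34}

pi = Decimal("3.141592653589793238462643383279502884197169")
for name, prec in FORMATS.items():
    ctx = Context(prec=prec)
    # Unary plus rounds the operand to the context's precision.
    print(name, ctx.plus(pi))
```

Anything beyond 34 digits, or any non-terminating intermediate result, is
where these types stop being a drop-in replacement for NUMERIC.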

>> Pretty pointless, and made doubly so by the fact that if we're
>> not using a nice fixed-width type and have to support VARLENA we miss
>> out on a whole bunch of performance benefits.
> I rather doubt that using a 1byte varlena - which it will be for
> reasonably sized Datums - will be a relevant bottleneck here. Maybe if
> you only have 'NOT NULL', fixed width columns, but even then...
That's good to know - if I've overestimated the cost of using VARLENA
for this, that's really quite good news.

-- 
 Craig Ringer                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
