Hi

I did some tests with an unlogged table in shared buffers

foo(a int[]);  -- arrays 2K elements long, 100K rows

for the queries

select max(v) from (select unnest(a) as v from foo) x;
select max(a[1]) from foo;
select max(a[2000]) from foo;

I didn't find any significant slowdown.

Some slowdown (about 10%) is visible for the query

update foo set a = a || 1;

A significant slowdown shows up in the following test:

do $$ declare a int[] := '{}'; begin for i in 1..90000 loop a := a || 10; end loop; end $$ language plpgsql;
do $$ declare a numeric[] := '{}'; begin for i in 1..90000 loop a := a || 10.1; end loop; end $$ language plpgsql;

integer    master  14 sec    patched  55 sec
numeric    master  43 sec    patched 108 sec

It is probably the worst case - and it is a known plpgsql antipattern (growing an array one element at a time inside a loop).
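For reference, the usual workaround for that antipattern is to build the array in one set-based step instead of appending per iteration. A minimal sketch (my own illustration, not a query from the tests above):

```sql
-- Instead of appending one element per loop iteration,
-- build the whole array in a single set-returning expression:
select array_agg(10) from generate_series(1, 90000);

-- or, inside plpgsql, fill the array in one assignment:
do $$
declare a int[];
begin
  a := array_fill(10, array[90000]);
end
$$ language plpgsql;
```

Either form avoids the repeated flat/expanded conversions, so the loop-append slowdown measured above should not apply to it.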

Regards

Pavel



2015-05-01 21:59 GMT+02:00 Tom Lane <t...@sss.pgh.pa.us>:

> Pavel Stehule <pavel.steh...@gmail.com> writes:
> > Test for 3000 elements:
>
> >                    Original     Patch
> > Integer            55sec      8sec
> > Numeric           341sec      8sec
>
> > Quicksort is about 3x faster -- so a benefit of this patch is clear.
>
> Yeah, the patch should pretty much blow the doors off any case that's
> heavily dependent on access or update of individual array elements ...
> especially for arrays with variable-length element type, such as numeric.
>
> What I'm concerned about is that it could make things *slower* for
> scenarios where that isn't the main thing being done with the arrays,
> as a result of useless conversions between "flat" and "expanded"
> array formats.  So what we need is to try to benchmark some cases
> that don't involve single-element operations but rather whole-array
> operations (on arrays that are plpgsql variables), and see if those
> cases have gotten noticeably worse.
>
>                         regards, tom lane
>
