On Wed, 23 Dec 2009 00:00:31 +0100 Ivan Sergio Borgonovo <m...@webthatworks.it> wrote:
> On Tue, 22 Dec 2009 20:47:18 +0100
> Pavel Stehule <pavel.steh...@gmail.com> wrote:
>
> > Hello
> >
> > I found one ugly trick. You can multiply lines and SUM cons
> > could be replaced limit clause:
>
> The trick is really smart (and fun), kudos, really, it's always a
> pleasure to read your solutions, thanks.
>
> But as expected: as unexpected...
>
> test=# create or replace function tano(a int, out b int)
                                         ^^^
         (should be anything but a)
> returns int as
> $$
> declare
> row record;
> begin
> b := 0;
> for row in select a as _a from data where a > 0
> loop
> b := row._a + b;
> if (b >= a) then
> return;
> end if;
> end loop;
> return;
> end;
> $$ language plpgsql;

Making it longer to better appreciate the difference: 1M rows with a in [0,2].

select * from tano((1000000)::int);
    b
---------
 1000001
(1 row)

Time: 1235.243 ms

select sum(x) from
  (select 1 as x, (a = generate_series(1,a))::int
   from data limit 1000000) s;
   sum
---------
 1000000
(1 row)

Time: 1309.441 ms

To be fair, once you add the WHERE clause to the generate_series version, the difference in performance is negligible, and saying that the plpgsql version is faster would require some more serious benchmarking.

Surprised! If generate_series can compete with the plpgsql for loop... why is the plpgsql version so "slow"?

--
Ivan Sergio Borgonovo
http://www.webthatworks.it

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql
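[Editorial aside, not part of the thread: the two strategies being benchmarked above can be sketched in plain Python to show why they compute the same thing. This is a hypothetical re-creation under stated assumptions; the list `data`, the random seed, the threshold, and the function names `tano`/`series_trick` are stand-ins, not code from the thread. The point of Pavel's trick is that `generate_series(1, a)` in the select list expands each row into `a` rows of x = 1 (rows with a = 0 produce no rows), `LIMIT` caps that stream, and `SUM(x)` counts it, which is exactly slicing a stream of ones.]

```python
import itertools
import random

# Hypothetical stand-in for the 1M-row table "data" with a in [0, 2].
random.seed(0)
data = [random.randint(0, 2) for _ in range(1_000_000)]

def tano(target):
    # Mirrors the plpgsql loop: accumulate positive a values until the
    # running sum reaches the target. It may overshoot by 1, because a
    # row can contribute 2 in a single step.
    b = 0
    for a in data:
        if a > 0:
            b += a
            if b >= target:
                return b
    return b

def series_trick(target):
    # Mirrors the generate_series trick: each row expands into `a` rows
    # of 1 (a = 0 vanishes, like generate_series(1, 0)), islice plays
    # the role of LIMIT, and sum() plays the role of SUM(x).
    ones = (1 for a in data for _ in range(a))
    return sum(itertools.islice(ones, target))

print(tano(100_000), series_trick(100_000))
```

The loop stops at the first sum >= target (so it can return target + 1), while the expand-and-limit version counts exactly `target` ones; that is why the psql outputs above show 1000001 versus 1000000.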