On 23 May 2016 at 08:53, Andres Freund <and...@anarazel.de> wrote:

> Hi,
>
> discussing executor performance with a number of people at pgcon,
> several hackers - me included - complained about the additional
> complexity, both code and runtime, required to handle SRFs in the target
> list.
>
> One idea I circulated was to fix that by interjecting a special executor
> node to process SRF containing targetlists (reusing Result possibly?).
> That'd allow to remove the isDone argument from ExecEval*/ExecProject*
> and get rid of ps_TupFromTlist which is fairly ugly.
>
>
> Robert suggested - IIRC mentioning previous on-list discussion - to
> instead rewrite targetlist SRFs into lateral joins. My gut feeling is
> that that'd be a larger undertaking, with significant semantics changes.
>
> If we accept bigger semantical changes, I'm inclined to instead just get
> rid of targetlist SRFs in total; they're really weird and not needed
> anymore.
>
> One issue with removing targetlist SRFs is that they're currently
> considerably faster than SRFs in FROM:
> tpch[14693][1]=# COPY (SELECT * FROM generate_series(1, 10000000)) TO
> '/dev/null';
> COPY 10000000
> Time: 2217.167 ms
> tpch[14693][1]=# COPY (SELECT generate_series(1, 10000000)) TO '/dev/null';
> COPY 10000000
> Time: 1355.929 ms
> tpch[14693][1]=#
>
> I'm not too concerned about that, and we could probably fix it by
> removing forced materialization from the relevant code path.
>
> Comments?
>
>
SRFs-in-tlist are a lot faster for lockstep iteration etc. They're also
much simpler to write, though if the row counts of the functions differ
unexpectedly you get exciting and unexpected behaviour (see the example
below).
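To illustrate, here's a minimal sketch of the current (pre-lateral-rewrite)
tlist-SRF semantics: mismatched SRFs keep cycling until they finish in the
same row, so you get the least common multiple of the row counts rather than
an error. Output below is from memory, so treat the exact interleaving as
illustrative:

craig=> SELECT generate_series(1,2) AS x, generate_series(1,3) AS y;
 x | y
---+---
 1 | 1
 2 | 2
 1 | 3
 2 | 1
 1 | 2
 2 | 3
(6 rows)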

WITH ORDINALITY provides what I think is the last of the functionality
needed to replace SRFs-in-tlist, but at a syntactic complexity and
performance cost. The following example demonstrates that, though it
doesn't do anything that needs LATERAL etc. I'm aware the two queries
aren't semantically identical if the row counts differ.


craig=> EXPLAIN ANALYZE SELECT generate_series(1,1000000) x, generate_series(1,1000000) y;
                                          QUERY PLAN
----------------------------------------------------------------------------------------------
 Result  (cost=0.00..5.01 rows=1000 width=0) (actual time=0.024..92.845 rows=1000000 loops=1)
 Planning time: 0.039 ms
 Execution time: 123.123 ms
(3 rows)

Time: 123.719 ms


craig=> EXPLAIN ANALYZE SELECT x, y FROM generate_series(1,1000000) WITH ORDINALITY AS x(i, n) INNER JOIN generate_series(1,1000000) WITH ORDINALITY AS y(i, n) ON (x.n = y.n);
                                                                QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------
 Merge Join  (cost=0.01..97.50 rows=5000 width=64) (actual time=179.863..938.375 rows=1000000 loops=1)
   Merge Cond: (x.n = y.n)
   ->  Function Scan on generate_series x  (cost=0.00..10.00 rows=1000 width=40) (actual time=108.813..303.690 rows=1000000 loops=1)
   ->  Materialize  (cost=0.00..12.50 rows=1000 width=40) (actual time=71.043..372.880 rows=1000000 loops=1)
         ->  Function Scan on generate_series y  (cost=0.00..10.00 rows=1000 width=40) (actual time=71.039..266.209 rows=1000000 loops=1)
 Planning time: 0.184 ms
 Execution time: 970.744 ms
(7 rows)

Time: 971.706 ms


I get the impression the with-ordinality case could perform just as well if
the optimiser recognised a join on the ordinality column and iterated the
functions in lockstep to populate the result rows directly (see the sketch
below). Though that could perform _worse_ if the function is computationally
costly and benefits significantly from the CPU cache, in which case we're
better off materializing it or at least executing it in chunks/batches...
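For comparison, ROWS FROM (...) already runs several SRFs in lockstep in the
FROM clause without the ordinality join, which is roughly the execution
pattern I'm describing; a minimal sketch of the equivalent query (untested
here, so no timings implied):

craig=> SELECT x, y
        FROM ROWS FROM (generate_series(1,1000000),
                        generate_series(1,1000000)) AS t(x, y);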


-- 
 Craig Ringer                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
