Viktor Rosenfeld <[EMAIL PROTECTED]> writes:
> I'm having trouble understanding the cost of the Materialize  
> operator.  Consider the following plan:

> Nested Loop  (cost=2783.91..33217.37 rows=78634 width=44) (actual time=77.164..2478.973 rows=309 loops=1)
>          Join Filter: ((rank2.pre <= rank5.pre) AND (rank5.pre <= rank2.post))
>          ->  Nested Loop  (cost=0.00..12752.06 rows=1786 width=33) (actual time=0.392..249.255 rows=9250 loops=1)
>                .....
>          ->  Materialize  (cost=2783.91..2787.87 rows=396 width=22) (actual time=0.001..0.072 rows=587 loops=9250)
>                ->  Nested Loop  (cost=730.78..2783.51 rows=396 width=22) (actual time=7.637..27.030 rows=587 loops=1)
>                      ....

> The cost of the inner-most Nested Loop is 27 ms, but the total cost of
> the Materialize operator is 666 ms (9250 loops * 0.072 ms per
> iteration).  So, Materialize introduces more than 10x overhead.

Not hardly.  Had the Materialize not been there, we'd have executed
the inner nestloop 9250 times, for a total cost of 9250 * 27ms.
(Actually it might have been less due to cache effects, but still
a whole lot more than 0.072 per iteration.)
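The comparison is easy to put on paper.  A back-of-envelope sketch,
using only the timings quoted in the plan above (and ignoring the
cache effects mentioned, so the no-Materialize figure is an upper
bound):

```python
# Numbers taken from the EXPLAIN ANALYZE output quoted above.
outer_loops = 9250            # rows produced by the outer nested loop
inner_build_ms = 27.030       # one full execution of the inner nested loop
rescan_ms = 0.072             # one rescan of the Materialize tuplestore

# Without Materialize the inner plan would be re-executed per outer row.
without_materialize_ms = outer_loops * inner_build_ms

# With Materialize: build the tuplestore once, then rescan it cheaply.
with_materialize_ms = inner_build_ms + outer_loops * rescan_ms

print(f"without Materialize: ~{without_materialize_ms:,.0f} ms")
print(f"with Materialize:    ~{with_materialize_ms:,.0f} ms")
```

That is roughly 250 seconds versus 0.7 seconds, i.e. the Materialize
is a several-hundred-fold win here, not a 10x overhead.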

These numbers say that it's taking the Materialize about 120 nanosec
per row returned (666 ms spread over 9250 loops * 587 rows), which
seems a bit high to me considering that the data is just sitting in
a tuplestore.  I surmise that you are using a machine with a slow
gettimeofday() and that's causing the measurement overhead to be high.
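You can get a rough feel for that overhead by timing the clock call
itself.  A sketch only: Python's time.perf_counter() stands in for
gettimeofday(), and the point is that EXPLAIN ANALYZE reads the clock
around each row it instruments, so a slow clock inflates every
per-row figure:

```python
import time

# Measure the average cost of one clock read by calling it in a loop.
N = 200_000
start = time.perf_counter()
for _ in range(N):
    time.perf_counter()   # stand-in for gettimeofday()
elapsed = time.perf_counter() - start

overhead_ns = elapsed / N * 1e9
print(f"~{overhead_ns:.0f} ns per clock call")
```

If each clock read costs on the order of 100 ns, two reads per row
already account for most of a 120 ns/row figure.  Newer PostgreSQL
releases ship a pg_test_timing utility that performs essentially this
measurement against the clock source the server actually uses.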

                        regards, tom lane

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
