Robert Haas <robertmh...@gmail.com> writes:
> On Wed, Aug 11, 2010 at 5:12 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>> Yeah, possibly.  It would probably be difficult for the planner to
>> figure out where the cutover point is to make that worthwhile, though;
>> the point where you'd need to make the transformation is long before we
>> have any rowcount estimates.
> This may be a stupid question, but why does the transformation have to
> be done before we have the row count estimates?

Well, I was thinking in terms of doing it when we do the SRF inlining.
It might be that we could get away with just having an arbitrary cost
limit like 100*cpu_operator_cost, and not think about how many rows
would actually be involved.

> I think we're just looking for a scan node with a filter condition
> that contains a stable subexpression that's expensive enough to be
> worth factoring out,

I do *not* want to grovel over every subexpression (and
sub-sub-expression, etc) in a query thinking about whether to do this.
That gets O(expensive) pretty quickly.  My idea of the appropriate
scope of a hack like this is just to prevent any performance loss from
SRF inlining.

Another approach we could take is to fix the implementation limitation
in inline_set_returning_function() about punting when there's a
sub-select in the arguments.  Then users could make this happen for
themselves when it matters.

			regards, tom lane

-- 
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs
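[Editor's illustration: a minimal sketch of the kind of manual factoring the "users could make this happen for themselves" remark refers to. The table `t` and the function `expensive_stable()` are hypothetical, not from the thread; the technique shown is the standard PostgreSQL idiom of wrapping a stable expression in an uncorrelated scalar sub-select so it is planned as an initplan and evaluated once, rather than once per row.]

```sql
-- Hypothetical setup: expensive_stable() is a costly STABLE function.
-- Naive form: the expression may be re-evaluated for every row scanned.
SELECT * FROM t WHERE t.val = expensive_stable('key');

-- Manually factored form: the uncorrelated sub-select becomes an
-- initplan, so expensive_stable('key') is computed a single time
-- and its result is compared against each row.
SELECT * FROM t WHERE t.val = (SELECT expensive_stable('key'));
```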