* Tom Lane (t...@sss.pgh.pa.us) wrote:
> Robert Haas <robertmh...@gmail.com> writes:
> > On Wed, Dec 3, 2014 at 12:08 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> >> I would envision the planner starting out generating the first subplan
> >> (without the optimization), but as it goes along, noting whether there
> >> are any opportunities for join removal.  At the end, if it found that
> >> there were such opportunities, re-plan assuming that removal is possible.
> >> Then stick a switch node on top.
> >>
> >> This would give optimal plans for both cases, and it would avoid the need
> >> for lots of extra planner cycles when the optimization can't be applied
> >> ... except for one small detail, which is that the planner has a bad habit
> >> of scribbling on its own input.  I'm not sure how much cleanup work would
> >> be needed before that "re-plan" operation could happen as easily as is
> >> suggested above.  But in principle this could be made to work.
>
> > Doesn't this double the planning overhead, in most cases for no
> > benefit?  The alternative plan used only when there are deferred
> > triggers is rarely going to get used.
>
> Personally, I remain of the opinion that this optimization will apply in
> only a tiny fraction of real-world cases, so I'm mostly concerned about
> not blowing out planning time when the optimization doesn't apply.
This was my thought also- most of the time we won't be able to apply the
optimization, we'll know that pretty early on, and we can skip the double
planning.  What makes this worthwhile is that there are cases where the
optimization will apply regularly, due to certain tools/technologies being
used, and there the extra planning will be more than made up for by the
reduction in execution time.

> However, even granting that that is a concern, so what?  You *have* to
> do the planning twice, or you're going to be generating a crap plan for
> one case or the other.

Yeah, I don't see a way around that.

Thanks,

Stephen