On Wed, Jan 25, 2012 at 11:24 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> After that I started doing some performance testing, and the initial
> news was bad: the planner was about 3x slower than 9.1 on a moderately
> large join problem.  I've spent the past several days hacking away at
> that, and have got it down to about 35% slower by dint of the following
> changes:
> * micro-optimization of add_path(), in particular avoiding duplicate
> calculations during cost comparisons.
> * introducing a two-step mechanism for testing whether proposed join
> paths are interesting.  The patch first calculates a quick and dirty
> lower bound on the cost of the join path (mostly, from the costs of
> its input paths) and looks through the joinrel's pathlist to see if
> there is already a path that is clearly better.  If not, it proceeds
> with the full cost calculation and add_path insertion.  In my testing
> the precheck is able to eliminate 80% or so of the proposed paths,
> more than repaying the extra pathlist search.
> Now this is of course cheating with both hands, in that either of these
> optimization techniques could have been applied before ... but they
> weren't.  I think that 35% slower on large join problems is probably
> acceptable, given that this is investigating a larger number of possible
> solutions and hopefully often finding a better plan.  I think I can
> knock some more off of that by refactoring the costsize.c APIs, anyway.
> Right now the first-pass and second-pass cost calculations are
> independent and so there's some repeated work, which I'd like to
> eliminate both for speed reasons and to get rid of duplicate code
> that'd have to be kept in sync if it's left as-is.
> If there are not objections, I'd like to go ahead and commit this
> after one more round of polishing.  There's still a lot left to do,
> but what it mainly needs now is some testing on real-world planning
> problems, and I don't think it's going to get that until it's been
> merged in.

I have to admit that I have a bad feeling about this.  It strikes me
that there is no way we're not going to get complaints about a 35%
slowdown on planning a large join problem.  It is true that some
people will have the benefit of finding a faster plan - but I think
many people won't.  We're really facing the same trade-off here that
we do with many other things people have asked for the query planner
to do over the years: is it worth slowing down everyone, on every
query, for an optimization that will apply rarely but provide huge
benefits when it does?  Of course, that's fundamentally a judgement
call.  But I can't help observing that the number of requests we've
had for the planner to deduce implied inequalities is far larger than
the number of people who have complained about the problem that this
fixes.  Now, maybe the speed penalty for deducing implied inequalities
would be even larger than 35%: I don't know.  But we've sweat blood to
avoid much smaller regressions on much less important cases.

To be clear, I'd love to have this feature.  But if there is a choice
between reducing planning time significantly for everyone and NOT
getting this feature, and increasing planning time significantly for
everyone and getting this feature, I think we will make more people
happy by doing the first one.

Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)