Re: [HACKERS] Multi-pass planner

2013-04-19 Thread Jeff Janes
On Wed, Apr 3, 2013 at 6:40 PM, Greg Stark st...@mit.edu wrote:


 On Fri, Aug 21, 2009 at 6:54 PM, decibel deci...@decibel.org wrote:

 Would it? Risk seems like it would just be something along the lines of
 the high-end of our estimate. I don't think confidence should be that hard
 either. IE: hard-coded guesses have a low confidence. Something pulled
 right out of most_common_vals has a high confidence.


I wouldn't be so sure of that.  I've run into cases where all of the
frequencies pulled out of most_common_vals are off by orders of magnitude.
 The problem is that if ANALYZE only samples 1/1000th of the table, and it
sees a value twice, it assumes the value is present 2000 times in the
table, even when it was only in the table twice.  Now, for any given value
that occurs twice in the table, it is very unlikely for both of those to
end up in the sample. But when you have millions of distinct values which
each occur twice (or some other low number of times), it is a near certainty
that several of them are going to end up with both instances in the sample.
The few that get lucky are of course going to end up in the
most_common_vals list.
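
A quick back-of-the-envelope sketch (the table size, sample fraction, and
number of duplicated values below are made-up but representative numbers)
shows why a handful of such values are all but guaranteed:

    N = 10_000_000        # rows in the table (hypothetical)
    f = 1.0 / 1000        # fraction of the table ANALYZE samples
    n = int(N * f)        # sample size: 10,000 rows
    D = 5_000_000         # distinct values that each occur exactly twice

    p_both = f * f        # approx. chance both copies of one value are sampled
    lucky = D * p_both    # expected number of values with both copies sampled
    scale = N / n         # factor used to scale sample counts up to the table
    est_rows = 2 * scale  # what gets reported for one of the "lucky" values

    print(lucky, est_rows)  # ~5 lucky values, each estimated at 2000 rows vs. a true 2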

Since the hashjoin cost estimate depends on the frequency of the most
common value, having it be systematically off by a factor of 1000 is
rather unfortunate.

The problem here is that the sample size which is adequate for getting a
good estimate of the histograms (which is what controls the sample size
currently) is not adequate for getting a good estimate of most_common_vals.
 Cranking up the statistics_target would give better estimates of
most_common_vals, but at the expense of having a needlessly large
histogram, which slows down planning.  There is currently no knob to crank
up the sample size for the sake of most common values, but then prune the
histogram back down for storage.
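
(For reference, and going from memory of how ANALYZE sizes things, so treat
the constant as approximate: the sample is about 300 rows per unit of
statistics target, and the same target caps both the MCV list and the
histogram, which is exactly why the two can't be tuned separately today.)

    def analyze_sizes(statistics_target):
        # Approximate, from memory: ANALYZE samples ~300 * statistics_target
        # rows, and the same target caps the most_common_vals list and the
        # number of histogram bounds.
        sample_rows = 300 * statistics_target
        max_mcv_entries = statistics_target
        max_histogram_bounds = statistics_target
        return sample_rows, max_mcv_entries, max_histogram_bounds

    analyze_sizes(100)    # (30000, 100, 100)    -- the default target
    analyze_sizes(1000)   # (300000, 1000, 1000) -- better MCVs, but a 10x larger histogram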

Cheers,

Jeff


Re: [HACKERS] Multi-pass planner

2013-04-19 Thread Claudio Freire
On Fri, Apr 19, 2013 at 6:19 PM, Jeff Janes jeff.ja...@gmail.com wrote:
 On Wed, Apr 3, 2013 at 6:40 PM, Greg Stark st...@mit.edu wrote:


 On Fri, Aug 21, 2009 at 6:54 PM, decibel deci...@decibel.org wrote:

 Would it? Risk seems like it would just be something along the lines of
 the high-end of our estimate. I don't think confidence should be that hard
 either. IE: hard-coded guesses have a low confidence. Something pulled right
 out of most_common_vals has a high confidence.


 I wouldn't be so sure of that.  I've run into cases where all of the
 frequencies pulled out of most_common_vals are off by orders of magnitude.
 The problem is that if ANALYZE only samples 1/1000th of the table, and it
 sees a value twice, it assumes the value is present 2000 times in the table,
 even when it was only in the table twice.  Now, for any given value that
 occurs twice in the table, it is very unlikely for both of those to end up
 in the sample. But when you have millions of distinct values which each
 occur twice (or some other low number of times), it is a near certainty that
 several of them are going to end up with both instances in the sample.  The
 few that get lucky are of course going to end up in the
 most_common_vals list.


Especially if there's some locality of occurrence, since analyze
samples pages, not rows.




Re: [HACKERS] Multi-pass planner

2013-04-19 Thread Jeff Janes
On Thu, Apr 4, 2013 at 11:53 AM, Dimitri Fontaine dimi...@2ndquadrant.fr wrote:

 Robert Haas robertmh...@gmail.com writes:
  for estimate_worstcase_fraction.  So, when computing the cost of a
  path, we'd compute our current expected-case estimate, and also a
  worst-case estimate, and then compute the final cost as:

 There also was the idea for the executor to be able to handle alternate
 plans and some heuristic to determine that the actual cost of running a
 plan is much higher than what's been estimated, so much so as to switch
 to starting from scratch with the other plan instead.


Or even before it starts executing.  If the planner realizes the stakes are
high enough, it could abandon its assumptions about how likely it is for a
block to be in cache, and go do a little sampling of the cache and see.  To
be effective it would probably have to sample the OS cache as well as the
shared_buffers, which would certainly complicate things and might not be
portable.

Of course, right now there is no explicit estimate of the cache hit rate
at all; it is just implicitly built into other settings.  So those would
probably need to be separated into a true IO cost and a default cache
estimate to be used when sampling is not warranted.
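
Something like the following is what I have in mind, with the caveat that
neither the function nor its knobs exist today; the hit rate is currently
folded into seq_page_cost/random_page_cost and effective_cache_size rather
than being an explicit input:

    def effective_page_cost(raw_io_cost, cached_page_cost, cache_hit_rate):
        # Hypothetical decomposition: an explicit hit-rate estimate (a default,
        # or one obtained by sampling the cache when the stakes are high)
        # blended with a true IO cost, instead of both being baked into a
        # single page-cost setting.
        return cache_hit_rate * cached_page_cost + (1.0 - cache_hit_rate) * raw_io_cost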

Cheers,

Jeff


Re: [HACKERS] Multi-pass planner

2013-04-19 Thread Jeff Janes
On Fri, Apr 19, 2013 at 2:24 PM, Claudio Freire klaussfre...@gmail.com wrote:


 Especially if there's some locality of occurrence, since analyze
 samples pages, not rows.


But it doesn't take all rows in each sampled page.  It generally takes
about one row per page, specifically to avoid the problem you indicate.
Maybe it is possible to trick it into taking too many (for example, if most
pages are completely empty), but I haven't experienced that as being a
problem.
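
Roughly speaking, the two-stage scheme looks like the sketch below (a
much-simplified illustration of the idea, not the actual acquire_sample_rows
code): sample pages first, then reservoir-sample rows across those pages, so
each sampled page contributes about one row on average no matter how the
duplicates are clustered.

    import random

    def two_stage_sample(pages, target):
        # pages: list of pages, each a list of rows (toy representation).
        page_ids = random.sample(range(len(pages)), min(target, len(pages)))
        reservoir, seen = [], 0
        for p in page_ids:
            for row in pages[p]:
                seen += 1
                if len(reservoir) < target:
                    reservoir.append(row)
                else:
                    j = random.randrange(seen)   # standard reservoir sampling
                    if j < target:
                        reservoir[j] = row
        return reservoir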

Cheers,

Jeff


Re: [HACKERS] Multi-pass planner

2013-04-19 Thread Claudio Freire
On Fri, Apr 19, 2013 at 7:43 PM, Jeff Janes jeff.ja...@gmail.com wrote:
 On Fri, Apr 19, 2013 at 2:24 PM, Claudio Freire klaussfre...@gmail.com
 wrote:


 Especially if there's some locality of occurrence, since analyze
 samples pages, not rows.


 But it doesn't take all rows in each sampled page.  It generally takes about
 one row per page, specifically to avoid the problem you indicate. Maybe it
 is possible to trick it into taking too many (for example, if most pages are
 completely empty), but I haven't experienced that as being a problem.


Still, I remember a discussion where it was clear there was a bias
towards sampling rows from the same page (or nearby), resulting in
this particular problem.




Re: [HACKERS] Multi-pass planner

2013-04-04 Thread Robert Haas
On Wed, Apr 3, 2013 at 9:40 PM, Greg Stark st...@mit.edu wrote:
 I used to advocate a similar idea. But when questioned on list I tried to
 work out the details and ran into some problems coming up with a concrete
 plan.

 How do you compare a plan that you think has a 99% chance of running in 1ms
 but a 1% chance of taking 1s against a plan that has a 90% chance of 1ms and
 a 10% chance of taking 100ms? Which one is actually riskier? They might even
 both have the same 95th-percentile run-time.

 And additionally there are different types of unknowns. Do you want to treat
 plans where we have a statistical sample that gives us a probabilistic
 answer the same as plans where we think our model just has a 10% chance of
 being wrong? The model is going to either be consistently right or
 consistently wrong for a given query but the sample will vary from run to
 run. (Or vice versa depending on the situation).

One idea that someone threw up against a wall recently during an
EnterpriseDB development meeting - I think it was Kevin Grittner but
it might have been Noah Misch, or some combination - was to have a GUC
for estimate_worstcase_fraction.  So, when computing the cost of a
path, we'd compute our current expected-case estimate, and also a
worst-case estimate, and then compute the final cost as:

estimated_cost = estimate_worstcase_fraction * worst_case_estimate +
(1 - estimate_worstcase_fraction) * expected_case_estimate

I think Kevin and I both have the intuition that even a rather small
value for estimate_worstcase_fraction - like 0.01 or 0.001 or even
smaller - would prevent a pretty significant fraction of the problems
people encounter in this area.  But users could change it, in general
or for particular queries, if it ended up being wrong in their
environment.
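
To put made-up numbers on that (the GUC is of course still hypothetical):
with estimate_worstcase_fraction = 0.001, a plan that is slightly cheaper in
the expected case but catastrophic in the worst case stops looking like the
automatic winner:

    def blended_cost(expected_case, worst_case, worstcase_fraction=0.001):
        return worstcase_fraction * worst_case + (1 - worstcase_fraction) * expected_case

    blended_cost(100, 1_000_000)   # fragile plan:  ~1099.9
    blended_cost(120, 5_000)       # sturdier plan: ~124.9, so it now wins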

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Multi-pass planner

2013-04-04 Thread Dimitri Fontaine
Robert Haas robertmh...@gmail.com writes:
 for estimate_worstcase_fraction.  So, when computing the cost of a
 path, we'd compute our current expected-case estimate, and also a
 worst-case estimate, and then compute the final cost as:

There also was the idea for the executor to be able to handle alternate
plans and some heuristic to determine that the actual cost of running a
plan is much higher than what's been estimated, so much so as to switch
to starting from scratch with the other plan instead.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [HACKERS] Multi-pass planner

2013-04-04 Thread Robert Haas
On Thu, Apr 4, 2013 at 2:53 PM, Dimitri Fontaine dimi...@2ndquadrant.fr wrote:
 Robert Haas robertmh...@gmail.com writes:
 for estimate_worstcase_fraction.  So, when computing the cost of a
 path, we'd compute our current expected-case estimate, and also a
 worst-case estimate, and then compute the final cost as:

 There also was the idea for the executor to be able to handle alternate
 plans and some heuristic to determine that the actual cost of running a
 plan is much higher than what's been estimated, so much so as to switch
 to starting from scratch with the other plan instead.

Yeah.  The thing is, if the plan has any side effects, that's not
really an option.  And even if it doesn't, it may throw away a lot of
work.  One thing we could do is switch from an unparameterized nested
loop to a hash join if the outer side turns out to be much larger than
expected, but that's only going to benefit a pretty narrow set of use
cases.  Which is why I think a planner-based approach is probably more
promising.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Multi-pass planner

2013-04-04 Thread Amit Kapila
On Friday, April 05, 2013 1:59 AM Robert Haas wrote:
 On Thu, Apr 4, 2013 at 2:53 PM, Dimitri Fontaine
 dimi...@2ndquadrant.fr wrote:
  Robert Haas robertmh...@gmail.com writes:
  for estimate_worstcase_fraction.  So, when computing the cost of a
  path, we'd compute our current expected-case estimate, and also a
  worst-case estimate, and then compute the final cost as:
 
  There also was the idea for the executor to be able to handle alternate
  plans and some heuristic to determine that the actual cost of running a
  plan is much higher than what's been estimated, so much so as to switch
  to starting from scratch with the other plan instead.
 
 Yeah.  The thing is, if the plan has any side effects, that's not
 really an option.  And even if it doesn't, it may throw away a lot of
 work.  

Why throw away all the work?  It could try to repair the plan, or, even if
plan repair is not possible, it could keep multiple plans and next time try
to choose the best among the available ones.


With Regards,
Amit Kapila.





Re: [HACKERS] Multi-pass planner

2013-04-03 Thread Greg Stark
On Fri, Aug 21, 2009 at 6:54 PM, decibel deci...@decibel.org wrote:

 Would it? Risk seems like it would just be something along the lines of
 the high-end of our estimate. I don't think confidence should be that hard
 either. IE: hard-coded guesses have a low confidence. Something pulled
 right out of most_common_vals has a high confidence. Something estimated
 via a bucket is in-between, and perhaps adjusted by the number of tuples.


I used to advocate a similar idea. But when questioned on list I tried to
work out the details and ran into some problems coming up with a concrete
plan.

How do you compare a plan that you think has a 99% chance of running in 1ms
but a 1% chance of taking 1s against a plan that has a 90% chance of 1ms
and a 10% chance of taking 100ms? Which one is actually riskier? They might
even both have the same 95th-percentile run-time.
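
Putting rough expected values on those two hypothetical plans shows just how
close they are, which is part of what makes "riskier" so slippery:

    plan_a = 0.99 * 1 + 0.01 * 1000   # 10.99 ms expected
    plan_b = 0.90 * 1 + 0.10 * 100    # 10.90 ms expected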

And additionally there are different types of unknowns. Do you want to
treat plans where we have a statistical sample that gives us a
probabilistic answer the same as plans where we think our model just has a
10% chance of being wrong? The model is going to either be consistently
right or consistently wrong for a given query but the sample will vary from
run to run. (Or vice versa depending on the situation).


-- 
greg


Re: [HACKERS] Multi-pass planner

2009-08-21 Thread decibel

On Aug 20, 2009, at 11:18 PM, Josh Berkus wrote:

I don't think it's a bad idea, I just think you have to set your
expectations pretty low. If the estimates are bad there isn't really
any plan that will be guaranteed to run quickly.


Well, the way to do this is via a risk-confidence system.  That is, each
operation has a level of risk assigned to it; that is, the cost
multiplier if the estimates are wrong.  And each estimate has a level of
confidence attached.  Then you can divide the risk by the confidence,
and if it exceeds a certain level, you pick another plan which has a
lower risk/confidence level.

However, the amount of extra calculations required for even a simple
query is kind of frightening.



Would it? Risk seems like it would just be something along the lines  
of the high-end of our estimate. I don't think confidence should be  
that hard either. IE: hard-coded guesses have a low confidence.  
Something pulled right out of most_common_vals has a high confidence.  
Something estimated via a bucket is in-between, and perhaps adjusted  
by the number of tuples.

--
Decibel!, aka Jim C. Nasby, Database Architect  deci...@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828





[HACKERS] Multi-pass planner

2009-08-20 Thread decibel
There have been a number of planner improvement ideas that have been  
thrown out because of the overhead they would add to the planning  
process, specifically for queries that would otherwise be quite fast.  
Other databases seem to have dealt with this by creating plan caches  
(which might be worth doing for Postgres), but what if we could  
determine when we need a fast planning time vs when it won't matter?


What I'm thinking is that on the first pass through the planner, we  
only estimate things that we can do quickly. If the plan that falls  
out of that is below a certain cost/row threshold, we just run with  
that plan. If not, we go back and do a more detailed estimate.
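
In pseudocode-ish Python, the flow I'm imagining (the threshold and helper
names are made up) is simply:

    def plan_query(query, fast_plan_threshold):
        # Pass 1: plan using only the estimates we can compute quickly.
        plan = plan_with_cheap_estimates(query)       # hypothetical helper
        if plan.estimated_cost <= fast_plan_threshold:
            return plan    # cheap enough that more planning time isn't worth it
        # Pass 2: the query is expensive anyway, so better estimates are worth paying for.
        return plan_with_detailed_estimates(query)    # hypothetical helper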

--
Decibel!, aka Jim C. Nasby, Database Architect  deci...@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828





Re: [HACKERS] Multi-pass planner

2009-08-20 Thread Robert Haas
On Thu, Aug 20, 2009 at 11:15 AM, decibel deci...@decibel.org wrote:
 There have been a number of planner improvement ideas that have been thrown
 out because of the overhead they would add to the planning process,
 specifically for queries that would otherwise be quite fast. Other databases
 seem to have dealt with this by creating plan caches (which might be worth
 doing for Postgres), but what if we could determine when we need a fast
 planning time vs when it won't matter?

 What I'm thinking is that on the first pass through the planner, we only
 estimate things that we can do quickly. If the plan that falls out of that
 is below a certain cost/row threshold, we just run with that plan. If not,
 we go back and do a more detailed estimate.

It's not a dumb idea, but it might be hard to set that threshold
correctly.  You might have some queries that you know take 4 seconds
(and you don't want to spend another 200 ms planning them every time
for no benefit) and other queries that only take 500 ms (but you know
that they can be squashed down to 150 ms with a bit more planning).

I think one of the problems with the planner is that all decisions are
made on the basis of cost.  Honestly, it works amazingly well in a
wide variety of situations, but it can't handle things like "we might
as well materialize here, because it doesn't cost much and there's a
big upside if our estimates are off."  The estimates are the world,
and you live and die by them.

...Robert



Re: [HACKERS] Multi-pass planner

2009-08-20 Thread Kevin Grittner
Robert Haas robertmh...@gmail.com wrote:
 
 I think one of the problems with the planner is that all decisions
 are made on the basis of cost.  Honestly, it works amazingly well in
 a wide variety of situations, but it can't handle things like "we
 might as well materialize here, because it doesn't cost much and
 there's a big upside if our estimates are off."  The estimates are
 the world, and you live and die by them.
 
[thinking out loud]
 
If there were some reasonable way to come up with a *range* for cost
at each step, a reasonable heuristic might be to cost the plan using
minimum values and maximum values, and use the root mean square of the
two for comparisons to other plans.  I don't know that we have a good
basis to come up with ranges rather than absolute numbers, though.
 
-Kevin



Re: [HACKERS] Multi-pass planner

2009-08-20 Thread Robert Haas
On Thu, Aug 20, 2009 at 12:55 PM, Kevin
Grittner kevin.gritt...@wicourts.gov wrote:
 Robert Haas robertmh...@gmail.com wrote:

 I think one of the problems with the planner is that all decisions
 are made on the basis of cost.  Honestly, it works amazingly well in
 a wide variety of situations, but it can't handle things like "we
 might as well materialize here, because it doesn't cost much and
 there's a big upside if our estimates are off."  The estimates are
 the world, and you live and die by them.

 [thinking out loud]

 If there were some reasonable way to come up with a *range* for cost
 at each step, a reasonable heuristic might be to cost the plan using
 minimum values and maximum values, and use the root mean square of the
 two for comparisons to other plans.  I don't know that we have a good
 basis to come up with ranges rather than absolute numbers, though.

Maybe.  The problem is that we have mostly two cases: an estimate that
we think is pretty good based on reasonable statistics (but may be way
off if there are hidden correlations we don't know about), and a wild
guess.  Also, it doesn't tend to matter very much when the estimates
are off by, say, a factor of two.  The real problem is when they are
off by an order of magnitude or more.

...Robert



Re: [HACKERS] Multi-pass planner

2009-08-20 Thread Greg Stark
On Thu, Aug 20, 2009 at 6:10 PM, Robert Haas robertmh...@gmail.com wrote:
 Maybe.  The problem is that we have mostly two cases: an estimate that
 we think is pretty good based on reasonable statistics (but may be way
 off if there are hidden correlations we don't know about), and a wild
 guess.  Also, it doesn't tend to matter very much when the estimates
 are off by, say, a factor of two.  The real problem is when they are
 off by an order of magnitude or more.

One problem is that you can't just take a range of row estimates and
calculate a cost for both endpoints of the estimate range to get a
cost estimate. It's quite possible for more rows to generate a lower
cost (think if you have a NOT IN query).

Another problem is that it's not really helpful to have a range of
costs unless you can actually make use of them to make decisions. The
planner doesn't come up with multiple independent complete plans and
then pick the one with the cheapest cost. It has to make some
decisions along the way to avoid exponential growth. Those decisions
might have a tightly constrained cost but cause higher nodes to have
very wide cost ranges (think of deciding not to materialize something
which later gets put on the outer side of a nested loop). But there's
no way to know at the time that they'll be critical to avoiding that
risky plan later.

I don't think it's a bad idea, I just think you have to set your
expectations pretty low. If the estimates are bad there isn't really
any plan that will be guaranteed to run quickly.

-- 
greg
http://mit.edu/~gsstark/resume.pdf



Re: [HACKERS] Multi-pass planner

2009-08-20 Thread Greg Stark
On Thu, Aug 20, 2009 at 6:28 PM, Greg Stark gsst...@mit.edu wrote:
 I don't think it's a bad idea, I just think you have to set your
 expectations pretty low. If the estimates are bad there isn't really
 any plan that will be guaranteed to run quickly.

Actually this is usually Tom's point when this topic comes up. Say
you're deciding between an index scan and a sequential scan. The
sequential scan has a total cost of 1000..1000 but the index scan has
an estimated total cost of 1..10000. If you pick the sequential scan
you might be running 1000x slower than the index scan in the worst
case. But if you pick the index scan you might be running 10x slower
than the sequential scan in the worst case. If you don't trust the
estimate where does that leave you? Making a mistake either way is
fatal.


-- 
greg
http://mit.edu/~gsstark/resume.pdf



Re: [HACKERS] Multi-pass planner

2009-08-20 Thread Kevin Grittner
Greg Stark gsst...@mit.edu wrote:
 
 Say you're deciding between an index scan and a sequential scan. The
 sequential scan has a total cost of 1000..1000 but the index scan
 has an estimated total cost of 1..10000.
 
My proposal was to use RMS, which would effectively favor lower worst
case behavior.  Specifically, if the estimated cost range is
1000..1000 you get sqrt((1000*1000+1000*1000)/2) = 1000, while
1..10000 yields sqrt((1*1+10000*10000)/2) = 7071.067847.  So with this
heuristic it would prefer the sequential scan.
 
Of course, for these purposes, you would get the same choices by
leaving off the division and square root calculation, so it could
simplify to choosing the lower of 1000*1000+1000*1000 versus
1*1+10000*10000.
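
As a tiny code sketch of that heuristic:

    from math import sqrt

    def rms_cost(low_estimate, high_estimate):
        # Penalize wide cost ranges by comparing root-mean-square costs.
        return sqrt((low_estimate**2 + high_estimate**2) / 2.0)

    rms_cost(1000, 1000)   # 1000.0
    rms_cost(1, 10000)     # ~7071.07, so the sequential scan is preferred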
 
-Kevin



Re: [HACKERS] Multi-pass planner

2009-08-20 Thread Mischa Sandberg
In a federated database engine I built in the mid-90's, it more or less
ran both plans in parallel, to implement fast-first and min-total cost.
The index query in general started returning rows whose oids went into a
filter that discarded them from the serial query once it started to crank
things out.

'index' and 'serial' is not exactly what was going on; the federated
engine was joining multiple tables across multiple (sometimes hundreds)
of databases, with really variable transmission times, and tolerance for
timed-out db servers.  It had no reliable way to get cost estimates in
advance, since it didn't have access to statistical metadata (no,
Postgres wasn't one of the databases I had to take as input, more's the
pity).

'Parallel' is also not quite accurate. It retained plans (queries were
heavily repeated) and did mickey-mouse simulated annealing of past
performance, so that if one of two plans was the faster 90% of the time,
then there was a 10% probability it would run both plans in parallel,
just to check whether something had changed.

The code was part of a proprietary product, and was used by Acxiom and Cisco.
But the concept was pretty simple and as described above.
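
A minimal sketch of that retain-and-occasionally-re-race idea (the structure
and names here are mine, not the original product's code):

    import random

    def plans_to_run(retained_plans, reexplore_probability=0.10):
        # Usually run only the historical winner; occasionally race every
        # retained plan again to notice when the data or environment changes.
        best = min(retained_plans, key=lambda p: p["avg_runtime"])
        if random.random() < reexplore_probability:
            return list(retained_plans)
        return [best]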

 -Original Message-
 From: pgsql-hackers-ow...@postgresql.org
 [mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Greg Stark
 Sent: Thursday, August 20, 2009 10:32 AM
 To: Robert Haas
 Cc: Kevin Grittner; decibel; Pg Hackers
 Subject: Re: [HACKERS] Multi-pass planner

 On Thu, Aug 20, 2009 at 6:28 PM, Greg Stark gsst...@mit.edu wrote:
  I don't think it's a bad idea, I just think you have to set your
  expectations pretty low. If the estimates are bad there
 isn't really
  any plan that will be guaranteed to run quickly.

 Actually this is usually Tom's point when this topic comes
 up. Say you're deciding between an index scan and a
 sequential scan. The sequential scan has a total cost of
 1000..1000 but the index scan has an estimated total cost of
 1..10000. If you pick the sequential scan you might be
 running 1000x slower than the index scan in the worst case.
 But if you pick the index scan you might be running 10x
 slower than the sequential scan in the worst case. If you
 don't trust the estimate where does that leave you? Making a
 mistake either way is fatal.


 --
 greg
 http://mit.edu/~gsstark/resume.pdf





Re: [HACKERS] Multi-pass planner

2009-08-20 Thread Josh Berkus

 I don't think it's a bad idea, I just think you have to set your
 expectations pretty low. If the estimates are bad there isn't really
 any plan that will be guaranteed to run quickly.

Well, the way to do this is via a risk-confidence system.  That is, each
operation has a level of risk assigned to it; that is, the cost
multiplier if the estimates are wrong.  And each estimate has a level of
confidence attached.  Then you can divide the risk by the confidence,
and if it exceeds a certain level, you pick another plan which has a
lower risk/confidence level.
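
A minimal sketch of that scoring rule (the field names and cutoff are
placeholders):

    def prefer_safer_plan(plan, alternatives, max_score=10.0):
        # risk: cost multiplier if the estimates are wrong; confidence in (0, 1].
        def score(p):
            return p["risk"] / p["confidence"]
        if score(plan) <= max_score:
            return plan
        # Otherwise fall back to the alternative with the lowest score.
        return min(alternatives, key=score)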

However, the amount of extra calculations required for even a simple
query is kind of frightening.

-- 
Josh Berkus
PostgreSQL Experts Inc.
www.pgexperts.com
