On Wed, Apr 3, 2013 at 6:40 PM, Greg Stark st...@mit.edu wrote:
On Fri, Aug 21, 2009 at 6:54 PM, decibel deci...@decibel.org wrote:
Would it? Risk seems like it would just be something along the lines of
the high-end of our estimate. I don't think confidence should be that hard
either. IE: hard-coded guesses have a low confidence.
On Fri, Apr 19, 2013 at 6:19 PM, Jeff Janes jeff.ja...@gmail.com wrote:
On Wed, Apr 3, 2013 at 6:40 PM, Greg Stark st...@mit.edu wrote:
On Fri, Aug 21, 2009 at 6:54 PM, decibel deci...@decibel.org wrote:
Would it? Risk seems like it would just be something along the lines of
the high-end of our estimate.
On Thu, Apr 4, 2013 at 11:53 AM, Dimitri Fontaine dimi...@2ndquadrant.fr wrote:
Robert Haas robertmh...@gmail.com writes:
for estimate_worstcase_fraction. So, when computing the cost of a
path, we'd compute our current expected-case estimate, and also a
worst-case estimate, and then compute the final cost as:
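
The formula itself is cut off in the quote; a plausible reconstruction,
assuming a simple linear blend between the two estimates (my reading, not
necessarily the original wording):

    final_cost = expected_cost
                 + estimate_worstcase_fraction * (worst_case_cost - expected_cost)

A fraction of 0 reproduces today's expected-cost-only planning, and 1 plans
entirely for the worst case.
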
On Fri, Apr 19, 2013 at 2:24 PM, Claudio Freire klaussfre...@gmail.com wrote:
Especially if there's some locality of occurrence, since analyze
samples pages, not rows.
But it doesn't take all rows in each sampled page. It generally takes
about one row per page, specifically to avoid the
On Fri, Apr 19, 2013 at 7:43 PM, Jeff Janes jeff.ja...@gmail.com wrote:
On Fri, Apr 19, 2013 at 2:24 PM, Claudio Freire klaussfre...@gmail.com wrote:
Especially if there's some locality of occurrence, since analyze
samples pages, not rows.
But it doesn't take all rows in each sampled page. It generally takes
about one row per page.
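
A toy model of why one row per sampled page helps (a sketch under simplified
assumptions, not the actual ANALYZE code): values often cluster within pages,
so taking every row from a few pages would over-weight whatever those pages
happen to hold, while taking one row from each of many pages breaks the
intra-page correlation.

    import random

    # Toy model: a table is a list of pages, each page a list of row values.
    # Here every page holds one repeated value -- the worst case for
    # locality of occurrence.
    def sample_one_row_per_page(pages, n_pages):
        """Pick n_pages distinct pages at random, then one random row from each."""
        chosen = random.sample(range(len(pages)), n_pages)
        return [random.choice(pages[p]) for p in chosen]

    table = [[v] * 50 for v in range(100)]    # 100 pages, 50 clustered rows each
    print(sample_one_row_per_page(table, 10)) # 10 values from 10 distinct pages

Sampling whole pages instead would return 50 copies of a handful of values
and badly skew the frequency statistics.
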
On Wed, Apr 3, 2013 at 9:40 PM, Greg Stark st...@mit.edu wrote:
I used to advocate a similar idea. But when questioned on list I tried to
work out the details and ran into problems coming up with a concrete
plan.
How do you compare a plan that you think has a 99% chance of running in 1ms
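
To make the dilemma concrete with made-up numbers: a plan with a 99% chance
of 1ms and a 1% chance of 10s has an expected cost of
0.99 × 1 + 0.01 × 10000 ≈ 101ms, essentially a tie with a plan that always
takes 100ms, yet their worst cases differ by two orders of magnitude. A
single expected-cost number cannot tell those two apart; that is exactly the
information a separate risk term would have to carry.
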
Robert Haas robertmh...@gmail.com writes:
for estimate_worstcase_fraction. So, when computing the cost of a
path, we'd compute our current expected-case estimate, and also a
worst-case estimate, and then compute the final cost as:
There also was the idea for the executor to be able to handle
On Thu, Apr 4, 2013 at 2:53 PM, Dimitri Fontaine dimi...@2ndquadrant.fr wrote:
Robert Haas robertmh...@gmail.com writes:
for estimate_worstcase_fraction. So, when computing the cost of a
path, we'd compute our current expected-case estimate, and also a
worst-case estimate, and then compute the final cost as:
On Friday, April 05, 2013 1:59 AM Robert Haas wrote:
On Thu, Apr 4, 2013 at 2:53 PM, Dimitri Fontaine
dimi...@2ndquadrant.fr wrote:
Robert Haas robertmh...@gmail.com writes:
for estimate_worstcase_fraction. So, when computing the cost of a
path, we'd compute our current expected-case estimate, and also a
worst-case estimate.
On Fri, Aug 21, 2009 at 6:54 PM, decibel deci...@decibel.org wrote:
Would it? Risk seems like it would just be something along the lines of
the high-end of our estimate. I don't think confidence should be that hard
either. IE: hard-coded guesses have a low confidence. Something pulled
right
On Aug 20, 2009, at 11:18 PM, Josh Berkus wrote:
I don't think it's a bad idea, I just think you have to set your
expectations pretty low. If the estimates are bad there isn't really
any plan that will be guaranteed to run quickly.
Well, the way to do this is via a risk-confidence system.
There have been a number of planner improvement ideas that have been
thrown out because of the overhead they would add to the planning
process, specifically for queries that would otherwise be quite fast.
Other databases seem to have dealt with this by creating plan caches
(which might be
On Thu, Aug 20, 2009 at 11:15 AM, decibel deci...@decibel.org wrote:
There have been a number of planner improvement ideas that have been thrown
out because of the overhead they would add to the planning process,
specifically for queries that would otherwise be quite fast. Other databases
seem to have dealt with this by creating plan caches
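
A minimal sketch of the plan-cache idea (keyed on query text, with a
hypothetical plan_query() standing in for the expensive planner; a real cache
also has to handle invalidation and parameter-dependent plans):

    # Toy plan cache: pay the planning cost once per distinct query text.
    plan_cache = {}

    def get_plan(query_text, plan_query):
        plan = plan_cache.get(query_text)
        if plan is None:
            plan = plan_query(query_text)  # the expensive (multi-pass) work
            plan_cache[query_text] = plan
        return plan

With a cache like this in front, a slower multi-pass planner only costs
anything on the first execution of each query shape.
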
Robert Haas robertmh...@gmail.com wrote:
I think one of the problems with the planner is that all decisions
are made on the basis of cost. Honestly, it works amazingly well in
a wide variety of situations, but it can't handle things like "we
might as well materialize here", because it doesn't
On Thu, Aug 20, 2009 at 12:55 PM, Kevin Grittner kevin.gritt...@wicourts.gov wrote:
Robert Haas robertmh...@gmail.com wrote:
I think one of the problems with the planner is that all decisions
are made on the basis of cost. Honestly, it works amazingly well in
a wide variety of situations, but it can't handle things like "we
might as well materialize here".
On Thu, Aug 20, 2009 at 6:10 PM, Robert Haas robertmh...@gmail.com wrote:
Maybe. The problem is that we have mostly two cases: an estimate that
we think is pretty good based on reasonable statistics (but may be way
off if there are hidden correlations we don't know about), and a wild
guess.
On Thu, Aug 20, 2009 at 6:28 PM, Greg Stark gsst...@mit.edu wrote:
I don't think it's a bad idea, I just think you have to set your
expectations pretty low. If the estimates are bad there isn't really
any plan that will be guaranteed to run quickly.
Actually this is usually Tom's point when
Greg Stark gsst...@mit.edu wrote:
Say you're deciding between an index scan and a sequential scan. The
sequential scan has a total cost of 1000..1000 but the index scan
has an estimated total cost of 1..1.
My proposal was to use RMS, which would effectively favor the lower
worst case in the example above.
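
Reading RMS as the root mean square of the cost range's endpoints (my
interpretation; the message doesn't spell it out), squaring before averaging
lets a large worst case dominate the combined figure. A sketch with
hypothetical cost ranges:

    from math import sqrt

    def rms_cost(best_case, worst_case):
        # Root mean square of the range endpoints: squaring first means
        # a large worst case dominates the combined cost.
        return sqrt((best_case**2 + worst_case**2) / 2)

    print(rms_cost(1000, 1000))  # 1000.0  -- flat-range sequential scan
    print(rms_cost(1, 10000))    # ~7071.1 -- wide-range index scan loses

A plan whose best case is spectacular but whose worst case is ruinous thus
scores worse than a predictable middle-of-the-road plan.
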
-Original Message-
From: pgsql-hackers-ow...@postgresql.org
[mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Greg Stark
Sent: Thursday, August 20, 2009 10:32 AM
To: Robert Haas
Cc: Kevin Grittner; decibel; Pg Hackers
Subject: Re: [HACKERS] Multi-pass planner
On Thu
I don't think it's a bad idea, I just think you have to set your
expectations pretty low. If the estimates are bad there isn't really
any plan that will be guaranteed to run quickly.
Well, the way to do this is via a risk-confidence system. That is, each
operation has a level of risk
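
One way to flesh this out, following decibel's earlier description that risk
is the high end of the estimate and that hard-coded guesses deserve low
confidence while statistics-backed numbers deserve high confidence (a sketch
under those assumptions, not a worked-out design):

    from dataclasses import dataclass

    @dataclass
    class CostEstimate:
        expected: float    # today's single-number cost
        worst_case: float  # the "risk": high end of the estimate
        confidence: float  # 0..1; hard-coded guess low, stats-backed high

    def risk_adjusted_cost(e: CostEstimate) -> float:
        # The less we trust the estimate, the closer we plan to the worst case.
        return e.expected + (1.0 - e.confidence) * (e.worst_case - e.expected)

A hard-coded default selectivity might carry confidence 0.1 and be planned
almost at its worst case, while an estimate backed by the statistics might
carry 0.9 and be planned near its expected cost.
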