On Fri, Apr 19, 2013 at 7:43 PM, Jeff Janes wrote:
> On Fri, Apr 19, 2013 at 2:24 PM, Claudio Freire
> wrote:
>>
>>
>> Especially if there's some locality of occurrence, since analyze
>> samples pages, not rows.
>
>
> But it doesn't take all rows in each sampled page. It generally takes about
> one row per page, specifically to avoid the problem you indicate.
On Fri, Apr 19, 2013 at 2:24 PM, Claudio Freire wrote:
>
> Especially if there's some locality of occurrence, since analyze
> samples pages, not rows.
>
But it doesn't take all rows in each sampled page. It generally takes
about one row per page, specifically to avoid the problem you indicate.
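A minimal sketch of the two-stage sampling being described (this is not the actual analyze.c code; the page layout and numbers are invented): pick a random set of pages, then keep roughly one row per visited page, so values clustered on the same page cannot dominate the sample.

    import random

    def sample_rows(pages, target_rows, seed=0):
        # Stage 1: choose which pages to visit (page-level sampling).
        rng = random.Random(seed)
        chosen = rng.sample(range(len(pages)), min(target_rows, len(pages)))
        sample = []
        # Stage 2: take about one row from each visited page, so locality
        # within a page is not over-represented in the sample.
        for pageno in chosen:
            rows = pages[pageno]
            if rows:
                sample.append(rng.choice(rows))
        return sample

    # Usage: 100 pages of 50 rows each, every row on a page sharing a value
    # (strong locality); the sample still ends up with one row per page.
    pages = [[("val%d" % p, r) for r in range(50)] for p in range(100)]
    print(len(sample_rows(pages, target_rows=30)))   # -> 30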
On Thu, Apr 4, 2013 at 11:53 AM, Dimitri Fontaine wrote:
> Robert Haas writes:
> > for estimate_worstcase_fraction. So, when computing the cost of a
> > path, we'd compute our current expected-case estimate, and also a
> > worst-case estimate, and then compute the final cost as:
>
> There also was the idea for the executor to be able to handle alternate
> plans and
On Fri, Apr 19, 2013 at 6:19 PM, Jeff Janes wrote:
> On Wed, Apr 3, 2013 at 6:40 PM, Greg Stark wrote:
>>
>>
>> On Fri, Aug 21, 2009 at 6:54 PM, decibel wrote:
>>>
>>> Would it? Risk seems like it would just be something along the lines of
>>> the high-end of our estimate. I don't think confidence should be that hard
>>> either. IE: hard-coded guesses have a low confidence. Something pulled
>>> right out of most_commo
On Wed, Apr 3, 2013 at 6:40 PM, Greg Stark wrote:
>
> On Fri, Aug 21, 2009 at 6:54 PM, decibel wrote:
>
>> Would it? Risk seems like it would just be something along the lines of
>> the high-end of our estimate. I don't think confidence should be that hard
>> either. IE: hard-coded guesses have a low confidence. Something pulled
>> right out of most_commo
On Friday, April 05, 2013 1:59 AM Robert Haas wrote:
> On Thu, Apr 4, 2013 at 2:53 PM, Dimitri Fontaine
> wrote:
> > Robert Haas writes:
> >> for estimate_worstcase_fraction. So, when computing the cost of a
> >> path, we'd compute our current expected-case estimate, and also a
>> worst-case estimate, and then compute the final cost as:
On Thu, Apr 4, 2013 at 2:53 PM, Dimitri Fontaine wrote:
> Robert Haas writes:
>> for estimate_worstcase_fraction. So, when computing the cost of a
>> path, we'd compute our current expected-case estimate, and also a
>> worst-case estimate, and then compute the final cost as:
>
> There also was the idea for the executor to be able to handle alternate
> plans and
Robert Haas writes:
> for estimate_worstcase_fraction. So, when computing the cost of a
> path, we'd compute our current expected-case estimate, and also a
> worst-case estimate, and then compute the final cost as:
There also was the idea for the executor to be able to handle alternate
plans and
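The formula itself is cut off above; a minimal sketch of one plausible reading, treating estimate_worstcase_fraction as a knob between 0 and 1 (the exact blend and the example numbers are assumptions, not the original text):

    def blended_cost(expected_cost, worstcase_cost, estimate_worstcase_fraction=0.1):
        # Linear blend of the expected-case and worst-case estimates; a plan
        # that is cheap on average but terrible in the worst case loses ground
        # as the fraction grows.
        f = estimate_worstcase_fraction
        return (1.0 - f) * expected_cost + f * worstcase_cost

    print(blended_cost(100.0, 100000.0))   # -> 10090.0 (risky plan)
    print(blended_cost(1000.0, 1200.0))    # -> 1020.0  (dull but safe plan)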
On Wed, Apr 3, 2013 at 9:40 PM, Greg Stark wrote:
> I used to advocate a similar idea. But when questioned on list I tried to
> work out the details and ran into some problem coming up with a concrete
> plan.
>
> How do you compare a plan that you think has a 99% chance of running in 1ms
> but a 1
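The difficulty Greg raises can be made concrete with a toy calculation (the 10-second figure and plan B are invented, since the message is truncated above): two plans can have almost the same expected runtime and still be miles apart on the worst case, so a single cost number cannot express the preference.

    # (probability, runtime in seconds) outcomes for two hypothetical plans.
    plan_a = [(0.99, 0.001), (0.01, 10.0)]   # almost always 1 ms, rarely 10 s
    plan_b = [(1.00, 0.100)]                 # a flat 100 ms

    def expected(plan):
        return sum(p * t for p, t in plan)

    def worst(plan):
        return max(t for _, t in plan)

    print(expected(plan_a), worst(plan_a))   # -> roughly 0.101 mean, 10.0 worst
    print(expected(plan_b), worst(plan_b))   # -> 0.1 mean, 0.1 worst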
On Fri, Aug 21, 2009 at 6:54 PM, decibel wrote:
> Would it? Risk seems like it would just be something along the lines of
> the high-end of our estimate. I don't think confidence should be that hard
> either. IE: hard-coded guesses have a low confidence. Something pulled
> right out of most_commo
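One way to read that, sketched with invented numbers and field names: carry a pessimistic high end next to each estimate and tag it with a confidence that depends on where the statistic came from.

    from dataclasses import dataclass

    @dataclass
    class Selectivity:
        expected: float     # the planner's usual point estimate
        high_end: float     # pessimistic end of the range (the "risk")
        confidence: float   # 0..1, set by the source of the statistic

    # Backed by most-common-values statistics: tight range, trusted.
    mcv_match = Selectivity(expected=0.02, high_end=0.03, confidence=0.9)
    # Hard-coded default guess for an opaque expression: wide and shaky.
    default_guess = Selectivity(expected=0.005, high_end=0.5, confidence=0.1)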
On Aug 20, 2009, at 11:18 PM, Josh Berkus wrote:
>> I don't think it's a bad idea, I just think you have to set your
>> expectations pretty low. If the estimates are bad there isn't really
>> any plan that will be guaranteed to run quickly.
> Well, the way to do this is via a risk-confidence system. That
> I don't think it's a bad idea, I just think you have to set your
> expectations pretty low. If the estimates are bad there isn't really
> any plan that will be guaranteed to run quickly.
Well, the way to do this is via a risk-confidence system. That is, each
operation has a level of risk assigned
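A minimal sketch of how such a per-operation risk/confidence pair might feed into a comparison; the combination rule below is invented purely to make the idea concrete, not anything proposed in the thread.

    def risk_adjusted_cost(expected_cost, high_end_cost, confidence,
                           risk_aversion=0.5):
        # The less confident the estimate, the more of the gap up to the
        # high-end cost gets charged against the plan.
        penalty = risk_aversion * (1.0 - confidence) * (high_end_cost - expected_cost)
        return expected_cost + penalty

    print(risk_adjusted_cost(100.0, 120.0, confidence=0.9))   # -> 101.0
    print(risk_adjusted_cost(80.0, 5000.0, confidence=0.1))   # -> 2294.0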
But the concept was pretty simple and as described above.
> -----Original Message-----
> From: pgsql-hackers-ow...@postgresql.org
> [mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Greg Stark
> Sent: Thursday, August 20, 2009 10:32 AM
> To: Robert Haas
> Cc: Kevin Grittner
Greg Stark wrote:
> Say you're deciding between an index scan and a sequential scan. The
> sequential scan has a total cost of 1000..1000 but the index scan
> has an estimated total cost of 1..1.
My proposal was to use RMS, which would effectively favor lower worst
case behavior. Specific
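The RMS suggestion can be illustrated with made-up cost ranges (invented figures, not the ones quoted above): averaging the squares of the optimistic and pessimistic estimates punishes a wide range far more than an ordinary mean would.

    import math

    def rms_cost(low, high):
        # Root-mean-square of the optimistic and pessimistic cost estimates.
        return math.sqrt((low * low + high * high) / 2.0)

    # A scan with a flat, certain cost versus one that is usually cheap but
    # could blow up.
    print(rms_cost(1000.0, 1000.0))   # -> 1000.0 (no uncertainty, unchanged)
    print(rms_cost(1.0, 10000.0))     # -> ~7071.1, far above its midpoint of 5000.5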
On Thu, Aug 20, 2009 at 6:28 PM, Greg Stark wrote:
> I don't think it's a bad idea, I just think you have to set your
> expectations pretty low. If the estimates are bad there isn't really
> any plan that will be guaranteed to run quickly.
Actually this is usually Tom's point when this topic comes up
On Thu, Aug 20, 2009 at 6:10 PM, Robert Haas wrote:
> Maybe. The problem is that we have mostly two cases: an estimate that
> we think is pretty good based on reasonable statistics (but may be way
> off if there are hidden correlations we don't know about), and a wild
> guess. Also, it doesn't te
On Thu, Aug 20, 2009 at 12:55 PM, Kevin Grittner wrote:
> Robert Haas wrote:
>
>> I think one of the problems with the planner is that all decisions
>> are made on the basis of cost. Honestly, it works amazingly well in
>> a wide variety of situations, but it can't handle things like "we
>> might as well materialize here, because it doesn't cost much and
Robert Haas wrote:
> I think one of the problems with the planner is that all decisions
> are made on the basis of cost. Honestly, it works amazingly well in
> a wide variety of situations, but it can't handle things like "we
> might as well materialize here, because it doesn't cost much and
>
On Thu, Aug 20, 2009 at 11:15 AM, decibel wrote:
> There have been a number of planner improvement ideas that have been thrown
> out because of the overhead they would add to the planning process,
> specifically for queries that would otherwise be quite fast. Other databases
> seem to have dealt with this by creating plan caches (which might be w
There have been a number of planner improvement ideas that have been
thrown out because of the overhead they would add to the planning
process, specifically for queries that would otherwise be quite fast.
Other databases seem to have dealt with this by creating plan caches
(which might be w
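A minimal sketch of the plan-cache idea (the names and the keying scheme here are invented; PostgreSQL's actual prepared-statement plan caching is more involved): pay the expensive planning work once per normalized query text and reuse the result on later executions.

    plan_cache = {}

    def get_plan(query_text, plan_fn):
        # Cache plans keyed by crudely normalized query text so an expensive
        # planning pass runs once, not on every execution of a hot query.
        key = " ".join(query_text.lower().split())
        if key not in plan_cache:
            plan_cache[key] = plan_fn(query_text)   # the costly planning step
        return plan_cache[key]

    # Usage: the second call returns the cached plan without re-planning.
    get_plan("SELECT * FROM t WHERE id = $1", plan_fn=lambda q: ("Index Scan", q))
    get_plan("select * from t where id = $1", plan_fn=lambda q: ("Index Scan", q))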