On Feb 17, 2009, at 11:23 PM, Robert Haas wrote:
Actually, a simple algorithm that might work really well would be to
calculate relation cache odds as ( number of page accesses for relation /
number of page accesses for all relations ) * ( sum(relpages)*BLKSZ /
eff_cache_size ), where number
On Fri, Feb 20, 2009 at 7:25 PM, decibel deci...@decibel.org wrote:
On Feb 17, 2009, at 11:23 PM, Robert Haas wrote:
Actually, a simple algorithm that might work really well would be to
calculate relation cache odds as ( number of page accesses for relation /
number of page accesses for all
On Wed, Feb 18, 2009 at 1:34 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
I'm interested to know whether anyone else shares my belief that
nested loops are the cause of most really bad plans. What usually
happens to me is that the planner develops some
On Wed, Feb 18, 2009 at 01:34:25AM -0500, Tom Lane wrote:
Robert Haas robertmh...@gmail.com writes:
I'm interested to know whether anyone else shares my belief that
nested loops are the cause of most really bad plans. What usually
happens to me is that the planner develops some unwarranted
If the planning was done with some sort of interval then you'd be
able to encode information about how well your stats characterized the
underlying data. Traditionally awkward things like amount of cache
would serve to drop the lower bound, but not alter the upper. The
planner then
Robert Haas robertmh...@gmail.com writes:
Yeah, I thought about this too, but it seems like overkill for the
problem at hand, and as you say it's not clear you'd get any benefit
out of the upper bound anyway. I was thinking of something simpler:
instead of directly multiplying 0.005 into the
On Wed, Feb 18, 2009 at 11:46 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
Yeah, I thought about this too, but it seems like overkill for the
problem at hand, and as you say it's not clear you'd get any benefit
out of the upper bound anyway. I was thinking
On Wed, 2009-02-18 at 07:50 -0500, Robert Haas wrote:
(Now it appears that Josh is having problems that are caused by
overestimating the cost of a page fetch, perhaps due to caching
effects. Those are discussed upthread, and I'm still interested to
see whether we can arrive at any sort of
Robert Haas wrote:
experience, most bad plans are caused by bad selectivity estimates,
and the #1 source of bad selectivity estimates is selectivity
estimates for unknown expressions.
ISTM unknown expressions should be modeled as a range of
values rather than one single arbitrary value.
For
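As a hedged sketch of that range idea (hypothetical C types; nothing like
this exists in the planner), an unknown expression could carry a whole
credible interval instead of a single arbitrary point estimate, with the
bounds propagating through AND:

/*
 * Hypothetical sketch only; not planner code.
 */
typedef struct SelectivityRange
{
    double lo;      /* optimistic bound */
    double hi;      /* pessimistic bound */
} SelectivityRange;

/* An unknown expression admits almost any selectivity. */
static SelectivityRange
unknown_expr_selectivity(void)
{
    SelectivityRange r = { 0.0001, 1.0 };
    return r;
}

/* ANDing two clauses assumed independent multiplies both bounds. */
static SelectivityRange
selec_range_and(SelectivityRange a, SelectivityRange b)
{
    SelectivityRange r = { a.lo * b.lo, a.hi * b.hi };
    return r;
}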
On Wed, Feb 18, 2009 at 2:46 PM, Ron Mayer
rm...@cheapcomplexdevices.com wrote:
Robert Haas wrote:
experience, most bad plans are caused by bad selectivity estimates,
and the #1 source of bad selectivity estimates is selectivity
estimates for unknown expressions.
ISTM unknown expressions
Robert Haas robertmh...@gmail.com writes:
... At any rate, we'd need to save quite
a bit to pay for carting around best and worst case costs for every
plan we consider.
Another problem with this is it doesn't really do anything to solve the
problem we were just discussing, namely having an
On Wed, Feb 18, 2009 at 3:32 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
... At any rate, we'd need to save quite
a bit to pay for carting around best and worst case costs for every
plan we consider.
Another problem with this is it doesn't really do
On Wed, 2009-02-18 at 15:32 -0500, Tom Lane wrote:
An idea that I think has been mentioned before is to try to identify
cases where we can *prove* there is at most one row emitted by a
sub-path (eg, because of a unique index, DISTINCT subplan, etc). Then
we could penalize nestloops with
Simon Riggs si...@2ndquadrant.com writes:
On Wed, 2009-02-18 at 15:32 -0500, Tom Lane wrote:
An idea that I think has been mentioned before is to try to identify
cases where we can *prove* there is at most one row emitted by a
sub-path (eg, because of a unique index, DISTINCT subplan, etc).
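One way to read that proposal, as a hedged sketch (hypothetical names, not
actual costsize.c code): trust a one-row inner-side estimate only when it
is provable, and otherwise apply a floor so an unproven underestimate
cannot make the nestloop look artificially cheap.

#include <stdbool.h>

/*
 * Hypothetical sketch; provably_at_most_one would come from a unique
 * index, a DISTINCT subplan, etc.
 */
static double
nestloop_inner_rows_for_costing(double estimated_rows,
                                bool provably_at_most_one)
{
    if (provably_at_most_one)
        return 1.0;     /* guaranteed, so safe to trust */

    /* Floor unproven estimates so optimism is penalized. */
    return (estimated_rows < 2.0) ? 2.0 : estimated_rows;
}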
On Feb 15, 2009, at 9:54 PM, Robert Haas wrote:
On Sun, Feb 15, 2009 at 1:16 PM, Greg Smith gsm...@gregsmith.com
wrote:
On Fri, 13 Feb 2009, Robert Haas wrote:
This seems plausible, but I'm not totally sold: predicting the
contents of the operating system buffer cache sounds like it might be
Actually, a simple algorithm that might work really well would be to
calculate relation cache odds as ( number of page accesses for relation /
number of page accesses for all relations ) * ( sum(relpages)*BLKSZ /
eff_cache_size ), where number of page accesses would be both from relcache
and
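The quoted formula, as a rough C sketch (hypothetical names; the access
counters would presumably come from pgstats, and a real version would
clamp the result into [0, 1]):

/*
 * cache odds = ( rel accesses / all accesses )
 *            * ( sum(relpages)*BLKSZ / effective_cache_size )
 */
static double
relation_cache_odds(double rel_page_accesses,
                    double all_page_accesses,
                    double total_relation_bytes,   /* sum(relpages) * BLKSZ */
                    double effective_cache_bytes)  /* effective_cache_size */
{
    if (all_page_accesses <= 0.0 || effective_cache_bytes <= 0.0)
        return 0.0;

    return (rel_page_accesses / all_page_accesses)
         * (total_relation_bytes / effective_cache_bytes);
}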
Robert Haas robertmh...@gmail.com writes:
I'm interested to know whether anyone else shares my belief that
nested loops are the cause of most really bad plans. What usually
happens to me is that the planner develops some unwarranted optimism
about the number of rows likely to be generated by
On Fri, 13 Feb 2009, Robert Haas wrote:
Gather statistics on relation access patterns and use that to estimate
the fraction of a relation likely to be in cache.
At one point I had a hacked background writer that collected statistics
about the contents of the buffer cache. Since it's
Greg Smith gsm...@gregsmith.com wrote:
have a second count that assumes the last
2*shared_buffers evicted are also still cached.
Perhaps it would be better to assume that the external cache is
effective_cache_size - shared_buffers? Of course, we would need to
have some heuristics to cover
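The two heuristics in this exchange, side by side as a hypothetical sketch
(page counts, not bytes; neither helper exists anywhere in the code):

/* Assume the last 2*shared_buffers evicted pages are still warm. */
static long
still_cached_by_eviction_window(long shared_buffers_pages)
{
    return 2 * shared_buffers_pages;
}

/* Assume the external cache is effective_cache_size - shared_buffers. */
static long
still_cached_by_external_cache(long effective_cache_size_pages,
                               long shared_buffers_pages)
{
    long external = effective_cache_size_pages - shared_buffers_pages;

    return (external > 0) ? external : 0;
}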
On Sun, Feb 15, 2009 at 1:16 PM, Greg Smith gsm...@gregsmith.com wrote:
On Fri, 13 Feb 2009, Robert Haas wrote:
Gather statistics on relation access patterns and use that to estimate the
fraction of a relation likely to be in cache.
At one point I had a hacked background writer that collected
--On Thursday, February 12, 2009 16:06:31 -0800 Joshua D. Drake
j...@commandprompt.com wrote:
However, in recent times I have found that increasing cpu_tuple_cost,
cpu_operator_cost and cpu_index_tuple_cost can be very useful. This is
always in the scenario of: queries were running fine for
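For scale: the stock defaults cost a sequential page fetch at 1.0 against
cpu_tuple_cost = 0.01, i.e. one page fetch equals 100 tuples of CPU work.
Raising the cpu_* knobs narrows that gap, which is one way to model a
mostly cached workload. A back-of-envelope sketch (the raised value is
purely illustrative):

#include <stdio.h>

int
main(void)
{
    double seq_page_cost = 1.0;     /* default */
    double cpu_tuple_cost = 0.01;   /* default */

    /* one page fetch "costs" as much as this many tuples */
    printf("default ratio: %.0f\n", seq_page_cost / cpu_tuple_cost);

    cpu_tuple_cost = 0.05;          /* hypothetical raised value */
    printf("raised ratio:  %.0f\n", seq_page_cost / cpu_tuple_cost);

    return 0;
}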
Yet more arguments to let PostgreSQL estimate those automatically.
On Fri, 2009-02-13 at 20:10 +0000, Grzegorz Jaskiewicz wrote:
Yet more arguments to let PostgreSQL estimate those automatically.
Well, I haven't seen any arguments actually, which was the point of my
original question. I don't think anyone actually knows what these knobs
change in practice.
On Thu, 2009-02-12 at 16:06 -0800, Joshua D. Drake wrote:
Hello,
I was helping a customer today with what is becoming a common theme with
a lot of work we do. Basically, it was working fine until recently.
Now 90% of the time it is as simple as running an ANALYZE VERBOSE and
picking apart
On Fri, Feb 13, 2009 at 3:27 PM, Joshua D. Drake j...@commandprompt.com wrote:
On Fri, 2009-02-13 at 20:10 +0000, Grzegorz Jaskiewicz wrote:
Yet more arguments to let PostgreSQL estimate those automatically.
Well, I haven't seen any arguments actually, which was the point of my
original
Hello,
I was helping a customer today with what is becoming a common theme with
a lot of work we do. Basically, it was working fine until recently.
Now 90% of the time it is as simple as running an ANALYZE VERBOSE and
picking apart relations that aren't being maintained properly and adjust