On Wed, Apr 19, 2006 at 01:25:28AM -0400, Tom Lane wrote:
> Mark Kirkwood <[EMAIL PROTECTED]> writes:
> > Jim C. Nasby wrote:
> >> Good point. :/ I'm guessing there's no easy way to see how many blocks
> >> for a given relation are in shared memory, either...
>
> > contrib/pg_buffercache will tell you this -
Tom Lane wrote:
> Mark Kirkwood <[EMAIL PROTECTED]> writes:
>> Jim C. Nasby wrote:
>>> Good point. :/ I'm guessing there's no easy way to see how many blocks
>>> for a given relation are in shared memory, either...
>> contrib/pg_buffercache will tell you this -
> I think the key word in Jim's comment was "easy", ie, cheap.
Mark Kirkwood <[EMAIL PROTECTED]> writes:
> Jim C. Nasby wrote:
>> Good point. :/ I'm guessing there's no easy way to see how many blocks
>> for a given relation are in shared memory, either...
> contrib/pg_buffercache will tell you this -
I think the key word in Jim's comment was "easy", ie, cheap.
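For illustration, a query along these lines (assuming the contrib pg_buffercache view is installed in the current database) gives the per-relation buffer counts Jim is after, though reading the view is not free on a busy system:

  -- Counts shared buffers per relation in the current database.
  -- Assumes contrib/pg_buffercache has been installed here.
  SELECT c.relname, count(*) AS buffers
  FROM pg_buffercache b
       JOIN pg_class c ON c.relfilenode = b.relfilenode
  WHERE b.reldatabase = (SELECT oid FROM pg_database
                         WHERE datname = current_database())
  GROUP BY c.relname
  ORDER BY buffers DESC;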
On Wed, Apr 19, 2006 at 04:47:40PM +1200, Mark Kirkwood wrote:
> Jim C. Nasby wrote:
> >On Tue, Apr 18, 2006 at 06:22:26PM -0400, Tom Lane wrote:
> >>"Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> >>>Actually, if you run with stats_block_level turned on you have a
> >>>first-order approximation of what is and isn't cached.
Jim C. Nasby wrote:
>On Tue, Apr 18, 2006 at 06:22:26PM -0400, Tom Lane wrote:
>>"Jim C. Nasby" <[EMAIL PROTECTED]> writes:
>>>Actually, if you run with stats_block_level turned on you have a
>>>first-order approximation of what is and isn't cached.
>>Only if those stats decayed (pretty fast) with time; which they don't.
Markus Schaber <[EMAIL PROTECTED]> writes:
> An easy first approach would be to add a user tunable cache probability
> value to each index (and possibly table) between 0 and 1. Then simply
> multiply random_page_cost with (1-that value) for each scan.
That's not the way you'd need to use it. But
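For what it's worth, the arithmetic Markus proposes would look roughly like this (the numbers are purely hypothetical, and as Tom notes this is not how the planner would actually need to consume such a value); today the nearest existing lever is setting random_page_cost for a session:

  -- Hypothetical numbers: assumed cache probability 0.9, so
  -- 4.0 * (1 - 0.9) = 0.4.  There is no per-index knob; the closest
  -- existing control is the session-level random_page_cost setting.
  SET random_page_cost = 0.4;
  SHOW random_page_cost;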
On Tue, Apr 18, 2006 at 06:26:48PM -0400, Tom Lane wrote:
> Markus Schaber <[EMAIL PROTECTED]> writes:
> > Hmm, how does effective_cache_size correspond with it? Shouldn't a high
> > effective_cache_size have a similar effect?
>
> It seems reasonable to suppose that effective_cache_size ought to be
On Tue, Apr 18, 2006 at 06:22:26PM -0400, Tom Lane wrote:
> "Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> > Actually, if you run with stats_block_level turned on you have a
> > first-order approximation of what is and isn't cached.
>
> Only if those stats decayed (pretty fast) with time; which they don't.
Markus Schaber <[EMAIL PROTECTED]> writes:
> Hmm, how does effective_cache_size correspond with it? Shouldn't a high
> effective_cache_size have a similar effect?
It seems reasonable to suppose that effective_cache_size ought to be
used as a number indicating how much "stuff" would hang around from
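As a point of reference (the value below is a placeholder, not a recommendation), effective_cache_size only feeds the planner's cost estimates, and in 8.1 it is expressed in 8kB pages, so it can be adjusted and inspected per session:

  -- Placeholder value: 100000 pages of 8kB is roughly 780MB.
  -- This changes cost estimates only; no memory is allocated by it.
  SET effective_cache_size = 100000;
  SHOW effective_cache_size;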
"Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> Actually, if you run with stats_block_level turned on you have a
> first-order approximation of what is and isn't cached.
Only if those stats decayed (pretty fast) with time; which they don't.
regards, tom lane
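For reference, the block-level statistics Jim is referring to show up in the pg_statio views; the counters are cumulative since the last stats reset, which is exactly Tom's point about the lack of decay. A rough per-table hit ratio can be read like this:

  -- Cumulative block-level hit ratio per table (needs stats_block_level = on).
  -- Counters never decay, so this mixes ancient and recent history.
  SELECT relname,
         heap_blks_hit,
         heap_blks_read,
         round(heap_blks_hit::numeric
               / nullif(heap_blks_hit + heap_blks_read, 0), 3) AS hit_ratio
  FROM pg_statio_user_tables
  ORDER BY heap_blks_hit + heap_blks_read DESC;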
On Tue, Apr 18, 2006 at 12:51:59PM +0200, Markus Schaber wrote:
> > In my mind this is tied into another issue, which is that the planner
> > always costs on the basis of each query starting from zero. In a real
> > environment it's much cheaper to use heavily-used indexes than this cost
> > model
Hi, Tom,
Tom Lane wrote:
> Well, the other thing that's going on here is that we know we are
> overestimating the cost of nestloop-with-inner-indexscan plans.
> The current estimation for that is basically "outer scan cost plus N
> times inner scan cost" where N is the estimated number of outer tuples.
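A toy calculation (all numbers invented) makes the shape of that estimate concrete, and shows why it overcharges when the inner index is already cached after the first few probes:

  -- Invented numbers, illustration only: the nestloop estimate is roughly
  --   cost(outer scan) + N_outer * cost(one inner index probe),
  -- i.e. every probe is charged as if it started from a cold cache.
  SELECT 100.0               AS outer_scan_cost,
         10000               AS estimated_outer_tuples,
         4.5                 AS cost_per_inner_probe,
         100.0 + 10000 * 4.5 AS estimated_nestloop_cost;  -- 45100.0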
Ian Westmacott <[EMAIL PROTECTED]> writes:
> That's what I feared, thanks. But then if I simplify things a bit,
> such that the row counts are quite good, and PG still chooses a
> worse plan, can I conclude anything about my configuration settings
> (like random_page_cost)?
Well, the other thing that's going on here is that we know we are
overestimating the cost of nestloop-with-inner-indexscan plans.
On Fri, 2006-04-14 at 12:13 -0400, Tom Lane wrote:
> Ian Westmacott <[EMAIL PROTECTED]> writes:
> > I have this query, where PG (8.1.2) prefers Merge Join over Hash Join
> > over Nested Loop. However, actual performance turns out to increase in
> > that order. I was hoping someone might be able to shed some light on
Ian Westmacott <[EMAIL PROTECTED]> writes:
> I have this query, where PG (8.1.2) prefers Merge Join over Hash Join
> over Nested Loop. However, actual performance turns out to increase in
> that order. I was hoping someone might be able to shed some light on
> why PG chooses the plans in this order, and what I might do to
I have this query, where PG (8.1.2) prefers Merge Join over Hash Join
over Nested Loop. However, actual performance turns out to increase in
that order. I was hoping someone might be able to shed some light on
why PG chooses the plans in this order, and what I might do to
influence it otherwise. Thanks.
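One way to see what each join strategy actually costs on a query like this (the table and column names below are placeholders, not Ian's schema) is to disable the preferred methods one at a time and compare the EXPLAIN ANALYZE output:

  -- Placeholder tables; substitute the real query.
  EXPLAIN ANALYZE
  SELECT * FROM t_outer o JOIN t_inner i ON i.id = o.inner_id;  -- planner's choice

  SET enable_mergejoin = off;
  EXPLAIN ANALYZE
  SELECT * FROM t_outer o JOIN t_inner i ON i.id = o.inner_id;  -- without merge join

  SET enable_hashjoin = off;
  EXPLAIN ANALYZE
  SELECT * FROM t_outer o JOIN t_inner i ON i.id = o.inner_id;  -- nested loop remains

  RESET enable_mergejoin;
  RESET enable_hashjoin;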