Hi Scott,
Scott Marlowe smarlowe 'at' g2switchworks.com writes:
On Wed, 2006-03-22 at 02:04, Guillaume Cottenceau wrote:
[...]
Yes, we actually use 7.4.5, because it just works; production
wants to deal with all the things that don't work before
upgrading. I have recently
On Mon, Mar 20, 2006 at 09:35:14AM +0100, Guillaume Cottenceau wrote:
shared_buffers = 12000
effective_cache_size = 25000
This would mean you are reserving 100M for Postgres to cache relation
pages, and informing the planner that it can expect ~200M available
from the disk buffer cache
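Both settings are counts of buffer pages, so the ~100M and ~200M figures can be checked with a quick conversion (a sketch; it assumes the default 8 kB PostgreSQL block size):

```python
# Convert the settings above from buffer pages to megabytes.
# Assumes the default PostgreSQL block size of 8192 bytes.
BLOCK_SIZE = 8192

shared_buffers = 12000        # pages reserved for Postgres' own cache
effective_cache_size = 25000  # pages the planner assumes the OS caches

print(shared_buffers * BLOCK_SIZE / 1024 ** 2)        # 93.75 MB, i.e. ~100M
print(effective_cache_size * BLOCK_SIZE / 1024 ** 2)  # 195.3125 MB, i.e. ~200M
```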
Jim C. Nasby jnasby 'at' pervasive.com writes:
[...]
My point is that the planner's cost estimate is way above the
actual cost of the query, so the planner doesn't choose the best
plan. Even if the index returns that many rows, the actual cost of
the query is such that the index scan (worst case, all
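The gap between estimate and reality can be large: in the worst case the planner assumes every row fetched through the index needs its own random page read. A rough sketch with the row and page counts that appear later in this thread (random_page_cost at its default of 4.0 is an assumption here, not a value stated in the messages):

```python
# Rough sketch of the worst case the planner guards against: every row
# fetched through the index costs one random heap page read.
# random_page_cost = 4.0 is the planner's default (assumed, not from the thread).
rows = 3_692_284   # row versions reported for the index in this thread
pages = 88_057     # pages reported for the index in this thread
random_page_cost = 4.0

worst_case_io = rows * random_page_cost  # one random read per row
sequential_io = pages * 1.0              # seq_page_cost is 1.0 by definition

print(worst_case_io / sequential_io)     # ~168x the sequential figure
```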
On Tue, Mar 21, 2006 at 10:40:45PM +1200, Mark Kirkwood wrote:
I was going to recommend higher - but not knowing what else was running,
I kept it quite conservative :-)... and given he's running java, the
JVM could easily eat 512M all by itself!
Oh, didn't pick up on java being in the mix.
On Tue, Mar 21, 2006 at 02:30:22PM +0100, Guillaume Cottenceau wrote:
Jim C. Nasby jnasby 'at' pervasive.com writes:
If you feel like running some tests, you need to change
run_cost += max_IO_cost + csquared * (min_IO_cost - max_IO_cost);
in src/backend/optimizer/path/costsize.c
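That line interpolates between the best-case and worst-case I/O cost using the squared index correlation (csquared). A minimal sketch of the behaviour, with illustrative numbers rather than values from the thread:

```python
def index_io_cost(min_io_cost, max_io_cost, correlation):
    """Blend between the best case (perfectly ordered index, min_io_cost)
    and the worst case (uncorrelated index, max_io_cost), as in the
    run_cost line from cost_index() in costsize.c."""
    csquared = correlation * correlation
    return max_io_cost + csquared * (min_io_cost - max_io_cost)

# Illustrative numbers only:
print(index_io_cost(100.0, 10000.0, 0.0))  # 10000.0 -> worst case
print(index_io_cost(100.0, 10000.0, 1.0))  # 100.0   -> best case
print(index_io_cost(100.0, 10000.0, 0.5))  # 7525.0  -> still near worst case
```

Note that because the correlation is squared, even a moderately correlated index stays much closer to the worst-case cost than to the best case.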
On Fri, 2006-03-17 at 11:09 +0100, Guillaume Cottenceau wrote:
INFO: index idx_sent_msgs_date_theme_status now contains 3692284 row
versions in 88057 pages
SET effective_cache_size = 1;
SET effective_cache_size 88057, round up to 10
to ensure the index cost calculation knows the
Guillaume,
On 17 Mar 2006 11:09:50 +0100, Guillaume Cottenceau
wrote:
Reading the documentation and postgresql list archives, I have
run ANALYZE right before my tests, I have increased the
statistics target to 50 for the considered table; my problem is
that the index scan cost reported by
Guillaume Cottenceau wrote:
SET random_page_cost = 2;
SET effective_cache_size = 1;
EXPLAIN SELECT * FROM sent_messages WHERE date > '2005-09-01' AND date
< '2005-09-19';
QUERY PLAN
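Setting effective_cache_size = 1 as above pushes the planner's page-fetch estimate to its worst case. A sketch of the Mackert-Lohman approximation the planner uses for this estimate (following the formula described in costsize.c's cost_index(); the variable names T and b are mine, for table pages and assumed cache pages):

```python
def pages_fetched(tuples, T, b):
    """Mackert-Lohman estimate of distinct heap pages fetched when reading
    `tuples` rows from a table of T pages, given an assumed cache of b pages
    (effective_cache_size). Sketch of the formula used in costsize.c."""
    if T <= b:
        return min(2.0 * T * tuples / (2.0 * T + tuples), T)
    lim = 2.0 * T * b / (2.0 * T - b)
    if tuples <= lim:
        return 2.0 * T * tuples / (2.0 * T + tuples)
    return b + (tuples - lim) * (T - b) / T

# With effective_cache_size = 1, nearly every row costs one page read:
print(pages_fetched(100_000, 88_057, 1))       # ~100000 -> worst case
# With the 25000-page setting from the thread, far fewer reads are assumed:
print(pages_fetched(100_000, 88_057, 25_000))  # ~75700
```

This is why effective_cache_size = 1 is useful for experiments: it makes the estimate model a completely cold cache.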
Hi,
I have a problem with the postgres planner, which assigns index
scans a cost much higher than their actual cost (even in the worst
case, i.e. without any previous disk cache), and am posting here
for advice on tweaking the cost constants. Because of this
problem, the planner typically chooses a plan that is slower in
practice.