The execution time has not improved. I am going to increase the
shared_buffers now keeping the work_mem same.
Have you performed a vacuum analyze?
--
regards
Claus
When lenity and cruelty play for a kingdom,
the gentler gamester is the soonest winner.
Shakespeare
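For reference, the two settings being juggled here live in postgresql.conf. This is only a sketch with illustrative values, since the thread never states the actual numbers used on this 4 GB machine:

```
# postgresql.conf -- illustrative values, not the poster's actual settings
shared_buffers = 1GB   # being raised in the experiment above; requires a restart
work_mem = 4MB         # held constant (the 8.3 default is 1MB)
```

Note that changing shared_buffers requires a server restart, while work_mem can also be changed per session.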
On Thu, 26 Feb 2009 09:00:07 +0100, Claus Guttesen kome...@gmail.com wrote:
The execution time has not improved. I am going to increase the
shared_buffers now keeping the work_mem same.
Have you performed a vacuum analyze?
and reindex
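Spelled out in psql, the maintenance steps suggested here look like this (the table name is a placeholder, not one from the thread):

```sql
VACUUM ANALYZE;            -- reclaim dead rows and refresh planner statistics
REINDEX TABLE some_table;  -- rebuild that table's indexes (placeholder name)
```

Be aware that REINDEX takes an exclusive lock on the table while it runs.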
--
Regards,
Gábriel Ákos
On Wed, Feb 25, 2009 at 4:10 PM, Kevin Grittner kevin.gritt...@wicourts.gov wrote:
Farhan Husain russ...@gmail.com wrote:
The machine postgres is running on has 4 GB of RAM.
In addition to the other suggestions, you should be sure that effective_cache_size is set to a reasonable value.
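effective_cache_size is set in postgresql.conf; it allocates nothing and is only a hint telling the planner how much of the data it can expect to find cached. A common rule of thumb (an assumption here, not a value given in the thread) is roughly 2/3 to 3/4 of RAM on a dedicated server:

```
# postgresql.conf -- illustrative for a dedicated 4 GB box
effective_cache_size = 3GB
```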
On Wed, Feb 25, 2009 at 6:07 PM, Scott Carey sc...@richrelevance.com wrote:
I will second Kevin’s suggestion. Unless you think you will have more
than a few dozen concurrent queries, start with work_mem around 32MB.
For the query here, a very large work_mem might help it hash join depending
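The 32MB starting point goes in postgresql.conf:

```
# postgresql.conf -- per Scott's suggestion
work_mem = 32MB   # per sort/hash operation, not per connection
```

Since work_mem is allocated per sort or hash operation, many concurrent queries multiply it. A single heavy query can be given more in its own session with `SET work_mem = '256MB';` (the larger value is illustrative, not from the thread).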
Farhan Husain russ...@gmail.com wrote:
Thanks a lot Scott! I think that was the problem. I just changed the
default statistics target to 50 and ran explain. The plan changed
and I ran explain analyze. Now it takes a fraction of a second!
Yeah, the default of 10 has been too low. In 8.4 it is being raised to 100.
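The sequence Farhan describes can be reproduced in a single psql session; the SELECT is a stand-in for the actual query, which the thread does not show:

```sql
SET default_statistics_target = 50;  -- session-level override
ANALYZE;                             -- regather statistics at the new target
EXPLAIN ANALYZE SELECT ...;          -- stand-in for the real query; re-check the plan
```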
On Thu, Feb 26, 2009 at 12:10 PM, Steve Clark scl...@netwolves.com wrote:
Can this be set in the postgresql.conf file?
default_statistics_target = 50
Yep. It will take effect after a reload and after the current
connection has been reset.
If you want to, you can also set a default for a particular database.
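A per-database default can be set with ALTER DATABASE; the database name here is a placeholder:

```sql
-- applies to new sessions connecting to that database
ALTER DATABASE mydb SET default_statistics_target = 50;
```

Individual columns can also be overridden with `ALTER TABLE ... ALTER COLUMN ... SET STATISTICS n;` for tables where the global target is not enough.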