Werner Fraga wrote:
Certain queries on my database get slower after running a VACUUM ANALYZE. Why would this happen, and how can I fix it?
I am running PostgreSQL 7.4.2 (I have also seen this problem on versions 7.3 and 8.0)
Here is a sample query that exhibits this behaviour (here the query goes from 1 second before VACUUM ANALYZE to 2 seconds after; there are other queries that go from 20 seconds before to 800 seconds after):
First, try to attach your EXPLAIN ANALYZE output as a text file attachment, rather than inline, to prevent wrapping and make it easier to read.
Second, the problem is that it *is* getting a more accurate estimate of the number of rows that will be returned. Compare:
Plan 1:
-----------------------------------------------------------------------------------------------------------------------------------------------------------------
 Nested Loop Left Join  (cost=3974.74..48055.42 rows=79 width=8) (actual time=359.751..1136.165 rows=1518 loops=1)
The planner was expecting 79 rows, but actually got 1518.
Plan 2:
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Merge Left Join  (cost=18310.59..29162.44 rows=1533 width=8) (actual time=1886.942..2183.774 rows=1518 loops=1)
It predicted 1533 rows and found 1518, a pretty good guess.
So the big question is why the planner now thinks a nested loop will be more expensive than a merge join. A nested loop's estimated cost grows roughly linearly with the outer row count, so when the estimate jumped from 79 to 1518 rows the nested loop suddenly looked far more expensive on paper, even though it is actually faster here. I'm guessing some parameters like random_page_cost could be tweaked to bring the cost model closer to reality, but I don't know the exact criteria postgres uses for choosing merge joins vs. nested loop joins.
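One way to test whether the cost model (rather than the statistics) is to blame is to temporarily disable merge joins in a psql session and compare plans. This is only a sketch: the SET statements use real planner GUCs that exist in 7.4, but the query placeholder and the suggested value for random_page_cost are illustrative, not a recommendation.

```sql
-- Force the planner away from the merge join for this session only,
-- then rerun the query to see the nested loop plan's actual timing:
SET enable_mergejoin = off;
EXPLAIN ANALYZE <your query>;

-- If the nested loop really is faster, try lowering random_page_cost
-- (default 4) so index-driven plans look cheaper to the planner:
SET enable_mergejoin = on;
SET random_page_cost = 2;
EXPLAIN ANALYZE <your query>;
RESET random_page_cost;
```

The enable_* settings are meant for diagnosis, not production; if a lower random_page_cost consistently produces better plans on your hardware, that is the knob worth changing in postgresql.conf.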
Thanks for any assistance.
Walt
Hopefully someone can help a little better. In the meantime, you might want to resend with an attachment; I know I had trouble reading your EXPLAIN ANALYZE output.
John =:->
