e the query per Ismo's suggestion, or b) wait
until more data comes into that table, potentially kicking the query
planner into not using the Nested Loop anymore.
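For anyone hitting a similar regression, a third option worth knowing about, purely as a session-level diagnostic rather than a fix, is to disable the nested-loop path and see whether the old plan's runtime comes back (the query itself is elided here, since the original isn't shown in full):

```sql
-- Diagnostic only: discourage the planner from choosing a nested loop,
-- then compare the resulting plan and runtime against the slow one.
-- Not recommended as a permanent or global setting.
SET enable_nestloop = off;
EXPLAIN ANALYZE SELECT ...;   -- the slow query goes here
RESET enable_nestloop;
```

If the query is fast again with the setting off, that confirms the nested-loop cost estimate (rather than the query itself) is the problem.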
Anyway, thanks again, I appreciate it...
-Jeff
On Mar 7, 2007, at 11:37 AM, Tom Lane wrote:
Jeff Cole <[EMAIL PROTECTED]
> writes:
On Mar 6, 2007, at 11:40 AM, Tom Lane wrote:
the *actual* average number of rows scanned is 3773. I'm not sure why
this should be --- is it possible that the distribution of keys in
symptom_reports is wildly uneven? This could happen if all of the
physically earlier rows in symptom_reports co
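One way to check whether the key distribution really is that uneven is to count rows per key directly, and to compare that against what the last ANALYZE recorded in pg_stats. The column name `symptom_id` below is assumed for illustration; substitute the actual join column:

```sql
-- Top keys by actual frequency ('symptom_id' is an assumed column name):
SELECT symptom_id, count(*)
  FROM symptom_reports
 GROUP BY symptom_id
 ORDER BY count(*) DESC
 LIMIT 10;

-- What the planner believes, from the most recent ANALYZE:
SELECT most_common_vals, most_common_freqs, n_distinct
  FROM pg_stats
 WHERE tablename = 'symptom_reports'
   AND attname   = 'symptom_id';
```

A large gap between the actual counts and the `most_common_freqs` entries would explain the planner's row-count estimate being far off the actual 3773.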
On Mar 5, 2007, at 8:54 PM, Tom Lane wrote:
Hm, the cost for the upper nestloop is way less than you would expect
given that the HASH IN join is going to have to be repeated 100+
times.
I think this must be due to a very low "join_in_selectivity" estimate
but I'm not sure why you are getting
Hi, I'm new to tuning PostgreSQL and I have a query that gets slower
after I run a VACUUM ANALYZE. I believe it uses a Hash Join before
the analyze and a Nested Loop IN Join after. It seems the Nested
Loop IN Join estimates the correct number of rows, but underestimates
the amount of time it takes
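The before/after behaviour described in this thread can be captured for comparison by running EXPLAIN ANALYZE around the VACUUM ANALYZE step; its output shows both the planner's estimated row counts and the actual ones, which is where the mismatch surfaces. The query below is a generic IN-subquery of the same shape, not the original; the `symptoms` table and `id`/`symptom_id` columns are assumed names:

```sql
-- Plan and timing before statistics are gathered:
EXPLAIN ANALYZE
SELECT *
  FROM symptoms s
 WHERE s.id IN (SELECT symptom_id FROM symptom_reports);

VACUUM ANALYZE symptom_reports;

-- Plan and timing after; compare each node's "rows=" estimate
-- against its "(actual ... rows=...)" figure:
EXPLAIN ANALYZE
SELECT *
  FROM symptoms s
 WHERE s.id IN (SELECT symptom_id FROM symptom_reports);
```

Posting both EXPLAIN ANALYZE outputs side by side is usually the fastest way to get a plan regression like this diagnosed on the list.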