Here are my query and schema. The ERD is at http://dshadovi.f2o.org/pg_erd.jpg
(sorry about its resolution).
-David
SELECT
zbr.zebra_name
, dog.dog_name
, mnk.monkey_name
, wrm.abbreviation || ptr.abbreviation AS abbrev2
, whg.warthog_num
, whg.color
, rhn.rhino_name
, der.deer_name
,

David Shadovitz <[EMAIL PROTECTED]> writes:
> If you think that you or anyone else would invest the time, I could post more
> info.
I doubt you will get any useful help if you don't post more info.
> I will also try Shridhar's suggestions on statistics_target and
> enable_hash_join.
It seemed t
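
For anyone following along, Shridhar's two suggestions look roughly like the sketch below. Note the GUC is spelled `enable_hashjoin`, not `enable_hash_join`; the table and column names here are guesses taken from the query's aliases (e.g. `whg.warthog_num`), not from David's actual schema:

```sql
-- Raise the statistics sample for a column the planner misestimates,
-- then re-ANALYZE so the new target takes effect.
ALTER TABLE warthog ALTER COLUMN color SET STATISTICS 100;
ANALYZE warthog;

-- Temporarily disable hash joins for this session to see whether
-- the planner picks a better join strategy.
SET enable_hashjoin = off;
EXPLAIN ANALYZE
SELECT ...;  -- re-run the slow query here
SET enable_hashjoin = on;
```

Disabling `enable_hashjoin` is a diagnostic, not a fix: if the query gets faster with it off, the row estimates feeding the hash join are the real problem.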

> This is not very informative when you haven't shown us the query or
> the table schemas...
> BTW, what did you do with this, print and OCR it?
Tom,
I work in a classified environment, so I had to sanitize the query plan, print
it, and OCR it. I spent a lot of time fixing typos, but I guess at

David Shadovitz <[EMAIL PROTECTED]> writes:
> Well, now that I have the plan for my slow-running query, what do I
> do?
This is not very informative when you haven't shown us the query or
the table schemas (column datatypes and the existence of indexes
are the important parts). I have a feeling th

David Shadovitz wrote:
> Well, now that I have the plan for my slow-running query, what do I do? Where
> should I focus my attention?
Briefly looking over the plan and seeing the estimated vs. actual row mismatch, I
can suggest the following.
1. VACUUM FULL the database. Probably you have already
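
Spelled out, suggestion 1 is a one-liner; VACUUM FULL reclaims dead space and the ANALYZE part refreshes the planner's statistics:

```sql
-- Reclaim dead tuples, compact the tables, and refresh
-- planner statistics in one pass (locks tables; run off-hours).
VACUUM FULL ANALYZE;
```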

Well, now that I have the plan for my slow-running query, what do I do? Where
should I focus my attention?
Thanks.
-David
Hash Join (cost=16620.59..22331.88 rows=40133 width=266) (actual
time=118773.28..580889.01 rows=57076 loops=1)
-> Hash Join (cost=16619.49..21628.48 rows=40133
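
A note on reading these numbers: `cost` is in arbitrary planner units while `actual time` is in milliseconds, so the meaningful comparisons are estimated rows (40133) against actual rows (57076), and the startup time (118773 ms) against the total (580889 ms). A quick way to check whether fresh statistics close the row-estimate gap (the SELECT list is a placeholder standing in for the full query at the top of the thread):

```sql
-- Refresh statistics, then re-time the query; if estimated and
-- actual row counts converge, the planner has better data to
-- choose a join method.
ANALYZE;
EXPLAIN ANALYZE
SELECT zbr.zebra_name, dog.dog_name
FROM ...;  -- the full query and joins from the top of the thread
```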