Kevin Kempter wrote:
Hi all;


I have a simple query against two very large tables (> 800 million rows in the url_hits_category_jt table and 9.2 million in the url_hits_klk1 table).


I have indexes on the join columns and I've run an EXPLAIN.
I've also set the statistics target to 250 for both join columns. I still get a very high overall query cost:
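(For reference, a rough sketch of that step, assuming the join column is named category_id; the actual column names aren't shown above:

ALTER TABLE url_hits_category_jt ALTER COLUMN category_id SET STATISTICS 250;
ALTER TABLE url_hits_klk1 ALTER COLUMN category_id SET STATISTICS 250;
ANALYZE url_hits_category_jt;
ANALYZE url_hits_klk1;
)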

If you had an extra WHERE condition it might be different, but since you're just returning every row from both tables that matches up, a sequential scan is going to be the fastest way anyway.
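A rough illustration of the difference, with made-up column names since the original query isn't quoted here:

-- Full join: every matching row is returned, so sequential scans
-- plus a hash or merge join are usually the cheapest plan.
EXPLAIN
SELECT *
FROM url_hits_category_jt jt
JOIN url_hits_klk1 k ON k.id = jt.klk1_id;

-- With a selective filter on an indexed column, the planner can
-- switch to index scans because only a small fraction of the
-- 800 million rows is touched.
EXPLAIN
SELECT *
FROM url_hits_category_jt jt
JOIN url_hits_klk1 k ON k.id = jt.klk1_id
WHERE jt.category = 42;

A high estimated cost on the first kind of query isn't a problem in itself; it just reflects that the whole of both tables has to be read.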

--
Postgresql & php tutorials
http://www.designmagick.com/

