Martin Nickel wrote:
When I turn off seqscan it does use the index - and it runs 20 to 30%
longer.  Based on that, the planner is correctly choosing a sequential
scan - but that's just hard for me to comprehend.  I'm joining on an int4
key, 2048 per index page - I guess that's a lot of reads - then the
data-page reads.  Still, the 8-minute query time seems excessive.

You'll be getting (many) fewer than 2048 index entries per page. There's a page header and various item pointers involved too, and index pages won't be completely full. So it needs to search the table on dates, fetch the IDs and then assemble them for the hash join. Of course, if there are too many rows to join, all of this will spill to disk, slowing you down further.
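
If you want to double-check what the planner is doing, EXPLAIN ANALYZE both plans and compare the actual times. The table and column names below are only guesses from your description, so substitute your own:

    EXPLAIN ANALYZE
    SELECT l.*
    FROM lead l
    JOIN mailing m ON m.mailing_id = l.mailing_id
    WHERE m.maildate BETWEEN '2005-10-01' AND '2005-10-31';

    -- force the index plan for this session only, for comparison
    SET enable_seqscan = off;
    EXPLAIN ANALYZE
    SELECT l.*
    FROM lead l
    JOIN mailing m ON m.mailing_id = l.mailing_id
    WHERE m.maildate BETWEEN '2005-10-01' AND '2005-10-31';
    RESET enable_seqscan;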

Now, you'd rather get down below 8 minutes. There are a number of options:
 1. Make sure your disk I/O is being pushed to its limit.
 2. Look into increasing the sort memory for this one query - "set work_mem..." (see the runtime configuration section of the manual). There's a sketch after this list.
 3. Actually - are you happy that your general configuration is OK?
 4. Perhaps use a cursor - I'm guessing you want to process these mailings in some way and only want them one at a time in any case (again, see the sketch below).
 5. Try the query one day at a time and see if the balance tips the other way - you'll be dealing with substantially less data per query, which might match your system better. Of course, this may not be practical for your application.
 6. If your lead table is updated only rarely, you could try a CLUSTER on the table by mailing_id - that should speed up the scan (sketch below). Read the manual for the CLUSTER command first, though.
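
Rough sketches for 2, 4 and 6 - again, the table, column and index names are guesses from your description, so adjust them to fit your schema:

    -- 2: more sort/hash memory for this session only (value is in kB)
    SET work_mem = 65536;    -- 64MB

    -- 4: process the rows through a cursor, a batch at a time
    BEGIN;
    DECLARE lead_cur CURSOR FOR
        SELECT l.*
        FROM lead l
        JOIN mailing m ON m.mailing_id = l.mailing_id
        WHERE m.maildate BETWEEN '2005-10-01' AND '2005-10-31';
    FETCH 1000 FROM lead_cur;    -- repeat until no rows are returned
    CLOSE lead_cur;
    COMMIT;

    -- 6: re-order the lead table on its mailing_id index (this locks the
    -- table and rewrites it, so pick a quiet time), then re-analyze
    CLUSTER lead_mailing_id_idx ON lead;
    ANALYZE lead;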

--
  Richard Huxton
  Archonet Ltd
