I have a table with ~8 million rows and I am executing a query which
should return about ~800,000 rows. The problem is that as soon as I
execute the query it absolutely kills my machine and begins swapping
for 5 or 6 minutes before it begins returning results. Is Postgres
trying to load the whole result set into memory before returning anything?
Also, why would it choose not to use the index? It is properly
estimating the # of rows returned. If I set enable_seqscan to off it
is just as slow.


1) Run EXPLAIN ANALYZE on the query.

Note the time it takes. It should not swap; it should just read data from disk (and not kill the machine).
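
For reference, EXPLAIN ANALYZE can be run straight from psql; the table and column names below are only placeholders for your actual query:

    EXPLAIN ANALYZE
    SELECT *
    FROM   big_table            -- placeholder for your 8M-row table
    WHERE  some_column = 42;    -- placeholder for your filter

The "actual time" figures and the chosen plan (seq scan vs. index scan) are what matter here.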

2) Run the query from your application.

Note the time it takes and watch RAM usage. If it takes vastly longer and the machine is deep in swap, Postgres is not the culprit: your client library is most likely buffering the entire result set in memory. In that case, use a cursor to fetch the huge result set bit by bit.
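
As a rough sketch of the cursor approach (again with placeholder names), from psql or any client that lets you send plain SQL:

    BEGIN;

    DECLARE big_cur CURSOR FOR
        SELECT *
        FROM   big_table
        WHERE  some_column = 42;

    FETCH 10000 FROM big_cur;   -- repeat until it returns no rows

    CLOSE big_cur;
    COMMIT;

That way only one batch of rows sits in the client's memory at a time, instead of all ~800,000.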

Tell us what you find?

Regards.
