Depending on the available memory, try increasing shared_buffers and
work_mem and see whether that changes the query execution time. Also make
sure you have the proper indexes in place, and, if possible, consider
partitioning the table.
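
A rough sketch of what I mean (the table and column names are taken from the
query quoted further down in this thread; the index name and the work_mem
value are just placeholders to experiment with):

-- per-session bump of the sort/hash memory (value is in kB here)
SET work_mem = 32768;

-- make sure the filter column is indexed
CREATE INDEX large_table_teacher_id_idx ON large_table (teacher_id);

-- refresh planner statistics after loading data
ANALYZE large_table;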

Once you post the EXPLAIN ANALYZE output, that will certainly help in solving
the problem...

-------------
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 1/14/07, Dave Dutcher <[EMAIL PROTECTED]> wrote:

 Have you run vacuum and analyze on the table?  What version of Postgres
are you running?  What OS are you using?
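
If not, something like this would be a good first step (the table name is the
one from the original post):

VACUUM ANALYZE large_table;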

This looks like a straightforward query.  With any database, the first
time you run the query it's going to be slower because it actually has to
read off disk.  The second time it's faster because some or all of the
data/indexes will be cached.  However, 10 seconds sounds like a long time for
pulling 10,000 records out of a table of 3 million.  If you post an EXPLAIN
ANALYZE, it might give us a clue.
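
For example, run in psql against the slow query (the teacher_id value here is
just a placeholder):

EXPLAIN ANALYZE
SELECT grades FROM large_table WHERE teacher_id = 42;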

Dave


 -----Original Message-----
*From:* [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] *On Behalf Of *Mark Dobbrow
*Sent:* Friday, January 12, 2007 6:31 PM
*To:* pgsql-performance@postgresql.org
*Subject:* [PERFORM] Large table performance

Hello -

I have a fairly large table (3 million records), and am fetching 10,000
non-contiguous records with a simple select on an indexed column, i.e.

select grades from large_table where teacher_id = X

This is a test database, so the number of records per teacher_id is always
10,000, and I have 300 different teacher ids.

The problem is, sometimes fetching uncached records takes 0.5 seconds, and
sometimes (more often) it takes more like 10.0 seconds.

(fetching the same records for a given teacher_id a second time takes
about 0.25 secs)

Has anyone seen similar behavior, or does anyone know what the solution might be?

any help much appreciated,
Mark



ps. My shared_buffers is set at 5000 (kernel max), and work_mem=8192
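
(For reference, those values can be checked from psql; with the default 8 kB
block size, shared_buffers = 5000 is roughly 40 MB, and work_mem = 8192 kB is
8 MB.)

SHOW shared_buffers;
SHOW work_mem;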


