On Thu, 04 Dec 2003 08:06:23 -0800
Jack Coates <[EMAIL PROTECTED]> wrote:

> testbed:
> dual P3 1.3 GHz box with 2GB RAM
> two IDE 120G drives on separate channels (DMA on), OS on one, DB on
> the other, some swap on each (totalling 2.8G).
> RH Linux 8.

Side Note: be sure to turn off write caching on those disks or you may
have data corruption in the event of a failure

> The problem is that pulling the 4 to 6 thousand rows puts PostgreSQL
> into a tail spin: postmaster hammers on CPU anywhere from 90 seconds
> to five minutes before returning the data. During this time vmstat
> shows that disk activity is up of course, but it doesn't appear to be
> with page swapping (free and top and vmstat).
Have you tried modifying the app to retrieve the rows in smaller chunks
(using a cursor)? That way it only needs to allocate memory to hold,
say, 100 rows at a time instead of 6000.
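A cursor-based fetch might look like the following sketch (table and column names here are made up, not from your schema):

```sql
BEGIN;

-- Declare a cursor over the full result set; rows are produced
-- incrementally instead of being returned all at once.
DECLARE big_cur CURSOR FOR
    SELECT * FROM mytable WHERE status = 'active';

-- The app loops on this, processing 100 rows per round trip,
-- until FETCH returns no rows.
FETCH 100 FROM big_cur;

CLOSE big_cur;
COMMIT;
```

In 7.x a cursor only lives inside a transaction block, hence the BEGIN/COMMIT around it.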

Also, have you EXPLAIN ANALYZE'd your queries to make sure PG is picking
a good plan to execute?
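For example, substituting your real query (the query below is just a placeholder):

```sql
EXPLAIN ANALYZE
SELECT * FROM mytable WHERE status = 'active';
```

Compare the planner's estimated row counts against the actual ones in each step; a big mismatch usually means stale statistics (run ANALYZE) or a plan worth tuning.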

> I've tweaked shared buffers to 8192, pushed sort memory to 2048,
> vacuum memory to 8192, and effective cache size to 10000.
> /proc/sys/kernel/shmmax is set to 1600000000 and /proc/sys/fs/file-max
> is set to 65536. Ulimit -n 3192.

You should set effective_cache_size bigger, especially with 2GB of
memory. effective_cache_size tells PG roughly how much data it can
expect the OS to cache.
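As a sketch: effective_cache_size is measured in 8KB disk pages, so if you assume the OS keeps roughly 1.5GB of your 2GB in its buffer cache (an assumption, not a measurement), something like this in postgresql.conf would be closer than 10000:

```
# effective_cache_size is in 8KB pages.
# Assuming ~1.5GB of OS cache: 1.5GB / 8KB ~= 196608 pages.
effective_cache_size = 196608
```

This doesn't allocate anything; it only makes the planner more willing to choose index scans because it believes more of the data is already cached.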

And... I'm not sure about your query, but perhaps the sort of those 6000
rows is spilling to disk? If you look at the EXPLAIN ANALYZE output, the
"Sort" step(s) will tell you how many rows are sorted and how "wide"
they are. If rows * width > sort_mem, it will have to spill the sort to
disk, which is slow.
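Note sort_mem is in kilobytes, so your current setting of 2048 is 2MB; 6000 rows at ~350 bytes wide is already over that. You can raise it for just one session to test the theory (query below is a hypothetical stand-in for yours):

```sql
-- sort_mem is per-sort, per-backend, in KB; try 8MB for this session only
SET sort_mem = 8192;

EXPLAIN ANALYZE
SELECT * FROM mytable ORDER BY created_at;
```

If the runtime drops sharply with the bigger sort_mem, the sort was spilling; then decide how much you can afford globally, since every concurrent sort can use that much memory.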

If you post the query and its EXPLAIN ANALYZE output, we can help
optimize the query itself.


