After doing my best to squeeze better performance out of our application
by tuning within our existing resources, I'm falling back on adding memory
as a short-term solution while we get creative about a long-term fix. I'm
curious what experiences others have had with adding a big chunk of RAM.
In particular, if I want to encourage the OS to cache more of my index
data in RAM, what configuration changes should I make at both the
PostgreSQL and OS levels?
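For reference, on a 2.4 kernel the SysV shared memory limits can be raised at runtime through /proc, which sets the ceiling for how large a shared_buffers segment PostgreSQL can allocate. The 256 MB figure below is purely illustrative, not a recommendation:

```shell
# Raise the kernel shared memory ceiling so PostgreSQL can allocate a
# larger shared_buffers segment. 256 MB here is only an example value.
echo 268435456 > /proc/sys/kernel/shmmax   # max size of one segment, in bytes
echo 65536 > /proc/sys/kernel/shmall       # total shared memory, in 4 KB pages
```

Note that SHMMAX is counted in bytes but SHMALL in pages on Linux, and values written to /proc revert at reboot; on Debian they can be made persistent via kernel.shmmax and kernel.shmall in /etc/sysctl.conf.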

On a slightly off-topic note, I'd also like to hear from anyone who has
run into gotchas at the OS level that might become a problem.

The server is a dual-processor 1.2 GHz Athlon box with hardware SCSI RAID.
It currently has 1 GB of RAM, and we're planning to add another 1 GB for a
total of 2 GB. The OS is Debian Linux with a 2.4.x kernel, and we're running
PostgreSQL 7.3.2.

My current memory-related settings are:

SHMMAX and SHMALL set to 128 MB (kernel settings)
shared_buffers = 8192 (64 MB)
sort_mem = 16384 (16 MB)
effective_cache_size = 65536 (512 MB)
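Once the second gigabyte is in, a plausible starting point might look like the fragment below. These numbers are my own assumptions for illustration, not tested recommendations:

```
# postgresql.conf -- illustrative values for a 2 GB machine on 7.3.x
shared_buffers = 16384          # 128 MB (16384 x 8 kB pages)
sort_mem = 16384                # 16 MB per sort step, per backend
effective_cache_size = 131072   # ~1 GB (131072 x 8 kB) assumed in OS cache
```

The reasoning behind keeping shared_buffers modest: effective_cache_size only informs the planner; the actual caching of index pages is done by the Linux page cache, so leaving most of the new RAM free for the kernel rather than claiming it as shared buffers is the usual approach.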

We support up to 70 active users sharing a connection pool of 16
connections. Most queries center on three tables of about 1.5 GB each.
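As a sanity check on the pool size (my arithmetic, using the settings quoted above): with 16 pooled backends each allowed sort_mem per sort step, the worst case commits a meaningful slice of RAM to sorts alone:

```shell
# Worst-case memory committed to sorts if all 16 pooled backends
# each run one 16 MB sort at the same time.
echo $((16 * 16))   # -> 256 (MB)
```

That worst case rarely happens in practice, but it is worth keeping in mind when deciding how much of the new gigabyte to leave for the OS page cache.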


Nick Fankhauser

    [EMAIL PROTECTED]  Phone 1.765.965.7363  Fax 1.765.962.9788
doxpop - Court records at your fingertips - http://www.doxpop.com/
