Florin Andrei wrote:
Hardware: relatively modern Intel CPU, OS and database each on its own IDE hard drive (separate IDE cables). Enough memory, I think, but I can't add much more (not beyond 1GB). Software: Linux-2.6, pgsql-8.0.1
Function: Essentially a logging server. There are two applications (like syslog) on the same box that are logging to pgsql, each one to its own database. There are a few tables in one DB, and exactly one table in the other. Most of the time, the apps are just doing mindless INSERTs to the DB. Every now and then, an admin performs some SELECTs via a PHP interface.
Objective: Make the DB as fast as possible. Of course I'd like the SELECTs to be fast, but the INSERTs take precedence. It's gotta be able to swallow as many messages per second as possible given the hardware.
Question: What are the pgsql parameters that need to be tweaked? What are the guidelines for such a situation?
Put pg_xlog onto the same drive as the OS, not the drive with the database.
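The usual way to do that is to stop the server, move the pg_xlog directory, and leave a symlink behind. A rough sketch, assuming a data directory of /var/lib/pgsql/data and the OS drive mounted at / (adjust both paths to your setup):

```shell
# Sketch only -- paths are assumptions, stop the server before moving anything.
pg_ctl -D /var/lib/pgsql/data stop
mv /var/lib/pgsql/data/pg_xlog /pg_xlog            # /pg_xlog lives on the OS drive
ln -s /pg_xlog /var/lib/pgsql/data/pg_xlog         # postmaster follows the symlink
pg_ctl -D /var/lib/pgsql/data start
```

This way the sequential WAL writes don't compete with random I/O on the data drive.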
Do as many INSERTs per transaction as you can get away with. 100-1000 is pretty good.
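The win comes from every COMMIT forcing a WAL flush to disk; batching shares that flush across many rows. A sketch against a hypothetical log table (call it log_messages; the column names are made up):

```sql
-- One transaction, one WAL fsync at COMMIT, many rows.
BEGIN;
INSERT INTO log_messages (logged_at, facility, message)
    VALUES (now(), 'local0', 'first message');
INSERT INTO log_messages (logged_at, facility, message)
    VALUES (now(), 'local0', 'second message');
-- ... a few hundred more INSERTs ...
COMMIT;
```

With autocommit (the default), each INSERT is its own transaction and pays the full fsync cost by itself.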
Keep the number of indexes and foreign key references low to keep INSERTs fast.
Keep a few indexes around to keep SELECTs reasonably speedy.
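For a logging workload that usually means one narrow index matching the admins' common query pattern. A sketch, again using a hypothetical log_messages table and assuming the PHP interface mostly filters by time range:

```sql
-- Every index must be updated on every INSERT, so keep just the one
-- that the admin SELECTs actually need (assumed: time-range lookups).
CREATE INDEX log_messages_logged_at_idx ON log_messages (logged_at);
```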
If you are doing lots and lots of logging, need only archival and slow access for old data, but fast access on new data, consider partitioning your table, and then using a view to join them back together.
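PostgreSQL 8.0 has no built-in partitioning, so this is done by hand: one physical table per period, and a UNION ALL view over them. A sketch with hypothetical monthly tables cloned from the same log_messages layout:

```sql
-- Hot table gets all INSERTs; old months are archival and rarely touched.
CREATE TABLE log_2005_03 (LIKE log_messages);   -- current month
CREATE TABLE log_2005_02 (LIKE log_messages);   -- archived month

CREATE VIEW log_all AS
    SELECT * FROM log_2005_03
    UNION ALL
    SELECT * FROM log_2005_02;
```

The hot table and its indexes stay small, which keeps INSERTs fast, while admins can still query log_all when they need history.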
If you only have a couple of processes accessing the db at any given time, you can probably increase work_mem and maintenance_work_mem a bit. With 1GB of RAM, maybe around 50MB for work_mem. But really this only holds if you have 1-3 SELECTs going on at a time, since work_mem is allocated per sort or hash operation, not per server.
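As a postgresql.conf sketch (the numbers are starting points, not tuned values; note that in 8.0 work_mem and maintenance_work_mem are in kB and shared_buffers is in 8kB pages, with no 'MB' suffix syntax):

```
# Assumed starting points for ~1GB RAM and few concurrent sessions
shared_buffers = 10000           # ~80MB of buffer cache (8kB pages)
work_mem = 51200                 # ~50MB per sort/hash -- only safe with 1-3 SELECTs
maintenance_work_mem = 131072    # ~128MB for VACUUM and CREATE INDEX
```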
With only 2 disks and fixed hardware, it's a lot more about configuring your schema and the application. If you want more performance, adding more disks is probably the first thing to do.