Paul Johnson wrote:
Hi all, we're running PG8 on a Sun V250 with 8GB RAM and 2*1.3GHz SPARC
CPUs running Solaris 10. The DB cluster is on an external fibre-attached
Sun T3 array that has 9*36GB drives configured as a single RAID5 LUN.
The system is for the sole use of a couple of data warehouse developers,
hence we are keen to use 'aggressive' tuning options to maximise
performance.
So far we have made the following changes and measured the impact on our
benchmark:
1) Increase checkpoint_segments from 3 to 64. This made a 10x improvement
in some cases.
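For reference, a sketch of the change as it would appear in postgresql.conf (value from the post; each WAL segment is 16MB, so 64 segments spaces automatic checkpoints much further apart than the default):

```ini
# postgresql.conf -- allow up to 64 WAL segments (16MB each)
# between automatic checkpoints, instead of the default 3
checkpoint_segments = 64
```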
2) Increase work_mem from 1,024 to 524,288 (the setting is in kB, so
this is 1MB to 512MB).
Don't forget a single query can use several multiples of this: each
sort or hash step in the plan gets its own work_mem allowance. You might
want to rein it back a bit. I *think* you can set it per-query if you want.
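A minimal sketch of the per-session approach (8.x syntax; work_mem is an integer in kB here, and the figure is only an example):

```sql
-- Raise work_mem just for this session's big query; each sort or
-- hash step can claim this much individually, so a complex plan
-- may use several multiples of it.
SET work_mem = 262144;   -- 256MB, expressed in kB
-- ... run the large query here ...
RESET work_mem;          -- return to the postgresql.conf default
```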
3) Increase shared_buffers from 1,000 to 262,143 (2 GB). This required
setting SHMMAX=4294967295 (4 GB) in /etc/system and re-booting the box.
Question - can Postgres only use 2GB RAM, given that shared_buffers can
only be set as high as 262,143 (8K pages)?
Well, normally you'd want to keep a fair bit for the O.S. to cache data.
One quarter of your RAM seems very high. Did you try 5,000, 10,000 or
50,000 too, or go straight to the top end?
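For the record, the settings described above look roughly like this (Solaris /etc/system syntax; the shared-memory change needs the reboot mentioned above, and the values are those from this thread):

```ini
# /etc/system -- raise the shared memory ceiling to 4GB
set shmsys:shminfo_shmmax=4294967295

# postgresql.conf -- 262,143 x 8kB pages, i.e. ~2GB of shared buffers
shared_buffers = 262143
```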
So far so good...
4) Move pg_xlog to an internal disk within the V250. This has had a
severe *negative* impact on performance. The copy job has gone from 2 mins
to 12 mins, and a simple SQL job from 1 min to 7 mins. We haven't even
run the long SQL jobs yet.
I'm guessing that this is because pg_xlog has gone from a 9 spindle LUN to
a single spindle disk?
The key limitation will be one commit per rotation of the disk. Multiple
spindles, or better still a battery-backed write-cache, will raise your
peak transaction rate.
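To put a number on "one commit per rotation", a back-of-envelope sketch (the RPM figures are hypothetical, not measured on this box):

```shell
# With no write cache, each synchronous commit must wait for the WAL
# disk platter to come round again, so the commit rate is capped at
# roughly one per rotation: rpm / 60 commits per second.
rpm=10000                       # hypothetical 10k RPM internal disk
echo "$((rpm / 60)) commits/sec ceiling at ${rpm} RPM"   # -> 166 ...
rpm=15000                       # hypothetical 15k RPM disk
echo "$((rpm / 60)) commits/sec ceiling at ${rpm} RPM"   # -> 250 ...
```

A 9-spindle RAID5 LUN with a caching controller can acknowledge the WAL write from cache, which is why moving pg_xlog off it hurt so badly.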
In cases such as this, where an external storage array with a hardware
RAID controller is used, the normal advice to separate the data from the
pg_xlog seems to come unstuck, or are we missing something?
Well, I think the advice then is actually "get 2 external arrays..."