Hi Paul,

Your message regarding DSS workloads with Postgres on Solaris was passed along 
to me (I am not on the alias).

Performance is relative to your workload. Can you send us what you are 
actually doing in your queries, updates, etc.?

I have been running a few tests myself, and here are my rules of thumb; your 
mileage may vary:

http://blogs.sun.com/roller/page/jkshah?entry=tuning_postgresql_8_0_2

* Increasing checkpoint_segments certainly helps. (I went as far as increasing 
the WAL log file size from 16MB to 256MB at compile time, and then using a 
correspondingly lower number of checkpoint segments. Frequent log file 
rotation also decreases performance.)
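For reference, the postgresql.conf side of this might look like the following 
(values are illustrative, not a recommendation; the larger 256MB segment size 
additionally requires changing the WAL segment size in the source and 
recompiling):

```
# postgresql.conf -- illustrative values; tune for your workload
checkpoint_segments = 64    # default is 3; more segments = fewer, larger checkpoints
checkpoint_timeout = 300    # seconds between forced checkpoints
```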

* Moving pg_xlog to a different file system and mounting that file system with 
"forcedirectio" also helps a lot; this increases throughput by another 2x to 
5x or more. This can be done either by adding forcedirectio to the mount 
options in /etc/vfstab, or for an existing mount as follows:
mount -o remount,forcedirectio /filesystem
(Note: the database files themselves should not use forcedirectio, otherwise 
the file system cache will not be used for them.)
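As a sketch, assuming pg_xlog sits on its own UFS file system mounted at 
/pg_xlog (the device names below are hypothetical):

```
# /etc/vfstab entry -- device names are hypothetical
/dev/dsk/c1t1d0s0  /dev/rdsk/c1t1d0s0  /pg_xlog  ufs  2  yes  forcedirectio

# or remount an already-mounted file system (as root):
mount -o remount,forcedirectio /pg_xlog
```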

* I actually reduced the PG buffer pool to 1GB or less, since performance 
seemed to decrease as I increased the buffer pool size (this depends on your 
workload).
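In 8.0 shared_buffers is specified in 8KB pages, so a 1GB buffer pool would 
look like this (illustrative; effective_cache_size should reflect the size of 
your actual file system cache):

```
# postgresql.conf -- 131072 x 8KB pages = 1GB buffer pool
shared_buffers = 131072
# hint the planner at the file system cache instead (also in 8KB pages)
effective_cache_size = 524288   # ~4GB; adjust to your RAM
```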

* If you are using SPARC, the following /etc/system settings will help:

set segmap_percent=60
set ufs:freebehind=0


This allocates 60% of RAM to the file system cache (for the database files) 
and disables UFS free-behind, so large files stay cached (PostgreSQL data 
files are 1GB segments by default). This helps repeated queries significantly.
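These settings take effect after a reboot; assuming the usual kernel symbol 
names, you can check the live values with mdb (run as root):

```
# print the current kernel tunables in decimal (symbol names assumed)
echo "segmap_percent/D" | mdb -k
echo "freebehind/D" | mdb -k
```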

Other things depend on what queries you are running. If you send me a few 
samples, I can send you the appropriate DTrace scripts (Solaris 10 or higher) 
to run to figure out what's happening.
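As a starting point, a minimal DTrace one-liner (run as root) that counts 
system calls made by postgres backends is sketched below; press Ctrl-C to 
print the aggregation:

```
dtrace -n 'syscall:::entry /execname == "postgres"/ { @[probefunc] = count(); }'
```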

Regards,
Jignesh



____________________________________________________

Jignesh K. Shah         MTS Software Engineer 
Sun Microsystems, Inc   MDE-Horizontal Technologies 
Email: [EMAIL PROTECTED] Phone: (781) 442 3052
http://blogs.sun.com/jkshah
____________________________________________________

----- Original Message -----
From    Paul Johnson <[EMAIL PROTECTED]>
Date    Thu, 11 Aug 2005 13:23:21 +0100 (BST)
To      pgsql-performance@postgresql.org
Subject         [PERFORM] PG8 Tuning
Hi all, we're running PG8 on a Sun V250 with 8GB RAM and 2*1.3GHz SPARC
CPUs running Solaris 10. The DB cluster is on an external fibre-attached
Sun T3 array that has 9*36GB drives configured as a single RAID5 LUN.

The system is for the sole use of a couple of data warehouse developers,
hence we are keen to use 'aggressive' tuning options to maximise
performance.

So far we have made the following changes and measured the impact on our
test suite:

1) Increase checkpoint_segments from 3 to 64. This made a 10x improvement
in some cases.

2) Increase work_mem from 1,024 to 524,288.

3) Increase shared_buffers from 1,000 to 262,143 (2 GB). This required
setting SHMMAX=4294967295 (4 GB) in /etc/system and re-booting the box.

Question - can Postgres only use 2GB RAM, given that shared_buffers can
only be set as high as 262,143 (8K pages)?

So far so good...

4) Move /pg_xlog to an internal disk within the V250. This has had a
severe *negative* impact on performance. Copy job has gone from 2 mins to
12 mins, simple SQL job gone from 1 min to 7 mins. Not even run long SQL
jobs.

I'm guessing that this is because pg_xlog has gone from a 9 spindle LUN to
a single spindle disk?

In cases such as this, where an external storage array with a hardware
RAID controller is used, the normal advice to separate the data from the
pg_xlog  seems to come unstuck, or are we missing something?

Cheers,

Paul Johnson.

