[PERFORM] Basic performance tuning on dedicated server

2011-03-10 Thread runner

I'm setting up my first PostgreSQL server to replace an existing MySQL server. 
I've been reading Gregory Smith's book PostgreSQL 9.0 High Performance and also 
Riggs/Krosing's PostgreSQL 9 Administration Cookbook.  While both of these 
books are excellent, I am completely new to PostgreSQL and I cannot possibly 
read and understand every aspect of tuning in the short amount of time before I 
have to have this server running.

I started out by using the 11 step process for tuning a new dedicated server 
(page 145 in Gregory Smith's book) but I found I had more questions than I 
could get answered in the short amount of time I have.  So, plan B is to use 
pgtune to get a ballpark configuration and then fine-tune later as I learn more.

I ran some performance tests where I imported my 11GB database from our old 
MySQL server into PostgreSQL 9.0.3.  In my testing I left the postgresql.conf 
at default values.  The PostgreSQL test database completely blew away the old 
MySQL server in performance.  Again, the postgresql.conf was never optimized, 
so I feel I will be OK if I just get in the ballpark with tuning the 
postgresql.conf file.

I'd like to run my plan by you guys to see if it seems sane and make sure I'm 
not leaving out anything major. 

I'll be running PostgreSQL 9.0.3 on a Solaris 10 64-bit (Sparc) box with 16G of 
RAM.  The local file system is ZFS.  The database file systems are UFS and 
are SAN mounted from VERY fast disks with battery-backed write cache.  I don't 
know anybody else who is running a mix of ZFS and UFS file systems, and I cannot 
change this.  ZFS has its own file system cache, so I'm concerned about the 
ramifications of having caches for both ZFS and UFS.  The only database-related 
files stored on the local ZFS file system are the PostgreSQL binaries 
and the system logs.

From the extensive reading I've done, it seems generally accepted to set the 
UFS file system cache to use 50% of the system RAM.  That leaves 8G for 
PostgreSQL.  Well, not quite 8G: I've reserved 1G for system use, which 
leaves me with 7G for PostgreSQL to use.  I ran pgtune and specified 7G as the 
memory ( 7 * 1024 * 1024 = 7340032 KB ) and 300 connections.  The resulting 
postgresql.conf is what I plan to use.
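
In case it helps, here is the ballpark the pgtune output landed in for 7G and 
300 connections.  These numbers are illustrative of the kind of values pgtune 
generates, not an exact copy of my file:

    # postgresql.conf values in the neighborhood of what pgtune suggested
    # for ~7G of RAM and 300 connections (illustrative, not exact)
    max_connections = 300
    shared_buffers = 1792MB        # roughly 25% of the 7G budget
    effective_cache_size = 5GB     # planner's estimate of available cache
    work_mem = 4MB                 # per sort/hash operation, per backend
    maintenance_work_mem = 448MB   # VACUUM, CREATE INDEX, etc.
    checkpoint_segments = 16
    wal_buffers = 8MB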

After reading Gregory Smith's book, I've decided to put the database on one UFS 
file system, the WAL on a separate UFS file system (mounted with forcedirectio) 
and the archive logs on yet another UFS file system.  I'll be on Solaris 10 so 
I've set wal_sync_method = fdatasync based on recommendations from other 
Solaris users.  I did a lot of Google searching on wal_sync_method and Solaris.
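
Here's a sketch of how I'm planning to lay that out; the device paths and 
mount points below are placeholders, not my actual SAN LUNs:

    # /etc/vfstab entries (device paths are placeholders)
    # device to mount     device to fsck       mount point  FS   pass  boot  options
    # /dev/dsk/c2t0d0s0   /dev/rdsk/c2t0d0s0   /pgdata      ufs  2     yes   -
    # /dev/dsk/c2t1d0s0   /dev/rdsk/c2t1d0s0   /pgwal       ufs  2     yes   forcedirectio
    # /dev/dsk/c2t2d0s0   /dev/rdsk/c2t2d0s0   /pgarchive   ufs  2     yes   -

    # move the WAL onto its own file system (server stopped; 9.0 calls it pg_xlog)
    mv /pgdata/pg_xlog /pgwal/pg_xlog
    ln -s /pgwal/pg_xlog /pgdata/pg_xlog

    # postgresql.conf
    wal_sync_method = fdatasync
    archive_mode = on
    archive_command = 'cp %p /pgarchive/%f'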

That's what I plan to go live with in a few days.  Since my test server with 
default configs already blows away the old database server, I think I can get 
away with this strategy.  Time is not on my side.

I originally installed the 32 bit PostgreSQL binaries but later switched to 64 
bit binaries.  I've read the 32 bit version is faster and uses less memory than 
the 64 bit version.  At this point I'm assuming I need the 64 bit binaries in 
order to take full advantage of the 7G of RAM I have allocated to PostgreSQL.  
If I am wrong here please let me know.
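
For what it's worth, this is how I verified which binaries I actually had 
installed (the path is just where mine happen to live):

    # check whether the installed postgres binary is 32- or 64-bit
    file /usr/local/pgsql/bin/postgres
    # on 64-bit Solaris SPARC the output should mention "64-bit" / SPARCV9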

This has been a lot of information to cram in during the short amount of time 
I've had to deal with this project.  I'm going to have to go back and read the 
PostgreSQL 9.0 High Performance book two or three more times and really dig 
into the details, but for now I'm going to cheat and use pgtune as described 
above.  Thank you in advance for any advice or additional tips you may be able 
to provide.  

Rick

Re: [PERFORM] Basic performance tuning on dedicated server

2011-03-10 Thread Merlin Moncure
On Thu, Mar 10, 2011 at 3:12 AM, runner run...@winning.com wrote:

 I'm setting up my first PostgreSQL server to replace an existing MySQL
 server.  [snip]


congratulations!

postgres memory tuning is a complicated topic, but most of it tends to be
very subtle in its effects or will apply to specific situations, like
dealing with i/o storms during checkpoints.  The only settings that
often need to be immediately cranked out of the box are
maintenance_work_mem and (much more carefully) work_mem.
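
for example, something along these lines in postgresql.conf (numbers are
illustrative starting points for a box like yours, not gospel):

    # illustrative starting points, not recommendations
    maintenance_work_mem = 512MB   # vacuum, create index, etc.
    work_mem = 8MB                 # per sort/hash node, per backend;
                                   # with 300 connections this multiplies fast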

Regardless of how shared_buffers is set, ALL of your server's memory goes
to postgres less what the o/s keeps for itself and other applications.
You do not allocate memory to postgres -- you only suggest how it
might be used.  I stopped obsessing over how it was set years ago.  In
fact, on linux, for example, dealing with background dirty page flushing
via the o/s (because stock settings can cause i/o storms) is a bigger
deal than shared_buffers by about an order of magnitude imnsho.
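
to be concrete, the linux knobs i mean are the vm dirty page settings; a
sketch (values are a common conservative starting point, not a universal
recommendation):

    # /etc/sysctl.conf: make the kernel flush dirty pages earlier and in
    # smaller batches instead of saving up a giant burst of writes
    vm.dirty_background_ratio = 1   # start background writeback at 1% of RAM
    vm.dirty_ratio = 10             # throttle writers at 10% of RAM dirty

    # apply the settings without a reboot
    sysctl -p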

The non-memory-related settings in postgresql.conf, such as the planner
settings (join/from collapse limits, random_page_cost, etc.) and the i/o
settings (fsync, wal_sync_method, etc.), are typically much more
important for performance than how memory is set up.
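
on hardware like yours (fast SAN, battery backed cache) those often end up
in this neighborhood (illustrative values; test against your own workload):

    # planner settings for fast random i/o; verify with your own queries
    random_page_cost = 2.0         # default of 4.0 assumes slow seeks
    effective_cache_size = 8GB     # approximate o/s + ufs cache available
    #join_collapse_limit = 8       # defaults; raise only for very large joins
    #from_collapse_limit = 8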

The reason postgres is showing up mysql is almost certainly due to the
query planner and/or (if you were using myisam) reaping the benefits
of mvcc.  My knowledge of mysql stops at the 5.0/5.1 era, but I can
tell you