Re: [PERFORM] settings input for upgrade

2011-08-20 Thread Robert Klemme
On Thu, Aug 18, 2011 at 11:55 PM, Midge Brown <midg...@sbcglobal.net> wrote:
> I'm in the process of upgrading from postgres 7.4.8 to 9.0.4 and wanted to
> run my decisions past some folks who can give me some input on whether my
> decisions make sense or not.

I am not sure which decisions you are actually referring to here: in your
posting I can only see a description of the current setup, but no
decisions for the upgrade (i.e. changed parameters, a different physical
layout, etc.).

> The others are very write-heavy, started as one table within the original
> DB, and were split out on an odd/even id # in an effort to get better
> performance:

Did it pay off? I mean, you planned to increase performance; did that
actually happen? Apart from reserving IO bandwidth (which you achieved
by placing the data on different disks), you basically only added
reserved memory for each instance by separating them. Or are there any
other effects achieved by the separation (like reduced lock contention
on some globally shared resource, or distribution of CPU for logging)?
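
If I understand the setup correctly, the split amounts to something like
this application-side routing (just a sketch; the connection parameters,
database names and helper are invented here):

    # Hypothetical sketch: write-heavy rows are routed to one of two
    # Postgres instances based on the parity of their id.
    import psycopg2

    conn_even = psycopg2.connect(host="localhost", port=5433, dbname="db2")
    conn_odd = psycopg2.connect(host="localhost", port=5434, dbname="db3")

    def conn_for(record_id):
        # even ids go to one instance, odd ids to the other
        return conn_even if record_id % 2 == 0 else conn_odd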

Kind regards

robert

-- 
remember.guy do |as, often| as.you_can - without end
http://blog.rubybestpractices.com/


Re: [PERFORM] settings input for upgrade

2011-08-20 Thread Midge Brown
Robert, 

I was largely looking for input on whether I may have inadvertently shot myself
in the foot with some of the choices I made when setting up PostgreSQL 9.0,
which is on different hardware than the 7.4 setup was.

The splitting of one table into two separate databases was done on 7.4 and did
improve write performance. I included that information only in an attempt to
provide as much detail as possible.

- Midge

Re: [PERFORM] settings input for upgrade

2011-08-20 Thread Scott Marlowe
On Thu, Aug 18, 2011 at 3:55 PM, Midge Brown <midg...@sbcglobal.net> wrote:
> Here are the changes I made to postgres.conf. The only differences between
> the conf file for DB1 and those for DB2 & 3 are the port and
> effective_cache_size (which I made slightly smaller -- 8 GB instead of 10 --
> for the 2 write-heavy DBs). The 600 max connections are often idle and don't
> get explicitly closed in the application. I'm looking at connection pooling
> as well.
>
> work_mem = 128MB

I'd lower this unless you are certain that something like 16MB just
won't get you similar performance. Even with most connections idle,
128MB is a rather large work_mem. Remember it's per sort, per
connection, so it can quickly cause the kernel to drop the file cache
that keeps the machine running fast if a couple dozen connections each
run a handful of large sorts at once. What happens is that things run
smoothly at low to medium load, but under high load the machine will
start thrashing as it tries to allocate too much work_mem and then
just slow to a crawl.
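
A quick back-of-the-envelope illustrates the risk (assuming, as a worst
case, every connection running one large sort at the same time; real
usage will of course be lower):

    600 connections * 128MB per sort = ~75GB of potential sort memory
    600 connections *  16MB per sort = ~9.4GB

The second figure is much easier to absorb alongside the OS file cache
on a machine you sized for an 8-10GB effective_cache_size.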


Re: [PERFORM] settings input for upgrade

2011-08-20 Thread Midge Brown
Thank you! 