I am running a dual-Xeon hyperthreaded server with 4 GB of RAM and RAID-5 storage.  The 
only thing running on the server is Postgres, under Fedora.  I have a 700-connection limit.

The DB is set up as the backend for a very high volume website.  Most of the queries 
are simple, such as logging accesses, user login verification, etc.  There are a few 
bigger things such as reporting, but for the most part each transaction lasts less 
than a second.  The connections are not persistent (I'm using pg_connect in PHP).
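For clarity, this is roughly the pattern every page uses now, plus the persistent 
variant I've avoided so far (the DSN and table name are made up for illustration):

    <?php
    // Current pattern (simplified): each page request opens its own
    // short-lived connection and it closes when the script ends.
    $db = pg_connect("host=dbhost dbname=site user=web");   // hypothetical DSN
    pg_query($db, "INSERT INTO access_log (page) VALUES ('index')");
    pg_close($db);

    // The persistent variant reuses one connection per Apache child,
    // trading connection-setup cost for connections that stay open:
    $db = pg_pconnect("host=dbhost dbname=site user=web");
    ?>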

The system previously ran with 2 GB of RAM and a 400-connection limit.  We ran into 
problems because we hit the connection limit during high-volume periods.

1.  Does 400 connections sound consistent with 2 GB of RAM?  Does 700 sound right 
for 4 GB?  I've read a little on optimizing Postgres.  Is there anything else I can do, 
maybe OS-wise, to increase how many connections I get before I start swapping? 
(A sketch of the kind of tuning I mean follows.)
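For concreteness, this is the flavor of tuning I've been reading about.  The numbers 
are illustrative guesses, not what we actually run:

    # postgresql.conf (values illustrative only)
    max_connections = 700
    shared_buffers = 65536         # 8 KB pages on 7.x, so ~512 MB
    sort_mem = 4096                # KB per sort; multiplied by concurrent sorts
    effective_cache_size = 262144  # 8 KB pages the OS cache is assumed to hold

    # Linux kernel limits, so the shared_buffers segment can be allocated
    # (/etc/sysctl.conf):
    kernel.shmmax = 1073741824     # max shared memory segment size, bytes
    kernel.shmall = 2097152        # total shared memory, in pages

One thing I'm unsure about is the per-connection memory cost: sort_mem is per sort 
operation, so 700 connections each doing a sort could in theory use 700 x 4 MB on 
top of shared_buffers, which is why I'm asking whether 700 is realistic at 4 GB.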

2.  Are there any clustering technologies that will work with Postgres?  Specifically, 
I'm looking at increasing the number of connections; one pooling fragment I've run 
across is sketched below.
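One thing that keeps coming up in my searching is pgpool, which multiplexes many 
client connections over a smaller pool of real backend connections.  Something like 
the fragment below is what I mean, though I haven't tried it and the parameter names 
are from its documentation as I recall them, with made-up values:

    # pgpool.conf fragment (untested; names as I recall from the pgpool docs)
    num_init_children = 200   # pooler processes accepting PHP clients
    max_pool = 4              # cached backend connections per child process

The appeal is that PHP would see 200+ available "connections" while Postgres itself 
holds far fewer, which might sidestep raising max_connections at all.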

The bottom line is that since the website launched (middle of January) we have increased 
the number of HTTP connections and increased bandwidth allowances by over 10 times.  
The site continues to grow and we are looking at our options.  One idea is DB 
replication: write to a master and read from multiple slaves.  Others include 
upgrading hardware.
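To make the replication idea concrete, the PHP side of what I'm imagining looks 
something like this.  The host names, helper functions, and tables are all 
hypothetical; this is just the routing logic:

    <?php
    // Hypothetical sketch: send writes to the master, spread reads
    // across replication slaves.
    function db_for_writes() {
        return pg_connect("host=db-master dbname=site user=web");
    }

    function db_for_reads() {
        $slaves = array("db-slave1", "db-slave2", "db-slave3");
        $host = $slaves[array_rand($slaves)];   // pick a slave at random
        return pg_connect("host=$host dbname=site user=web");
    }

    // Logging an access is a write, so it goes to the master:
    pg_query(db_for_writes(), "INSERT INTO access_log (page) VALUES ('index')");

    // Login verification only reads, so any slave will do:
    $r = pg_query(db_for_reads(),
                  "SELECT 1 FROM users WHERE name = 'kevin'");
    ?>

The obvious catch I can see is replication lag: a read issued right after a write may 
not see it yet, so flows like login-immediately-after-signup would still have to hit 
the master.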

This is the biggest site I have ever worked with.  Almost everything else I've done 
fits in a T1, with a single DB server handling multiple sites.  Does anybody with 
experience in this realm have any suggestions?

Thank you in advance for whatever help you can provide.
--
Kevin Barnard


