My company finally has the means to install a new database server for replication. I have Googled and found a lot of scattered information on replication systems for PostgreSQL, and much of it looks very out of date. Can I please get some ideas from those of you who are currently running fail-over replication systems? What advantages does your solution have? What are the "gotchas" I need to worry about?

My ideal would be a parallel server acting as a hot standby with automatic failover in a multi-master role: if our primary server goes down for whatever reason, the secondary takes over and handles the load seamlessly. I realize this is the "holy grail" scenario and I understand how difficult it is to achieve, especially since we make frequent use of sequences in our databases. If multi-master is too difficult, I'm willing to accept a read-only hot standby that can handle queries until we fix whatever ails the master. We are primarily an OLAP environment, but there is a constant stream of inserts into the databases. There are 47 databases hosted on the primary server, and that number will continue to grow to whatever the server can support. I mention this because it seems that many of these replication systems require a lot of "fiddling" whenever schemas change. For a single database that doesn't seem too problematic, but any manual work and administrative overhead will grow at the same rate as the database count, and I want to minimize fiddling as much as possible.

We are running PostgreSQL 8.3, and the total combined size of the PG data directory is 226 GB. Hopefully I haven't left out any relevant information.
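
For what it's worth, the only approach I've really wrapped my head around so far is plain WAL shipping to a warm standby using pg_standby from contrib. My rough (and so far untested) understanding of the 8.3 setup is below; the hostnames, paths, and trigger file are just placeholders:

    # On the primary, in postgresql.conf (archive_mode is new in 8.3):
    archive_mode    = on
    archive_command = 'rsync -a %p standby:/var/lib/pgsql/wal_archive/%f'
    archive_timeout = 60     # force a WAL segment switch at least once a minute

    # On the standby, in recovery.conf, using contrib/pg_standby;
    # creating the trigger file is what promotes the standby:
    restore_command = 'pg_standby -t /tmp/pgsql.trigger.5432 /var/lib/pgsql/wal_archive %f %p %r'

As I understand it, a warm standby like this can't serve read-only queries on 8.3, and failover is only as automatic as whatever creates the trigger file, which is part of why I'm asking what people actually run in production.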

As always, thank you for your insight.

-Dan


