Cluster Manager
The Cluster Manager feature of Red Hat Cluster Suite provides an application failover infrastructure that can be used by a wide range of applications, including:
* Most custom and mainstream commercial applications
* File and print serving
* Databases and database applications
* Messaging applications
* Internet and open source applications
With Cluster Manager, these applications can be deployed in high availability configurations so that they are always operational—bringing "scale-out" capabilities to enterprise Linux deployments.
For high-volume open source applications, such as NFS, Samba, and Apache, Cluster Manager provides a complete ready-to-use failover solution. For most other applications, customers can create custom failover scripts using provided templates. Red Hat Professional Services can provide custom Cluster Manager deployment services where required.
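For illustration, a cluster service script implements a small start/stop/status interface: the cluster calls it to start a service on a node, stop it before failing over, and poll its health. The following is a minimal hypothetical sketch in Python of that interface (the real Cluster Manager templates are init-style shell scripts); the daemon command and pidfile path below are assumptions.

#!/usr/bin/env python
# Hypothetical cluster service-control script (illustrative only).
# The cluster invokes it with "start", "stop", or "status";
# exit code 0 means success/healthy.
import os
import subprocess
import sys

PIDFILE = "/var/run/myapp.pid"              # assumed pidfile location
START_CMD = ["/usr/local/bin/myapp", "-d"]  # assumed daemon command

def is_running():
    """True if the pid recorded in PIDFILE refers to a live process."""
    try:
        pid = int(open(PIDFILE).read().strip())
        os.kill(pid, 0)   # signal 0 just tests that the process exists
        return True
    except (IOError, OSError, ValueError):
        return False

def start():
    return 0 if is_running() else subprocess.call(START_CMD)

def stop():
    if is_running():
        os.kill(int(open(PIDFILE).read().strip()), 15)  # SIGTERM
    return 0

def status():
    return 0 if is_running() else 1

if __name__ == "__main__":
    action = sys.argv[1] if len(sys.argv) > 1 else "status"
    handlers = {"start": start, "stop": stop, "status": status}
    sys.exit(handlers.get(action, status)())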
Features
* Support for up to eight nodes: Allows high availability to be provided for multiple applications simultaneously.
* NFS/CIFS failover: Supports highly available file serving in Unix and Windows environments.
* Fully shared storage subsystem: All cluster members have access to the same storage.
* Comprehensive data integrity guarantees: Uses the latest I/O barrier technology, such as programmable power switches and watchdog timers.
* SCSI and Fibre Channel support: Cluster Manager configurations can be deployed using the latest SCSI and Fibre Channel technology. Multi-terabyte configurations can readily be made highly available.
* Service failover: Cluster Manager not only ensures that hardware shutdowns or failures are detected and recovered from automatically, but also monitors your applications to ensure they are running correctly, and restarts them automatically if they fail (a sketch of such a monitor loop follows this list).
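Conceptually, the service-monitoring piece amounts to a loop like the one below for each managed service. This is a hypothetical sketch, not Cluster Manager's actual implementation; the script path, restart command, and check interval are assumptions.

import subprocess
import time

CHECK_INTERVAL = 10  # seconds between health checks (assumption)

def service_healthy():
    # Reuse the service script's "status" action: exit code 0 = healthy.
    return subprocess.call(["/etc/init.d/myapp", "status"]) == 0

def monitor():
    while True:
        if not service_healthy():
            # Try a local restart first; a real cluster escalates to
            # failing the service over to another node if this keeps failing.
            subprocess.call(["/etc/init.d/myapp", "restart"])
        time.sleep(CHECK_INTERVAL)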
John Arbash Meinel wrote:
John Allgood wrote:
This is some good info. The attached storage is a Kingston 14-bay Fibre Channel Infostation with 14 36GB 15,000 RPM drives. The way it has been explained to me, I should build a mirror with two disks for pg_xlog, stripe and mirror the rest, and put all my databases into one cluster. I should also mention that I am running clustering using the Red Hat Cluster Suite.
So are these 14-disks supposed to be shared across all of your 9 databases?
It seems to me that you have a few architectural issues here.
First, you can't really have two masters writing to the same disk array; I'm not sure whether Red Hat Clustering gets around this. Second, you can't run two postgres engines on the same database: Postgres doesn't support a clustered setup, as there are too many issues with concurrency and keeping everyone in sync.
Since you seem to be okay with having a bunch of smaller localized databases, which update a master database once per day, I would think you would want hardware something like this.
One master server, at least a dual Opteron, with access to lots of disks (likely the whole 14 if you can get away with it). Put 2 as a RAID1 for the OS, 4 as a RAID10 for pg_xlog, and the other 8 as a RAID10 for the rest of the database.
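If you do give pg_xlog its own array, the usual trick is to move the directory onto those spindles and symlink it back into the data directory while postgres is stopped. A minimal sketch, where both paths are assumptions about your layout:

import os
import shutil

DATADIR = "/var/lib/pgsql/data"   # assumed PostgreSQL data directory
XLOG_ARRAY = "/mnt/xlog_raid10"   # assumed mount point of the dedicated array

def relocate_pg_xlog():
    """Move pg_xlog onto dedicated disks. Run only while postgres is stopped."""
    src = os.path.join(DATADIR, "pg_xlog")
    dst = os.path.join(XLOG_ARRAY, "pg_xlog")
    shutil.move(src, dst)   # copies across filesystems, then removes the source
    os.symlink(dst, src)    # postgres follows the symlink transparently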
Eight or nine other servers; these don't need to be as powerful, since they only serve local domains. A 4-disk RAID10 for the OS and pg_xlog is probably plenty, plus whatever extra disks you can get for the local database.
The master database holds all information for all domains, but the other databases hold only their local information. Every night your script sequences through the domain databases one by one, updating the master database and synchronizing whatever data is necessary back to the local domain. I would guess that this script could actually run continually, going to each local db in turn, but you may want nighttime-only updating depending on what kind of load they have.
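A rough sketch of what that sync script might look like, using psycopg2-style connections. Everything schema-specific here is hypothetical: the DSNs, the table name, and the "synced" flag used for change tracking all depend on your actual setup.

import time
import psycopg2  # any Python DB-API driver for PostgreSQL would do

# Hypothetical connection strings for the master and per-domain databases.
MASTER_DSN = "dbname=master host=master-db"
DOMAIN_DSNS = ["dbname=domain%d host=domain%d-db" % (i, i) for i in range(1, 10)]

def sync_domain(master, dsn):
    """Pull unsynced rows from one domain database into the master.

    Assumes each domain table carries a boolean 'synced' column; real
    change tracking (timestamps, triggers, queues) depends on your schema.
    """
    domain = psycopg2.connect(dsn)
    try:
        dcur = domain.cursor()
        mcur = master.cursor()
        dcur.execute("SELECT id, payload FROM orders WHERE NOT synced")
        for row_id, payload in dcur.fetchall():
            mcur.execute(
                "INSERT INTO orders (domain, src_id, payload) VALUES (%s, %s, %s)",
                (dsn, row_id, payload))
            dcur.execute("UPDATE orders SET synced = true WHERE id = %s", (row_id,))
        master.commit()
        domain.commit()
    finally:
        domain.close()

def main():
    master = psycopg2.connect(MASTER_DSN)
    while True:              # or run once from cron for nighttime-only updates
        for dsn in DOMAIN_DSNS:
            sync_domain(master, dsn)
        time.sleep(300)      # pause between passes (arbitrary)

if __name__ == "__main__":
    main()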
John =:->