Mitch Pirtle wrote:
> Which brings up another question: why not just cluster at the hardware
> layer? Get an external Fibre Channel array and cluster a bunch of dual
> Opterons, all sharing that storage. In that sense you would be getting
> one big PostgreSQL 'image' running across all of the servers.
This isn't as easy as it sounds. Simply sharing the array among hosts with a 'standard' filesystem won't work, because each host caches blocks independently and those caches quickly become inconsistent with one another. So you need to put a shared-disk cluster filesystem (such as GFS or Lustre) on it.
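To make the cache problem concrete, here is a toy sketch in C (purely an illustration; the "hosts", the in-memory "disk", and the cache structure are all made up for the example, not real kernel or filesystem code). Each host keeps a private copy of a block it has read, so a write by one host leaves the other host serving stale data until some cross-host protocol invalidates the copy. Providing that protocol (a distributed lock manager) is what a cluster filesystem like GFS or Lustre adds on top of the shared array.

    /* Toy model of two hosts sharing one block device, each with a
     * private cache -- illustration only, not real filesystem code. */
    #include <stdio.h>
    #include <string.h>

    #define EMPTY  0
    #define CACHED 1

    static char disk[64] = "initial data";       /* the shared FC array   */

    struct host_cache {
        char block[64];                          /* host-local page cache */
        int  state;
    };

    /* Read through the host's private cache; go to disk only on a miss. */
    static const char *host_read(struct host_cache *c)
    {
        if (c->state == EMPTY) {
            memcpy(c->block, disk, sizeof c->block);
            c->state = CACHED;
        }
        return c->block;
    }

    /* Write into the host's cache and flush to disk -- but nothing tells
     * the *other* host that its cached copy is now stale. */
    static void host_write(struct host_cache *c, const char *data)
    {
        snprintf(c->block, sizeof c->block, "%s", data);
        c->state = CACHED;
        memcpy(disk, c->block, sizeof disk);
    }

    int main(void)
    {
        struct host_cache a = { .state = EMPTY }, b = { .state = EMPTY };

        host_read(&a);
        host_read(&b);                         /* both hosts cache "initial data" */

        host_write(&a, "updated by host A");   /* A updates the shared array      */

        printf("disk  : %s\n", disk);          /* updated                         */
        printf("host B: %s\n", host_read(&b)); /* still "initial data" -- stale   */
        return 0;
    }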
But that's not enough, because you're going to be running separate PostgreSQL backends on the different hosts, and there are definite consistency issues with doing that. As far as I know (right, experts?), PostgreSQL isn't designed to provide distributed consistency (isn't shared memory used for coordination, which restricts all the backends to a single host?).
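For what it's worth, here is a minimal sketch of why the shared-memory point matters (generic System V IPC, not actual PostgreSQL source; the counter is just a stand-in for shared state such as a buffer cache or lock table). The segment is created and attached through the local kernel, so only processes on that same host can ever see it; a backend on a second box has nothing to attach to.

    /* Minimal System V shared-memory sketch -- generic IPC example, not
     * PostgreSQL code.  The segment lives in the local kernel, so only
     * processes on this host can attach to it. */
    #include <stdio.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        /* Create a segment big enough for one shared counter. */
        int shmid = shmget(IPC_PRIVATE, sizeof(int), IPC_CREAT | 0600);
        if (shmid < 0) { perror("shmget"); return 1; }

        int *counter = shmat(shmid, NULL, 0);
        if (counter == (void *) -1) { perror("shmat"); return 1; }
        *counter = 0;

        pid_t pid = fork();
        if (pid == 0) {                /* a "backend" on the same host      */
            (*counter)++;              /* sees and updates the shared state */
            _exit(0);
        }
        waitpid(pid, NULL, 0);

        printf("counter after child ran: %d\n", *counter);   /* prints 1 */

        shmdt(counter);
        shmctl(shmid, IPC_RMID, NULL); /* clean up the segment */
        return 0;
    }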
-- 
Steve Wampler -- [EMAIL PROTECTED]
The gods that smiled on your birth are now laughing out loud.