Sorry... I didn't understand exactly what you were trying to do. If you want to present the same disks to multiple hosts, you would generally use a cluster-aware application or a cluster-aware filesystem, one that establishes quorum and behaves as a more truly clustered environment with heartbeat.
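To make that a bit more concrete, here is a rough Python sketch (nothing we actually run, and the path is made up) of the advisory locking a filesystem has to arbitrate when more than one writer touches the same file. On a single host the local kernel decides who holds the lock; a cluster filesystem such as GFS has to settle the same question across every node that has the disk mounted, which is what its lock manager plus the quorum/heartbeat machinery is there for.

# A minimal sketch, assuming a hypothetical shared mount at /mnt/shared.
# On a local filesystem the kernel arbitrates this lock between processes
# on one host; a cluster filesystem has to arbitrate the equivalent lock
# between hosts, which is why it needs a lock manager, quorum and heartbeat.
import fcntl

SHARED_FILE = "/mnt/shared/webroot/hits.log"   # hypothetical path

def append_line(text):
    """Append one line while holding an exclusive advisory lock."""
    with open(SHARED_FILE, "a") as f:
        fcntl.flock(f, fcntl.LOCK_EX)    # block until this writer owns the lock
        try:
            f.write(text + "\n")
            f.flush()
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)

if __name__ == "__main__":
    append_line("request served")

Without something doing that arbitration cluster-wide, two web heads writing to the same plain ext3 volume will eventually trample each other's metadata, never mind the file contents.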
Going back to GFS, it is actually capable of doing just that. So I guess we are back to GFS :)

http://mail.digicola.com/wiki/index.php?title=User:Martin:GFS
http://www.yolinux.com/TUTORIALS/LinuxClustersAndFileSystems.html

Those should give you enough reading to get started. Most of the applications we do this with are either read-only data sets that don't need a clustering filesystem, or Oracle databases that use OCFS, so I can only speak in theory. I would also check out what the Veritas products can do... they have an amazing track record on almost all *nix platforms (Solaris, HP-UX) and a lot of clustering capabilities.

Regards,
sean

On 25-Jan-2007, Brian Kroth wrote:
>
> Paul Kölle wrote:
> >Sean Cook schrieb:
> >>
> >>GFS is ok if you don't want to mess around with a SAN, but it has
> >>nowhere near the performance of fiber or iSCSI attached storage.
> >Aren't those apples and oranges? I thought iSCSI is a block-level
> >protocol and doesn't do locking and such, whereas GFS does...
>
> This is what I was getting at. I know the basics of working with the
> SAN to get a set of machines to at least see a storage array. The next
> step is getting them to read and write to, say, the same file on a
> filesystem on that storage array without stepping on each other's toes
> or corrupting the filesystem that lives on top of that storage array.
> That's where I haven't learned too much yet.
>
> I hadn't actually planned on using the SAN to boot off of, but that
> might be an option for easier configuration/software management. I
> simply wanted to use it almost as if it were an NFS mount that a group
> of servers stored web content on. The problem I had with that model is
> that the NFS server is a single point of failure. If, on the other hand,
> all the servers are directly attached to the data, any one of them can
> go down and the others won't care or notice. At least that's the
> working theory behind it right now.
