Hi,

See below.

msl wrote:
> Hello,
> In this post I will try to consolidate the information about the
> configurations I am deploying, hoping it will be useful for someone
> else.
> 
> The objective of my tests is:
> 
> High Availability services with ZFS/NFS on solaris 10 using a two-node Sun 
> Cluster.
> 
> There are two scenarios:
> 1) Using "shared disks" (global devices).
> 2) Using "non-shared disks" (local devices).
> 
> - Solution (1) is simpler, and we can use the HAStoragePlus resource
> type that ships with the SC 3.2 installation. And since the data is
> "unique", i.e. in a failover/switchback scenario the "same" disk is
> used on one host or the other, the practical purpose is clear.
> 
> The HOWTO for ZFS/NFS HA using shared disks (global devices) can be
> found here:
> Sun Cluster 3.2 installation:
> http://www.posix.brte.com.br/blog/?p=71
> 
> HA procedure for shared disks:
> http://www.posix.brte.com.br/blog/?p=68
> 
> - With solution (2) the purpose may be harder to see... but I think
> there are many uses:
> a) We can use it as a share for binaries, so the fact that the disks
> are not the "same" is not a real problem.
I would just like to point out that NFS file handles are constructed
from the device major number, the device minor number and the inode
number of the file in the file system accessed by the NFS clients
(the device numbers are the major and minor numbers of the device
that stores the file system).
You can have the same content in two different file systems, but, for
example, with a different inode number for a particular file.
So, in case of a switch, the NFS file handle will change and the NFS
client will no longer be able to access the file(s).
Generally the error message on the client looks like "Stale NFS file
handle".

The other point is the locks set by the NFS clients: the NFS data
service manages them through the statmon directory located in the
same place as dfstab.<NFS_resource_name>. The locks will be
re-established after the failover only if the content of the statmon
directory is the same on the different nodes.
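
A quick way to check that precondition could be something like the
sketch below (the statmon paths are only hypothetical examples; in
practice the directory sits next to dfstab.<NFS_resource_name> on the
HA storage):

    #!/usr/bin/env python
    # Compare the statmon contents as seen by the two nodes; lock
    # reclaim after a failover only works if both nodes see the same
    # set of client entries. Paths are hypothetical examples.
    import os

    statmon_a = "/nodeA_copy/SUNW.nfs/statmon/sm"
    statmon_b = "/nodeB_copy/SUNW.nfs/statmon/sm"

    entries_a = set(os.listdir(statmon_a))
    entries_b = set(os.listdir(statmon_b))

    if entries_a == entries_b:
        print("statmon content matches; locks can be re-established")
    else:
        print("only on node A: %s" % sorted(entries_a - entries_b))
        print("only on node B: %s" % sorted(entries_b - entries_a))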
From what I understand, you will use the cluster only for binaries,
so I guess no locks will be set by the NFS clients, as binaries are
generally accessed read-only.

Nicolas
> b) We can use it with applications that can handle "loss of data",
> i.e. the app knows if the data is corrupt and can restart its task.
> So the application just needs a "share" (like a "tmp" directory).
> c) We will sync/replicate the data using AVS.
> 
> So, I think ZFS/NFS/SC and AVS can let us use those local SATA disks
> of 300 GB or 500 GB in a consistent way. Don't you think?
> 
> The HOWTO for ZFS/NFS HA using non-shared disks (local devices) can
> be found here:
> AVS installation on Solaris 10 u3:
> 
> http://www.posix.brte.com.br/blog/?p=74
> 
> HA procedure for non-shared disks:
> Part I: http://www.posix.brte.com.br/blog/?p=73
> 
> Part II: http://www.posix.brte.com.br/blog/?p=75
> 
> I hope this information helps someone else with the same (crazy)
> ideas as me. I would appreciate your comments!
> 
> Thanks for your time.
> 
> Leal.
