Hello,
In this post I will try to gather the information about the configurations I'm
deploying, hoping it will be useful to somebody else.

The objective of my tests is:

High Availability services with ZFS/NFS on Solaris 10, using a two-node Sun
Cluster.

There are two scenarios:
1) Using "shared disks" (global devices).
2) Using "non-shared disks" (local devices).

- Solution (1) is simpler, and we can use the HAStoragePlus resource type that
comes with the SC 3.2 installation. And as the data is "unique", I mean, in a
failover/switchback scenario the "same" disk will be used on one host or the
other, the practical purpose is clear.
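Just to illustrate, the shared-disk setup with HAStoragePlus boils down to
something like this (the group, resource and pool names below are only
examples, not taken from the HOWTOs):

```shell
# Register the HAStoragePlus resource type (once per cluster)
clresourcetype register SUNW.HAStoragePlus

# Create a failover resource group for the NFS service
clresourcegroup create nfs-rg

# Put the ZFS pool under cluster control; SC imports/exports
# the pool on failover/switchback
clresource create -g nfs-rg -t SUNW.HAStoragePlus \
    -p Zpools=tank hastp-rs

# Bring the group online (the pool gets imported on the primary node)
clresourcegroup online -M nfs-rg
```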

The HOWTO for ZFS/NFS HA using shared disks (global devices) can be found here:
Sun Cluster 3.2 installation:
http://www.posix.brte.com.br/blog/?p=71

HA procedure for shared disks:
http://www.posix.brte.com.br/blog/?p=68

- Solution (2) may be harder to see the purpose of... but I think there are
many uses:
a) We can use it to share binaries. So the fact that the disks are not the
"same" is not a real problem.
b) We can use it with applications that can handle "loss of data", I mean, the
app knows if the data is corrupt and can restart its task. So the application
just needs a "share" (like a "tmp" directory).
c) We will sync/replicate the data using AVS.
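To give an idea of point (c), enabling an SNDR replica with AVS looks roughly
like this (the hostnames, data devices and bitmap slices are just example
values, not from my setup):

```shell
# On the primary node: enable replication of the data slice,
# using a dedicated slice for the SNDR bitmap
sndradm -e nodeA /dev/rdsk/c1t1d0s0 /dev/rdsk/c1t1d0s1 \
           nodeB /dev/rdsk/c1t1d0s0 /dev/rdsk/c1t1d0s1 \
           ip sync

# Start a full synchronization from primary to secondary
# (the set is identified by secondary_host:secondary_device)
sndradm -n -m nodeB:/dev/rdsk/c1t1d0s0

# Check the replication status
sndradm -P
```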

So, I think ZFS/NFS/SC and AVS let us use those local SATA disks, which can be
300GB or 500GB, in a consistent way. Don't you think?
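In both scenarios the ZFS/NFS side itself is simple; assuming a pool named
"tank" on a local SATA disk (just example names), the share is only:

```shell
# Create a pool on the local disk and a filesystem to export
zpool create tank c1t2d0
zfs create tank/export

# Share it over NFS directly from ZFS
zfs set sharenfs=on tank/export
```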

The HOWTO for ZFS/NFS HA using non-shared disks (local devices) can be found
here:
AVS installation on Solaris 10 u3:

http://www.posix.brte.com.br/blog/?p=74

HA procedure for non-shared disks:
Part I: http://www.posix.brte.com.br/blog/?p=73

Part II: http://www.posix.brte.com.br/blog/?p=75

I hope this information helps somebody else with the same (crazy) ideas as me.
I will appreciate your comments!

Thanks for your time.

Leal.
--

This message posted from opensolaris.org

