Gagan,

If both cluster nodes are up, you can always write to the disks using the /dev/did devices. Cluster doesn't protect you from doing dumb things.

However, if you want to configure a simple NFS failover service then I would do the following (from memory so apologies for errors):

# zpool create mypool /dev/did/dsk/dX

# zfs create mypool/myshare

# mkdir -p /mypool/SUNW.nfs

# vi /mypool/SUNW.nfs/dfstab.myshare-nfs-rs
(put normal dfstab entries in here for your shares of /mypool/myshare)
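For example, assuming you just want a plain read/write share of the dataset created above, a minimal entry might look like:

share -F nfs -o rw /mypool/myshare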

# clrg create -p PathPrefix=/mypool my-rg

# clrt register SUNW.HAStoragePlus

# clrt register SUNW.nfs

# clrslh create -g my-rg -h <yourLogicalNFSHostname> my-nfs-svr-lh-rs

# clrs create -t SUNW.HAStoragePlus -g my-rg -p zpools=mypool my-hasp-rs

# clrs create -t SUNW.nfs -g my-rg \
        -p resource_dependencies=my-nfs-svr-lh-rs \
        -p resource_dependencies_offline_restart=my-hasp-rs \
        myshare-nfs-rs

# clrg online -emM my-rg
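You can then check that everything came online with something like:

# clrg status my-rg
# clrs status -g my-rg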

Hope that helps,

Tim
---

PS - There are quite a few step-by-step examples in the "Oracle Solaris Cluster Essentials" book :-)

On 06/03/11 10:17, Gagan Vyas wrote:
Hi Tim,

Thanks for the quick reply.

I am looking for a failover configuration. Both cluster nodes are up,
and I want only one node to be able to write to the shared SAN disk. The
other node can remain up, but it should not be allowed to write until
the first node gives up control or goes down.

The second configuration I am looking for is to NFS-mount this shared
disk and access it from a non-cluster machine (maybe a Linux host)
through a shared IP on the Solaris cluster.

Will creating a resource group with a disk resource in it help?

Which device should we use for I/O on the shared LUN:
/dev/rdsk/xxxxxxxx, /dev/did/rdsk/xxxxxxxx or
/dev/global/rdsk/xxxxxxxxxxxx?

Thanks,
Gagan Vyas

-----Original Message-----
From: Tim Read - Software Developer [mailto:tim.r...@oracle.com]
Sent: Friday, June 03, 2011 1:55 PM
To: Gagan Vyas
Cc: ha-clusters-discuss@opensolaris.org
Subject: Re: [ha-clusters-discuss] protecting a shared SAN disk in 2 node cluster

Gagan,

Disks are only fenced off when a node leaves the cluster. Here you
seem to have both nodes still in the cluster.

If you want to have a device that is 'owned' by one node or the
other, then use either zpools or Solaris Volume Manager metasets.
These are then 'owned' by one node or the other. Is the application
you had in mind a failover or a parallel one?
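
For example (just a sketch, with hypothetical diskset, node and DID
names), a failover SVM diskset might be set up roughly like this:

# metaset -s mydiskset -a -h node1 node2
# metaset -s mydiskset -a /dev/did/rdsk/d8
# metaset -s mydiskset -t

The last command takes ownership of the diskset on the node where it is
run; the equivalent for ZFS is simply importing or exporting the zpool.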

Regards,

Tim
---


On 06/03/11 07:22, Gagan wrote:
Hi Friends,

I am new to Solaris Cluster and am looking for some help with it.

I have a functional two-node cluster. Quorum is configured and things
look good.

Now, I have a shared SAN disk which is available on both nodes. The
disk has a SCSI-3 reservation on it. I am able to run I/O on this disk
from both nodes. I was wondering if there is a way to make one cluster
node the owner so that the other node cannot write to it (as that may
cause corruption).

I tried to create a data service - a highly available resource - which
creates a resource group and a resource for this disk. But I am not
sure how to prevent the other cluster node from writing to this disk.
The node which shows offline for this resource is also able to write
to the disk (/dev/did/rdsk/d8s2).

I am not sure if I am doing things right. I need some guidance on
this.

All I want is to lock this shared device to the master node until it
is failed over.

Thanks,
Gagan Vyas






--

Tim Read
Software Developer
Solaris Availability Engineering
Oracle Corporation UK Ltd
Springfield
Linlithgow
EH49 7LR

Phone: +44 (0)1506 672 684
Mobile: +44 (0)7802 212 137
Twitter: @timread

