Hi Michael,
From the output of the cluster command, everything looks fine. Your 
problem is not related to zone clusters. I suspect there is something 
wrong with the HAStoragePlus resource. Two things to do:
1. Check /var/adm/messages on BOTH nodes. It should indicate what went 
wrong when trying to start this resource.
2. Can you import and export the zpool on top of the iSCSI devices on 
both nodes, one node at a time (not simultaneously)? A rough sketch of 
both steps follows below.
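
For example, something along these lines (a rough sketch only; it assumes 
the pool is named data3, as in your mail, and that the resource group is 
offline while you test):

# grep -i hastorageplus /var/adm/messages | tail -20
# zpool import data3
# zpool status data3
# zpool export data3

Run the zpool commands on one node first, then on the other. If the import 
hangs or fails on either node, that is where to dig.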

If that does not give you a clue, let us know the Zpools definition of 
the HAStoragePlus resource: "clrs show -v hasp-rs"
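
Or, to print just that one property (assuming the resource really is 
called hasp-rs, as shown in your cluster show output):

# clrs show -p Zpools hasp-rs

The Zpools value should name exactly the pool that lives on the iSCSI 
mirror, i.e. data3.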

Regards
Hartmut

Michael W Lucas wrote:
> Hi,
>
> To learn what I'm doing, I set up two OpenSolaris VMs and worked on 
> clustering them.  This let me learn much faster than real hardware would have.
>
> For anyone following along, the key was Augustus' statement that I needed an 
> iSCSI connection to both the local disk and the remote disk.  Once I told 
> iscsiadm on each node to look at both the local and remote interconnect IPs, 
> everything connected.  (If you set up a cluster like this, read the example 
> closely, as well as the linked articles explaining how all the pieces work.)
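>
> Concretely, it was roughly the following on each node (a sketch from memory; 
> the addresses below are just placeholders for the two interconnect IPs):
>
> # iscsiadm add discovery-address <node1-interconnect-IP>:3260
> # iscsiadm add discovery-address <node2-interconnect-IP>:3260
> # iscsiadm modify discovery --sendtargets enable
> # devfsadm -i iscsi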
>
> I've hit another snag, though.  I can't online the cluster resource.
>
> # /usr/cluster/bin/clresourcegroup data3
> clresourcegroup:  (C198212) Unrecognized subcommand - "data3".
> clresourcegroup:  (C101856) Usage error.
> ...(usage info)
>
> Reading the clresource manual page suggested that what I wanted was 
> "clresourcegroup data3."  That seemed to work, and allowed me to create a 
> resource to manage the iSCSI-mirrored zpool "data3" and bring the group into 
> the managed state, as per the example.  But then I try to online this group:
>
> #/usr/cluster/bin/clresourcegroup online data3
> clresourcegroup:  (C748634) Resource group data3 failed to start on chosen 
> node and might fail over to other node(s)
> clresourcegroup:  (C135343) No primary node could be found for resource group 
> data3; it remains offline
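>
> For reference, this is roughly the sequence I used to create the group and 
> resource beforehand (reconstructed, so the exact flags may differ slightly 
> from what I actually typed):
>
> # /usr/cluster/bin/clresourcegroup create data3
> # /usr/cluster/bin/clresource create -g data3 -t SUNW.HAStoragePlus -p Zpools=data3 hasp-rs
> # /usr/cluster/bin/clresourcegroup manage data3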
>
> It appears that this error shows up when mixing zone and global clusters, but 
> "cluster show" indicates that I have no zone clusters.
>
> Obviously, I'm missing something.  Should the first "clresourcegroup" command 
> have worked without a subcommand?  Or am I missing something else that would 
> let me online the group?
>
> Thanks,
> ==ml
>
> # /usr/cluster/bin/cluster show
>
> === Cluster ===
>
> Cluster Name:                                   test1
>   clusterid:                                       0x4B4E224F
>   installmode:                                     disabled
>   heartbeat_timeout:                               10000
>   heartbeat_quantum:                               1000
>   private_netaddr:                                 172.16.0.0
>   private_netmask:                                 255.255.240.0
>   max_nodes:                                       64
>   max_privatenets:                                 10
>   num_zoneclusters:                                12
>   udp_session_timeout:                             480
>   global_fencing:                                  nofencing
>   Node List:                                       clustertest1, clustertest2
>
>   === Host Access Control ===
>
>   Cluster name:                                 test1
>     Allowed hosts:                                 None
>     Authentication Protocol:                       sys
>
>   === Cluster Nodes ===
>
>   Node Name:                                    clustertest1
>     Node ID:                                       1
>     Enabled:                                       yes
>     privatehostname:                               clusternode1-priv
>     reboot_on_path_failure:                        disabled
>     globalzoneshares:                              1
>     defaultpsetmin:                                1
>     quorum_vote:                                   1
>     quorum_defaultvote:                            1
>     quorum_resv_key:                               0x4B4E224F00000001
>     Transport Adapter List:                        vnic0
>
>   Node Name:                                    clustertest2
>     Node ID:                                       2
>     Enabled:                                       yes
>     privatehostname:                               clusternode2-priv
>     reboot_on_path_failure:                        disabled
>     globalzoneshares:                              1
>     defaultpsetmin:                                1
>     quorum_vote:                                   1
>     quorum_defaultvote:                            1
>     quorum_resv_key:                               0x4B4E224F00000002
>     Transport Adapter List:                        vnic0
>
>   === Transport Cables ===
>
>   Transport Cable:                              clustertest2:vnic0,clustertest1:vnic0
>     Endpoint1:                                     clustertest2:vnic0
>     Endpoint2:                                     clustertest1:vnic0
>     State:                                         Enabled
>
>   === Transport Switches ===
>
>   === Global Quorum ===
>
>   Name:                                         membership
>     Type:                                          system
>     multiple_partitions:                           false
>     ping_targets:                                  <NULL>
>
>   === Device Groups ===
>
>   === Registered Resource Types ===
>
>   Resource Type:                                SUNW.LogicalHostname:3
>     RT_description:                                Logical Hostname Resource Type
>     RT_version:                                    3
>     API_version:                                   2
>     RT_basedir:                                    /usr/cluster/lib/rgm/rt/hafoip
>     Single_instance:                               False
>     Proxy:                                         False
>     Init_nodes:                                    All potential masters
>     Installed_nodes:                               <All>
>     Failover:                                      True
>     Pkglist:                                       SUNWscu
>     RT_system:                                     True
>     Global_zone:                                   True
>
>   Resource Type:                                SUNW.SharedAddress:2
>     RT_description:                                HA Shared Address Resource Type
>     RT_version:                                    2
>     API_version:                                   2
>     RT_basedir:                                    /usr/cluster/lib/rgm/rt/hascip
>     Single_instance:                               False
>     Proxy:                                         False
>     Init_nodes:                                    <Unknown>
>     Installed_nodes:                               <All>
>     Failover:                                      True
>     Pkglist:                                       SUNWscu
>     RT_system:                                     True
>     Global_zone:                                   True
>
>   Resource Type:                                SUNW.HAStoragePlus:8
>     RT_description:                                HA Storage Plus
>     RT_version:                                    8
>     API_version:                                   2
>     RT_basedir:                                    /usr/cluster/lib/rgm/rt/hastorageplus
>     Single_instance:                               False
>     Proxy:                                         False
>     Init_nodes:                                    All potential masters
>     Installed_nodes:                               <All>
>     Failover:                                      False
>     Pkglist:                                       SUNWscu
>     RT_system:                                     False
>     Global_zone:                                   True
>
>   === Resource Groups and Resources ===
>
>   Resource Group:                               data3
>     RG_description:                                <NULL>
>     RG_mode:                                       Failover
>     RG_state:                                      Managed
>     Failback:                                      False
>     Nodelist:                                      clustertest1 clustertest2
>
>     --- Resources for Group data3 ---
>
>     Resource:                                   hasp-rs
>       Type:                                        SUNW.HAStoragePlus:8
>       Type_version:                                8
>       Group:                                       data3
>       R_description:
>       Resource_project_name:                       default
>       Enabled{clustertest1}:                       True
>       Enabled{clustertest2}:                       True
>       Monitored{clustertest1}:                     True
>       Monitored{clustertest2}:                     True
>
>   === DID Device Instances ===
>
>   DID Device Name:                              /dev/did/rdsk/d1
>     Full Device Path:                              clustertest1:/dev/rdsk/c7t0d0
>     Replication:                                   none
>     default_fencing:                               global
>
>   DID Device Name:                              /dev/did/rdsk/d2
>     Full Device Path:                              clustertest1:/dev/rdsk/c8t0d0
>     Replication:                                   none
>     default_fencing:                               global
>
>   DID Device Name:                              /dev/did/rdsk/d3
>     Full Device Path:                              clustertest1:/dev/rdsk/c8t1d0
>     Replication:                                   none
>     default_fencing:                               global
>
>   DID Device Name:                              /dev/did/rdsk/d4
>     Full Device Path:                              clustertest2:/dev/rdsk/c7t0d0
>     Replication:                                   none
>     default_fencing:                               global
>
>   DID Device Name:                              /dev/did/rdsk/d5
>     Full Device Path:                              clustertest2:/dev/rdsk/c8t0d0
>     Replication:                                   none
>     default_fencing:                               global
>
>   DID Device Name:                              /dev/did/rdsk/d6
>     Full Device Path:                              clustertest2:/dev/rdsk/c8t1d0
>     Replication:                                   none
>     default_fencing:                               global
>
>   DID Device Name:                              /dev/did/rdsk/d7
>     Full Device Path:                              clustertest2:/dev/rdsk/c0t600144F0641C810000004B5488100001d0
>     Full Device Path:                              clustertest1:/dev/rdsk/c0t600144F0641C810000004B5488100001d0
>     Replication:                                   none
>     default_fencing:                               global
>
>   DID Device Name:                              /dev/did/rdsk/d8
>     Full Device Path:                              clustertest2:/dev/rdsk/c0t600144F0D1C80B0000004B5487D20001d0
>     Full Device Path:                              clustertest1:/dev/rdsk/c0t600144F0D1C80B0000004B5487D20001d0
>     Replication:                                   none
>     default_fencing:                               global
>
>   === NAS Devices ===
>
>   === Zone Clusters ===
>   

-- 
Sun Microsystems GmbH           Hartmut Streppel
Sonnenallee 1                   Systems Practice
D-85551 Kirchheim-Heimstetten   Phone:  +49 (0)89 46008 2563
Germany                         Mobile: +49 (0)172 8919711
http://www.sun.de               FAX:    +49 (0)89 46008 2572
mailto: hartmut.streppel at sun.com
My BLOG:  http://blogs.sun.com/Hartmut
Registered office:
Sun Microsystems GmbH, Sonnenallee 1, D-85551 Kirchheim-Heimstetten
Commercial register: Amtsgericht München, HRB 161028
Managing directors: Thomas Schröder, Wolfgang Engels, Wolf Frenkel
Chairman of the supervisory board: Martin Häring
