Did you try adding one disk at a time?
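
If one specific drive is what rpc.metad objects to, adding the devices one at a time should isolate it. A minimal sketch, reusing the diskset and DID names from the thread below (the -t step assumes the set is not already owned by the other node):

```shell
# Take ownership of the diskset on this node, then add each
# DID device separately so a failure points at a single drive.
metaset -s globalds1 -t
metaset -s globalds1 -a /dev/did/rdsk/d3   # if this succeeds...
metaset -s globalds1 -a /dev/did/rdsk/d4   # ...any error is specific to d4
```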

Thorsten Frueauf wrote:
> Hi Stacy et al,
>
> thanks for the additional information. I guess you are absolutely sure 
> that /dev/did/rdsk/d3 and /dev/did/rdsk/d4 are not currently used within 
> any diskset (like d100 or d200)?
>
> Did you look at the corresponding LUNs via format to see what the 
> labels look like for d3 and d4?
>
> Maybe you can truss the rpc.metad process on each node (truss -p <pid>) 
> before invoking the metaset command, and see whether that reveals 
> anything obvious?
>
> Otherwise I'm out of ideas, sorry :(
>
> Greets
>        Thorsten
>
> Stacy Maydew wrote:
>   
>> Thorsten,
>>
>> /etc/release contains:
>>
>>                    Solaris Express Community Edition snv_86 X86
>>            Copyright 2008 Sun Microsystems, Inc.  All Rights Reserved.
>>                         Use is subject to license terms.
>>                              Assembled 27 March 2008
>>
>> /etc/vfstab on "middleclass" contains:
>>
>> #device         device          mount           FS      fsck    mount   mount
>> #to mount       to fsck         point           type    pass    at boot options
>> #
>> fd      -       /dev/fd fd      -       no      -
>> /proc   -       /proc   proc    -       no      -
>> /dev/dsk/c1t0d0s1       -       -       swap    -       no      -
>> /dev/dsk/c1t0d0s0       /dev/rdsk/c1t0d0s0      /       ufs     1       no      -
>> /dev/dsk/c1t0d0s5       /dev/rdsk/c1t0d0s5      /var    ufs     1       no      -
>> /dev/dsk/c1t0d0s6       /dev/rdsk/c1t0d0s6      /export ufs     2       yes     -
>> #/dev/dsk/c1t0d0s3      /dev/rdsk/c1t0d0s3      /globaldevices  ufs     2       yes     -
>> /dev/dsk/c1t0d0s4       /dev/rdsk/c1t0d0s4      /opt    ufs     2       yes     -
>> /devices        -       /devices        devfs   -       no      -
>> sharefs -       /etc/dfs/sharetab       sharefs -       no      -
>> ctfs    -       /system/contract        ctfs    -       no      -
>> objfs   -       /system/object  objfs   -       no      -
>> swap    -       /tmp    tmpfs   -       yes     -
>> /dev/did/dsk/d18s3 /dev/did/rdsk/d18s3 /global/.devices/node@2 ufs 2 no global
>>
>> /dev/md/acsls/dsk/d100 /dev/md/acsls/rdsk/d100 /export/home ufs 2 no -
>> /dev/md/acsls/dsk/d200 /dev/md/acsls/rdsk/d200 /export/backup ufs 2 no -
>>
>> The only difference in the vfstab on "upperclass" is the line for the 
>> global devices entry which on "upperclass" reads:
>>
>> /dev/did/dsk/d15s3 /dev/did/rdsk/d15s3 /global/.devices/node@1 ufs 2 no global
>>
>> The d100 and d200 entries are for failover disksets that I've already 
>> created successfully; they are currently mounted on "upperclass".
>>
>> Stacy
>>
>> Thorsten Frueauf wrote:
>>     
>>> Hi Stacy et al,
>>>
>>> could you mention which Solaris version you are using?
>>>
>>> Could you also verify your /etc/vfstab on each node? Look out for 
>>> entries that have inconsistent references to the dsk vs rdsk device.
>>>
>>> Greets
>>>        Thorsten
>>>
>>> Stacy Maydew wrote:
>>>> I'm trying to add devices to a metaset using the did entries and the 
>>>> following command:
>>>>
>>>> metaset -s globalds1 -a /dev/did/rdsk/d3 /dev/did/rdsk/d4
>>>>
>>>> The metaset globalds1 does exist, has 2 hosts assigned to it, and they 
>>>> are also set up as mediators.  "metaset -s globalds1" yields the 
>>>> following output:
>>>>
>>>> Set name = globalds1, Set number = 2
>>>>
>>>> Host                Owner
>>>>   middleclass
>>>>   upperclass
>>>>
>>>> Mediator Host(s)    Aliases
>>>>   middleclass
>>>>   upperclass
>>>>
>>>> Obviously, the two node names in the cluster are upperclass and 
>>>> middleclass.  The shared devices show up correctly when I issue the 
>>>> command "cldev list -v" from either node.  Disks d1 and d2 belong to 
>>>> another disk set, but all the other drives are unused.
>>>>
>>>> When I execute the "metaset -s globalds1 -a /dev/did/rdsk/d3 
>>>> /dev/did/rdsk/d4" command from upperclass to try and add the devices to 
>>>> the diskset, I get the following error:
>>>>
>>>> metaset: middleclass: metad drive used: RPC: Unable to receive
>>>>
>>>> I've opened up RPC, etc. via the sequence shown at 
>>>> http://blogs.sun.com/TF/entry/secure_by_default_and_sun and even took it 
>>>> one step further by opening network access on both nodes with 
>>>> "netservices open".  I'm still getting the error.  Any ideas what may be 
>>>> causing it?
>>>>
>>>> Thanks,
>>>>
>>>> Stacy
>>>> --
>> -- 
>>
>> Stacy Maydew
>> Sun Microsystems, Inc.
>> Phone 303-272-7805
>> Mobile 720-980-5105
>> Stacy.Maydew@Sun.com
> _______________________________________________
> ha-clusters-discuss mailing list
> ha-clusters-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/ha-clusters-discuss
>   

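For reference, the truss suggestion from earlier in the thread can be sketched like this; it runs on the remote node (middleclass here) just before re-issuing metaset, so the failing call shows up in the trace (the output path is illustrative):

```shell
# On middleclass: attach truss to the running rpc.metad daemon,
# following any forked children, and log the trace to a file.
truss -f -o /var/tmp/rpc.metad.truss -p $(pgrep -x rpc.metad)

# Then, on upperclass, reproduce the failure:
#   metaset -s globalds1 -a /dev/did/rdsk/d3 /dev/did/rdsk/d4
# and look near the end of /var/tmp/rpc.metad.truss for the
# system call that fails before the RPC error is returned.
```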