Hi,
What is the mountpoint of the pool file systems that are set to legacy?
The following output, captured after the resource group is brought online, might help:
# clrg online +
# zpool list
# zfs list
# df -F zfs
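
To see the legacy mountpoints directly, something along these lines should
also work (dataset name taken from your output below; adjust as needed):

# zfs get -r mountpoint,mounted common_pool0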

The next step could be to try the same switchover without HASP in the configuration.
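
As a rough sketch (assuming common_zpool is the HASP resource that manages
common_pool0, and nothing else in the group depends on it): disable the
resource so HASP releases the pool, then move the pool around by hand and
watch whether the export ever hangs.

# clresource disable common_zpool
# zpool import common_pool0
# zpool export common_pool0

(repeat the import/export on each node in turn)

If the manual cycle is clean on every node, the pool itself is probably
fine and the HASP stop method becomes the suspect.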

Thanks
-Venku

On 12/03/09 22:06, Tundra Slosek wrote:
> Details of latest test (mltstore0 and mltproc1 are both nodes in the cluster
> - you can see from the fact that the resource group is on mltstore0 that
> switching works sometimes!):
> 
> --- START of SESSION COPY ---
> root@mltstore0:~# /usr/cluster/bin/clresourcegroup status common_shares
> 
> === Cluster Resource Groups ===
> 
> Group Name         Node Name      Suspended     Status
> ----------         ---------      ---------     ------
> common_shares      mltproc0       No            Offline
>                    mltproc1       No            Offline
>                    mltstore1      No            Offline
>                    mltstore0      No            Online
> 
> root@mltstore0:~# /usr/cluster/bin/clrg switch -n mltstore1 common_shares
> clrg:  (C969069) Request failed because resource group common_shares is in
> ERROR_STOP_FAILED state and requires operator attention
> root@mltstore0:~# /usr/cluster/bin/clresourcegroup status common_shares
> 
> === Cluster Resource Groups ===
> 
> Group Name         Node Name      Suspended     Status
> ----------         ---------      ---------     ------
> common_shares      mltproc0       No            Offline
>                    mltproc1       No            Offline
>                    mltstore1      No            Offline
>                    mltstore0      No            Error--stop failed
> 
> root@mltstore0:~# /usr/cluster/bin/clresource status common_zpool
> 
> === Cluster Resources ===
> 
> Resource Name     Node Name     State           Status Message
> -------------     ---------     -----           --------------
> common_zpool      mltproc0      Offline         Offline
>                   mltproc1      Offline         Offline
>                   mltstore1     Offline         Offline
>                   mltstore0     Stop failed     Faulted
> 
> root@mltstore0:~# zpool export common_pool0
> 
> root@mltstore0:~# zpool list common_pool0
> cannot open 'common_pool0': no such pool
> 
> 
> --- END of SESSION COPY ---
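
(Side note: when a resource is left in STOP_FAILED like this, the usual way
out after cleaning up by hand is to clear the error flag before the next
switch attempt; something like

# clresource clear -f STOP_FAILED -n mltstore0 common_zpool

should do it; see the clresource(1CL) man page.)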
> 
> The log /var/adm/messages shows materially the same entries. So after failing
> to switch via clrg, I did just a regular zpool export, not a forced export,
> and had no errors. Having exported common_pool0 manually, I was then able to
> 'clrg switch -n mltstore1 common_shares' without incident, and in fact could
> switch back to mltstore0 without a problem. Is there something I can add to
> a script somewhere to list which files are open on the pool just prior to
> the hastorageplus_postnet_stop trying to do the 'umount
> /common_pool0/common_zone/root'?
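
fuser(1M) should tell you who is holding the file system; running it against
the mount point just before the switch is a cheap first step, for example:

# fuser -cu /common_pool0/common_zone/root

With -c the argument is treated as a mount point and every process with a
file open anywhere on that file system is reported; -u adds the login name.
pfiles on the reported PIDs then shows the actual open files.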
> 
> Also, when switched back to mltstore0... I did the following to confirm that
> the exact path '/common_pool0/common_zone/root' was actually a mount point -
> is there some issue around the fact that zfs list shows the mountpoint as
> 'legacy', not the actual path?
> 
> root@mltstore0:~# zfs list | grep common_zone
> common_pool0/common_zone            1.24G   725M  27.5K  /common_pool0/common_zone
> common_pool0/common_zone/ROOT       1.24G   725M    19K  legacy
> common_pool0/common_zone/ROOT/zbe   1.24G   725M  1.24G  legacy
> root@mltstore0:~# df /common_pool0/common_zone/root
> Filesystem           1K-blocks      Used Available Use% Mounted on
> common_pool0/common_zone/ROOT/zbe
>                        2047217   1305119    742099  64% /common_pool0/common_zone/root
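
(On the 'legacy' question: mountpoint=legacy only means ZFS will not mount
the dataset itself; it gets mounted explicitly with mount(1M), here by the
zones framework for the zone root. A legacy dataset is mounted along these
lines:

# mount -F zfs common_pool0/common_zone/ROOT/zbe /common_pool0/common_zone/root

so zfs list showing 'legacy' while df shows the real path is expected, and
should not by itself break the umount.)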
