> Thorsten and all,
> 
> I've tried again to reproduce the issues I experienced on s10 / sc32 
> (-33 core patch):
> 
>     --- 8< ---
> pub0-node1:/# metaset
> 
> Set name = shared-dg, Set number = 1
> 
> Host                Owner
>   pub0-node0
>   pub0-node1         Yes
> 
> Driv Dbase
> 
> d4   Yes
> 
> d5   Yes
> 
> pub0-node1:/# date ; time metaset -s shared-dg -df -h pub0-node0 ; date
> Wed Oct 14 17:58:39 CEST 2009
> 
> real    7m14.837s
> user    0m0.294s
> sys     0m0.149s
> Wed Oct 14 18:05:54 CEST 2009
> 
> pub0-node1:/# metaset
> 
> Set name = shared-dg, Set number = 1
> 
> Host                Owner
>   pub0-node1         Yes
> 
> Driv Dbase
> 
> d4   Yes
> 
> d5   Yes
>     --- 8< ---
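> 
> For reference, per metaset(1M) the flags in that command break down as 
> follows:
> 
>     --- 8< ---
> # -d deletes the named host from the set, -f forces the deletion even
> # when that host is dead/unreachable, -h names the host to remove
> metaset -s shared-dg -d -f -h pub0-node0
>     --- 8< ---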
> 
> Even though I didn't (knowingly) change any other configuration, the 
> problems I ran into have all vanished - the issue is no longer 
> reproducible on s10/sc32. I could repeatedly remove a dead node from a 
> metaset, regardless of
> 
>     - whether the node was offline or booted in non-cluster mode
>     - whether or not [SUCCESS=return] was added to the hosts line of
>       nsswitch.conf (see the example below)
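> 
> For reference, such a hosts line would look roughly like this (the exact 
> lookup order and placement of the criterion are just an example; 
> "cluster" is the source the Sun Cluster framework prepends):
> 
>     --- 8< ---
> # /etc/nsswitch.conf - hosts database with the explicit action criterion
> hosts: cluster files [SUCCESS=return] dns
>     --- 8< ---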
> 
> I can only speculate about what I might have done wrong in the past - 
> perhaps I had somehow managed to mess up the disk set by interrupting 
> metaset commands, running metaset -P on the dead node, or something 
> along those lines.
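> 
> For what it's worth, a quick sanity check of the set before retrying 
> could look like this (just a sketch; the set name is the one from above):
> 
>     --- 8< ---
> # show current ownership and host/drive membership of the set
> metaset -s shared-dg
> # show the state of the set's state database replicas, with explanations
> metadb -s shared-dg -i
>     --- 8< ---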
> 
> I will now go back to snv111/ohac and double-check there.
> 
> Thorsten, thank you again for your advice, your technical help and your 
> implicit moral support! This has been really helpful so far.
> 
> Nils
