Hi Wesley,

Thanks for the info.  Can you paste the zonecfg -z ldapserver info 
output as well?

I suspect that you have configured the address as part of the zone 
configuration, and that is why the RG is not failing over when there 
is a public network failure.
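
For reference, if the address was added in zonecfg, the zonecfg -z 
ldapserver info output would contain a net resource similar to the 
following (a hypothetical example; the address and interface name are 
assumptions):

    net:
            address: 192.168.1.50
            physical: bge0

If a net resource like that is present, the address should be removed 
from the zone configuration and managed by the cluster instead.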

To confirm whether there is a quorum issue, please paste the output of 
clq status as well.
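
In case it is easier, clq is just the short form of clquorum, so either 
of these works:

    # clq status
    # /usr/cluster/bin/clquorum status

The output lists each quorum device along with its vote count and 
whether it is online; if the quorum device is offline, that would 
explain the behaviour you saw when powering off a node.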

Best Regards,
Madhan Kumar

Wesley Naves wrote:
>
>  
>
> ( Do you have a multi-master HA container resource group, or two 
> separate container RGs?  Your description of the container RG is not 
> very clear.  )
>
>  
>
> I have 2 container RGs; see:
>
>  
>
> === Cluster Resource Groups ===
>
>  
>
> Group Name     Node Name          Suspended     Status
> ----------     ---------          ---------     ------
> zone-rg        srv210             No            Online
>                srv209             No            Offline
>
>  
>
>  
>
> clrs list-props
>
>  
>
> === Properties for resource zone-hafast ===
>
> GlobalDevicePaths
>
> FilesystemMountPoints
>
> AffinityOn
>
> FilesystemCheckCommand
>
> Zpools
>
>  
>
> === Properties for resource zone-rs ===
>
> Monitor_retry_count
>
> Monitor_retry_interval
>
> Probe_timeout
>
> Child_mon_level
>
> Validate_command
>
> Start_command
>
> Stop_command
>
> Probe_command
>
> Network_aware
>
> Stop_signal
>
> Failover_enabled
>
> Log_level
>
>
> zoneadm list -iv from both nodes
>
>  
>
> srv210
>
>   ID NAME             STATUS     PATH                           BRAND    IP
>    0 global           running    /                              native   shared
>    - ldapserver       running    /zonas/ldapserver              native   shared
>
>  
>
> srv209
>
>   ID NAME             STATUS     PATH                           BRAND    IP
>    0 global           running    /                              native   shared
>    - ldapserver       installed  /zonas/ldapserver              native   shared
>
>  
>
>  
>
>  
>
> About the quorum problem, what can I do about it?
>
>  
>
> Another question: when I force a failure of the container on srv210, it 
> takes about 5 minutes for the container to come up on srv209. How can I 
> change this timeout/delay so that it comes up on the other node more 
> quickly?
>
>  
>
>  
>
>  
>
> +++++++++++++++++++++++++++++++++++++++++++++++
>
> Wesley Naves de Faria
>
> Support Analyst
>
> SCSA - Sun Certified System Administrator for Solaris 10
>
> SCNA - Sun Certified Network Administrator for Solaris 10
>
> FreeBSD / OpenBSD / Linux
>
> AGANP - Agência Goiana de Administração e Negócios Públicos
>
> Phone: 62 3201-6582
>
> +++++++++++++++++++++++++++++++++++++++++++++++
>
>   
>
>  
>
>  
>
> ------------------------------------------------------------------------
>
> *From:* Madhan.Balasubramanian at Sun.COM 
> [mailto:Madhan.Balasubramanian at Sun.COM]
> *Sent:* Thursday, October 18, 2007 3:37 PM
> *To:* Wesley Naves
> *Cc:* ha-clusters-discuss at opensolaris.org
> *Subject:* Re: [ha-clusters-discuss] Monitoring Public Network in 2 nodes
>
>  
>
>
> Wesley,
>
> 1) Regarding disconnecting the server from the switch:
>
> Do you have a multi-master HA container resource group, or two 
> separate container RGs?  Your description of the container RG is not very clear. 
>
> Pasting the output of the following commands will help:
>
> clrg status <container-rg>
>
> zoneadm list -iv from both nodes.
>
> For the resource group to fail over in case of a public network failure, 
> you need to configure the network address of the container using a 
> LogicalHostname resource; i.e., do not add the net resource in zonecfg.
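>
> As a rough sketch (the hostname ldapserver-lh and the resource name 
> ldapserver-lh-rs are assumptions; the hostname must resolve in 
> /etc/hosts on both nodes):
>
>     # clreslogicalhostname create -g zone-rg -h ldapserver-lh ldapserver-lh-rs
>
> The LogicalHostname resource is monitored through IPMP, so a public 
> network failure on the primary node will cause the whole resource 
> group, container included, to fail over.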
>
> You can check the container guide from
>
> http://docs.sun.com/app/docs/doc/819-3069
>
> Regarding problem 2, I think there is a quorum issue. You need to 
> check the quorum configuration with /usr/cluster/bin/clq status.
>
> If the quorum disk is not online, then shutting down one node will 
> panic the other as well.
>
> Best Regards,
> Madhan Kumar
> Wesley Naves wrote:
>
> Hi,
>
>             I'm using SC 3.2 and HA containers on 2 servers, with 1 
> container each. So I'm doing some tests:
>
>  
>
> 1 -- Disconnect 1 server from the switch (nothing happens)
>
> 2 -- Turn off 1 server (the other server shuts down and doesn't come 
> back up until the first server is up again)
>
>  
>
> My question is the following: I need to configure public network 
> monitoring so that when 1 server goes down, or its network fails, the 
> other takes over automatically. How can I make my cluster work properly?
>
>  
>
>  
>
> Thanks
>
> +++++++++++++++++++++++++++++++++++++++++++++++
>
> Wesley Naves de Faria
>
> Support Analyst
>
> SCSA - Sun Certified System Administrator for Solaris 10
>
> SCNA - Sun Certified Network Administrator for Solaris 10
>
> FreeBSD / OpenBSD / Linux
>
> AGANP - Agência Goiana de Administração e Negócios Públicos
>
> Phone: 62 3201-6582
>
> +++++++++++++++++++++++++++++++++++++++++++++++
>
>   
>
>  
>
>  
>
>  
>
>
> ------------------------------------------------------------------------
>
>
>  
> _______________________________________________
> ha-clusters-discuss mailing list
> ha-clusters-discuss at opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/ha-clusters-discuss