Exactly :-) ... sorry, but I'm learning SC now. I've configured the IP
address on the zone:
zonename: ldapserver
zonepath: /zonas/ldapserver
brand: native
autoboot: false
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: shared
[cpu-shares: 2]
net:
        address: 10.1.100.21/24
        physical: bnx0
attr:
        name: comment
        type: string
        value: "Master LDAP Server"
rctl:
        name: zone.cpu-shares
        value: (priv=privileged,limit=2,action=none)
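(Since the net resource above binds 10.1.100.21 to the zone itself, which
is exactly what Madhan suspects below is why the RG does not fail over, a
minimal sketch of removing it so the cluster can manage the address
instead; this would need to be repeated on each node hosting the zone:)

    # sketch only: drop the zone-managed address (back up the config first)
    zonecfg -z ldapserver
    zonecfg:ldapserver> remove net address=10.1.100.21/24
    zonecfg:ldapserver> commit
    zonecfg:ldapserver> exit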
clq status output:
=== Cluster Quorum ===

--- Quorum Votes Summary ---

            Needed   Present   Possible
            ------   -------   --------
            2        2         2

--- Quorum Votes by Node ---

Node Name         Present   Possible   Status
---------         -------   --------   ------
aga253distp209    1         1          Online
aga253distp210    1         1          Online
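(Worth noting: the summary shows only the two node votes, with Needed =
Present = Possible = 2 and no quorum device, so losing either node drops
the cluster below quorum. A sketch of registering one, assuming a shared
DID device visible from both nodes; d4 here is hypothetical:)

    # list shared devices, then register one as a quorum device
    cldevice list -v
    clquorum add d4
    clquorum status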
+++++++++++++++++++++++++++++++++++++++++++++++
Wesley Naves de Faria
Analista de Suporte
SCSA - Sun Certified System Administrator for Solaris 10
SCNA - Sun Certified Network Administrator for Solaris 10
FreeBSD / OpenBSD / Linux
AGANP - Agencia Goiana de Administração e Negócios Públicos
Fone: 62 3201-6582
+++++++++++++++++++++++++++++++++++++++++++++++
_____
From: Madhan.Balasubramanian at Sun.COM [mailto:Madhan.Balasubramanian at
Sun.COM]
Sent: Thursday, October 18, 2007 16:16
To: Wesley Naves
Cc: ha-clusters-discuss at opensolaris.org
Subject: Re: [ha-clusters-discuss] RE: Monitoring Public Network in 2 nodes
Hi Wesley,
Thanks for the info. Can you paste the zonecfg -z ldapserver info output as
well?
I suspect that you have configured the address as part of the zone
configuration, and that is why the RG is not failing over when there is a
public network failure.
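(If that turns out to be the case, a common approach is to let a
LogicalHostname resource in zone-rg own the address, so that public-network
monitoring drives the failover. A sketch only; the hostname ldapserver-lh
and resource name ldapserver-lh-rs are hypothetical, and the hostname would
have to resolve to 10.1.100.21 on both nodes:)

    # hypothetical names; adjust to your environment
    clreslogicalhostname create -g zone-rg -h ldapserver-lh ldapserver-lh-rs
    clresourcegroup switch -n srv209 zone-rg    # verify a manual switchover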
To confirm whether there is a quorum issue, please paste the output of
clq status as well.
Best Regards,
Madhan Kumar
Wesley Naves wrote:
( Are you running a multi-master HA container group, or do you have 2
container RGs? Your description of the container RG is not very clear. )
I have 2 container RGs; see:
=== Cluster Resource Groups ===

Group Name    Node Name    Suspended    Status
----------    ---------    ---------    ------
zone-rg       srv210       No           Online
              srv209       No           Offline
clrs list-props output:
=== Properties for resource zone-hafast ===
GlobalDevicePaths
FilesystemMountPoints
AffinityOn
FilesystemCheckCommand
Zpools
=== Properties for resource zone-rs ===
Monitor_retry_count
Monitor_retry_interval
Probe_timeout
Child_mon_level
Validate_command
Start_command
Stop_command
Probe_command
Network_aware
Stop_signal
Failover_enabled
Log_level
zoneadm list -iv output from both nodes:

srv210:

  ID NAME         STATUS      PATH                BRAND    IP
   0 global       running     /                   native   shared
   - ldapserver   running     /zonas/ldapserver   native   shared

srv209:

  ID NAME         STATUS      PATH                BRAND    IP
   0 global       running     /                   native   shared
   - ldapserver   installed   /zonas/ldapserver   native   shared
About the quorum problem, what can I do about it?
Another question: when I force a failure of the container on srv210, the
other node (srv209) takes about 5 minutes to bring it up. How can I change
this timeout/delay so the other node comes up more quickly?
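(One place to look, sketched on the assumption that zone-rs is the
container resource shown in the clrs list-props output above: its start/stop
timeouts and retry interval bound how quickly the switchover completes. The
values below are illustrative only, not recommendations:)

    # inspect the current timeouts, then tighten them (example values)
    clresource show -p Start_timeout,Stop_timeout,Retry_interval zone-rs
    clresource set -p Stop_timeout=120 zone-rs
    clresource set -p Retry_interval=60 zone-rs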