Hi Wesley,

Thanks for the clarification.  Since you are not using shared disks, you 
can use a quorum server: install the quorum server packages on a 
separate machine, then add it via clsetup --> Quorum --> Add a quorum 
device --> Quorum server.

When you run the Sun Cluster installer, you will see the quorum server 
installation option.
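For reference, the steps above can be sketched as follows. This assumes 
the Sun Cluster 3.2 command names (clquorumserver, clquorum); the host 
address, port, and device name are placeholders -- substitute your own:

```shell
# On the separate machine that will host the quorum server (outside the
# cluster), install the quorum server packages via the installer, then
# start the quorum server daemon:
/usr/cluster/bin/clquorumserver start +

# On one cluster node, register the quorum server as a quorum device
# (192.168.1.50, port 9000, and "quorumserver1" are placeholders):
/usr/cluster/bin/clquorum add -t quorum_server \
    -p qshost=192.168.1.50,port=9000 quorumserver1

# Confirm the new device is online:
/usr/cluster/bin/clquorum status
```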

Quorum is necessary to ensure that the cluster remains in a consistent 
state.  By default, each node gets one vote and a quorum device gets 
n-1 votes, where n is the number of nodes it is connected to.  More 
than half of the total possible votes is required to form a cluster.

In your case there are just 2 nodes with no shared device, so total 
votes = 2.  If one node fails, only 1 vote remains, which is less than 
the required majority of (total possible votes)/2 + 1, i.e. 2 :-).
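The vote arithmetic above can be checked with a small shell sketch (the 
quorum_needed helper is just for illustration, not a cluster command):

```shell
#!/bin/sh
# Votes needed to form a cluster: more than half of the possible votes,
# i.e. (possible / 2) + 1 using integer division.
quorum_needed() {
    echo $(( $1 / 2 + 1 ))
}

# Two nodes, no quorum device: 2 possible votes, 2 needed.
# If one node fails, 1 vote remains -- the survivor cannot form a cluster.
echo "2 nodes, no device:      need $(quorum_needed 2) of 2"

# Two nodes plus a quorum server (n - 1 = 1 vote): 3 possible, 2 needed.
# If one node fails, survivor (1) + quorum server (1) = 2: cluster survives.
echo "2 nodes + quorum server: need $(quorum_needed 3) of 3"
```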

You can avoid this problem by installing the quorum server on a 
separate machine in the same subnet (same subnet, to avoid network issues).

Hope that helps!

Best Regards,
Madhan Kumar


Wesley Naves wrote:
>
> srv209# scdidadm -L
>
> 1        srv209:/dev/rdsk/c0t0d0 /dev/did/rdsk/d1    
>
> 2        srv209:/dev/rdsk/c1t0d0 /dev/did/rdsk/d2    
>
> 3        srv210:/dev/rdsk/c0t0d0 /dev/did/rdsk/d3    
>
> 4        srv210:/dev/rdsk/c1t0d0 /dev/did/rdsk/d4    
>
>  
>
>  
>
> Madhan, I'm not certain whether I did it right. Since I'm not using 
> shared storage, my zone mountpoint is on a zpool (zpool create ......) 
> and the ldapserver is a ZFS file system; my zones are identical on 
> both servers. That is the approach I'm using because I have no shared 
> storage. Is it OK?
>
>  
>
> Then, when I run clq add d1 on srv209, I get (clq:  (C400923) Cannot 
> find device "d1" to add.). Do I really need the quorum? I don't 
> understand the concepts of quorum vote and quorum device very well, 
> so I am quite confused.
>
>  
>
>  
>
> +++++++++++++++++++++++++++++++++++++++++++++++
>
>  
>
> Wesley Naves de Faria
>
> Support Analyst
>
> SCSA - Sun Certified System Administrator for Solaris 10
>
> SCNA - Sun Certified Network Administrator for Solaris 10
>
> FreeBSD / OpenBSD / Linux
>
> AGANP - Agência Goiana de Administração e Negócios Públicos
>
> Phone: 62 3201-6582
>
>  
>
> +++++++++++++++++++++++++++++++++++++++++++++++
>
>   
>
>  
>
> ------------------------------------------------------------------------
>
> *From:* Madhan.Balasubramanian at Sun.COM 
> [mailto:Madhan.Balasubramanian at Sun.COM]
> *Sent:* Thursday, October 18, 2007 16:36
> *To:* Wesley Naves
> *Cc:* ha-clusters-discuss at opensolaris.org
> *Subject:* Re: [ha-clusters-discuss] RE: RE: Monitoring Public 
> Network in 2 nodes
>
>  
>
>
>
> Wesley Naves wrote:
>
> Exactly :-) .... sorry, but I'm learning SC now. I've configured the 
> IP address in the zone.
>
>
> No problem!  I am glad that you are trying it out :-). You can go 
> through the HA Containers guide to understand the different network 
> configurations that are possible, and reconfigure the zone 
> accordingly. 
>
> To add a quorum device, do the following:
>
> Execute scdidadm -L.
>
> If any disk is listed twice, it is shared and can be accessed by both 
> nodes. 
>
> Then you can choose one and add it with the clq add command.
>
> e.g.:
>
> bash-3.00# scdidadm -L
> 1        pjoker3:/dev/rdsk/c0t216000C0FF898467d21 /dev/did/rdsk/d1    
> 1        pjoker1:/dev/rdsk/c0t216000C0FF898467d21 /dev/did/rdsk/d1    
>
>
> bash-3.00# clq add d1
>
> Then execute clq status once again to check that the device is 
> online.  After that, you can reboot one node and check whether the 
> other stays up or goes down.
>
> Best Regards,
> Madhan Kumar
>
>  
>
> zonename: ldapserver
>
> zonepath: /zonas/ldapserver
>
> brand: native
>
> autoboot: false
>
> bootargs:
>
> pool:
>
> limitpriv:
>
> scheduling-class:
>
> ip-type: shared
>
> [cpu-shares: 2]
>
> net:
>
>         address: 10.1.100.21/24
>
>         physical: bnx0
>
> attr:
>
>         name: comment
>
>         type: string
>
>         value: "Master LDAP Server"
>
> rctl:
>
>         name: zone.cpu-shares
>
>         value: (priv=privileged,limit=2,action=none)
>
>  
>
>  
>
> clq output
>
>  
>
> === Cluster Quorum ===
>
>  
>
> --- Quorum Votes Summary ---
>
>  
>
>             Needed   Present   Possible
>
>             ------   -------   --------
>
>             2        2         2
>
>  
>
>  
>
> --- Quorum Votes by Node ---
>
>  
>
> Node Name           Present      Possible     Status
>
> ---------           -------      --------     ------
>
> aga253distp209      1            1            Online
>
> aga253distp210      1            1            Online
>
>  
>
>  
>
>  
>
>  
>
> ------------------------------------------------------------------------
>
> _______________________________________________
> ha-clusters-discuss mailing list
> ha-clusters-discuss at opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/ha-clusters-discuss
>   