On Thu, Jan 13, 2011 at 3:17 PM, Pavlos Polianidis
<[email protected]> wrote:
> Hello,
>
>
> Currently I have installed heartbeat 3.0.2-2.el5 x86_64 and pacemaker 
> 1.0.7-4.el5 x86_64 on a CentOS release 5.3 x86_64 machine using yum 
> repositories.
>
> My configuration is below:
> ha.cf
>
> debugfile                /var/log/ha-debug
> logfile                  /var/log/ha-log
> logfacility              local0
> compression_threshold    2
> node                     lsc-node01
> node                     lsc-node02
> debug                    1
> use_logd                 false
> logfacility              daemon
> traditional_compression  off
> compression              bz2
> coredumps                true
> udpport                  694
> bcast                    eth0
> autojoin                 any
> keepalive                1
> warntime                 10
> deadtime                 35
> initdead                 40
> max_rexmit_delay         10000
> crm respawn
>
> but the output of the crm_mon command is below:
>
> Last updated: Thu Jan 13 16:00:15 2011
> Stack: Heartbeat
> Current DC: lsc-node02.velti.net (a7e25657-fb85-4cf1-9d9b-5a21484e1583) - 
> partition WITHOUT quorum
> Version: 1.0.7-d3fa20fc76c7947d6de66db7e52526dc6bd7d782
> 2 Nodes configured, unknown expected votes
> 0 Resources configured.
> ============
>
> Online: [ lsc-node02.velti.net lsc-node01.velti.net ]
>
>
> I previously experimented with the latest versions of heartbeat and 
> pacemaker and downgraded to the current versions because I had the same 
> problem; I had read in the forums that some versions might have bugs.
>
> In the debug log I see the following entry:
>
> WARN: cluster_status: We do not have quorum - fencing and resource management 
> disabled
>
> In the log:
>
> Jan 13 15:53:34 lsc-node02.velti.net crmd: [30853]: info: 
> populate_cib_nodes_ha: Requesting the list of configured nodes
> Jan 13 15:53:37 lsc-node02.velti.net crmd: [30853]: WARN: get_uuid: Could not 
> calculate UUID for lsc-node02
> Jan 13 15:53:37 lsc-node02.velti.net crmd: [30853]: WARN: 
> populate_cib_nodes_ha: Node lsc-node02: no uuid found
> Jan 13 15:53:38 lsc-node02.velti.net crmd: [30853]: WARN: get_uuid: Could not 
> calculate UUID for lsc-node01
> Jan 13 15:53:38 lsc-node02.velti.net crmd: [30853]: WARN: 
> populate_cib_nodes_ha: Node lsc-node01: no uuid found
> Jan 13 15:53:38 lsc-node02.velti.net crmd: [30853]: info: 
> do_state_transition: All 1 cluster nodes are eligible to run resources.
> Jan 13 15:53:38 lsc-node02.velti.net crmd: [30853]: info: do_dc_join_final: 
> Ensuring DC, quorum and node attributes are up-to-date
> Jan 13 15:53:38 lsc-node02.velti.net crmd: [30853]: info: crm_update_quorum: 
> Updating quorum status to false (call=22)
> Jan 13 15:53:38 lsc-node02.velti.net attrd: [30852]: info: 
> attrd_local_callback: Sending full refresh (origin=crmd)
> Jan 13 15:53:38 lsc-node02.velti.net cib: [30849]: info: cib_process_request: 
> Operation complete: op cib_modify for section nodes (origin=local/crmd/20, 
> version=0.18.1): ok
>  (rc=0)
>
> I did not have the same issue when I tried on CentOS 5.3 i386.
>
> Can anyone advise?
>
> What would the consequences be if no-quorum-policy is set to ignore?

Well, you'll be in trouble if you get a split-brain - but no more so
than usual, since heartbeat will normally always claim it has quorum in
a two-node cluster.

What do the heartbeat/ccm logs say?
_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
