On Fri, 2011-04-29 at 09:41 +0200, Andrew Beekhof wrote:
> >>
> > Unfortunately the devel code does not run at all in my environment so I
> > have to fix this first.
> 
> Oh?  I ran CTS on it the other day and it was fine here.
> 

I installed pacemaker-devel on top of an existing pacemaker-1.1 build. I
also tried "make uninstall" for both versions and then a fresh
"make install" of devel. Either way Pacemaker does not come up:
crm_mon shows the nodes as offline.
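
For completeness, the sequence was roughly this (default prefix; the ls
and ldconfig lines are just extra sanity checks I would suggest, with the
path taken from the daemon invocations in the log below):

    # in the pacemaker-1.1 source tree
    make uninstall
    # in the pacemaker-devel source tree
    make uninstall
    # nothing should be left over from either install at this point
    ls /usr/lib64/heartbeat/
    make install
    ldconfig    # refresh the runtime linker cache for the new libraries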

I suspect the reason is:
May  5 17:09:34 devel1 crmd: [5942]: notice: crmd_peer_update: Status update: Client devel1/crmd now has status [online] (DC=<null>)
May  5 17:09:34 devel1 crmd: [5942]: info: crm_update_peer: Node devel1: id=1790093504 state=unknown addr=(null) votes=0 born=0 seen=0 proc=00000000000000000000000000111312 (new)
May  5 17:09:34 devel1 crmd: [5942]: info: pcmk_quorum_notification: Membership 0: quorum retained (0)
May  5 17:09:34 devel1 crmd: [5942]: debug: do_fsa_action: actions:trace: #011// A_STARTED
May  5 17:09:34 devel1 crmd: [5942]: info: do_started: Delaying start, no membership data (0000000000100000)
                                                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
May  5 17:09:34 devel1 crmd: [5942]: debug: register_fsa_input_adv: Stalling the FSA pending further input: cause=C_FSA_INTERNAL
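
Decoding the bitmasks from the fsa_dump_inputs lines further down, the
inputs that do get registered are 0000000000000100 (R_CIB_CONNECTED),
0000000000000800 (R_LRM_CONNECTED) and 0000000000200000 (R_PEER_DATA),
while do_started keeps waiting for the 0000000000100000 "membership
data" bit. A quick check with the values copied straight from the log
confirms that the registered inputs never supply that bit:

    $ printf '%016x\n' $(( 0x100000 & (0x100 | 0x800 | 0x200000) ))
    0000000000000000

So the FSA stalls indefinitely and the node never comes online, which
matches what crm_mon shows.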

Any ideas?
-holger

Full log of the startup attempt, for reference:

May  5 17:09:33 devel1 pacemakerd: [5929]: info: Invoked: pacemakerd 
May  5 17:09:33 devel1 pacemakerd: [5929]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/root
May  5 17:09:33 devel1 pacemakerd: [5929]: info: get_cluster_type: Cluster type is: 'corosync'
May  5 17:09:33 devel1 pacemakerd: [5929]: info: read_config: Reading configure for stack: corosync
May  5 17:09:33 devel1 corosync[2101]:  [CONFDB] lib_init_fn: conn=0x6872f0
May  5 17:09:33 devel1 pacemakerd: [5929]: info: config_find_next: Processing additional logging options...
May  5 17:09:33 devel1 pacemakerd: [5929]: info: get_config_opt: Found 'on' for option: debug
May  5 17:09:33 devel1 pacemakerd: [5929]: info: get_config_opt: Found 'no' for option: to_logfile
May  5 17:09:33 devel1 pacemakerd: [5929]: info: get_config_opt: Found 'yes' for option: to_syslog
May  5 17:09:33 devel1 pacemakerd: [5929]: info: get_config_opt: Found 'daemon' for option: syslog_facility
May  5 17:09:33 devel1 corosync[2101]:  [CONFDB] exit_fn for conn=0x6872f0
May  5 17:09:33 devel1 pacemakerd: [5931]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/root
May  5 17:09:33 devel1 pacemakerd: [5931]: info: main: Starting Pacemaker 1.1.5 (Build: unknown):  ncurses corosync-quorum corosync
May  5 17:09:33 devel1 pacemakerd: [5931]: info: main: Maximum core file size is: 18446744073709551615
May  5 17:09:33 devel1 pacemakerd: [5931]: debug: cluster_connect_cfg: Our nodeid: 1790093504
May  5 17:09:33 devel1 pacemakerd: [5931]: debug: cluster_connect_cfg: Adding fd=5 to mainloop
May  5 17:09:33 devel1 corosync[2101]:  [CPG   ] lib_init_fn: conn=0x68bfc0, cpd=0x6903f0
May  5 17:09:33 devel1 pacemakerd: [5931]: debug: cluster_connect_cpg: Our nodeid: 1790093504
May  5 17:09:33 devel1 pacemakerd: [5931]: debug: cluster_connect_cpg: Adding fd=6 to mainloop
May  5 17:09:33 devel1 pacemakerd: [5931]: info: update_node_processes: 0x60d920 Node 1790093504 now known as devel1 (was: (null))
May  5 17:09:33 devel1 pacemakerd: [5931]: info: update_node_processes: Node devel1 now has process list: 00000000000000000000000000000002 (was 00000000000000000000000000000000)
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] mcasted message added to pending queue
May  5 17:09:33 devel1 corosync[2101]:  [CPG   ] got mcast request on 0x68bfc0
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] Delivering 24 to 25
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] Delivering MCAST message with seq 25 to pending delivery queue
May  5 17:09:33 devel1 corosync[2101]:  [CPG   ] got procjoin message from cluster node 1790093504
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] Received ringid(192.168.178.106:332) seq 25
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] Received ringid(192.168.178.106:332) seq 25
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] mcasted message added to pending queue
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] Delivering 25 to 26
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] Delivering MCAST message with seq 26 to pending delivery queue
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] Received ringid(192.168.178.106:332) seq 26
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] Received ringid(192.168.178.106:332) seq 26
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] releasing messages up to and including 25
May  5 17:09:33 devel1 pacemakerd: [5931]: info: start_child: Forked child 5935 for process stonith-ng
May  5 17:09:33 devel1 pacemakerd: [5931]: info: update_node_processes: Node devel1 now has process list: 00000000000000000000000000100002 (was 00000000000000000000000000000002)
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] releasing messages up to and including 26
May  5 17:09:33 devel1 corosync[2101]:  [CPG   ] got mcast request on 0x68bfc0
May  5 17:09:33 devel1 pacemakerd: [5931]: info: start_child: Forked child 5936 for process cib
May  5 17:09:33 devel1 pacemakerd: [5931]: info: update_node_processes: Node devel1 now has process list: 00000000000000000000000000100102 (was 00000000000000000000000000100002)
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] mcasted message added to pending queue
May  5 17:09:33 devel1 stonith-ng: [5935]: info: Invoked: /usr/lib64/heartbeat/stonithd 
May  5 17:09:33 devel1 stonith-ng: [5935]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/root
May  5 17:09:33 devel1 stonith-ng: [5935]: info: get_cluster_type: Cluster type is: 'corosync'
May  5 17:09:33 devel1 stonith-ng: [5935]: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
May  5 17:09:33 devel1 pacemakerd: [5931]: info: start_child: Forked child 5938 for process lrmd
May  5 17:09:33 devel1 pacemakerd: [5931]: info: update_node_processes: Node devel1 now has process list: 00000000000000000000000000100112 (was 00000000000000000000000000100102)
May  5 17:09:33 devel1 corosync[2101]:  [CPG   ] got mcast request on 0x68bfc0
May  5 17:09:33 devel1 corosync[2101]:  [CPG   ] got mcast request on 0x68bfc0
May  5 17:09:33 devel1 pacemakerd: [5931]: info: start_child: Forked child 5939 for process attrd
May  5 17:09:33 devel1 pacemakerd: [5931]: info: update_node_processes: Node devel1 now has process list: 00000000000000000000000000101112 (was 00000000000000000000000000100112)
May  5 17:09:33 devel1 corosync[2101]:  [CPG   ] got mcast request on 0x68bfc0
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] Delivering 26 to 27
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] Delivering MCAST message with seq 27 to pending delivery queue
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] Received ringid(192.168.178.106:332) seq 27
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] Received ringid(192.168.178.106:332) seq 27
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] mcasted message added to pending queue
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] Delivering 27 to 28
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] Delivering MCAST message with seq 28 to pending delivery queue
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] Received ringid(192.168.178.106:332) seq 28
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] Received ringid(192.168.178.106:332) seq 28
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] releasing messages up to and including 27
May  5 17:09:33 devel1 corosync[2101]:  [CPG   ] lib_init_fn: conn=0x6933d0, cpd=0x690320
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] releasing messages up to and including 28
May  5 17:09:33 devel1 stonith-ng: [5935]: debug: init_cpg_connection: Adding fd=4 to mainloop
May  5 17:09:33 devel1 stonith-ng: [5935]: info: init_ais_connection_once: Connection to 'corosync': established
May  5 17:09:33 devel1 stonith-ng: [5935]: debug: crm_new_peer: Creating entry for node devel1/1790093504
May  5 17:09:33 devel1 stonith-ng: [5935]: info: crm_new_peer: Node devel1 now has id: 1790093504
May  5 17:09:33 devel1 stonith-ng: [5935]: info: crm_new_peer: Node 1790093504 is now known as devel1
May  5 17:09:33 devel1 stonith-ng: [5935]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pcmk
May  5 17:09:33 devel1 stonith-ng: [5935]: info: main: Starting stonith-ng mainloop
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] mcasted message added to pending queue
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] Delivering 28 to 29
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] Delivering MCAST message with seq 29 to pending delivery queue
May  5 17:09:33 devel1 pacemakerd: [5931]: info: start_child: Forked child 5941 for process pengine
May  5 17:09:33 devel1 pacemakerd: [5931]: info: update_node_processes: Node devel1 now has process list: 00000000000000000000000000111112 (was 00000000000000000000000000101112)
May  5 17:09:33 devel1 corosync[2101]:  [CPG   ] got mcast request on 0x68bfc0
May  5 17:09:33 devel1 corosync[2101]:  [CPG   ] got procjoin message from cluster node 1790093504
May  5 17:09:33 devel1 cib: [5936]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/hacluster
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] Received ringid(192.168.178.106:332) seq 29
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] Received ringid(192.168.178.106:332) seq 29
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] mcasted message added to pending queue
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] Delivering 29 to 2a
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] Delivering MCAST message with seq 2a to pending delivery queue
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] Received ringid(192.168.178.106:332) seq 2a
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] Received ringid(192.168.178.106:332) seq 2a
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] releasing messages up to and including 29
May  5 17:09:33 devel1 pacemakerd: [5931]: info: start_child: Forked child 5942 for process crmd
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] releasing messages up to and including 2a
May  5 17:09:33 devel1 cib: [5936]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk] <cib epoch="428" num_updates="0" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.5" have-quorum="1" cib-last-written="Mon May  2 12:09:44 2011" >
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]   <configuration >
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]     <crm_config >
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]       <cluster_property_set id="cib-bootstrap-options" >
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]         <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.5-unknown" />
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]         <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="openais" />
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]         <nvpair id="cib-bootstrap-options-expected-quorum-votes" name="expected-quorum-votes" value="3" />
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]         <nvpair id="cib-bootstrap-options-pe-error-series-max" name="pe-error-series-max" value="200" />
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]         <nvpair id="cib-bootstrap-options-pe-input-series-max" name="pe-input-series-max" value="200" />
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]         <nvpair id="cib-bootstrap-options-pe-warn-series-max" name="pe-warn-series-max" value="200" />
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]         <nvpair id="cib-bootstrap-options-no-quorum-policy" name="no-quorum-policy" value="ignore" />
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]         <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1303973074" />
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]         <nvpair id="cib-bootstrap-options-stonith-enabled" name="stonith-enabled" value="false" />
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]         <nvpair id="cib-bootstrap-options-default-resource-stickiness" name="default-resource-stickiness" value="200" />
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]       </cluster_property_set>
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]     </crm_config>
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]     <nodes >
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]       <node id="devel1" type="normal" uname="devel1" />
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]       <node id="devel2" type="normal" uname="devel2" />
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]       <node id="devel3" type="normal" uname="devel3" />
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]     </nodes>
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]     <resources >
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]       <clone id="stonith" >
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]         <meta_attributes id="stonith-meta_attributes" >
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]           <nvpair id="stonith-meta_attributes-target-role" name="target-role" value="started" />
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]         </meta_attributes>
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]         <primitive class="stonith" id="st_libvirt" type="external/libvirt" >
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]           <operations id="st_libvirt-operations" >
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]             <op id="st_libvirt-op-monitor-300" interval="300" name="monitor" start-delay="15" timeout="15" />
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]           </operations>
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]           <instance_attributes id="st_libvirt-instance_attributes" >
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]             <nvpair id="st_libvirt-instance_attributes-hostlist" name="hostlist" value="devel1 devel2" />
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]             <nvpair id="st_libvirt-instance_attributes-hypervisor_uri" name="hypervisor_uri" value="qemu+ssh://hotlx2.hotbso.de/system" />
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]           </instance_attributes>
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]         </primitive>
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]       </clone>
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]       <primitive class="ocf" id="hot" provider="heartbeat" type="Dummy" >
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]         <meta_attributes id="hot-meta_attributes" >
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]           <nvpair id="hot-meta_attributes-target-role" name="target-role" value="started" />
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]         </meta_attributes>
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]         <operations id="hot-operations" >
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]           <op id="hot-op-monitor-10" interval="10" name="monitor" timeout="20" />
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]         </operations>
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]         <instance_attributes id="hot-instance_attributes" >
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]           <nvpair id="hot-instance_attributes-fake" name="fake" value="abc13" />
May  5 17:09:33 devel1 attrd: [5939]: info: Invoked: /usr/lib64/heartbeat/attrd 
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]         </instance_attributes>
May  5 17:09:33 devel1 pacemakerd: [5931]: info: update_node_processes: Node devel1 now has process list: 00000000000000000000000000111312 (was 00000000000000000000000000111112)
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]       </primitive>
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]       <master id="ms_test" >
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]         <meta_attributes id="ms_test-meta_attributes" >
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]           <nvpair id="ms_test-meta_attributes-target-role" name="target-role" value="started" />
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]           <nvpair id="ms_test-meta_attributes-notify" name="notify" value="true" />
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]           <nvpair id="ms_test-meta_attributes-clone-max" name="clone-max" value="2" />
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]         </meta_attributes>
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]         <primitive class="ocf" id="test" provider="hot" type="DummyMS" >
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]           <operations id="test-operations" >
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]             <op id="test-op-monitor-10" interval="10" name="monitor" timeout="20" />
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]             <op id="test-op-monitor-Master-10" interval="10" name="monitor" role="Master" timeout="20" />
May  5 17:09:33 devel1 pacemakerd: [5931]: info: main: Starting mainloop
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]           </operations>
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]           <instance_attributes id="test-instance_attributes" >
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]             <nvpair id="test-instance_attributes-fake" name="fake" value="aaaa" />
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]           </instance_attributes>
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]           <meta_attributes id="test-meta_attributes" >
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]             <nvpair id="test-meta_attributes-target-role" name="target-role" value="Started" />
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]           </meta_attributes>
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]         </primitive>
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]       </master>
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]       <primitive class="ocf" id="aaa" provider="hot" type="DummyMS" >
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]         <meta_attributes id="aaa-meta_attributes" >
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]           <nvpair id="aaa-meta_attributes-target-role" name="target-role" value="Started" />
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]         </meta_attributes>
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]         <operations id="aaa-operations" >
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]           <op id="aaa-op-monitor-10" interval="10" name="monitor" timeout="20" />
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]           <op id="aaa-op-monitor-Master-10" interval="10" name="monitor" role="Master" timeout="20" />
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]         </operations>
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]         <instance_attributes id="aaa-instance_attributes" >
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]           <nvpair id="aaa-instance_attributes-fake_unique" name="fake_unique" value="123" />
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]         </instance_attributes>
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]       </primitive>
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]       <group id="gr_test" >
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]         <meta_attributes id="gr_test-meta_attributes" >
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]           <nvpair id="gr_test-meta_attributes-target-role" name="target-role" value="Started" />
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]         </meta_attributes>
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]         <primitive class="ocf" id="gr_memb1" provider="hot" type="DummyMS" />
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]         <primitive class="ocf" id="gr_memb2" provider="hot" type="DummyMS" />
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]       </group>
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]       <clone id="cl_test" >
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]         <meta_attributes id="cl_test-meta_attributes" >
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]           <nvpair id="cl_test-meta_attributes-target-role" name="target-role" value="Started" />
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]           <nvpair id="cl_test-meta_attributes-clone-max" name="clone-max" value="2" />
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]         </meta_attributes>
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]         <group id="cl_gr1" >
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]           <primitive class="ocf" id="cl_memb1" provider="hot" type="DummyMS" />
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]           <primitive class="ocf" id="cl_memb2" provider="hot" type="DummyMS" />
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]         </group>
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]       </clone>
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]     </resources>
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]     <constraints />
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]     <rsc_defaults >
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]       <meta_attributes id="rsc_defaults-options" >
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]         <nvpair id="rsc_defaults-options-target-role" name="target-role" value="Stopped" />
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]       </meta_attributes>
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]     </rsc_defaults>
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]   </configuration>
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk]   <status />
May  5 17:09:33 devel1 cib: [5936]: debug: readCibXmlFile[on-disk] </cib>
May  5 17:09:33 devel1 cib: [5936]: info: validate_with_relaxng: Creating RNG parser context
May  5 17:09:33 devel1 pacemakerd: [5931]: debug: pcmk_client_connect: Channel 0x60eee0 connected: 1 children
May  5 17:09:33 devel1 stonith-ng: [5935]: info: crm_update_peer: Node devel1: id=1790093504 state=unknown addr=(null) votes=0 born=0 seen=0 proc=00000000000000000000000000111312 (new)
May  5 17:09:33 devel1 attrd: [5939]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/hacluster
May  5 17:09:33 devel1 attrd: [5939]: info: main: Starting up
May  5 17:09:33 devel1 attrd: [5939]: info: get_cluster_type: Cluster type is: 'corosync'
May  5 17:09:33 devel1 attrd: [5939]: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
May  5 17:09:33 devel1 corosync[2101]:  [CPG   ] got mcast request on 0x68bfc0
May  5 17:09:33 devel1 corosync[2101]:  [CPG   ] got mcast request on 0x68bfc0
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] mcasted message added to pending queue
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] Delivering 2a to 2b
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] Delivering MCAST message with seq 2b to pending delivery queue
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] Received ringid(192.168.178.106:332) seq 2b
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] Received ringid(192.168.178.106:332) seq 2b
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] releasing messages up to and including 2b
May  5 17:09:33 devel1 crmd: [5942]: info: Invoked: /usr/lib64/heartbeat/crmd 
May  5 17:09:33 devel1 crmd: [5942]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/hacluster
May  5 17:09:33 devel1 crmd: [5942]: info: main: CRM Hg Version: unknown
May  5 17:09:33 devel1 crmd: [5942]: info: crmd_init: Starting crmd
May  5 17:09:33 devel1 crmd: [5942]: debug: s_crmd_fsa: Processing I_STARTUP: [ state=S_STARTING cause=C_STARTUP origin=crmd_init ]
May  5 17:09:33 devel1 crmd: [5942]: debug: do_fsa_action: actions:trace: #011// A_LOG   
May  5 17:09:33 devel1 crmd: [5942]: debug: do_fsa_action: actions:trace: #011// A_STARTUP
May  5 17:09:33 devel1 crmd: [5942]: debug: do_startup: Registering Signal Handlers
May  5 17:09:33 devel1 crmd: [5942]: debug: do_startup: Creating CIB and LRM objects
May  5 17:09:33 devel1 crmd: [5942]: debug: do_fsa_action: actions:trace: #011// A_CIB_START
May  5 17:09:33 devel1 crmd: [5942]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
May  5 17:09:33 devel1 crmd: [5942]: debug: init_client_ipc_comms_nodispatch: Could not init comms on: /var/run/crm/cib_rw
May  5 17:09:33 devel1 crmd: [5942]: debug: cib_native_signon_raw: Connection to command channel failed
May  5 17:09:33 devel1 crmd: [5942]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
May  5 17:09:33 devel1 crmd: [5942]: debug: init_client_ipc_comms_nodispatch: Could not init comms on: /var/run/crm/cib_callback
May  5 17:09:33 devel1 pengine: [5941]: info: Invoked: /usr/lib64/heartbeat/pengine 
May  5 17:09:33 devel1 crmd: [5942]: debug: cib_native_signon_raw: Connection to callback channel failed
May  5 17:09:33 devel1 crmd: [5942]: debug: cib_native_signon_raw: Connection to CIB failed: connection failed
May  5 17:09:33 devel1 crmd: [5942]: debug: cib_native_signoff: Signing out of the CIB Service
May  5 17:09:33 devel1 corosync[2101]:  [CPG   ] lib_init_fn: conn=0x699f90, cpd=0x691430
May  5 17:09:33 devel1 pengine: [5941]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/hacluster
May  5 17:09:33 devel1 pengine: [5941]: debug: main: Checking for old instances of pengine
May  5 17:09:33 devel1 pengine: [5941]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pengine
May  5 17:09:33 devel1 pengine: [5941]: debug: init_client_ipc_comms_nodispatch: Could not init comms on: /var/run/crm/pengine
May  5 17:09:33 devel1 pengine: [5941]: debug: main: Init server comms
May  5 17:09:33 devel1 pengine: [5941]: info: main: Starting pengine
May  5 17:09:33 devel1 cib: [5936]: debug: activateCibXml: Triggering CIB write for start op
May  5 17:09:33 devel1 attrd: [5939]: debug: init_cpg_connection: Adding fd=4 to mainloop
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] mcasted message added to pending queue
May  5 17:09:33 devel1 attrd: [5939]: info: init_ais_connection_once: Connection to 'corosync': established
May  5 17:09:33 devel1 attrd: [5939]: debug: crm_new_peer: Creating entry for node devel1/1790093504
May  5 17:09:33 devel1 attrd: [5939]: info: crm_new_peer: Node devel1 now has id: 1790093504
May  5 17:09:33 devel1 attrd: [5939]: info: crm_new_peer: Node 1790093504 is now known as devel1
May  5 17:09:33 devel1 attrd: [5939]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pcmk
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] Delivering 2b to 2c
May  5 17:09:33 devel1 cib: [5936]: info: startCib: CIB Initialization completed successfully
May  5 17:09:33 devel1 pacemakerd: [5931]: debug: pcmk_client_connect: Channel 0x610a60 connected: 2 children
May  5 17:09:33 devel1 cib: [5936]: info: get_cluster_type: Cluster type is: 'corosync'
May  5 17:09:33 devel1 cib: [5936]: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] Delivering MCAST message with seq 2c to pending delivery queue
May  5 17:09:33 devel1 corosync[2101]:  [CPG   ] got procjoin message from cluster node 1790093504
May  5 17:09:33 devel1 attrd: [5939]: info: main: Cluster connection active
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] Received ringid(192.168.178.106:332) seq 2c
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] Received ringid(192.168.178.106:332) seq 2c
May  5 17:09:33 devel1 lrmd: [5938]: info: enabling coredumps
May  5 17:09:33 devel1 lrmd: [5938]: WARN: Core dumps could be lost if multiple dumps occur.
May  5 17:09:33 devel1 lrmd: [5938]: WARN: Consider setting non-default value in /proc/sys/kernel/core_pattern (or equivalent) for maximum supportability
May  5 17:09:33 devel1 lrmd: [5938]: WARN: Consider setting /proc/sys/kernel/core_uses_pid (or equivalent) to 1 for maximum supportability
May  5 17:09:33 devel1 lrmd: [5938]: debug: main: run the loop...
May  5 17:09:33 devel1 lrmd: [5938]: info: Started.
May  5 17:09:33 devel1 attrd: [5939]: info: main: Accepting attribute updates
May  5 17:09:33 devel1 attrd: [5939]: notice: main: Starting mainloop...
May  5 17:09:33 devel1 attrd: [5939]: info: crm_update_peer: Node devel1: id=1790093504 state=unknown addr=(null) votes=0 born=0 seen=0 proc=00000000000000000000000000111312 (new)
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] releasing messages up to and including 2c
May  5 17:09:33 devel1 corosync[2101]:  [CPG   ] lib_init_fn: conn=0x69e2f0, cpd=0x691d60
May  5 17:09:33 devel1 cib: [5936]: debug: init_cpg_connection: Adding fd=4 to mainloop
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] mcasted message added to pending queue
May  5 17:09:33 devel1 cib: [5936]: info: init_ais_connection_once: Connection to 'corosync': established
May  5 17:09:33 devel1 cib: [5936]: debug: crm_new_peer: Creating entry for node devel1/1790093504
May  5 17:09:33 devel1 cib: [5936]: info: crm_new_peer: Node devel1 now has id: 1790093504
May  5 17:09:33 devel1 cib: [5936]: info: crm_new_peer: Node 1790093504 is now known as devel1
May  5 17:09:33 devel1 cib: [5936]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pcmk
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] Delivering 2c to 2d
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] Delivering MCAST message with seq 2d to pending delivery queue
May  5 17:09:33 devel1 corosync[2101]:  [CPG   ] got procjoin message from cluster node 1790093504
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] Received ringid(192.168.178.106:332) seq 2d
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] Received ringid(192.168.178.106:332) seq 2d
May  5 17:09:33 devel1 pacemakerd: [5931]: debug: pcmk_client_connect: Channel 0x612370 connected: 3 children
May  5 17:09:33 devel1 corosync[2101]:  [TOTEM ] releasing messages up to and including 2d
May  5 17:09:33 devel1 cib: [5936]: info: cib_init: Starting cib mainloop
May  5 17:09:33 devel1 cib: [5936]: info: crm_update_peer: Node devel1: id=1790093504 state=unknown addr=(null) votes=0 born=0 seen=0 proc=00000000000000000000000000111312 (new)
May  5 17:09:33 devel1 cib: [5936]: debug: Forking temp process write_cib_contents
May  5 17:09:34 devel1 cib: [5936]: info: Managed write_cib_contents process 5945 exited with return code 0.
May  5 17:09:34 devel1 crmd: [5942]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
May  5 17:09:34 devel1 crmd: [5942]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
May  5 17:09:34 devel1 crmd: [5942]: debug: cib_native_signon_raw: Connection to CIB successful
May  5 17:09:34 devel1 crmd: [5942]: info: do_cib_control: CIB connection established
May  5 17:09:34 devel1 crmd: [5942]: debug: do_fsa_action: actions:trace: #011// A_HA_CONNECT
May  5 17:09:34 devel1 crmd: [5942]: info: get_cluster_type: Cluster type is: 'corosync'
May  5 17:09:34 devel1 crmd: [5942]: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
May  5 17:09:34 devel1 cib: [5936]: debug: cib_common_callback_worker: Setting cib_refresh_notify callbacks for 5942 (886d8d0c-b91d-47cd-ad06-050e6693e1b1): on
May  5 17:09:34 devel1 cib: [5936]: debug: cib_common_callback_worker: Setting cib_diff_notify callbacks for 5942 (886d8d0c-b91d-47cd-ad06-050e6693e1b1): on
May  5 17:09:34 devel1 corosync[2101]:  [CPG   ] lib_init_fn: conn=0x6a2650, cpd=0x6926f0
May  5 17:09:34 devel1 crmd: [5942]: debug: init_cpg_connection: Adding fd=6 to mainloop
May  5 17:09:34 devel1 crmd: [5942]: info: init_ais_connection_once: Connection to 'corosync': established
May  5 17:09:34 devel1 crmd: [5942]: debug: crm_new_peer: Creating entry for node devel1/1790093504
May  5 17:09:34 devel1 crmd: [5942]: info: crm_new_peer: Node devel1 now has id: 1790093504
May  5 17:09:34 devel1 crmd: [5942]: info: crm_new_peer: Node 1790093504 is now known as devel1
May  5 17:09:34 devel1 crmd: [5942]: info: ais_status_callback: status: devel1 is now unknown
May  5 17:09:34 devel1 crmd: [5942]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/pcmk
May  5 17:09:34 devel1 pacemakerd: [5931]: debug: pcmk_client_connect: Channel 0x6125e0 connected: 4 children
May  5 17:09:34 devel1 crmd: [5942]: info: init_quorum_connection: Configuring Pacemaker to obtain quorum from Corosync
May  5 17:09:34 devel1 corosync[2101]:  [TOTEM ] mcasted message added to pending queue
May  5 17:09:34 devel1 corosync[2101]:  [TOTEM ] Delivering 2d to 2e
May  5 17:09:34 devel1 corosync[2101]:  [TOTEM ] Delivering MCAST message with seq 2e to pending delivery queue
May  5 17:09:34 devel1 corosync[2101]:  [CPG   ] got procjoin message from cluster node 1790093504
May  5 17:09:34 devel1 corosync[2101]:  [TOTEM ] Received ringid(192.168.178.106:332) seq 2e
May  5 17:09:34 devel1 corosync[2101]:  [TOTEM ] Received ringid(192.168.178.106:332) seq 2e
May  5 17:09:34 devel1 corosync[2101]:  [TOTEM ] releasing messages up to and including 2e
May  5 17:09:34 devel1 corosync[2101]:  [QUORUM] lib_init_fn: conn=0x6a69b0
May  5 17:09:34 devel1 crmd: [5942]: notice: init_quorum_connection: Quorum acquired
May  5 17:09:34 devel1 crmd: [5942]: info: do_ha_control: Connected to the cluster
May  5 17:09:34 devel1 crmd: [5942]: debug: do_fsa_action: actions:trace: #011// A_READCONFIG
May  5 17:09:34 devel1 crmd: [5942]: debug: do_fsa_action: actions:trace: #011// A_LRM_CONNECT
May  5 17:09:34 devel1 crmd: [5942]: debug: do_lrm_control: Connecting to the LRM
May  5 17:09:34 devel1 lrmd: [5938]: debug: on_msg_register:client crmd [5942] registered
May  5 17:09:34 devel1 crmd: [5942]: debug: do_lrm_control: LRM connection established
May  5 17:09:34 devel1 crmd: [5942]: debug: do_fsa_action: actions:trace: #011// A_CCM_CONNECT
May  5 17:09:34 devel1 crmd: [5942]: debug: do_fsa_action: actions:trace: #011// A_STARTED
May  5 17:09:34 devel1 crmd: [5942]: info: do_started: Delaying start, no membership data (0000000000100000)
May  5 17:09:34 devel1 crmd: [5942]: debug: register_fsa_input_adv: Stalling the FSA pending further input: cause=C_FSA_INTERNAL
May  5 17:09:34 devel1 crmd: [5942]: debug: s_crmd_fsa: Exiting the FSA: queue=0, fsa_actions=0x2, stalled=true
May  5 17:09:34 devel1 crmd: [5942]: debug: fsa_dump_inputs: Added input: 0000000000000100 (R_CIB_CONNECTED)
May  5 17:09:34 devel1 crmd: [5942]: debug: fsa_dump_inputs: Added input: 0000000000000800 (R_LRM_CONNECTED)
May  5 17:09:34 devel1 crmd: [5942]: debug: fsa_dump_inputs: Added input: 0000000000200000 (R_PEER_DATA)
May  5 17:09:34 devel1 corosync[2101]:  [QUORUM] got quorate request on 0x6a69b0
May  5 17:09:34 devel1 corosync[2101]:  [QUORUM] got trackstart request on 0x6a69b0
May  5 17:09:34 devel1 corosync[2101]:  [QUORUM] sending initial status to 0x6a69b0
May  5 17:09:34 devel1 corosync[2101]:  [QUORUM] sending quorum notification to 0x6a69b0, length = 48
May  5 17:09:34 devel1 crmd: [5942]: info: crmd_init: Starting crmd's mainloop
May  5 17:09:34 devel1 crmd: [5942]: debug: config_query_callback: Call 3 : Parsing CIB options
May  5 17:09:34 devel1 crmd: [5942]: info: config_query_callback: Shutdown escalation occurs after: 1200000ms
May  5 17:09:34 devel1 crmd: [5942]: info: config_query_callback: Checking for expired actions every 900000ms
May  5 17:09:34 devel1 crmd: [5942]: notice: crmd_peer_update: Status update: Client devel1/crmd now has status [online] (DC=<null>)
May  5 17:09:34 devel1 crmd: [5942]: info: crm_update_peer: Node devel1: id=1790093504 state=unknown addr=(null) votes=0 born=0 seen=0 proc=00000000000000000000000000111312 (new)
May  5 17:09:34 devel1 crmd: [5942]: info: pcmk_quorum_notification: Membership 0: quorum retained (0)
May  5 17:09:34 devel1 crmd: [5942]: debug: do_fsa_action: actions:trace: #011// A_STARTED
May  5 17:09:34 devel1 crmd: [5942]: info: do_started: Delaying start, no membership data (0000000000100000)
May  5 17:09:34 devel1 crmd: [5942]: debug: register_fsa_input_adv: Stalling the FSA pending further input: cause=C_FSA_INTERNAL
May  5 17:09:34 devel1 crmd: [5942]: debug: s_crmd_fsa: Exiting the FSA: queue=0, fsa_actions=0x2, stalled=true
May  5 17:09:38 devel1 attrd: [5939]: debug: cib_connect: CIB signon attempt 1
May  5 17:09:38 devel1 attrd: [5939]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
May  5 17:09:38 devel1 attrd: [5939]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
May  5 17:09:38 devel1 attrd: [5939]: debug: cib_native_signon_raw: Connection to CIB successful
May  5 17:09:38 devel1 attrd: [5939]: info: cib_connect: Connected to the CIB after 1 signon attempts
May  5 17:09:38 devel1 attrd: [5939]: info: cib_connect: Sending full refresh
May  5 17:09:38 devel1 cib: [5936]: debug: cib_common_callback_worker: Setting cib_refresh_notify callbacks for 5939 (ea55e33a-3aea-4ed0-8f68-94396dd6af6e): on
May  5 17:11:00 devel1 cib: [5936]: debug: cib_common_callback_worker: Setting cib_diff_notify callbacks for 5962 (d83c877d-d07c-4b9c-b960-e838be37ba32): off
May  5 17:11:00 devel1 cib: [5936]: debug: cib_common_callback_worker: Setting cib_diff_notify callbacks for 5962 (d83c877d-d07c-4b9c-b960-e838be37ba32): on
