Hi,

On Wed, Sep 28, 2011 at 11:47:30AM +0000, Amit Jathar wrote:
> Hi,
> 
> I am facing a weird issue with corosync's behavior.
> 
> I have configured a two node cluster.
> The cluster is working fine and the crm_mon command shows proper output.
> The cibadmin -Q command also works properly on both nodes.
> 
> The issue starts when I run any crm configuration command.
> 
> As soon as I run a crm configuration command, I see the following output:
> [root@AAA02 corosync]# crm configure property no-quorum-policy=ignore
> Could not connect to the CIB: Remote node did not respond
> ERROR: creating tmp shadow __crmshell.12274 failed
> [root@AAA02 corosync]#
> 
> 
> At the same time, /var/log/messages shows:
> Sep 28 13:38:40 localhost cibadmin: [12295]: info: Invoked: cibadmin -Ql
> Sep 28 13:38:40 localhost cibadmin: [12296]: info: Invoked: cibadmin -Ql
> Sep 28 13:38:40 localhost crm_shadow: [12298]: info: Invoked: crm_shadow -c __crmshell.12274
> 
> I have attached a file with the cib.xml and corosync.conf contents from 
> both nodes.
> 
> Please guide me in troubleshooting this error.

The answer is somewhere in the logs. There should be some
serious-looking ERROR/CRIT messages around the time of the
failure. This looks like a broken installation. Which packages
did you install?
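For the archives, one quick way to surface those, assuming syslog lands in /var/log/messages (which the to_syslog/syslog_facility settings in the posted corosync.conf suggest); the helper name is just for illustration:

```shell
#!/bin/sh
# Surface the serious-looking messages from a syslog file.
# /var/log/messages is an assumption based on the posted corosync.conf
# (to_syslog: yes, syslog_facility: daemon); pass another path if yours
# goes elsewhere.
show_cluster_errors() {
    grep -E '(ERROR|CRIT)' "${1:-/var/log/messages}" | tail -n 50
}
# Usage: show_cluster_errors /var/log/messages
```

Whatever shows up right around the time of the failed crm call is
what we would need to see.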

The crm shell uses cibadmin and crm_shadow internally to update
the CIB.
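You can also run those two steps by hand to see which one fails. A small sketch (the run_step helper and the "test-dbg" shadow name are illustrative, not anything crm creates itself):

```shell
#!/bin/sh
# Echo each step before running it, so the one that hangs or fails
# is obvious in the output.
run_step() {
    echo "== $*"
    "$@"
}

# The same sequence crm performs internally ("test-dbg" is an arbitrary
# shadow CIB name; uncomment these on a cluster node):
#   run_step cibadmin -Ql
#   run_step crm_shadow -c test-dbg
#   run_step crm_shadow -D test-dbg --force
```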

Thanks,

Dejan

> Thanks in advance.
> 
> Thanks,
> Amit
> 

Content-Description: cib_xml_corosync_conf.txt
> 
> 
> cib.xml file on node-1:
> 
> <cib epoch="7" num_updates="0" admin_epoch="0" validate-with="pacemaker-1.0" 
> crm_feature_set="3.0.1" have-quorum="1" dc-uuid="AAA01" cib-last-written="Wed 
> Sep 28 13:36:11 2011">
>   <configuration>
>     <crm_config>
>       <cluster_property_set id="cib-bootstrap-options">
>         <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" 
> value="1.0.11-1554a83db0d3c3e546cfd3aaff6af1184f79ee87"/>
>         <nvpair id="cib-bootstrap-options-cluster-infrastructure" 
> name="cluster-infrastructure" value="openais"/>
>         <nvpair id="cib-bootstrap-options-expected-quorum-votes" 
> name="expected-quorum-votes" value="2"/>
>       </cluster_property_set>
>     </crm_config>
>     <nodes>
>       <node id="AAA01" uname="AAA01" type="normal"/>
>       <node id="AAA02" uname="AAA02" type="normal"/>
>     </nodes>
>     <resources/>
>     <constraints/>
>   </configuration>
> </cib>
> 
> ======================================================================================
> 
> cib.xml file on node-2:
> 
> <cib validate-with="pacemaker-1.0" crm_feature_set="3.0.1" have-quorum="1" 
> dc-uuid="AAA01" admin_epoch="0" epoch="7" num_updates="0" 
> cib-last-written="Wed Sep 28 13:36:11 2011">
>   <configuration>
>     <crm_config>
>       <cluster_property_set id="cib-bootstrap-options">
>         <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" 
> value="1.0.11-1554a83db0d3c3e546cfd3aaff6af1184f79ee87"/>
>         <nvpair id="cib-bootstrap-options-cluster-infrastructure" 
> name="cluster-infrastructure" value="openais"/>
>         <nvpair id="cib-bootstrap-options-expected-quorum-votes" 
> name="expected-quorum-votes" value="2"/>
>       </cluster_property_set>
>     </crm_config>
>     <nodes>
>       <node id="AAA01" uname="AAA01" type="normal"/>
>       <node id="AAA02" uname="AAA02" type="normal"/>
>     </nodes>
>     <resources/>
>     <constraints/>
>   </configuration>
> </cib>
> 
> 
> =========================================================================================
> 
> corosync.conf file:
> aisexec {
>         user: root
>         group: root
> }
> 
> corosync {
>         user: root
>         group: root
> }
> 
> amf {
>         mode: disabled
> }
> 
> logging {
>         to_stderr: yes
>         debug: off
>         timestamp: on
>         to_file: no
>         to_syslog: yes
>         syslog_facility: daemon
> }
> 
> totem {
>         version: 2
>         token: 3000
>         token_retransmits_before_loss_const: 10
>         join: 60
>         consensus: 4000
>         vsftype: none
>         max_messages: 20
>         clear_node_high_bit: yes
>         secauth: on
>         threads: 0
>         # nodeid: 1234
>         rrp_mode: active
>         fail_recv_const: 5000
> 
>         interface {
>                 ringnumber: 0
>                 bindnetaddr: 172.25.0.0
>                 mcastaddr: 227.95.1.1
>                 mcastport: 5404
>         }
> }
> 
> 
> ======================================================================================
Content-Description: logs_on_node.txt
> Sep 28 13:35:13 localhost corosync[12726]:   [pcmk  ] info: update_member: 
> 0x153fa980 Node 184555948 now known as AAA02 (was: (null))
> Sep 28 13:35:13 localhost cib: [12733]: notice: ais_dispatch: Membership 
> 3388: quorum acquired
> Sep 28 13:35:13 localhost corosync[12726]:   [pcmk  ] info: update_member: 
> Node AAA02 now has process list: 00000000000000000000000000013312 (78610)
> Sep 28 13:35:13 localhost cib: [12733]: info: crm_get_peer: Node 184555948 is 
> now known as AAA02
> Sep 28 13:35:13 localhost corosync[12726]:   [pcmk  ] info: update_member: 
> Node AAA02 now has 1 quorum votes (was 0)
> Sep 28 13:35:13 localhost cib: [12733]: info: crm_update_peer: Node AAA02: 
> id=184555948 state=member addr=r(0) ip(172.25.0.11)  votes=1 (new) born=3388 
> seen=3388 proc=00000000000000000000000000013312 (new)
> Sep 28 13:35:13 localhost corosync[12726]:   [pcmk  ] info: 
> send_member_notification: Sending membership update 3388 to 2 children
> Sep 28 13:35:13 localhost corosync[12726]:   [MAIN  ] Completed service 
> synchronization, ready to provide service.
> Sep 28 13:35:13 localhost attrd: [12735]: info: cib_connect: Connected to the 
> CIB after 1 signon attempts
> Sep 28 13:35:13 localhost attrd: [12735]: info: cib_connect: Sending full 
> refresh
> Sep 28 13:36:10 localhost crmd: [12737]: info: crm_timer_popped: Election 
> Trigger (I_DC_TIMEOUT) just popped!
> Sep 28 13:36:10 localhost crmd: [12737]: WARN: do_log: FSA: Input 
> I_DC_TIMEOUT from crm_timer_popped() received in state S_PENDING
> Sep 28 13:36:10 localhost crmd: [12737]: info: do_state_transition: State 
> transition S_PENDING -> S_ELECTION [ input=I_DC_TIMEOUT cause=C_TIMER_POPPED 
> origin=crm_timer_popped ]
> Sep 28 13:36:10 localhost crmd: [12737]: info: do_state_transition: State 
> transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC 
> cause=C_FSA_INTERNAL origin=do_election_check ]
> Sep 28 13:36:10 localhost crmd: [12737]: info: do_te_control: Registering TE 
> UUID: da9dac9a-2d57-4cde-8dd3-54589b00baef
> Sep 28 13:36:10 localhost crmd: [12737]: info: set_graph_functions: Setting 
> custom graph functions
> Sep 28 13:36:10 localhost crmd: [12737]: info: unpack_graph: Unpacked 
> transition -1: 0 actions in 0 synapses
> Sep 28 13:36:10 localhost crmd: [12737]: info: do_dc_takeover: Taking over DC 
> status for this partition
> Sep 28 13:36:10 localhost cib: [12733]: info: cib_process_readwrite: We are 
> now in R/W mode
> Sep 28 13:36:10 localhost cib: [12733]: info: cib_process_request: Operation 
> complete: op cib_master for section 'all' (origin=local/crmd/5, 
> version=0.0.0): ok (rc=0)
> Sep 28 13:36:10 localhost cib: [12733]: info: log_data_element: cib:diff: - 
> <cib admin_epoch="0" epoch="0" num_updates="0" />
> Sep 28 13:36:10 localhost cib: [12733]: info: log_data_element: cib:diff: + 
> <cib crm_feature_set="3.0.1" admin_epoch="0" epoch="1" num_updates="1" />
> Sep 28 13:36:10 localhost cib: [12733]: info: cib_process_request: Operation 
> complete: op cib_modify for section cib (origin=local/crmd/6, version=0.1.1): 
> ok (rc=0)
> Sep 28 13:36:10 localhost cib: [12754]: info: write_cib_contents: Archived 
> previous version as /var/lib/heartbeat/crm/cib-75.raw
> Sep 28 13:36:10 localhost cib: [12733]: info: log_data_element: cib:diff: - 
> <cib admin_epoch="0" epoch="1" num_updates="1" />
> Sep 28 13:36:10 localhost cib: [12733]: info: log_data_element: cib:diff: + 
> <cib admin_epoch="0" epoch="2" num_updates="1" >
> Sep 28 13:36:10 localhost cib: [12733]: info: log_data_element: cib:diff: +   
> <configuration >
> Sep 28 13:36:10 localhost cib: [12733]: info: log_data_element: cib:diff: +   
>   <crm_config >
> Sep 28 13:36:10 localhost cib: [12733]: info: log_data_element: cib:diff: +   
>     <cluster_property_set id="cib-bootstrap-options" 
> __crm_diff_marker__="added:top" >
> Sep 28 13:36:10 localhost cib: [12733]: info: log_data_element: cib:diff: +   
>       <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" 
> value="1.0.11-1554a83db0d3c3e546cfd3aaff6af1184f79ee87" />
> Sep 28 13:36:10 localhost cib: [12733]: info: log_data_element: cib:diff: +   
>     </cluster_property_set>
> Sep 28 13:36:10 localhost cib: [12733]: info: log_data_element: cib:diff: +   
>   </crm_config>
> Sep 28 13:36:10 localhost cib: [12733]: info: log_data_element: cib:diff: +   
> </configuration>
> Sep 28 13:36:10 localhost cib: [12733]: info: log_data_element: cib:diff: + 
> </cib>
> Sep 28 13:36:10 localhost cib: [12733]: info: cib_process_request: Operation 
> complete: op cib_modify for section crm_config (origin=local/crmd/9, 
> version=0.2.1): ok (rc=0)
> Sep 28 13:36:10 localhost cib: [12754]: info: write_cib_contents: Wrote 
> version 0.1.0 of the CIB to disk (digest: 36572db1ee12b3a5904800f744fe36f1)
> Sep 28 13:36:10 localhost crmd: [12737]: info: join_make_offer: Making join 
> offers based on membership 3388
> Sep 28 13:36:10 localhost cib: [12754]: info: retrieveCib: Reading cluster 
> configuration from: /var/lib/heartbeat/crm/cib.I1eXCq (digest: 
> /var/lib/heartbeat/crm/cib.FyUqDI)
> Sep 28 13:36:10 localhost crmd: [12737]: info: do_dc_join_offer_all: join-1: 
> Waiting on 2 outstanding join acks
> Sep 28 13:36:10 localhost crmd: [12737]: info: ais_dispatch: Membership 3388: 
> quorum retained
> Sep 28 13:36:10 localhost cib: [12733]: info: log_data_element: cib:diff: - 
> <cib admin_epoch="0" epoch="2" num_updates="1" />
> Sep 28 13:36:10 localhost cib: [12733]: info: log_data_element: cib:diff: + 
> <cib admin_epoch="0" epoch="3" num_updates="1" >
> Sep 28 13:36:10 localhost cib: [12733]: info: log_data_element: cib:diff: +   
> <configuration >
> Sep 28 13:36:10 localhost cib: [12733]: info: log_data_element: cib:diff: +   
>   <crm_config >
> Sep 28 13:36:10 localhost cib: [12733]: info: log_data_element: cib:diff: +   
>     <cluster_property_set id="cib-bootstrap-options" >
> Sep 28 13:36:10 localhost cib: [12733]: info: log_data_element: cib:diff: +   
>       <nvpair id="cib-bootstrap-options-cluster-infrastructure" 
> name="cluster-infrastructure" value="openais" __crm_diff_marker__="added:top" 
> />
> Sep 28 13:36:10 localhost cib: [12733]: info: log_data_element: cib:diff: +   
>     </cluster_property_set>
> Sep 28 13:36:10 localhost cib: [12733]: info: log_data_element: cib:diff: +   
>   </crm_config>
> Sep 28 13:36:10 localhost cib: [12733]: info: log_data_element: cib:diff: +   
> </configuration>
> Sep 28 13:36:10 localhost cib: [12733]: info: log_data_element: cib:diff: + 
> </cib>
> Sep 28 13:36:10 localhost cib: [12733]: info: cib_process_request: Operation 
> complete: op cib_modify for section crm_config (origin=local/crmd/12, 
> version=0.3.1): ok (rc=0)
> Sep 28 13:36:10 localhost crmd: [12737]: info: crm_ais_dispatch: Setting 
> expected votes to 2
> Sep 28 13:36:10 localhost crmd: [12737]: info: config_query_callback: 
> Checking for expired actions every 900000ms
> Sep 28 13:36:10 localhost cib: [12755]: info: write_cib_contents: Archived 
> previous version as /var/lib/heartbeat/crm/cib-76.raw
> Sep 28 13:36:10 localhost crmd: [12737]: info: config_query_callback: Sending 
> expected-votes=2 to corosync
> Sep 28 13:36:10 localhost cib: [12733]: info: log_data_element: cib:diff: - 
> <cib admin_epoch="0" epoch="3" num_updates="1" />
> Sep 28 13:36:10 localhost cib: [12755]: info: write_cib_contents: Wrote 
> version 0.3.0 of the CIB to disk (digest: 400d0d22549b70544e114110ba2c5815)
> Sep 28 13:36:10 localhost crmd: [12737]: info: update_dc: Set DC to AAA01 
> (3.0.1)
> Sep 28 13:36:10 localhost cib: [12733]: info: log_data_element: cib:diff: + 
> <cib admin_epoch="0" epoch="4" num_updates="1" >
> Sep 28 13:36:10 localhost cib: [12755]: info: retrieveCib: Reading cluster 
> configuration from: /var/lib/heartbeat/crm/cib.Tl7tJq (digest: 
> /var/lib/heartbeat/crm/cib.7PFuQI)
> Sep 28 13:36:10 localhost crmd: [12737]: info: ais_dispatch: Membership 3388: 
> quorum retained
> Sep 28 13:36:10 localhost cib: [12733]: info: log_data_element: cib:diff: +   
> <configuration >
> Sep 28 13:36:10 localhost cib: [12733]: info: log_data_element: cib:diff: +   
>   <crm_config >
> Sep 28 13:36:10 localhost cib: [12733]: info: log_data_element: cib:diff: +   
>     <cluster_property_set id="cib-bootstrap-options" >
> Sep 28 13:36:10 localhost cib: [12733]: info: log_data_element: cib:diff: +   
>       <nvpair id="cib-bootstrap-options-expected-quorum-votes" 
> name="expected-quorum-votes" value="2" __crm_diff_marker__="added:top" />
> Sep 28 13:36:10 localhost cib: [12733]: info: log_data_element: cib:diff: +   
>     </cluster_property_set>
> Sep 28 13:36:10 localhost cib: [12733]: info: log_data_element: cib:diff: +   
>   </crm_config>
> Sep 28 13:36:10 localhost cib: [12733]: info: log_data_element: cib:diff: +   
> </configuration>
> Sep 28 13:36:10 localhost cib: [12733]: info: log_data_element: cib:diff: + 
> </cib>
> Sep 28 13:36:10 localhost cib: [12733]: info: cib_process_request: Operation 
> complete: op cib_modify for section crm_config (origin=local/crmd/16, 
> version=0.4.1): ok (rc=0)
> Sep 28 13:36:10 localhost crmd: [12737]: info: crm_ais_dispatch: Setting 
> expected votes to 2
> Sep 28 13:36:10 localhost crmd: [12737]: info: config_query_callback: 
> Checking for expired actions every 900000ms
> Sep 28 13:36:10 localhost cib: [12756]: info: write_cib_contents: Archived 
> previous version as /var/lib/heartbeat/crm/cib-77.raw
> Sep 28 13:36:10 localhost cib: [12733]: info: cib_process_request: Operation 
> complete: op cib_modify for section crm_config (origin=local/crmd/20, 
> version=0.4.1): ok (rc=0)
> Sep 28 13:36:10 localhost crmd: [12737]: info: config_query_callback: Sending 
> expected-votes=2 to corosync
> Sep 28 13:36:10 localhost cib: [12756]: info: write_cib_contents: Wrote 
> version 0.4.0 of the CIB to disk (digest: bf561f27ece6873475e986191857e6ec)
> Sep 28 13:36:10 localhost crmd: [12737]: info: ais_dispatch: Membership 3388: 
> quorum retained
> Sep 28 13:36:10 localhost crmd: [12737]: info: crm_ais_dispatch: Setting 
> expected votes to 2
> Sep 28 13:36:10 localhost cib: [12756]: info: retrieveCib: Reading cluster 
> configuration from: /var/lib/heartbeat/crm/cib.7aY0Pq (digest: 
> /var/lib/heartbeat/crm/cib.4Tty3I)
> Sep 28 13:36:10 localhost crmd: [12737]: info: do_state_transition: State 
> transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED 
> cause=C_FSA_INTERNAL origin=check_join_state ]
> Sep 28 13:36:10 localhost crmd: [12737]: info: do_state_transition: All 2 
> cluster nodes responded to the join offer.
> Sep 28 13:36:10 localhost cib: [12733]: info: cib_process_request: Operation 
> complete: op cib_modify for section crm_config (origin=local/crmd/22, 
> version=0.4.1): ok (rc=0)
> Sep 28 13:36:10 localhost crmd: [12737]: info: do_dc_join_finalize: join-1: 
> Syncing the CIB from AAA01 to the rest of the cluster
> Sep 28 13:36:10 localhost crmd: [12737]: info: te_connect_stonith: Attempting 
> connection to fencing daemon...
> Sep 28 13:36:10 localhost cib: [12733]: info: cib_process_request: Operation 
> complete: op cib_sync for section 'all' (origin=local/crmd/23, 
> version=0.4.1): ok (rc=0)
> Sep 28 13:36:11 localhost crmd: [12737]: info: te_connect_stonith: Connected
> Sep 28 13:36:11 localhost crmd: [12737]: info: update_attrd: Connecting to 
> attrd...
> Sep 28 13:36:11 localhost attrd: [12735]: info: find_hash_entry: Creating 
> hash entry for terminate
> Sep 28 13:36:11 localhost attrd: [12735]: info: find_hash_entry: Creating 
> hash entry for shutdown
> Sep 28 13:36:11 localhost crmd: [12737]: info: do_dc_join_ack: join-1: 
> Updating node state to member for AAA01
> Sep 28 13:36:11 localhost crmd: [12737]: info: do_dc_join_ack: join-1: 
> Updating node state to member for AAA02
> Sep 28 13:36:11 localhost attrd: [12735]: info: crm_new_peer: Node AAA02 now 
> has id: 184555948
> Sep 28 13:36:11 localhost attrd: [12735]: info: crm_new_peer: Node 184555948 
> is now known as AAA02
> Sep 28 13:36:11 localhost cib: [12733]: info: log_data_element: cib:diff: - 
> <cib admin_epoch="0" epoch="4" num_updates="1" />
> Sep 28 13:36:11 localhost cib: [12733]: info: log_data_element: cib:diff: + 
> <cib admin_epoch="0" epoch="5" num_updates="1" >
> Sep 28 13:36:11 localhost cib: [12733]: info: log_data_element: cib:diff: +   
> <configuration >
> Sep 28 13:36:11 localhost cib: [12733]: info: log_data_element: cib:diff: +   
>   <nodes >
> Sep 28 13:36:11 localhost cib: [12733]: info: log_data_element: cib:diff: +   
>     <node id="AAA01" uname="AAA01" type="normal" 
> __crm_diff_marker__="added:top" />
> Sep 28 13:36:11 localhost cib: [12733]: info: log_data_element: cib:diff: +   
>   </nodes>
> Sep 28 13:36:11 localhost cib: [12733]: info: log_data_element: cib:diff: +   
> </configuration>
> Sep 28 13:36:11 localhost cib: [12733]: info: log_data_element: cib:diff: + 
> </cib>
> Sep 28 13:36:11 localhost cib: [12733]: info: cib_process_request: Operation 
> complete: op cib_modify for section nodes (origin=local/crmd/24, 
> version=0.5.1): ok (rc=0)
> Sep 28 13:36:11 localhost cib: [12733]: info: log_data_element: cib:diff: - 
> <cib admin_epoch="0" epoch="5" num_updates="1" />
> Sep 28 13:36:11 localhost cib: [12733]: info: log_data_element: cib:diff: + 
> <cib admin_epoch="0" epoch="6" num_updates="1" >
> Sep 28 13:36:11 localhost cib: [12733]: info: log_data_element: cib:diff: +   
> <configuration >
> Sep 28 13:36:11 localhost cib: [12733]: info: log_data_element: cib:diff: +   
>   <nodes >
> Sep 28 13:36:11 localhost cib: [12733]: info: log_data_element: cib:diff: +   
>     <node id="AAA02" uname="AAA02" type="normal" 
> __crm_diff_marker__="added:top" />
> Sep 28 13:36:11 localhost cib: [12733]: info: log_data_element: cib:diff: +   
>   </nodes>
> Sep 28 13:36:11 localhost cib: [12733]: info: log_data_element: cib:diff: +   
> </configuration>
> Sep 28 13:36:11 localhost cib: [12733]: info: log_data_element: cib:diff: + 
> </cib>
> Sep 28 13:36:11 localhost cib: [12733]: info: cib_process_request: Operation 
> complete: op cib_modify for section nodes (origin=local/crmd/25, 
> version=0.6.1): ok (rc=0)
> Sep 28 13:36:11 localhost cib: [12733]: info: cib_process_request: Operation 
> complete: op cib_delete for section 
> //node_state[@uname='AAA01']/transient_attributes (origin=local/crmd/26, 
> version=0.6.1): ok (rc=0)
> Sep 28 13:36:11 localhost crmd: [12737]: info: erase_xpath_callback: Deletion 
> of "//node_state[@uname='AAA01']/transient_attributes": ok (rc=0)
> Sep 28 13:36:11 localhost cib: [12733]: info: cib_process_request: Operation 
> complete: op cib_delete for section //node_state[@uname='AAA01']/lrm 
> (origin=local/crmd/27, version=0.6.1): ok (rc=0)
> Sep 28 13:36:11 localhost crmd: [12737]: info: erase_xpath_callback: Deletion 
> of "//node_state[@uname='AAA01']/lrm": ok (rc=0)
> Sep 28 13:36:11 localhost crmd: [12737]: info: do_state_transition: State 
> transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED 
> cause=C_FSA_INTERNAL origin=check_join_state ]
> Sep 28 13:36:11 localhost crmd: [12737]: info: do_state_transition: All 2 
> cluster nodes are eligible to run resources.
> Sep 28 13:36:11 localhost crmd: [12737]: info: do_dc_join_final: Ensuring DC, 
> quorum and node attributes are up-to-date
> Sep 28 13:36:11 localhost crmd: [12737]: info: crm_update_quorum: Updating 
> quorum status to true (call=33)
> Sep 28 13:36:11 localhost crmd: [12737]: info: abort_transition_graph: 
> do_te_invoke:185 - Triggered transition abort (complete=1) : Peer Cancelled
> Sep 28 13:36:11 localhost crmd: [12737]: info: do_pe_invoke: Query 34: 
> Requesting the current CIB: S_POLICY_ENGINE
> Sep 28 13:36:11 localhost cib: [12733]: info: cib_process_request: Operation 
> complete: op cib_delete for section 
> //node_state[@uname='AAA02']/transient_attributes (origin=AAA02/crmd/9, 
> version=0.6.2): ok (rc=0)
> Sep 28 13:36:11 localhost cib: [12733]: info: cib_process_request: Operation 
> complete: op cib_delete for section //node_state[@uname='AAA02']/lrm 
> (origin=local/crmd/29, version=0.6.2): ok (rc=0)
> Sep 28 13:36:11 localhost crmd: [12737]: info: erase_xpath_callback: Deletion 
> of "//node_state[@uname='AAA02']/lrm": ok (rc=0)
> Sep 28 13:36:11 localhost cib: [12733]: info: cib_process_request: Operation 
> complete: op cib_modify for section nodes (origin=local/crmd/31, 
> version=0.6.3): ok (rc=0)
> Sep 28 13:36:11 localhost cib: [12733]: info: log_data_element: cib:diff: - 
> <cib admin_epoch="0" epoch="6" num_updates="3" />
> Sep 28 13:36:11 localhost cib: [12733]: info: log_data_element: cib:diff: + 
> <cib have-quorum="1" dc-uuid="AAA01" admin_epoch="0" epoch="7" 
> num_updates="1" />
> Sep 28 13:36:11 localhost cib: [12733]: info: cib_process_request: Operation 
> complete: op cib_modify for section cib (origin=local/crmd/33, 
> version=0.7.1): ok (rc=0)
> Sep 28 13:36:11 localhost crmd: [12737]: info: abort_transition_graph: 
> need_abort:59 - Triggered transition abort (complete=1) : Non-status change
> Sep 28 13:36:11 localhost crmd: [12737]: info: need_abort: Aborting on change 
> to have-quorum
> Sep 28 13:36:11 localhost crmd: [12737]: info: do_pe_invoke: Query 35: 
> Requesting the current CIB: S_POLICY_ENGINE
> Sep 28 13:36:11 localhost crmd: [12737]: info: do_pe_invoke_callback: 
> Invoking the PE: query=35, ref=pe_calc-dc-1317209771-11, seq=3388, quorate=1
> Sep 28 13:36:11 localhost attrd: [12735]: info: attrd_local_callback: Sending 
> full refresh (origin=crmd)
> Sep 28 13:36:11 localhost attrd: [12735]: info: attrd_trigger_update: Sending 
> flush op to all hosts for: terminate (<null>)
> Sep 28 13:36:11 localhost pengine: [12736]: info: unpack_config: Node scores: 
> 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
> Sep 28 13:36:11 localhost pengine: [12736]: ERROR: unpack_resources: Resource 
> start-up disabled since no STONITH resources have been defined
> Sep 28 13:36:11 localhost pengine: [12736]: ERROR: unpack_resources: Either 
> configure some or disable STONITH with the stonith-enabled option
> Sep 28 13:36:11 localhost pengine: [12736]: ERROR: unpack_resources: NOTE: 
> Clusters with shared data need STONITH to ensure data integrity
> Sep 28 13:36:11 localhost attrd: [12735]: info: attrd_trigger_update: Sending 
> flush op to all hosts for: shutdown (<null>)
> Sep 28 13:36:11 localhost pengine: [12736]: info: determine_online_status: 
> Node AAA01 is online
> Sep 28 13:36:11 localhost cib: [12757]: info: write_cib_contents: Archived 
> previous version as /var/lib/heartbeat/crm/cib-78.raw
> Sep 28 13:36:11 localhost pengine: [12736]: info: determine_online_status: 
> Node AAA02 is online
> Sep 28 13:36:11 localhost cib: [12757]: info: write_cib_contents: Wrote 
> version 0.7.0 of the CIB to disk (digest: c39ecc4ce511e872ef550953e34493c8)
> Sep 28 13:36:11 localhost pengine: [12736]: info: stage6: Delaying fencing 
> operations until there are resources to manage
> Sep 28 13:36:11 localhost cib: [12757]: info: retrieveCib: Reading cluster 
> configuration from: /var/lib/heartbeat/crm/cib.hP8rKt (digest: 
> /var/lib/heartbeat/crm/cib.j6GqSO)
> Sep 28 13:36:11 localhost crmd: [12737]: info: do_state_transition: State 
> transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS 
> cause=C_IPC_MESSAGE origin=handle_response ]
> Sep 28 13:36:11 localhost pengine: [12736]: info: process_pe_message: 
> Transition 0: PEngine Input stored in: /var/lib/pengine/pe-input-846.bz2
> Sep 28 13:36:11 localhost crmd: [12737]: info: unpack_graph: Unpacked 
> transition 0: 2 actions in 2 synapses
> Sep 28 13:36:11 localhost pengine: [12736]: info: process_pe_message: 
> Configuration ERRORs found during PE processing.  Please run "crm_verify -L" 
> to identify issues.
> Sep 28 13:36:11 localhost crmd: [12737]: info: do_te_invoke: Processing graph 
> 0 (ref=pe_calc-dc-1317209771-11) derived from 
> /var/lib/pengine/pe-input-846.bz2
> Sep 28 13:36:11 localhost crmd: [12737]: info: te_rsc_command: Initiating 
> action 2: probe_complete probe_complete on AAA01 (local) - no waiting
> Sep 28 13:36:11 localhost crmd: [12737]: info: te_rsc_command: Initiating 
> action 3: probe_complete probe_complete on AAA02 - no waiting
> Sep 28 13:36:11 localhost attrd: [12735]: info: find_hash_entry: Creating 
> hash entry for probe_complete
> Sep 28 13:36:11 localhost crmd: [12737]: info: run_graph: 
> ====================================================
> Sep 28 13:36:11 localhost attrd: [12735]: info: attrd_trigger_update: Sending 
> flush op to all hosts for: probe_complete (true)
> Sep 28 13:36:11 localhost crmd: [12737]: notice: run_graph: Transition 0 
> (Complete=2, Pending=0, Fired=0, Skipped=0, Incomplete=0, 
> Source=/var/lib/pengine/pe-input-846.bz2): Complete
> Sep 28 13:36:11 localhost attrd: [12735]: info: attrd_perform_update: Sent 
> update 10: probe_complete=true
> Sep 28 13:36:11 localhost crmd: [12737]: info: te_graph_trigger: Transition 0 
> is now complete
> Sep 28 13:36:11 localhost attrd: [12735]: info: attrd_perform_update: Sent 
> update 12: probe_complete=true
> Sep 28 13:36:11 localhost crmd: [12737]: info: notify_crmd: Transition 0 
> status: done - <null>
> Sep 28 13:36:11 localhost crmd: [12737]: info: do_state_transition: State 
> transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS 
> cause=C_FSA_INTERNAL origin=notify_crmd ]
> Sep 28 13:36:11 localhost crmd: [12737]: info: do_state_transition: Starting 
> PEngine Recheck Timer

_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
