Hi, in our 2-node cluster (SLES 11 SP1) with Pacemaker 1.1.2-0.7.1 we got the following segfault:
Jan 17 12:24:19 goat1 sudo: clusteradm : TTY=pts/1 ; PWD=/home/clusteradm ; USER=root ; COMMAND=/usr/sbin/crm resource failcount clvg-clone delete sheep1
Jan 17 12:24:20 goat1 crmd: [9531]: notice: run_graph: Transition 87 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-96226.bz2): Complete
Jan 17 12:24:20 goat1 pengine: [9530]: notice: unpack_config: On loss of CCM Quorum: Ignore
Jan 17 12:24:20 goat1 pengine: [9530]: WARN: unpack_nodes: Blind faith: not fencing unseen nodes
Jan 17 12:24:20 goat1 pengine: [9530]: notice: unpack_rsc_op: Operation clvg:0_monitor_0 found resource clvg:0 active on goat1
Jan 17 12:24:20 goat1 pengine: [9530]: notice: native_print: auvm1 (ocf::heartbeat:Xen): Started goat1
Jan 17 12:24:20 goat1 pengine: [9530]: notice: native_print: auvm2 (ocf::heartbeat:Xen): Started goat1
Jan 17 12:24:20 goat1 pengine: [9530]: notice: clone_print: Clone Set: dlm-clone [dlm]
Jan 17 12:24:20 goat1 pengine: [9530]: notice: short_print: Started: [ goat1 sheep1 ]
Jan 17 12:24:20 goat1 pengine: [9530]: notice: clone_print: Clone Set: clvm-clone [clvm]
Jan 17 12:24:20 goat1 pengine: [9530]: notice: short_print: Started: [ goat1 sheep1 ]
Jan 17 12:24:20 goat1 pengine: [9530]: notice: clone_print: Clone Set: clvg-clone [clvg]
Jan 17 12:24:20 goat1 pengine: [9530]: notice: short_print: Started: [ goat1 ]
Jan 17 12:24:20 goat1 pengine: [9530]: notice: short_print: Stopped: [ clvg:1 ]
Jan 17 12:24:20 goat1 pengine: [9530]: notice: clone_print: Clone Set: o2cb-clone [o2cb]
Jan 17 12:24:20 goat1 pengine: [9530]: notice: short_print: Started: [ goat1 sheep1 ]
Jan 17 12:24:20 goat1 pengine: [9530]: notice: clone_print: Clone Set: clfs-clone [clfs]
Jan 17 12:24:20 goat1 pengine: [9530]: notice: short_print: Started: [ sheep1 ]
Jan 17 12:24:20 goat1 pengine: [9530]: notice: short_print: Stopped: [ clfs:0 ]
Jan 17 12:24:20 goat1 pengine: [9530]: notice: clone_print: Clone Set: stonith-goat1-clone [stonith-goat1]
Jan 17 12:24:20 goat1 pengine: [9530]: notice: short_print: Started: [ sheep1 ]
Jan 17 12:24:20 goat1 pengine: [9530]: notice: short_print: Stopped: [ stonith-goat1:1 ]
Jan 17 12:24:20 goat1 pengine: [9530]: notice: clone_print: Clone Set: stonith-sheep1-clone [stonith-sheep1]
Jan 17 12:24:20 goat1 pengine: [9530]: notice: short_print: Started: [ goat1 ]
Jan 17 12:24:20 goat1 pengine: [9530]: notice: short_print: Stopped: [ stonith-sheep1:1 ]
Jan 17 12:24:20 goat1 pengine: [9530]: WARN: common_apply_stickiness: Forcing clfs-clone away from goat1 after 1000000 failures (max=1000000)
Jan 17 12:24:20 goat1 pengine: [9530]: WARN: common_apply_stickiness: Forcing clfs-clone away from goat1 after 1000000 failures (max=1000000)
Jan 17 12:24:20 goat1 pengine: [9530]: notice: clone_rsc_colocation_rh: Cannot pair clfs:0 with instance of o2cb-clone
Jan 17 12:24:20 goat1 pengine: [9530]: notice: RecurringOp: Start recurring monitor (60s) for clvg:1 on sheep1
Jan 17 12:24:20 goat1 pengine: [9530]: ERROR: crm_abort: clone_update_actions_interleave: Triggered assert at clone.c:1164 : then_action != NULL
Jan 17 12:24:20 goat1 pengine: [9530]: ERROR: clone_update_actions_interleave: No action found for stop in clvg:1 (then)
Jan 17 12:24:20 goat1 kernel: [ 4325.050954] __ratelimit: 19 callbacks suppressed
Jan 17 12:24:20 goat1 kernel: [ 4325.050959] pengine[9530]: segfault at 28 ip 00007ff4954ee285 sp 00007fffd1ab5e40 error 4 in libpengine.so.3.0.0[7ff4954d2000+59000]
Jan 17 12:24:20 goat1 crmd: [9531]: CRIT: pe_connection_destroy: Connection to the Policy Engine failed (pid=9530, uuid=9ae096ae-9ab4-4cad-9682-19663eac8c68)
Jan 17 12:24:20 goat1 crmd: [9531]: CRIT: pe_connection_destroy: Connection to the Policy Engine failed (pid=-1, uuid=55b3e452-9cba-4c96-8e7c-706755ba912a)
Jan 17 12:24:20 goat1 crmd: [9531]: CRIT: pe_connection_destroy: Connection to the Policy Engine failed (pid=-1, uuid=996074e

Any idea for the cause of the segfault?
With best regards,
Armin Haußecker

_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
