Re: [ClusterLabs] VIP monitoring failing with Timed Out error

2015-10-29 Thread Jan Pokorný
On 29/10/15 15:27 +0530, Pritam Kharat wrote:
> When I ran ocf-tester to test IPaddr2 agent
> 
> ocf-tester -n sc_vip -o ip=192.168.20.188 -o cidr_netmask=24 -o nic=eth0
> /usr/lib/ocf/resource.d/heartbeat/IPaddr2
> 
> I got this error - ERROR: Setup problem: couldn't find command: ip
> in test_command monitor.  I verified that the ip command is there, but
> I still get this error. What might be the reason for it? Is this okay?
> 
> + ip_validate
> + check_binary ip
> + have_binary ip
> + '[' 1 = 1 ']'
> + false

It may be that your environment is tainted with a variable that
should only be set in a special testing mode, where it injects an
error simulating that a particular helper binary is missing.

Can you please try "unset OCF_TESTER_FAIL_HAVE_BINARY" to sanitize
your environment first?  If you are certain this variable is not set
in the context of the IPaddr2 agent invocations, then the problem
is elsewhere.
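
For example, a quick sanity check (assuming a bash-like shell; the
ocf-tester invocation is just the one from your mail):

  env | grep OCF_TESTER_FAIL_HAVE_BINARY    # is it set at all?
  unset OCF_TESTER_FAIL_HAVE_BINARY
  ocf-tester -n sc_vip -o ip=192.168.20.188 -o cidr_netmask=24 \
      -o nic=eth0 /usr/lib/ocf/resource.d/heartbeat/IPaddr2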

-- 
Jan (Poki)


___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


[ClusterLabs] Resources sometimes not starting after node reboot

2015-10-29 Thread Pritam Kharat
Hi All,

I have a single node with 5 resources running on it. When I rebooted
the node, I sometimes saw the resources in stopped state even though the
node came online.

When I looked into the logs, one difference between the success and
failure cases is that when
*Election Trigger (I_DC_TIMEOUT) just popped (2ms)* occurred, the LRM did
not start the resources but instead jumped to the monitor action, and
from then on it did not start the resources at all.

But in the success case this election timeout did not occur, and the
first action taken by the LRM was to start the resources and then
monitor them, so all the resources started properly.

I have attached both the success and failure logs. Could someone please
explain the reason for this issue and how to solve it?


My CRM configuration is -

root@sc-node-2:~# crm configure show
node $id="2" sc-node-2
primitive oc-fw-agent upstart:oc-fw-agent \
meta allow-migrate="true" migration-threshold="5" failure-timeout="120s" \
op monitor interval="15s" timeout="60s"
primitive oc-lb-agent upstart:oc-lb-agent \
meta allow-migrate="true" migration-threshold="5" failure-timeout="120s" \
op monitor interval="15s" timeout="60s"
primitive oc-service-manager upstart:oc-service-manager \
meta allow-migrate="true" migration-threshold="5" failure-timeout="120s" \
op monitor interval="15s" timeout="60s"
primitive oc-vpn-agent upstart:oc-vpn-agent \
meta allow-migrate="true" migration-threshold="5" failure-timeout="120s" \
op monitor interval="15s" timeout="60s"
primitive sc_vip ocf:heartbeat:IPaddr2 \
params ip="200.10.10.188" cidr_netmask="24" nic="eth1" \
op monitor interval="15s"
group sc-resources sc_vip oc-service-manager oc-fw-agent oc-lb-agent oc-vpn-agent
property $id="cib-bootstrap-options" \
dc-version="1.1.10-42f2063" \
cluster-infrastructure="corosync" \
stonith-enabled="false" \
cluster-recheck-interval="3min" \
default-action-timeout="180s"


-- 
Thanks and Regards,
Pritam Kharat.
Oct 29 13:02:15 [1021] sc-node-2   crmd: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Oct 29 13:02:15 [1021] sc-node-2   crmd: info: crmd_join_phase_log: join-1: sc-node-2=integrated
Oct 29 13:02:15 [1021] sc-node-2   crmd: info: do_dc_join_finalize: join-1: Syncing our CIB to the rest of the cluster
Oct 29 13:02:15 [1016] sc-node-2    cib:   notice: corosync_node_name: Unable to get node name for nodeid 2
Oct 29 13:02:15 [1016] sc-node-2    cib:   notice: get_node_name: Defaulting to uname -n for the local corosync node name
Oct 29 13:02:15 [1016] sc-node-2    cib: info: cib_process_request: Completed cib_sync operation for section 'all': OK (rc=0, origin=local/crmd/14, version=0.11.0)
Oct 29 13:02:15 [1021] sc-node-2   crmd: info: crm_update_peer_join: finalize_join_for: Node sc-node-2[2] - join-1 phase 2 -> 3
Oct 29 13:02:15 [1016] sc-node-2    cib: info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/15, version=0.11.0)
Oct 29 13:02:15 [1021] sc-node-2   crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='sc-node-2']/transient_attributes
Oct 29 13:02:15 [1021] sc-node-2   crmd: info: update_attrd: Connecting to attrd... 5 retries remaining
Oct 29 13:02:15 [1016] sc-node-2    cib: info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='sc-node-2']/transient_attributes: OK (rc=0, origin=local/crmd/16, version=0.11.0)
Oct 29 13:02:15 [1021] sc-node-2   crmd: info: crm_update_peer_join: do_dc_join_ack: Node sc-node-2[2] - join-1 phase 3 -> 4
Oct 29 13:02:15 [1021] sc-node-2   crmd: info: do_dc_join_ack: join-1: Updating node state to member for sc-node-2
Oct 29 13:02:15 [1021] sc-node-2   crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='sc-node-2']/lrm
Oct 29 13:02:15 [1016] sc-node-2    cib: info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='sc-node-2']/lrm: OK (rc=0, origin=local/crmd/17, version=0.11.0)
Oct 29 13:02:15 [1016] sc-node-2    cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/18, version=0.11.1)
Oct 29 13:02:15 [1021] sc-node-2   crmd: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Oct 29 13:02:15 [1021] sc-node-2   crmd: info: abort_transition_graph: do_te_invoke:151 - Triggered transition abort (complete=1) : Peer Cancelled
Oct 29 13:02:15 [1019] sc-node-2  attrd:   notice: attrd_local_callback: Sending full refresh (origin=crmd)
Oct 29 13:02:15 [1016] sc-node-2    cib: info: cib_process_request: Completed cib_modify operation for section nodes: 

Re: [ClusterLabs] ORACLE 12 and SLES HAE (Sles 11sp3)

2015-10-29 Thread Dejan Muhamedagic
Hi,

On Wed, Oct 28, 2015 at 02:45:55AM -0600, Cristiano Coltro wrote:
> Hi,
> most of the SLES 11 SP3 systems with HAE are migrating their Oracle DB.
> The migration will be from Oracle 11 to Oracle 12.
> 
> They have verified that the Oracle cluster resource agent currently
> supports only Oracle 10.2 and 11.2
> (command used: "crm ra info ocf:heartbeat:SAPDatabase"),
> so it seems they are out of support.

It just means that the agent was tested with those. Oracle 12 was
probably not available at the time. At any rate, as others
already pointed out, the RA should support all databases which
are supported by SAPHostAgent.
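
If in doubt, you can also check what the installed agent itself
advertises. A rough sketch (I'm assuming here that your version of the
agent documents the supported databases in its DBTYPE parameter
description; adjust the grep if it is worded differently):

  # crm ra info ocf:heartbeat:SAPDatabase | grep -i -A3 dbtype
  # /usr/lib/ocf/resource.d/heartbeat/SAPDatabase meta-data | grep -i -A3 dbtype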

Thanks,

Dejan

> So I would like to know which version of cluster/OS/agent supports Oracle 12.
> AFAIK the agents are typically included in an rpm:
> # rpm -qf /usr/lib/ocf/resource.d/heartbeat/SAPDatabase
> resource-agents-3.9.5-0.34.57
> and there are NO updates about that in the channel.
> 
> Any Idea on that?
> Thanks,
> Cristiano
> 
> 
> 
> Cristiano Coltro
> Premium Support Engineer
>   
> mail: cristiano.col...@microfocus.com
> phone +39 02 36634936
> mobile +39 3351435589
> 





Re: [ClusterLabs] VIP monitoring failing with Timed Out error

2015-10-29 Thread Dejan Muhamedagic
Hi,

On Thu, Oct 29, 2015 at 10:40:18AM +0530, Pritam Kharat wrote:
> Thank you very much Ken for reply. I will try your suggested steps.

If you cannot figure out from the logs why the stop operation
times out, you can also try to trace the resource agent:

# crm resource help trace
# crm resource trace vip stop

Then take a look at the trace or post it somewhere.
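
(The trace files normally end up under the resource-agents trace
directory; on my systems that is /var/lib/heartbeat/trace_ra/, but the
path and file names may differ depending on your build. Something like:

  # ls -lt /var/lib/heartbeat/trace_ra/IPaddr2/
  # less /var/lib/heartbeat/trace_ra/IPaddr2/vip.stop.<timestamp>

where <timestamp> is only a placeholder for whatever the trace file is
actually called.)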

Thanks,

Dejan

> 
> On Wed, Oct 28, 2015 at 11:23 PM, Ken Gaillot  wrote:
> 
> > On 10/28/2015 03:51 AM, Pritam Kharat wrote:
> > > Hi All,
> > >
> > > I am facing an issue in my two-node HA setup. When I stop pacemaker
> > > on the ACTIVE node, it takes a long time to stop, and by that time
> > > the migration of the VIP and the other resources to the STANDBY node
> > > fails. (I have seen the same issue when the ACTIVE node is rebooted.)
> >
> > I assume STANDBY in this case is just a description of the node's
> > purpose, and does not mean that you placed the node in pacemaker's
> > standby mode. If the node really is in standby mode, it can't run any
> > resources.
> >
> > > Last change: Wed Oct 28 02:52:57 2015 via cibadmin on node-1
> > > Stack: corosync
> > > Current DC: node-1 (1) - partition with quorum
> > > Version: 1.1.10-42f2063
> > > 2 Nodes configured
> > > 2 Resources configured
> > >
> > >
> > > Online: [ node-1 node-2 ]
> > >
> > > Full list of resources:
> > >
> > >  resource (upstart:resource): Stopped
> > >  vip (ocf::heartbeat:IPaddr2): Started node-2 (unmanaged) FAILED
> > >
> > > Migration summary:
> > > * Node node-1:
> > > * Node node-2:
> > >
> > > Failed actions:
> > > vip_stop_0 (node=node-2, call=-1, rc=1, status=Timed Out,
> > > last-rc-change=Wed Oct 28 03:05:24 2015, queued=0ms, exec=0ms): unknown error
> > >
> > > The VIP monitor is failing here with a Timed Out error. What is the
> > > general reason for a timeout? I have set default-action-timeout=180s,
> > > which should be enough for monitoring.
> >
> > 180s should be far more than enough, so something must be going wrong.
> > Notice that it is the stop operation on the active node that is failing.
> > Normally in such a case, pacemaker would fence that node to be sure that
> > it is safe to bring it up elsewhere, but you have disabled stonith.
> >
> > Fencing is important in failure recovery such as this, so it would be a
> > good idea to try to get it implemented.
> >
> > > I have added an order constraint -> only start the other resources
> > > after the vip is started.
> > > Any clue how to solve this problem? Most of the time this VIP
> > > monitoring is failing with a Timed Out error.
> >
> > The "stop" in "vip_stop_0" means that the stop operation is what failed.
> > Have you seen timeouts on any other operations?
> >
> > Look through the logs around the time of the failure, and try to see if
> > there are any indications as to why the stop failed.
> >
> > If you can set aside some time for testing or have a test cluster that
> > exhibits the same issue, you can try unmanaging the resource in
> > pacemaker, then:
> >
> > 1. Try adding/removing the IP via normal system commands, and make sure
> > that works.
> >
> > 2. Try running the resource agent manually (with any verbose option) to
> > start/stop/monitor the IP to see if you can reproduce the problem and
> > get more messages.
> >
> 
> 
> 
> -- 
> Thanks and Regards,
> Pritam Kharat.





Re: [ClusterLabs] restarting resources after configuration changes

2015-10-29 Thread Dejan Muhamedagic
Hi,

On Wed, Oct 28, 2015 at 10:21:26AM +, - - wrote:
> Hi,
> I am having problems restarting resources (e.g. apache) after a
> configuration file change. I have tried 'pcs resource restart resourceid',
> which says 'resource successfully restarted', but the httpd process
> does not restart and hence my configuration changes in httpd.conf
> do not take effect.
> I am sure this scenario is quite common, as administrators need to
> update httpd.conf files often - how is it done in an HA cluster?

Exactly as you tried. Something apparently went wrong, but hard
to say what.
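
One rough way to check whether the restart really happened (assuming
pcs is in use; 'resourceid' below stands for your actual resource id):

  # ps -o pid,lstart -C httpd        # note the start times
  # pcs resource restart resourceid
  # ps -o pid,lstart -C httpd        # the start times should have changed

If the httpd start times do not change, the cluster never actually
stopped the resource, and the logs around that moment should show why.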

Thanks,

Dejan

> I can send a HUP signal to the httpd process to achieve this, but I hope
> there will be a cluster (pcs/crm) method to do this.
> Many Thanks
> 
> krishan





Re: [ClusterLabs] [Patch][glue][external/libvirt] Conversion to a lower case of hostlist.

2015-10-29 Thread renayama19661014
Hi Dejan,
Hi All,

What about the patch I contributed in an earlier email?
I would appreciate your opinions on it.

Best Regards,
Hideo Yamauchi.

- Original Message -
> From: "renayama19661...@ybb.ne.jp" 
> To: Cluster Labs - All topics related to open-source clustering welcomed 
> 
> Cc: 
> Date: 2015/10/14, Wed 09:38
> Subject: Re: [ClusterLabs] [Patch][glue][external/libvirt] Conversion to a 
> lower case of hostlist.
> 
> Hi Dejan,
> Hi All,
> 
> We reconsidered a patch.
> 
> 
> 
> In Pacemaker 1.1, node names used for STONITH are always lower case.
> When a user uses upper-case letters in the host name, STONITH via
> libvirt fails.
> 
> This patch lets STONITH via libvirt succeed in the following settings:
> 
>  * host name (upper case), hostlist (upper case), domain_id on libvirt (upper case)
>  * host name (upper case), hostlist (lower case), domain_id on libvirt (lower case)
>  * host name (lower case), hostlist (upper case), domain_id on libvirt (upper case)
>  * host name (lower case), hostlist (lower case), domain_id on libvirt (lower case)
> 
> 
> However, in the following settings STONITH via libvirt still causes an
> error. In these cases the user must make the case of the host name
> managed by libvirt match the host name specified in hostlist.
> 
>  * host name (upper case), hostlist (lower case), domain_id on libvirt (upper case)
>  * host name (upper case), hostlist (upper case), domain_id on libvirt (lower case)
>  * host name (lower case), hostlist (lower case), domain_id on libvirt (upper case)
>  * host name (lower case), hostlist (upper case), domain_id on libvirt (lower case)
> 
> 
> In short, this patch lets STONITH via libvirt succeed when the host
> name is set in upper case.
> 
> Best Regards,
> Hideo Yamauchi.
> 
> 
> 
> 
> - Original Message -
>>  From: "renayama19661...@ybb.ne.jp" 
> 
>>  To: Cluster Labs - All topics related to open-source clustering welcomed 
> 
>>  Cc: 
>>  Date: 2015/9/15, Tue 03:28
>>  Subject: Re: [ClusterLabs] [Patch][glue][external/libvirt] Conversion to a 
> lower case of hostlist.
>> 
>>  Hi Dejan,
>> 
>>>   I suppose that you'll send another one? I can vaguely recall
>>>   a problem with non-lower-case node names, but not the specifics.
>>>   Is that supposed to be handled within a stonith agent?
>> 
>> 
>>  Yes.
>>  We are making a different patch now.
>>  With that patch, non-lower-case node names are handled within the
>>  stonith agent.
>>  # But the patch cannot cover all patterns.
>> 
>>  Please wait a little longer.
>>  I will send a patch again.
>>  For a new patch, please tell me your opinion.
>> 
>>  Best Regards,
>>  Hideo Yamauchi.
>> 
>> 
>> 
>>  - Original Message -
>>>   From: Dejan Muhamedagic 
>>>   To: ClusterLabs-ML 
>>>   Cc: 
>>>   Date: 2015/9/14, Mon 22:20
>>>   Subject: Re: [ClusterLabs] [Patch][glue][external/libvirt] Conversion to a lower case of hostlist.
>>> 
>>>   Hi Hideo-san,
>>> 
>>>   On Tue, Sep 08, 2015 at 05:28:05PM +0900, renayama19661...@ybb.ne.jp 
> wrote:
    Hi All,
 
    We intend to change some patches.
    We withdraw this patch.
>>> 
>>>   I suppose that you'll send another one? I can vaguely recall
>>>   a problem with non-lower-case node names, but not the specifics.
>>>   Is that supposed to be handled within a stonith agent?
>>> 
>>>   Cheers,
>>> 
>>>   Dejan
>>> 
    Best Regards,
    Hideo Yamauchi.
 
 
    - Original Message -
    > From: "renayama19661...@ybb.ne.jp" 
>>>   
    > To: ClusterLabs-ML 
    > Cc: 
    > Date: 2015/9/7, Mon 09:06
    > Subject: [ClusterLabs] [Patch][glue][external/libvirt] 
> Conversion 
>>  to a 
>>>   lower case of hostlist.
    > 
    > Hi All,
    > 
    > When a cluster carries out stonith, Pacemaker handles the host
    > name in lower case.
    > When a user sets the host name of the OS and the host name in the
    > hostlist of external/libvirt in capital letters, stonith is not
    > carried out.
    > 
    > Having external/libvirt convert the host name in hostlist to lower
    > case before comparing can protect against this configuration
    > mistake by the user.
    > 
    > Best Regards,
    > Hideo Yamauchi.
    > 
 
    

Re: [ClusterLabs] [Enhancement] When STONITH is not completed, a resource moves.

2015-10-29 Thread renayama19661014
Hi Ken,

Thank you for comments.

> The above is the reason for the behavior you're seeing.
> 
> A fenced node can come back up and rejoin the cluster before the fence
> command reports completion. When Pacemaker sees the rejoin, it assumes
> the fence command completed.
> 
> However in this case, the lost node rejoined on its own while fencing
> was still in progress, so that was an incorrect assumption.
> 
> A proper fix will take some investigation. As a workaround in the
> meantime, you could try increasing the corosync token timeout, so the
> node is not declared lost for brief outages.



We think so, too.
We understand that we can avoid the problem by lengthening the corosync
token timeout.
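
For reference, the token timeout lives in the totem section of
/etc/corosync/corosync.conf; a minimal sketch (the 10000 ms value is
only an example, not a recommendation):

  totem {
      version: 2
      token: 10000
      ...
  }

followed by a corosync restart on all nodes (or a configuration reload,
if your corosync version supports it) to make the change effective.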

If you need the logs from when the problem happened for your
investigation, please contact me.


Many Thanks!
Hideo Yamauchi.


- Original Message -
> From: Ken Gaillot 
> To: users@clusterlabs.org
> Cc: 
> Date: 2015/10/29, Thu 23:09
> Subject: Re: [ClusterLabs] [Enhancement] When STONITH is not completed, a 
> resource moves.
> 
> On 10/28/2015 08:39 PM, renayama19661...@ybb.ne.jp wrote:
>>  Hi All,
>> 
>>  The following problem occurred for us in Pacemaker 1.1.12:
>>  a resource moved while STONITH was not yet completed.
>> 
>>  The following seems to have happened in the cluster.
>> 
>>  Step1) Start the cluster.
>> 
>>  Step2) Node 1 breaks down.
>> 
>>  Step3) Node 1 reconnects before STONITH from node 2 is completed.
>> 
>>  Step4) Step2 and Step3 repeat.
>> 
>>  Step5) STONITH from node 2 is not completed, but a resource moves to node 2.
>> 
>> 
>> 
>>  There was no resource information for node 1 in the pe file from the
>>  time the resource moved to node 2.
>>  (snip)
>>    <node_state ... in_ccm="false" crmd="offline"
>>        crm-debug-origin="do_state_transition" join="down" expected="down">
>>      <transient_attributes ...>
>>        <instance_attributes ...>
>>          <nvpair ... name="last-failure-prm_XXX1" value="1441957021"/>
>>          <nvpair ... name="default_ping_set" value="300"/>
>>          <nvpair ... name="last-failure-prm_XXX2" value="1441956891"/>
>>          <nvpair ... name="shutdown" value="0"/>
>>          <nvpair ... name="probe_complete" value="true"/>
>>        </instance_attributes>
>>      </transient_attributes>
>>    </node_state>
>>    <node_state ... crmd="online" crm-debug-origin="do_state_transition"
>>        uname="node2" join="member" expected="member">
>>      <transient_attributes ...>
>>        <instance_attributes ...>
>>          <nvpair ... name="shutdown" value="0"/>
>>          <nvpair ... name="probe_complete" value="true"/>
>>          <nvpair ... name="default_ping_set" value="300"/>
>>        </instance_attributes>
>>      </transient_attributes>
>>  (snip)
>> 
>>  While STONITH is not completed, the node's information is deleted from
>>  the cib, and the resource movement seems to be caused by the fact that
>>  the cib no longer has the resource information of that node.
>> 
>>  The trigger of the problem was that the cluster communication became
>>  unstable. However, this behavior of the cluster is itself a problem.
>> 
>>  This problem has not occurred in Pacemaker 1.1.13 so far.
>>  However, as far as I can see from the source code, the processing is
>>  the same.
>> 
>>  Shouldn't the deletion of the node information be performed only after
>>  all the new node information has been gathered?
>> 
>>   * crmd/callback.c
>>  (snip)
>>  void
>>  peer_update_callback(enum crm_status_type type, crm_node_t * node, const void *data)
>>  {
>>  (snip)
>>       if (down) {
>>              const char *task = crm_element_value(down->xml, XML_LRM_ATTR_TASK);
>> 
>>              if (alive && safe_str_eq(task, CRM_OP_FENCE)) {
>>                  crm_info("Node return implies stonith of %s (action %d) completed",
>>                           node->uname, down->id);
> 
> The above is the reason for the behavior you're seeing.
> 
> A fenced node can come back up and rejoin the cluster before the fence
> command reports completion. When Pacemaker sees the rejoin, it assumes
> the fence command completed.
> 
> However in this case, the lost node rejoined on its own while fencing
> was still in progress, so that was an incorrect assumption.
> 
> A proper fix will take some investigation. As a workaround in the
> meantime, you could try increasing the corosync token timeout, so the
> node is not declared lost for brief outages.
> 
>>                  st_fail_count_reset(node->uname);
>> 
>>                  erase_status_tag(node->uname, XML_CIB_TAG_LRM, cib_scope_local);
>>                  erase_status_tag(node->uname, XML_TAG_TRANSIENT_NODEATTRS, cib_scope_local);
>>                  /* down->confirmed = TRUE; Only stonith-ng returning should imply completion */
>>                  down->sent_update = TRUE;       /* Prevent tengine_stonith_callback() from calling send_stonith_update() */
>> 
>>  (snip)
>> 
>> 
>>   * I have the log, but cannot attach it because it contains user
>>     information.
>>   * Please contact me by email if you need it.
>> 
>> 
>>  These contents are registered with Bugzilla.
>>   * http://bugs.clusterlabs.org/show_bug.cgi?id=5254
>> 
>> 
>>  Best Regards,
>>  Hideo Yamauchi.
>>