Hi,

I'm now using the latest xen0 stonith plugin, the one Serge attached
to this thread on 2008-02-28.
I thought I had configured everything correctly, but it seems the
stonith clone cannot be started on my 2nd node.
I must admit I configured the clone via hb_gui, and I still have some issues.
As a result, the stonith plugin only works to reset my 2nd node,
not vice versa.
The version I am using is heartbeat-2.1.2-3 from the CentOS 5.1 repository.

Does anyone have a clue where the failure could be?

I have attached the cib.xml to this email.
Below are the cib stonith section, the crm_verify output, and the error messages.

Perhaps you could give me a hint, Serge? What did you do differently
from me?

Any hints or clues would be greatly appreciated.

Thanks in advance,
 Lino

node1: mysql1
node2: mysql2
xen-host: simulator

       <clone id="DoFencing">
         <instance_attributes id="DoFencing_instance_attrs">
           <attributes>
             <nvpair id="DoFencing_clone_max" name="clone_max" value="2"/>
             <nvpair id="DoFencing_clone_node_max" name="clone_node_max" value="1"/>
             <nvpair id="DoFencing_target_role" name="target_role" value="started"/>
           </attributes>
         </instance_attributes>
         <primitive class="stonith" type="external/xen0" provider="heartbeat" id="child_DoFencing">
           <instance_attributes id="child_DoFencing_instance_attrs">
             <attributes>
               <nvpair name="target_role" id="child_DoFencing_target_role" value="started"/>
               <nvpair name="hostlist" id="5525c381-5956-4564-af3d-2bc7b547812a" value="mysql1:mysql1.cfg mysql2:mysql2.cfg"/>
               <nvpair id="65feeaf5-501f-4648-a155-83b79b587fbf" name="dom0" value="simulator"/>
             </attributes>
           </instance_attributes>
         </primitive>
       </clone>
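The hostlist value above uses the extended node:config syntax from Serge's plugin. As an illustration only (this is not the plugin's actual code), here is a small sh sketch of how such an entry splits, with the config file defaulting to /etc/xen/<node>.cfg when omitted, as Serge describes further down:

```shell
#!/bin/sh
# Illustrative sketch only -- not the actual plugin code.
# Splits one hostlist entry of the form "node" or "node:config_file";
# when no config file is given, falls back to /etc/xen/<node>.cfg.
parse_entry() {
    node=${1%%:*}        # everything before the first ':'
    config=${1#*:}       # everything after the first ':'
    if [ "$config" = "$1" ]; then
        # no ':' present -> use the default config path
        config="/etc/xen/$node.cfg"
    fi
    echo "$node $config"
}

parse_entry "mysql1:mysql1.cfg"   # -> mysql1 mysql1.cfg
parse_entry "mysql2"              # -> mysql2 /etc/xen/mysql2.cfg
```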

If I use crm_verify I get the following results:
=============================================================
crm_verify[6169]: 2008/03/03_17:06:55 WARN: unpack_rsc_op: Processing
failed op (child_DoFencing:0_start_0) on mysql2
crm_verify[6169]: 2008/03/03_17:06:55 WARN: unpack_rsc_op: Handling
failed start for child_DoFencing:0 on mysql2
crm_verify[6169]: 2008/03/03_17:06:55 WARN: unpack_rsc_op: Processing
failed op (child_DoFencing:1_start_0) on mysql2
crm_verify[6169]: 2008/03/03_17:06:55 WARN: unpack_rsc_op: Handling
failed start for child_DoFencing:1 on mysql2
=============================================================

Furthermore, I get the following errors in my log:
=============================================================
Mar  3 16:29:42 mysql2 crmd: [1478]: ERROR: process_lrm_event: LRM
operation child_DoFencing:0_start_0 (call=22, rc=1) Error unknown error
Mar  3 16:29:46 mysql2 crmd: [1478]: ERROR: process_lrm_event: LRM
operation child_DoFencing:1_start_0 (call=24, rc=1) Error unknown error
Mar  3 16:35:33 mysql2 crmd: [1478]: ERROR: process_lrm_event: LRM
operation child_DoFencing:1_start_0 (call=28, rc=1) Error unknown error
Mar  3 16:46:40 mysql2 crmd: [1477]: ERROR: process_lrm_event: LRM
operation child_DoFencing:0_start_0 (call=10, rc=1) Error unknown error
Mar  3 16:46:45 mysql2 crmd: [1477]: ERROR: process_lrm_event: LRM
operation child_DoFencing:1_start_0 (call=12, rc=1) Error unknown error
==============================================================
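One thing worth noting for anyone who hits the same start failures: the plugin reaches Dom0 from each DomU over ssh, so the clone instance cannot start on a node where root has no non-interactive ssh access to Dom0. A minimal example of root's ~/.ssh/config on each DomU, along the lines Dejan suggests below (the host name matches my setup; the key file name is only an example):

```
# /root/.ssh/config on each DomU -- example values only
Host simulator
    User root
    IdentityFile ~/.ssh/id_rsa
    StrictHostKeyChecking no
```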

Serge Dubrouski wrote:
> Attached.
> 
> On Thu, Feb 28, 2008 at 3:35 AM, Dejan Muhamedagic <[EMAIL PROTECTED]> wrote:
>> Hi Serge,
>>
>>
>>  On Tue, Feb 26, 2008 at 09:46:14AM -0700, Serge Dubrouski wrote:
>>  > Dejan -
>>  >
>>  > I found a compromise :-) Attached is a version of that plugin that
>>  > supports following parameters:
>>  >
>>  > 1. hostlist. A string with a list of controlled nodes separated by
>>  > spaces or commas. A required parameter. In its simple form it's just
>>  > a list of nodes. If one needs to use a non-standard Xen configuration,
>>  > one can use the extended form of this parameter: "node1_name:config1_file
>>  > node2_name:config2_file". If the config file isn't given, it defaults
>>  > to /etc/xen/node_name.cfg
>>  >
>>  > 2. Dom0. Name of Dom0 Xen node. A required parameter.
>>  >
>>  > 3. ssh_command. SSH command that is used to ssh from DomU to Dom0.
>>  > Defaults to "/usr/bin/ssh -q -x -n -l root". If one wants to use SSH
>>  > keys for higher security he needs to use this parameter.
>>
>>  This is not necessary. One can set up everything needed in
>>  ~/.ssh/config on a per-host basis, i.e. the key or user to connect
>>  with. Since the plugin always runs as root, you might leave out
>>  '-l root' as well.
>>
>>
>>  > So in this form this plugin can be configured as a clone or as a set
>>  > of resources and location constraints.
>>  >
>>  >
>>  > I'd be very pleased if this plugin gets its way into Linux-HA 
>> distribution.
>>  >
>>
>>  Sure. Could you please just drop the ssh_command parameter.
>>
>>  Many thanks for the contribution.
>>
>>  Cheers,
>>
>>  Dejan
>>
>>
>>
>>  >
>>  > On Tue, Feb 26, 2008 at 8:45 AM, Serge Dubrouski <[EMAIL PROTECTED]> 
>> wrote:
>>  > >
>>  > > On Mon, Feb 25, 2008 at 4:02 PM, Dejan Muhamedagic <[EMAIL PROTECTED]> 
>> wrote:
>>  > >  > Hi,
>>  > >  >
>>  > >  >
>>  > >  >  On Mon, Feb 25, 2008 at 12:17:40PM -0700, Serge Dubrouski wrote:
>>  > >  >  > On Mon, Feb 25, 2008 at 12:10 PM, Dejan Muhamedagic <[EMAIL 
>> PROTECTED]> wrote:
>>  > >  >  > > Hi,
>>  > >  >  > >
>>  > >  >  > >
>>  > >  >  > >  On Mon, Feb 25, 2008 at 11:27:38AM -0700, Serge Dubrouski 
>> wrote:
>>  > >  >  > I would love to do that and already tried it. Though we didn't
>>  > >  >  > come to an agreement on what the configuration parameters
>>  > >  >  > should look like.
>>  > >  >  > >
>>  > >  >  > >  Why? Was there a discussion on the list about it? The
>>  > >  >  > >  configuration is a bit unusual. Other stonith agents take named
>>  > >  >  > >  parameters. Though this kind of configuration also works, I'd
>>  > >  >  > >  prefer something similar to the others, e.g.
>>  > >  >  >
>>  > >  >  > Yes there was a discussion
>>  > >  >  > 
>> http://lists.community.tummy.com/pipermail/linux-ha-dev/2007-February/
>>  > >  >
>>  > >  >  It's a long one and peters out inconclusively.
>>  > >  >
>>  > >  >
>>  > >  >  > See "new stonith external plugin". The config parameter
>>  > >  >  > hostlist is actually derived from the original ssh plugin. I
>>  > >  >  > needed a full list of all controlled nodes and preferred to
>>  > >  >  > have it as one parameter.
>>  > >  >  >
>>  > >  >  > >
>>  > >  >  > >  hostname dom0 (or xenhost) config
>>  > >  >  > >
>>  > >  >  >
>>  > >  >  > That would work if I needed just a dom0 host, but I also need a 
>> list
>>  > >  >  > of controlled nodes and probably configuration files.
>>  > >  >
>>  > >  >  That's why you can have several instances of a stonith resource
>>  > >  >  (see e.g. external/ipmi). Each of them would run with different
>>  > >  >  parameters. What I meant was:
>>  > >  >
>>  > >  >  hostname: xen vm
>>  > >  >  dom0: xen dom0
>>  > >  >  config: vm configuration file
>>  > >  >
>>  > >
>>  > >  That's possible and easy to do, but I'm not sure it would be better.
>>  > >  The current version allows configuring a clone. The new version
>>  > >  would require configuring a separate resource for each node and
>>  > >  creating location constraints for each of them. In my opinion that
>>  > >  would be a more complex configuration. Maybe I'm missing something.
>>  > >
>>  >
>>  >
>>  >
>>  > --
>>  > Serge Dubrouski.
>>
>>
>>
>>  > _______________________________________________
>>  > Linux-HA mailing list
>>  > [email protected]
>>  > http://lists.linux-ha.org/mailman/listinfo/linux-ha
>>  > See also: http://linux-ha.org/ReportingProblems
>>
>>  --
>>  Dejan
>>
>>
>>
> 
> 
> 
> 
> ------------------------------------------------------------------------
> 


 <cib admin_epoch="0" have_quorum="true" ignore_dtd="false" num_peers="2" cib_feature_revision="1.3" generated="true" epoch="267" num_updates="1" cib-last-written="Mon Mar  3 17:05:23 2008" ccm_transition="2" dc_uuid="5d57e711-82d6-4b28-945f-27ffde25a877">
   <configuration>
     <crm_config>
       <cluster_property_set id="cib-bootstrap-options">
         <attributes>
           <nvpair id="cib-bootstrap-options-symmetric-cluster" name="symmetric-cluster" value="true"/>
           <nvpair id="cib-bootstrap-options-no-quorum-policy" name="no-quorum-policy" value="stop"/>
           <nvpair id="cib-bootstrap-options-default-resource-stickiness" name="default-resource-stickiness" value="0"/>
           <nvpair id="cib-bootstrap-options-default-resource-failure-stickiness" name="default-resource-failure-stickiness" value="0"/>
           <nvpair name="stonith-enabled" id="cib-bootstrap-options-stonith-enabled" value="true"/>
           <nvpair id="cib-bootstrap-options-stonith-action" name="stonith-action" value="reboot"/>
           <nvpair id="cib-bootstrap-options-stop-orphan-resources" name="stop-orphan-resources" value="true"/>
           <nvpair id="cib-bootstrap-options-stop-orphan-actions" name="stop-orphan-actions" value="true"/>
           <nvpair id="cib-bootstrap-options-remove-after-stop" name="remove-after-stop" value="false"/>
           <nvpair id="cib-bootstrap-options-short-resource-names" name="short-resource-names" value="true"/>
           <nvpair id="cib-bootstrap-options-transition-idle-timeout" name="transition-idle-timeout" value="5min"/>
           <nvpair id="cib-bootstrap-options-default-action-timeout" name="default-action-timeout" value="15s"/>
           <nvpair id="cib-bootstrap-options-is-managed-default" name="is-managed-default" value="true"/>
           <nvpair name="last-lrm-refresh" id="cib-bootstrap-options-last-lrm-refresh" value="1204558520"/>
         </attributes>
       </cluster_property_set>
     </crm_config>
     <nodes>
       <node id="5d57e711-82d6-4b28-945f-27ffde25a877" uname="mysql2" type="normal"/>
       <node id="b5eb8171-9dfe-4854-b998-4487d758c644" uname="mysql1" type="normal"/>
     </nodes>
     <resources>
       <group id="group_1">
         <primitive class="heartbeat" id="drbddisk_1" provider="heartbeat" type="drbddisk">
           <operations>
             <op id="drbddisk_1_mon" interval="120s" name="monitor" timeout="60s"/>
           </operations>
           <instance_attributes id="drbddisk_1_inst_attr">
             <attributes>
               <nvpair id="drbddisk_1_attr_1" name="1" value="r0"/>
               <nvpair id="drbddisk_1_target_role" name="target_role" value="started"/>
             </attributes>
           </instance_attributes>
         </primitive>
         <primitive class="ocf" id="Filesystem_2" provider="heartbeat" type="Filesystem">
           <operations>
             <op id="Filesystem_2_mon" interval="120s" name="monitor" timeout="60s"/>
           </operations>
           <instance_attributes id="Filesystem_2_inst_attr">
             <attributes>
               <nvpair id="Filesystem_2_attr_0" name="device" value="/dev/drbd0"/>
               <nvpair id="Filesystem_2_attr_1" name="directory" value="/pool/mysql/"/>
               <nvpair id="Filesystem_2_attr_2" name="fstype" value="ext3"/>
             </attributes>
           </instance_attributes>
         </primitive>
         <primitive class="ocf" id="IPaddr_172_16_100_110" provider="heartbeat" type="IPaddr">
           <operations>
             <op id="IPaddr_172_16_100_110_mon" interval="5s" name="monitor" timeout="5s"/>
           </operations>
           <instance_attributes id="IPaddr_172_16_100_110_inst_attr">
             <attributes>
               <nvpair id="IPaddr_172_16_100_110_attr_0" name="ip" value="172.16.100.110"/>
             </attributes>
           </instance_attributes>
         </primitive>
         <primitive id="mysqld_its" class="heartbeat" type="mysqld_its" provider="heartbeat">
           <instance_attributes id="mysqld_its_instance_attrs">
             <attributes>
               <nvpair id="mysqld_its_target_role" name="target_role" value="started"/>
             </attributes>
           </instance_attributes>
           <operations>
             <op id="24c46ab5-e73e-43b5-a312-8d08757bb42d" name="start" timeout="30" start_delay="5"/>
           </operations>
         </primitive>
         <primitive id="mysqld_ky2" class="heartbeat" type="mysqld_ky2" provider="heartbeat">
           <instance_attributes id="mysqld_ky2_instance_attrs">
             <attributes>
               <nvpair id="mysqld_ky2_target_role" name="target_role" value="started"/>
             </attributes>
           </instance_attributes>
           <operations>
             <op id="82e8563c-238a-416e-84c6-7368316a1140" name="start" timeout="30" start_delay="5"/>
           </operations>
         </primitive>
         <primitive class="heartbeat" type="mysqld_test" provider="heartbeat" id="mysqld_test">
           <instance_attributes id="mysqld_test_instance_attrs">
             <attributes>
               <nvpair name="target_role" id="mysqld_test_target_role" value="started"/>
             </attributes>
           </instance_attributes>
           <operations>
             <op id="7236f1c9-3341-4c56-a87f-ac3c2a4b5faa" name="start" timeout="30" start_delay="5"/>
           </operations>
         </primitive>
       </group>
       <clone id="DoFencing">
         <instance_attributes id="DoFencing_instance_attrs">
           <attributes>
             <nvpair id="DoFencing_clone_max" name="clone_max" value="2"/>
             <nvpair id="DoFencing_clone_node_max" name="clone_node_max" value="1"/>
             <nvpair id="DoFencing_target_role" name="target_role" value="started"/>
           </attributes>
         </instance_attributes>
         <primitive class="stonith" type="external/xen0" provider="heartbeat" id="child_DoFencing">
           <instance_attributes id="child_DoFencing_instance_attrs">
             <attributes>
               <nvpair name="target_role" id="child_DoFencing_target_role" value="started"/>
               <nvpair name="hostlist" id="5525c381-5956-4564-af3d-2bc7b547812a" value="mysql1:mysql1.cfg mysql2:mysql2.cfg"/>
               <nvpair id="65feeaf5-501f-4648-a155-83b79b587fbf" name="dom0" value="simulator"/>
             </attributes>
           </instance_attributes>
         </primitive>
       </clone>
     </resources>
     <constraints>
       <rsc_location id="rsc_location_group_1" rsc="group_1">
         <rule id="prefered_location_group_1" score="100">
           <expression attribute="#uname" id="prefered_location_group_1_expr" operation="eq" value="mysql1"/>
         </rule>
       </rsc_location>
     </constraints>
   </configuration>
 </cib>

_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
