On Mon, Mar 3, 2008 at 12:08 PM, Lino Moragon <[EMAIL PROTECTED]> wrote:
> Serge Dubrouski wrote:
> > The configuration looks right to me. I even tested it and it worked
> > fine on my test cluster. So the obvious hints are:
> >
> > 1. Check that you really put the script on the second node and made
> > it executable.
> >
> That was my first error, but I noticed an error message in the logfile
> and corrected it.
> So I can exclude this possibility.
>
>
> > 2. The nodes must be able to ping each other. That's programmed into
> > the "status" function.
> >
> What do you mean by "programmed in a status function"? From each node I can
> ping the other one. Name resolution also works fine.
I mean that the "status" function in the script pings each node from the
hostlist. If it can't ping one of them, it fails with exit code 1.
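
To illustrate, the status check amounts to something like this (a
simplified sketch from memory, not the literal plugin code; the hostlist
value is the one from your configuration):

  #!/bin/sh
  # Sketch: fail unless every controlled node answers a ping.
  # hostlist entries may be "node" or "node:config_file".
  hostlist="mysql1:mysql1.cfg mysql2:mysql2.cfg"
  for h in $hostlist; do
      node=`echo $h | cut -d: -f1`    # strip the optional config suffix
      ping -c 1 -w 2 "$node" >/dev/null 2>&1 || exit 1
  done
  exit 0

So if the start fails on mysql2, check that mysql2 can resolve and ping
every name in the hostlist exactly as it is spelled there.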
>
> Do you think it could possibly be an issue with my current version
> (2.1.2-3)?
No, it can't, because I use the same version.
> With which version did you try the configurations?
> When you start the clone for the first time, both resources should start
> (one on node1 and one on node2), is that correct?
Right.
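
When both instances are running, crm_mon should show the clone roughly
like this (the exact formatting differs between versions):

  Clone Set: DoFencing
      child_DoFencing:0   (stonith:external/xen0):   Started mysql1
      child_DoFencing:1   (stonith:external/xen0):   Started mysql2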
> Could it be another configuration error outside the clone section in the
> CIB? Could you perhaps attach your whole CIB?
Attached.
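
By the way, you can exercise the plugin outside of heartbeat with the
stonith(8) command line tool, which makes start failures much easier to
debug. Something like the following, run on mysql2 since that's where the
instances fail to start (check stonith(8) for the exact option syntax of
your version; the parameter values are the ones from your CIB):

  # list the hosts the plugin claims to control
  stonith -t external/xen0 hostlist="mysql1:mysql1.cfg mysql2:mysql2.cfg" dom0=simulator -l
  # run the status check that the start operation depends on
  stonith -t external/xen0 hostlist="mysql1:mysql1.cfg mysql2:mysql2.cfg" dom0=simulator -S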
> Thanks for your support so far, I'm very grateful.
> Lino
>
>
>
> >
> >
> > On Mon, Mar 3, 2008 at 9:16 AM, Lino Moragon <[EMAIL PROTECTED]> wrote:
> >
> >> Hi,
> >>
> >> I'm now using the most recent xen0 stonith plugin, the one Serge
> >> attached to this thread on 2008-02-28.
> >> I thought I had configured everything correctly, but it seems that the
> >> stonith clone cannot be started on my 2nd node.
> >> I must admit I configured the clone via hb_gui, but I still have some
> >> issues.
> >> For this reason the stonith plugin only works to reset my 2nd node and
> >> not vice versa.
> >> The version I am using is heartbeat-2.1.2-3 from the CentOS 5.1 repository.
> >>
> >> Does anyone have a clue where the failure could be?
> >>
> >> I attach the cib.xml to this email.
> >> Below follow the CIB stonith section, the crm_verify output, and the
> >> error messages.
> >>
> >> Perhaps you could give me a hint, Serge? What did you do differently
> >> from me?
> >>
> >> I would be very glad for any hints and clues.
> >>
> >> Thanks in advance,
> >> Lino
> >>
> >> node1: mysql1
> >> node2: mysql2
> >> xen-host: simulator
> >>
> >> <clone id="DoFencing">
> >>   <instance_attributes id="DoFencing_instance_attrs">
> >>     <attributes>
> >>       <nvpair id="DoFencing_clone_max" name="clone_max" value="2"/>
> >>       <nvpair id="DoFencing_clone_node_max" name="clone_node_max" value="1"/>
> >>       <nvpair id="DoFencing_target_role" name="target_role" value="started"/>
> >>     </attributes>
> >>   </instance_attributes>
> >>   <primitive class="stonith" type="external/xen0" provider="heartbeat"
> >>       id="child_DoFencing">
> >>     <instance_attributes id="child_DoFencing_instance_attrs">
> >>       <attributes>
> >>         <nvpair name="target_role" id="child_DoFencing_target_role"
> >>             value="started"/>
> >>         <nvpair name="hostlist" id="5525c381-5956-4564-af3d-2bc7b547812a"
> >>             value="mysql1:mysql1.cfg mysql2:mysql2.cfg"/>
> >>         <nvpair id="65feeaf5-501f-4648-a155-83b79b587fbf" name="dom0"
> >>             value="simulator"/>
> >>       </attributes>
> >>     </instance_attributes>
> >>   </primitive>
> >> </clone>
> >>
> >> If I use crm_verify I get the following results:
> >> =============================================================
> >> crm_verify[6169]: 2008/03/03_17:06:55 WARN: unpack_rsc_op: Processing failed op (child_DoFencing:0_start_0) on mysql2
> >> crm_verify[6169]: 2008/03/03_17:06:55 WARN: unpack_rsc_op: Handling failed start for child_DoFencing:0 on mysql2
> >> crm_verify[6169]: 2008/03/03_17:06:55 WARN: unpack_rsc_op: Processing failed op (child_DoFencing:1_start_0) on mysql2
> >> crm_verify[6169]: 2008/03/03_17:06:55 WARN: unpack_rsc_op: Handling failed start for child_DoFencing:1 on mysql2
> >> =============================================================
> >>
> >> Furthermore, I get the following errors in my log:
> >> =============================================================
> >> Mar 3 16:29:42 mysql2 crmd: [1478]: ERROR: process_lrm_event: LRM operation child_DoFencing:0_start_0 (call=22, rc=1) Error unknown error
> >> Mar 3 16:29:46 mysql2 crmd: [1478]: ERROR: process_lrm_event: LRM operation child_DoFencing:1_start_0 (call=24, rc=1) Error unknown error
> >> Mar 3 16:35:33 mysql2 crmd: [1478]: ERROR: process_lrm_event: LRM operation child_DoFencing:1_start_0 (call=28, rc=1) Error unknown error
> >> Mar 3 16:46:40 mysql2 crmd: [1477]: ERROR: process_lrm_event: LRM operation child_DoFencing:0_start_0 (call=10, rc=1) Error unknown error
> >> Mar 3 16:46:45 mysql2 crmd: [1477]: ERROR: process_lrm_event: LRM operation child_DoFencing:1_start_0 (call=12, rc=1) Error unknown error
> >> ==============================================================
> >>
> >>
> >>
> >>
> >>
> >> Serge Dubrouski wrote:
> >> > Attached.
> >> >
> >> > On Thu, Feb 28, 2008 at 3:35 AM, Dejan Muhamedagic <[EMAIL PROTECTED]> wrote:
> >> >> Hi Serge,
> >> >>
> >> >>
> >> >> On Tue, Feb 26, 2008 at 09:46:14AM -0700, Serge Dubrouski wrote:
> >> >> > Dejan -
> >> >> >
> >> >> > I found a compromise :-) Attached is a version of the plugin that
> >> >> > supports the following parameters:
> >> >> >
> >> >> > 1. hostlist. A string with the list of controlled nodes, separated
> >> >> > by spaces or commas. A required parameter. In its simple form it's
> >> >> > just a list of nodes. If one needs a non-standard Xen configuration,
> >> >> > one can use the extended form "node1_name:config1_file
> >> >> > node2_name:config2_file". If the config file isn't given, it
> >> >> > defaults to /etc/xen/node_name.cfg (see the parsing sketch after
> >> >> > this list).
> >> >> >
> >> >> > 2. dom0. The name of the Xen Dom0 host. A required parameter.
> >> >> >
> >> >> > 3. ssh_command. The SSH command used to ssh from the DomU to Dom0.
> >> >> > Defaults to "/usr/bin/ssh -q -x -n -l root". If one wants to use
> >> >> > SSH keys for higher security, this is the parameter to set.
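> >> >> >
> >> >> > To illustrate the extended hostlist form, the node:config handling
> >> >> > is essentially this (a sketch, not the exact plugin code):
> >> >> >
> >> >> >   hostlist="node1 node2:/xen/node2.cfg"     # example value
> >> >> >   for h in $hostlist; do
> >> >> >       node=`echo $h | cut -d: -f1`
> >> >> >       cfg=`echo $h | cut -s -d: -f2`            # empty when no ":" given
> >> >> >       [ -z "$cfg" ] && cfg="/etc/xen/$node.cfg" # the documented default
> >> >> >       echo "$node -> $cfg"
> >> >> >   done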
> >> >>
> >> >> The ssh_command parameter is not necessary. One can set up everything
> >> >> needed in ~/.ssh/config on a per-host basis, i.e. the key or user to
> >> >> connect with. Since the plugin always runs as root, you might leave
> >> >> out '-l root' as well.
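> >> >>
> >> >> For example, root's ~/.ssh/config on each DomU could contain
> >> >> something like this ("dom0" standing for the Dom0's hostname, the
> >> >> key path being just an illustration):
> >> >>
> >> >>   Host dom0
> >> >>       User root
> >> >>       IdentityFile ~/.ssh/stonith_id_rsa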
> >> >>
> >> >>
> >> >> > So in this form this plugin can be configured as a clone or as a
> >> >> > set of resources and location constraints.
> >> >> >
> >> >> >
> >> >> > I'd be very pleased if this plugin made its way into the Linux-HA
> >> >> > distribution.
> >> >> >
> >> >>
> >> >> Sure. Could you please just drop the ssh_command parameter.
> >> >>
> >> >> Many thanks for the contribution.
> >> >>
> >> >> Cheers,
> >> >>
> >> >> Dejan
> >> >>
> >> >>
> >> >>
> >> >> >
> >> >> > On Tue, Feb 26, 2008 at 8:45 AM, Serge Dubrouski <[EMAIL PROTECTED]> wrote:
> >> >> > >
> >> >> > > On Mon, Feb 25, 2008 at 4:02 PM, Dejan Muhamedagic <[EMAIL PROTECTED]> wrote:
> >> >> > > > Hi,
> >> >> > > >
> >> >> > > >
> >> >> > > > On Mon, Feb 25, 2008 at 12:17:40PM -0700, Serge Dubrouski wrote:
> >> >> > > > > On Mon, Feb 25, 2008 at 12:10 PM, Dejan Muhamedagic <[EMAIL PROTECTED]> wrote:
> >> >> > > > > > Hi,
> >> >> > > > > >
> >> >> > > > > >
> >> >> > > > > > On Mon, Feb 25, 2008 at 11:27:38AM -0700, Serge Dubrouski wrote:
> >> >> > > > > > > I would love to do that and already tried it, though we
> >> >> > > > > > > didn't come to an agreement on what the configuration
> >> >> > > > > > > parameters should look like.
> >> >> > > > > >
> >> >> > > > > > Why? Was there a discussion on the list about it? The
> >> >> > > > > > configuration is a bit unusual. Other stonith agents take
> >> >> > > > > > named parameters. Though this kind of configuration also
> >> >> > > > > > works, I'd prefer something similar to the others, e.g.
> >> >> > > > >
> >> >> > > > > Yes, there was a discussion:
> >> >> > > > > http://lists.community.tummy.com/pipermail/linux-ha-dev/2007-February/
> >> >> > > >
> >> >> > > > It's a long one and peters out inconclusively.
> >> >> > > >
> >> >> > > >
> >> >> > > > > See "new stonith external plugin". The config parameter
> >> >> > > > > hostlist is actually derived from the original ssh plugin. I
> >> >> > > > > needed a full list of all controlled nodes and preferred to
> >> >> > > > > have it as one parameter.
> >> >> > > > >
> >> >> > > > > >
> >> >> > > > > > hostname dom0 (or xenhost) config
> >> >> > > > > >
> >> >> > > > >
> >> >> > > > > That would work if I needed just a dom0 host, but I also need
> >> >> > > > > a list of controlled nodes and probably configuration files.
> >> >> > > >
> >> >> > > > That's why you can have several instances of a stonith resource
> >> >> > > > (see e.g. external/ipmi). Each of them would run with different
> >> >> > > > parameters. What I meant was:
> >> >> > > >
> >> >> > > > hostname: xen vm
> >> >> > > > dom0: xen dom0
> >> >> > > > config: vm configuration file
> >> >> > > >
> >> >> > >
> >> >> > > That's possible and easy to do, but I'm not sure it would be
> >> >> > > better. The current version can be configured as a clone. The new
> >> >> > > version would require configuring a separate resource for each
> >> >> > > node and creating location constraints for each of them (see the
> >> >> > > sketch below). In my opinion that would be a more complex
> >> >> > > configuration. Maybe I'm missing something.
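> >> >> > >
> >> >> > > For comparison, the per-node scheme would need something like
> >> >> > > this for every node (a sketch; the ids, names, and score are
> >> >> > > made up):
> >> >> > >
> >> >> > >   <primitive id="fence_node1" class="stonith" type="external/xen0"
> >> >> > >       provider="heartbeat">
> >> >> > >     <instance_attributes id="fence_node1_attrs">
> >> >> > >       <attributes>
> >> >> > >         <nvpair id="fence_node1_hostname" name="hostname" value="node1"/>
> >> >> > >         <nvpair id="fence_node1_dom0" name="dom0" value="dom0_host"/>
> >> >> > >         <nvpair id="fence_node1_config" name="config" value="/etc/xen/node1.cfg"/>
> >> >> > >       </attributes>
> >> >> > >     </instance_attributes>
> >> >> > >   </primitive>
> >> >> > >   <!-- keep the fencing resource off the node it is meant to fence -->
> >> >> > >   <rsc_location id="fence_node1_loc" rsc="fence_node1">
> >> >> > >     <rule id="fence_node1_rule" score="-INFINITY">
> >> >> > >       <expression id="fence_node1_expr" attribute="#uname"
> >> >> > >           operation="eq" value="node1"/>
> >> >> > >     </rule>
> >> >> > >   </rsc_location>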
> >> >> > >
> >> >> >
> >> >> >
> >> >> >
> >> >> > --
> >> >> > Serge Dubrouski.
> >> >>
> >> >> --
> >> >> Dejan
> >> >>
> >> >
>
--
Serge Dubrouski.
<cib admin_epoch="0" have_quorum="1" num_peers="0" cib_feature_revision="1.3" ignore_dtd="false" generated="false" crm_feature_set="2.1" epoch="281" num_updates="1" cib-last-written="Mon Mar 3 11:59:50 2008">
  <configuration>
    <crm_config>
      <cluster_property_set id="cib-bootstrap-options">
        <attributes>
          <nvpair id="cib-bootstrap-options-default_resource_stickiness" name="default-resource-stickiness" value="600"/>
          <nvpair id="cib-bootstrap-options-default_resource_failure_stickiness" name="default-resource-failure-stickiness" value="-520"/>
          <nvpair id="symmetric-cluster" name="symmetric-cluster" value="true"/>
          <nvpair id="stonith-enabled" name="stonith-enabled" value="true"/>
          <nvpair name="last-lrm-refresh" id="cib-bootstrap-options-last-lrm-refresh" value="1203565421"/>
          <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="0.6.0-node: c94b92d550cf57217fd0292a9aa913bcf977651c"/>
        </attributes>
      </cluster_property_set>
    </crm_config>
    <nodes>
      <node id="ad6f19b7-228a-48b7-bae0-f95a838bde2a" uname="fc-node2" type="normal"/>
      <node id="b88f98c6-50f2-463a-a6eb-51abbec645a9" uname="fc-node1" type="normal"/>
    </nodes>
    <resources>
      <clone id="DoFencing">
        <instance_attributes id="fence_attributes">
          <attributes>
            <nvpair id="fence_clone_max" name="clone_max" value="2"/>
            <nvpair id="fence_clone_node_max" name="clone_node_max" value="1"/>
          </attributes>
        </instance_attributes>
        <primitive id="child_DoFencing" class="stonith" type="external/xen0" provider="heartbeat">
          <instance_attributes id="fence_inst_attr">
            <attributes>
              <nvpair id="xen0_hostlist" name="hostlist" value="fc-node1 fc-node2"/>
              <nvpair id="xen0_dom0" name="dom0" value="home"/>
            </attributes>
          </instance_attributes>
        </primitive>
      </clone>
      <group id="myGroup">
        <instance_attributes id="myGroup_instance_attrs">
          <attributes/>
        </instance_attributes>
        <primitive class="ocf" type="IPaddr" provider="heartbeat" id="myIP">
          <instance_attributes id="myIP_attributes">
            <attributes>
              <nvpair id="myIP_ip" name="ip" value="192.168.1.130"/>
            </attributes>
          </instance_attributes>
          <operations>
            <op id="63460aec-8759-4a35-a41c-0e402d5409a0" name="monitor" interval="30s" timeout="30s"/>
            <op id="75bb6f39-b41c-4837-8714-9fc2305fa4c0" name="start" interval="0s" timeout="30s"/>
            <op id="9c694b65-9c32-43d5-8df2-615dd9dbe56e" name="stop" interval="0s" timeout="30s"/>
          </operations>
          <instance_attributes id="myIP">
            <attributes/>
          </instance_attributes>
        </primitive>
        <primitive class="ocf" type="pgsql" provider="heartbeat" id="myPgsql">
          <instance_attributes id="myPgsql_instance_attrs">
            <attributes>
              <nvpair id="pgsql_ctl_opt" name="ctl_opt" value="-w"/>
            </attributes>
          </instance_attributes>
          <operations>
            <op id="pgsql_monitor" name="monitor" interval="30s" timeout="30s"/>
            <op id="pgsql_start" name="start" interval="0s" timeout="30s"/>
            <op id="pgsal_stop" name="stop" interval="0s" timeout="30s"/>
          </operations>
          <instance_attributes id="myPgsql">
            <attributes/>
          </instance_attributes>
        </primitive>
        <instance_attributes id="myGroup">
          <attributes>
            <nvpair id="myGroup-is_managed" name="is_managed" value="true"/>
          </attributes>
        </instance_attributes>
      </group>
    </resources>
    <constraints>
      <rsc_location id="primNode" rsc="myGroup">
        <rule id="prefered_primNode" score="1000">
          <expression attribute="#uname" id="906247e1-1d96-4a63-a80b-13d103d1b31c" operation="eq" value="fc-node1"/>
        </rule>
      </rsc_location>
      <rsc_location id="PGSQL:connected" rsc="myGroup">
        <rule id="PGSQL:connected:rule" score="-INFINITY" boolean_op="or">
          <expression id="PGSQL:connected:expr:undefined" attribute="pingd" operation="not_defined"/>
          <expression id="PGSQL:connected:expr:zero" attribute="pingd" operation="lte" value="0"/>
        </rule>
      </rsc_location>
    </constraints>
  </configuration>
</cib>
_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems