I upgraded the software to:

[EMAIL PROTECTED] hb]# rpm -qa | egrep -i "pacemake|heartbeat|openais"
libopenais2-0.80.3-11.1
heartbeat-2.99.2-4.1
libheartbeat2-2.99.2-4.1
heartbeat-resources-2.99.2-4.1
libpacemaker3-1.0.1-1.1
pacemaker-pygui-1.4-11.5
heartbeat-common-2.99.2-4.1
openais-0.80.3-11.1
pacemaker-1.0.1-1.1
I still cannot get pingd running! (non-symmetrical cluster)
I have never used clones before :-(
<clone id="pingd-clone">
  <primitive id="pingd" provider="heartbeat" class="ocf" type="pingd">
    <instance_attributes id="instance_attributes.id49788">
      <nvpair id="pingd-dampen" name="dampen" value="5s"/>
      <nvpair id="pingd-multiplier" name="multiplier" value="1000"/>
      <nvpair id="pingd-hosts" name="host_list" value="192.168.201.1"/>
    </instance_attributes>
    <meta_attributes id="primitive-pingd.meta"/>
  </primitive>
  <meta_attributes id="clone-pingd-clone.meta"/>
</clone>
<rsc_location id="resource_its_vip-connected" rsc="resource_its_vip">
  <rule id="pingd-exclude-rule" score="-INFINITY">
    <expression id="expression.id49786" attribute="pingd"
        operation="not_defined"/>
  </rule>
  <rule id="pingd-prefer-rule" score-attribute="pingd">
    <expression id="pingd-prefer" attribute="pingd" operation="defined"/>
  </rule>
</rsc_location>
<rsc_location id="pingd-prefer-rule-dtbaims" rsc="pingd"
    node="dtbaims" score="1"/>
<rsc_location id="pingd-prefer-rule-itbaims" rsc="pingd"
    node="itbaims" score="1"/>
</rsc_location>
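(Could the problem be that these constraints reference the child primitive
"pingd" rather than the clone it lives in? In a non-symmetric cluster the
clone instances need a constraint that actually attaches to a top-level
resource before they are allowed to run anywhere. A sketch, assuming rsc
should name the clone id; the pingd-clone-prefer-* ids are invented here,
only the rsc value changes:

<rsc_location id="pingd-clone-prefer-dtbaims" rsc="pingd-clone"
    node="dtbaims" score="1"/>
<rsc_location id="pingd-clone-prefer-itbaims" rsc="pingd-clone"
    node="itbaims" score="1"/>
)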
All I get is:

Resource Group: group_its
    resource_its_drbd   (heartbeat:its_drbddisk):        Started itbaims
    resource_its_fs     (ocf::heartbeat:its_Filesystem): Started itbaims
    resource_its_vip    (ocf::heartbeat:IPaddr):         Started itbaims
...CUT...
Clone Set: pingd-clone
    pingd:0     (ocf::heartbeat:pingd): Stopped
    pingd:1     (ocf::heartbeat:pingd): Stopped
and from crm_verify:

[EMAIL PROTECTED] 20081114]# crm_verify -L -V
crm_verify[31260]: 2008/11/19_14:27:13 WARN: unpack_rsc_location: No resource (con=pingd-prefer-rule-dtbaims, rsc=pingd)
crm_verify[31260]: 2008/11/19_14:27:13 WARN: unpack_rsc_location: No resource (con=pingd-prefer-rule-itbaims, rsc=pingd)
crm_verify[31260]: 2008/11/19_14:27:13 WARN: native_color: Resource resource_its_vip cannot run anywhere
crm_verify[31260]: 2008/11/19_14:27:13 WARN: native_color: Resource resource_its_oracle cannot run anywhere
...CUT...
crm_verify[31260]: 2008/11/19_14:27:13 WARN: native_color: Resource pingd:0 cannot run anywhere
crm_verify[31260]: 2008/11/19_14:27:13 WARN: native_color: Resource pingd:1 cannot run anywhere
What should rsc= be set to in the location constraints?
How do I get pingd to run?
> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:linux-ha-
> [EMAIL PROTECTED] On Behalf Of Andrew Beekhof
> Sent: Tuesday, 4 November 2008 9:05 PM
> To: General Linux-HA mailing list
> Subject: Re: [Linux-HA] pingd - clones, non-symmetrical
> cluster,rsc_location rules - HA 2.99.1, pacemaker 1.0
>
> On Tue, Nov 4, 2008 at 11:31, Adrian Chapela
> <[EMAIL PROTECTED]> wrote:
> > Andrew Beekhof wrote:
> >>
> >> On Thu, Oct 30, 2008 at 13:00, Adrian Chapela
> >> <[EMAIL PROTECTED]> wrote:
> >>
> >>>
> >>> Alex Strachan wrote:
> >>>
> >>>>
> >>>> Hi All,
> >>>>
> >>>> HA non-symmetrical cluster with two nodes; dtbaims, itbaims.
> >>>> HA 2.99.1, pacemaker 1.0
> >>>>
> >>>>
> >>>
> >>> You need to use the latest version of Pacemaker; pingd is broken in
> >>> Pacemaker 1.0 stable.
> >>>
> >>> I am trying some pingd configurations, with bad results too. The latest
> >>> Pacemaker also seems to have a bug: a pingd issue related to stopping
> >>> the cib..
> >>>
> >>
> >> Which bug is this?
> >>
> >
> > I don't think it is a bug. This mail is older than the latest releases,
> > and I had some problems with the pingd rules. In the latest release the
> > code is OK.
> >
> > Does the bug still exist in the stable release?
>
> right - I've started testing for 1.0.1
> _______________________________________________
> Linux-HA mailing list
> [email protected]
> http://lists.linux-ha.org/mailman/listinfo/linux-ha
> See also: http://linux-ha.org/ReportingProblems