Andrew Beekhof wrote:
> On 10/23/07, Terry L. Inzauro <[EMAIL PROTECTED]> wrote:
>> list,
>>
>> i think i have pingd working properly.
>>
>> /etc/ha.d/ha.cf
>>         apiauth ping gid=root uid=root
>>         respawn root /usr/lib/heartbeat/pingd -m 1000 -d 5s -a default_ping_set
> 
> here you tell pingd to define "default_ping_set"
> 
>> cib.xml locational constraint:
>> <constraints>
>>        <rsc_location id="afp_cl_vpn_loc" rsc="afp_cl_vpn">
>>          <rule id="afp_cl_vpn_pref_1" score="100">
>>            <expression id="afp_cl_vpn_loc_attr_1" attribute="#uname" operation="eq" value="clvpn01"/>
>>          </rule>
>>          <rule id="afp_cl_vpn_loc_pingd_rule" score_attribute="afp_cl_vpn">
>>            <expression id="afp_cl_vpn_loc_pingd_rule_defined" attribute="pingd" operation="defined"/>
> 
> but here you tell the PE to look for "pingd"
> 
> s/pingd/default_ping_set/
> 
>>          </rule>
>>        </rsc_location>
>> </constraints>
>>
>>
>> my question is regarding resource failback: how does one get the resource to move back to the primary node in the event that the resource moves from its primary cluster node to the secondary node as a result of its pingd score?
> 
> it depends on how default-resource-stickiness is set
> if it's > 0, then you need to move it manually with crm_resource -M
> 
>> is this a manual administrative process, or 'should' hb take care of that for you?
>>
>>
>> regards,
>>
>>
>> _Terry
>>


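For the archives, here is the constraint with Andrew's s/pingd/default_ping_set/ fix applied (ids kept as-is; the score_attribute value below is my guess at what was intended, since score_attribute names the node attribute whose value becomes the rule's score):

<constraints>
       <rsc_location id="afp_cl_vpn_loc" rsc="afp_cl_vpn">
         <rule id="afp_cl_vpn_pref_1" score="100">
           <expression id="afp_cl_vpn_loc_attr_1" attribute="#uname" operation="eq" value="clvpn01"/>
         </rule>
         <rule id="afp_cl_vpn_loc_pingd_rule" score_attribute="default_ping_set">
           <expression id="afp_cl_vpn_loc_pingd_rule_defined" attribute="default_ping_set" operation="defined"/>
         </rule>
       </rsc_location>
</constraints>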
OK, after resolving my mistakes in naming, I began to test. The way I understand this to work is that the resource will be placed on the node with the highest score. I'm simulating network failure by placing black-hole routes on the node I wish to test (route add -host 172.16.1.2 gw 127.0.0.1). I would 'assume' that the resource would be moved to the node with the best connectivity, aka the highest score... it does, but it also offlines the node that I added the black-hole route to.
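As a sanity check on that understanding, here is a toy sketch (not Heartbeat code) of how the two rules above might combine into a node score. All names and numbers are assumptions taken from the posted config: the -m 1000 pingd multiplier, the score="100" #uname rule for clvpn01, and score_attribute adding the attribute's value to the score.

```python
# Toy model of score-based placement; node_score is a hypothetical helper.
PINGD_MULTIPLIER = 1000  # from "pingd -m 1000"

def node_score(uname, reachable_ping_nodes):
    score = 0
    if uname == "clvpn01":            # the #uname eq "clvpn01" rule
        score += 100
    if reachable_ping_nodes > 0:      # the "defined" expression holds
        # score_attribute: the attribute's value is added to the score
        score += reachable_ping_nodes * PINGD_MULTIPLIER
    return score

# Both nodes see the ping node: clvpn01 wins on its +100 preference.
print(node_score("clvpn01", 1))   # 1100
print(node_score("clvpn02", 1))   # 1000
# clvpn01 black-holed: 100 < 1000, so the resource moves to clvpn02.
print(node_score("clvpn01", 0))   # 100
```

The resource lands wherever node_score is highest, which matches the observed move when the black-hole route drops clvpn01's pingd contribution.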

How does one reverse the effects of the offlining, short of restarting heartbeat on both nodes?
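One thing that seems worth trying before a full restart (a guess on my part, not something I have verified): remove the black-hole route so pingd can reach the ping node again,

        route del -host 172.16.1.2 gw 127.0.0.1

and then wait for pingd to push the updated attribute so the node's score recovers on the next update interval.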

Is that what pingd is supposed to do? If so, what if I had other resources configured to run on that node?


Note: default-resource-stickiness is set to "zero"
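For completeness, that is set in the crm_config section of cib.xml; a sketch of the nvpair (the ids here are illustrative, not from my actual cib):

<crm_config>
  <cluster_property_set id="cib-bootstrap-options">
    <attributes>
      <nvpair id="opt-stickiness" name="default-resource-stickiness" value="0"/>
    </attributes>
  </cluster_property_set>
</crm_config>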
_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
