Thank you for your response. Luckily I found it while I was searching
for an answer. Thank you http://www.gossamer-threads.com! Apparently
your response was eaten by our spam filter.
I was using the GUI, and it would not let me leave the value field blank
the way it is in "<expression id="myresource:connected-rule-01-expr-01"
attribute="pingd" operation="not_defined"/>". But after adding this
constraint, the behavior is still the same. When I added the line
manually, it was gone after a reboot.
Here is what I've got now:
<rsc_location id="myresource:connected" rsc="My-DRBD-MySQL-group">
  <rule id="prefered_myresource:connected" score="-INFINITY">
    <expression attribute="pingd"
                id="aa4643a1-c7a6-4e2f-bdb7-75d4f45f035a"
                operation="lte" value="0"/>
  </rule>
</rsc_location>
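If I understand the suggestion correctly, the rule above only matches when a pingd value actually exists and is <= 0; if the attribute is never set on a node (for instance when pingd is not running there), neither test fires. A sketch of what I believe the combined rule should look like, with boolean_op="or" so either test is sufficient (the expression ids here are placeholders I made up):

```xml
<rsc_location id="myresource:connected" rsc="My-DRBD-MySQL-group">
  <rule id="prefered_myresource:connected" score="-INFINITY" boolean_op="or">
    <!-- matches when the pingd attribute was never set on the node -->
    <expression id="prefered_myresource:connected-expr-01"
                attribute="pingd" operation="not_defined"/>
    <!-- matches when pingd is set but all ping targets are unreachable -->
    <expression id="prefered_myresource:connected-expr-02"
                attribute="pingd" operation="lte" value="0"/>
  </rule>
</rsc_location>
```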
Thanks,
Chase
>>> hunvagyok at freemail Aug 28, 2008, 12:03 PM >>>
I think that requires different constraints. You can find fairly decent
examples here:
http://www.linux-ha.org/v2/faq/pingd
I chose to use the one which tells the cluster NOT to run any resource
on a node where pingd connectivity is lost. It fails over instantly
when I pull the cord...
<rsc_location id="myresource:connected" rsc="myresource">
  <rule id="myresource:connected-rule-01" score="-INFINITY"
        boolean_op="or">
    <expression id="myresource:connected-rule-01-expr-01"
                attribute="pingd" operation="not_defined"/>
    <expression id="myresource:connected-rule-01-expr-02"
                attribute="pingd" operation="lte" value="0"/>
  </rule>
</rsc_location>
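For completeness: the rule above assumes something is actually populating the pingd node attribute, e.g. a cloned ocf pingd resource. A minimal sketch of such a clone; the ids, interval, and the multiplier/dampen values are placeholders you should adapt (and check against your Heartbeat version's DTD):

```xml
<clone id="pingd-clone">
  <primitive id="pingd-child" class="ocf" provider="heartbeat" type="pingd">
    <instance_attributes id="pingd-child-attrs">
      <attributes>
        <!-- each reachable ping node adds "multiplier" to the pingd score -->
        <nvpair id="pingd-multiplier" name="multiplier" value="100"/>
        <!-- delay before reacting to connectivity changes -->
        <nvpair id="pingd-dampen" name="dampen" value="5s"/>
      </attributes>
    </instance_attributes>
    <operations>
      <op id="pingd-child-monitor" name="monitor" interval="10s"/>
    </operations>
  </primitive>
</clone>
```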
Ivan
>>> "Chase Simms" <[EMAIL PROTECTED]> 8/28/2008 1:53 PM >>>
I am trying to make sure the node with the best connectivity is the
active node. Mainly, I just want to make sure that if the public
interface on the active node goes down, the cluster will fail over. I
tried to make pingd a requirement for DRBD and MySQL to run; they
would not start if pingd was not active. But once everything was
running, I could pull the plug on the public interface and pingd would
not fail. I had pingd set up to monitor and restart on failure.
So now I am trying to use pingd clones to monitor both connections and
choose the best one. Both clones will start up but continuously fail
and restart after a few minutes. This causes all the dependent
applications to go into rolling restarts.
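To be concrete about the kind of setup I mean: the pingd clones track ping targets declared in ha.cf, roughly along these lines (the addresses, multiplier, and dampening values here are illustrative, not my exact files):

```
# ha.cf: declare the ping targets heartbeat should track
ping 10.0.0.1
ping 10.0.1.1
# alternative to a cloned CIB resource: run pingd directly from
# heartbeat, setting the pingd attribute from the reachable targets
# (-m score multiplier per target, -d dampening delay)
respawn hacluster /usr/lib/heartbeat/pingd -m 100 -d 5s
```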
I attached the debug log, ha.cf, and cib.xml.
Would anyone be able to point me in the direction of config files for
a two-node cluster that successfully fails over when the public
interface goes down? I'd like to see all three files that work
together, rather than snippets from each that may or may not work
together.
Thank you.
Chase
The information in this email is intended for the sole use of the
addressees and may be confidential and subject to protection under the
law. If you are not the intended recipient, you are hereby notified
that
any distribution or copying of this email is strictly prohibited. If
you
have received this message in error, please reply and delete your
copy.
_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems