On Wed, 05 Dec 2012 11:30:10 +0100 Andreas Kurz <[email protected]>
wrote:
> On 12/04/2012 08:56 PM, Emmanuel Saint-Joanis wrote:
> > This setup might do the trick :
> > 
> > primitive srv-mysql lsb:mysql \
> > op monitor interval="120" \
> > op start interval="0" timeout="60" on-fail="restart" \
> > op stop interval="0" timeout="60s" on-fail="ignore"
> > 
> > primitive srv-websphere lsb:websphere \
> > op monitor interval="120" \
> > op start interval="0" timeout="60" on-fail="restart" \
> > op stop interval="0" timeout="60s" on-fail="ignore"
> > 
> > ms ms-drbd-data drbd-data \
> > meta master-max="1" master-node-max="1" clone-max="2" notify="true"
> > target-role="Master"
> > 
> > colocation mysql-only-slave -inf: srv-mysql ms-drbd-data:Master
> 
> with a score of -inf this would prevent srv-mysql from ever running on
> that node "forever" ... even in case of a node failure ... using a
> negative but finite score would still allow them to run together when
> a node fails.

I think for negative scores the actual value has to be chosen very
carefully. A value that is too low will prevent your service from
failing over, just as -inf does. It largely depends on the other
positive factors: once the combined score for a node comes out
negative, the service will never run on that node.
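For illustration, a finite negative colocation score (the -1000 here is
just an example value, not from the thread) would look like:

  colocation mysql-avoid-master -1000: srv-mysql ms-drbd-data:Master

This keeps srv-mysql off the DRBD master node while both nodes are
healthy, but still allows it to run there when the other node dies,
provided the positive scores for that node outweigh the -1000.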

Better to discard the anti-colocation and use two positive location
constraints instead. When that score is higher than your stickiness,
services will fail back to their preferred node once all is well. When
the score is below the stickiness, your services will stay where they
are even after the second node comes back online.
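Something along these lines, where the node names node1/node2 and the
score values are purely illustrative:

  location loc-mysql srv-mysql 200: node1
  location loc-websphere srv-websphere 200: node2
  rsc_defaults resource-stickiness="100"

With a location score of 200 and stickiness of 100, resources fail back
to their preferred node; swap the values (stickiness 200, score 100)
and they stay where they are after a failover.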

Have fun,

Arnold


_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
