Hi Ivan,

I have followed Andrew's Ordering_explained.pdf and looked through the mailing
list archive. I have also added score="0" to the rsc_order constraint
definitions. After a lot of extensive testing, the cluster now looks stable:
there are no unnecessary resource restarts when switching a node to standby
mode. Thank you for the solution!
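For reference, the change amounted to something like the following in the cib (the resource names here are just illustrative, not my actual configuration):

```xml
<!-- rsc_order with an explicit score="0": the ordering is advisory,
     so stopping a clone instance on the standby node no longer forces
     a restart of the dependent resources on the other node -->
<rsc_order id="order_storage_before_vm" from="vm1"
           to="Xendata_clone" type="after" score="0"/>
```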

> I had this even in 2.1.3 and Lars provided the solution above which
> worked instantly. Whether it's still a bug or not? Decide yourself but
> looking at Andrew's Ordering_explained.pdf score=0 is not actually a bad
> idea and I think should be in CRM anyway.

I think it's not a bug, it's a feature :)
Maybe just not well documented.

Andrew, Ivan and Nikita, thank you for taking the time to track down my
problem.

I've got one more question about standby mode. When one of the cluster nodes
is in standby, crm_verify -LV returns warnings about the resources stopped on
this node (no errors or failed actions):

WARN: native_color: Resource ipmistonith1 cannot run anywhere
WARN: native_color: Resource Xenconfig_clone:0 cannot run anywhere
WARN: native_color: Resource Xendata_clone:0 cannot run anywhere
Warnings found during check: config may not be valid

I am not sure whether I should treat these warnings as information only, or
whether they indicate an error in my configuration.

Best regards

Jakub


On Tuesday 16 of December 2008 18:52:05 Ivan wrote:
> Hi,
>
> Not sure you need bugzilla, it's already been discussed on the list.
> What you need is this:
>
> add 'score="0"' to the rsc_order
> constraints for all your resources.
>
> I had this even in 2.1.3 and Lars provided the solution above which
> worked instantly. Whether it's still a bug or not? Decide yourself but
> looking at Andrew's Ordering_explained.pdf score=0 is not actually a bad
> idea and I think should be in CRM anyway.
>
> Regards,
> Ivan
>
> On Tue, 2008-12-16 at 11:48 +0100, Andrew Beekhof wrote:
> > On Tue, Dec 16, 2008 at 00:33, Jakub Kuźniar <[email protected]> wrote:
> > > Hi everybody,
> > >
> > > I have recently updated heartbeat 2.0.8 to heartbeat 2.1.4. I'am
> > > running two node Xen cluster using OCFS2. With heartbeat 2.0.8 my
> > > configuration worked fine, but after upgrade strange things started to
> > > happen. When one of the node was switched to stand-by mode the virtual
> > > machines were migrated (live) to the second node, forcing the restart
> > > of the virtual machines running on second node.
> >
> > So the problem is not that they migrated, but that this caused a
> > restart of the resources already on the second node?
> >
> > If so, add the interleave meta attribute to the clones and set it to
> > true.
> >
> > > When the first node was then switched back online, the failed over VMs,
> > > where migrated back to first node, once again causing the restart of
> > > VMs running on the second node. This behaviour seems strange to me. I'
> > > am probably making some mistake in the configuration. I would be very
> > > grateful for any help. I attach also configuration part of cib and
> > > ha.cf file.
> > >
> > > Thank you for response
> > >
> > > Jakub
> > >
> > >
> > >
> > >
> > > _______________________________________________
> > > Linux-HA mailing list
> > > [email protected]
> > > http://lists.linux-ha.org/mailman/listinfo/linux-ha
> > > See also: http://linux-ha.org/ReportingProblems
> >
>

