Hi Jakub,

 I don't know much about STONITH/IPMI, but I noticed the following in your 
cib.xml:
...
  <nvpair id="ipmistonith0_userid" name="userid" value="stoop"/>
...
  <nvpair id="ipmistonith1_userid" name="userid" value="stoop"/>
                                                        ^^^
-> look at this: both STONITH resources use the same userid value, "stoop".
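
For an IPMI STONITH pair, each resource usually carries the credentials of the BMC it is supposed to fence, and the two accounts are often different per node. A sketch of what distinct entries could look like (the nvpair ids match the cib.xml above; the userid values "node1admin" and "node2admin" are made-up placeholders, check them against your actual BMC accounts):

```xml
<!-- Sketch only: the userid values below are assumed placeholders -->
<nvpair id="ipmistonith0_userid" name="userid" value="node1admin"/>
...
<nvpair id="ipmistonith1_userid" name="userid" value="node2admin"/>
```

If both BMCs really do use the same account, the duplicate value is harmless; otherwise one of the two STONITH resources will fail to authenticate.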


HTH

Nikita Michalko


On Tuesday, 16 December 2008 13:58, Jakub Kuźniar wrote:
> Thank you very much for the help.
>
> > So the problem is not that they migrated, but that this caused a
> > restart of the resources already on the second node?
>
> Yes, that's right. There is an unnecessary restart of resources on the second
> node.
>
> > If so, add the interleave meta attribute to the clones and set it to
> > true.
>
> I have added this meta attribute to the clone resources Xenconfig_cloneset and
> Xendata_cloneset, and I attached a new version of the CIB with the attribute
> added. But there was no change: resources on the second node are still
> restarted whenever the first node is switched into standby mode. I have also
> erased the CIB content and added the resource configuration again, with the
> same result.
>
>
> Jakub
>
> On Tuesday 16 of December 2008 11:48:07 Andrew Beekhof wrote:
> > On Tue, Dec 16, 2008 at 00:33, Jakub Kuźniar <[email protected]> wrote:
> > > Hi everybody,
> > >
> > > I have recently upgraded from heartbeat 2.0.8 to heartbeat 2.1.4. I'm
> > > running a two-node Xen cluster using OCFS2. With heartbeat 2.0.8 my
> > > configuration worked fine, but after the upgrade strange things started
> > > to happen. When one of the nodes was switched to standby mode, the
> > > virtual machines were live-migrated to the second node, forcing a
> > > restart of the virtual machines already running on the second node.
> >
> > So the problem is not that they migrated, but that this caused a
> > restart of the resources already on the second node?
> >
> > If so, add the interleave meta attribute to the clones and set it to
> > true.
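
A quick way to check which clones in a cib.xml still lack the attribute is a few lines of ElementTree. This is a sketch, not part of heartbeat itself: the embedded CIB fragment is a made-up stand-in, and the element layout (clone/meta_attributes/attributes/nvpair) follows the heartbeat 2.x schema, so adjust the lookup to your actual file:

```python
import xml.etree.ElementTree as ET

# Made-up stand-in for a heartbeat 2.x CIB fragment (real files are larger)
cib = """
<resources>
  <clone id="Xenconfig_cloneset">
    <meta_attributes id="m1">
      <attributes>
        <nvpair id="n1" name="interleave" value="true"/>
      </attributes>
    </meta_attributes>
  </clone>
  <clone id="Xendata_cloneset"/>
</resources>
"""

def clones_missing_interleave(xml_text):
    """Return ids of <clone> elements without an interleave nvpair set to true."""
    root = ET.fromstring(xml_text)
    missing = []
    for clone in root.iter("clone"):
        ok = any(nv.get("name") == "interleave" and nv.get("value") == "true"
                 for nv in clone.iter("nvpair"))
        if not ok:
            missing.append(clone.get("id"))
    return missing

print(clones_missing_interleave(cib))  # -> ['Xendata_cloneset']
```

Running it against the real cib.xml (via `ET.parse`) would show at a glance whether the attribute actually landed on both clone sets after the edit.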
> >
> > > When the first node was then switched back online, the failed-over VMs
> > > were migrated back to the first node, once again causing a restart of
> > > the VMs running on the second node. This behaviour seems strange to me;
> > > I am probably making some mistake in the configuration and would be very
> > > grateful for any help. I am also attaching the configuration part of the
> > > CIB and the ha.cf file.
> > >
> > > Thank you for response
> > >
> > > Jakub
> > >
> > >
> > >
> > >
> > > _______________________________________________
> > > Linux-HA mailing list
> > > [email protected]
> > > http://lists.linux-ha.org/mailman/listinfo/linux-ha
> > > See also: http://linux-ha.org/ReportingProblems
