How did you resolve the issue? I see a problem in the CIB, and it may
be related to the issue you encountered. Even if not, it may cause
other issues later.
You have the following resource group:
<group id="halvm">
  <primitive class="ocf" id="ClusterIP" provider="heartbeat" type="IPaddr2"/>
  <primitive class="ocf" id="halvmd" provider="heartbeat" type="LVM"/>
  <primitive class="ocf" id="clxfs" provider="heartbeat" type="Filesystem"/>
  <primitive class="ocf" id="db2inst" provider="db2luwacademy" type="db2server"/>
</group>
You have the following colocation constraint set:
<rsc_colocation id="colocation_set_dthdcs" score="INFINITY">
  <resource_set id="colocation_set_dthdcs_set">
    <resource_ref id="db2inst"/>
    <resource_ref id="halvmd"/>
    <resource_ref id="clxfs"/>
    <resource_ref id="ClusterIP"/>
  </resource_set>
</rsc_colocation>
The group says "place ClusterIP, then place halvmd, then place clxfs,
then place db2inst".
The constraint set says "place db2inst, then place halvmd, then place
clxfs, then place ClusterIP"[1].
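Note that these are opposite orders. Per the cited example[1], the set
is roughly equivalent to the following chain of pairwise constraints (a
sketch; the "coloc-*" IDs are illustrative):

<rsc_colocation id="coloc-1" rsc="halvmd" with-rsc="db2inst" score="INFINITY"/>
<rsc_colocation id="coloc-2" rsc="clxfs" with-rsc="halvmd" score="INFINITY"/>
<rsc_colocation id="coloc-3" rsc="ClusterIP" with-rsc="clxfs" score="INFINITY"/>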
A resource group is already an implicit set of ordering and colocation
constraints[2]. If you're happy with the order configured in the
resource group, then you should remove the colocation_set_dthdcs
constraint.
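If you remove it with pcs (your earlier "pcs cluster start" suggests
you're using it), something like this should work; verify the
constraint ID first with "pcs constraint --full":

pcs constraint remove colocation_set_dthdcs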
[1] Example 5.15. Equivalent colocation chain expressed using
resource_set
(https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html-single/Pacemaker_Explained/index.html#idm46061107170640)
[2] 10.1. Groups - A Syntactic Shortcut
(https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html-single/Pacemaker_Explained/index.html#group-resources)
On Wed, Dec 2, 2020 at 4:01 AM Harishkumar Pathangay
<[email protected]> wrote:
>
> Hi,
>
> I realized it can be used in standard mode only after you pointed that out.
>
> Anyway, writing a custom agent always gives me a good understanding of
> resource start/stop/monitor operations, etc.
>
> My custom agent still has a lot of “hard-coded” values, but it is meant
> for studying and understanding rather than for use on a production
> machine.
>
>
>
> Please find attachments.
>
>
>
> Thanks,
>
> Harish P
>
>
>
> Sent from Mail for Windows 10
>
>
>
> From: Reid Wahl
> Sent: 02 December 2020 15:55
> To: Cluster Labs - All topics related to open-source clustering welcomed
> Subject: Re: [ClusterLabs] Question on restart of resource during fail over
>
>
>
> On Wed, Dec 2, 2020 at 2:16 AM Harishkumar Pathangay
> <[email protected]> wrote:
> >
> > Just got the issue resolved.
>
> Nice work!
>
> > In any case, I will send the cib.xml and my custom db2 resource agent.
> >
> > The existing resource agent is for an HADR database, where there are two
> > databases, one running as primary and the other as standby.
>
> HADR is only one option. There's also a standard mode:
> - https://github.com/oalbrigt/resource-agents/blob/master/heartbeat/db2#L64-L69
>
> I don't know much about DB2, so I'm not sure whether that would meet
> your needs. Based on the metadata, standard mode appears to manage a
> single instance (with the databases you select) on one node at a time.
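> For illustration only, a minimal sketch of creating a standard-mode
> resource with pcs; the instance name "db2inst1" and database "SAMPLE"
> are hypothetical, so check the agent's metadata for the exact parameters:
>
> pcs resource create db2-standard ocf:heartbeat:db2 \
>     instance="db2inst1" dblist="SAMPLE" \
>     op monitor interval=30s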
>
> > I have created a script which will start/stop db2 instances with a single
> > database on a shared logical volume [HA-LVM] exclusively activated on one
> > node.
> >
> >
> >
> > Will mail you shortly.
> >
> >
> >
> > Thanks,
> >
> > Harish P
> >
> >
> >
> > Sent from Mail for Windows 10
> >
> >
> >
> > From: Reid Wahl
> > Sent: 02 December 2020 12:46
> > To: Cluster Labs - All topics related to open-source clustering welcomed
> > Subject: Re: [ClusterLabs] Question on restart of resource during fail over
> >
> >
> >
> > Can you share your pacemaker configuration (i.e.,
> > /var/lib/pacemaker/cib/cib.xml)? If you're concerned about quorum,
> > then also share your /etc/corosync/corosync.conf just in case.
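> > On the quorum point: a two-node cluster normally relies on corosync's
> > two_node mode rather than ordinary majority quorum. A sketch of the
> > relevant corosync.conf stanza:
> >
> > quorum {
> >     provider: corosync_votequorum
> >     two_node: 1
> > }
> >
> > (two_node: 1 implicitly enables wait_for_all, so the surviving node
> > retains quorum when its peer fails.)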
> >
> > Also there's a db2 resource agent already written, if you're interested:
> > - https://github.com/oalbrigt/resource-agents/blob/master/heartbeat/db2
> >
> > On Tue, Dec 1, 2020 at 9:50 AM Harishkumar Pathangay
> > <[email protected]> wrote:
> > >
> > > Hi,
> > >
> > > I have a DB2 resource agent that I scripted myself.
> > >
> > > It is working fine, with one small glitch.
> > >
> > >
> > >
> > > I have node1 and node2 in the cluster. No stonith is enabled, as I don't
> > > need one. The environment is for learning purposes only.
> > >
> > >
> > >
> > > If node one is down [power off], it is starting the resource on the other
> > > node, which is good. My custom resource agent is doing its job. Let us say
> > > DB2 is running with pid 4567.
> > >
> > >
> > >
> > > Now, the original node which went down is back again. I issue “pcs
> > > cluster start” on the node. The node is online. The resource also stays on
> > > the other node, which is again good. That way unnecessary movement of
> > > resources is avoided, exactly what I want. Good, but there is an issue.
> > >
> > > On the other node it is restarting the DB2 resource, so the pid of DB2
> > > changes to 3452.
> > >
> > > This is an unnecessary restart of the resource, which I want to avoid.
> > >
> > > How do I get this working?
> > >
> > >
> > >
> > > I am very new to Pacemaker clustering.
> > >
> > > Please help me so that I can create a working DB2 cluster for my learning
> > > purposes.
> > >
> > > Also, I will be blogging on my YouTube channel, DB2LUWACADEMY.
> > >
> > > Any help is of great significance to me.
> > >
> > >
> > >
> > > I think it could be a quorum issue, but I don't know for sure, because
> > > there are only two nodes and the DB2 resource needs to be active on only
> > > one node.
> > >
> > >
> > >
> > > How do I get this configured?
> > >
> > >
> > >
> > > Thanks.
> > >
> > > Harish P
> > >
> > >
> > >
> > >
> > >
> > > Sent from Mail for Windows 10
> > >
> > >
> > >
> >
> >
> >
> > --
> > Regards,
> >
> > Reid Wahl, RHCA
> > Senior Software Maintenance Engineer, Red Hat
> > CEE - Platform Support Delivery - ClusterHA
> >
>
>
>
> --
> Regards,
>
> Reid Wahl, RHCA
> Senior Software Maintenance Engineer, Red Hat
> CEE - Platform Support Delivery - ClusterHA
>
--
Regards,
Reid Wahl, RHCA
Senior Software Maintenance Engineer, Red Hat
CEE - Platform Support Delivery - ClusterHA
_______________________________________________
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users
ClusterLabs home: https://www.clusterlabs.org/