Hi,
I realized it can be used in standard mode only after you pointed that out.
Anyway, writing a custom agent always gives me a good understanding of
resource start/stop/monitor actions, etc.
My custom agent still has a lot of “hard coded” values, but it is meant for
studying and understanding purposes rather than for a production machine.
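
For anyone following along, here is a bare-bones OCF-style skeleton showing just the start/stop/monitor contract. It is only an illustration, not the attached db2server agent; the instance parameter name and the db2sysc process check are assumptions.

#!/bin/sh
# Illustration only: a minimal OCF-style agent skeleton (NOT the attached
# db2server agent). Pacemaker passes the action as $1 and resource
# parameters as OCF_RESKEY_* environment variables.
INSTANCE="${OCF_RESKEY_instance:-db2inst1}"   # assumed parameter name

db2_running() {
    # Assumed check: is the DB2 engine process (db2sysc) running as the instance owner?
    pgrep -u "$INSTANCE" db2sysc >/dev/null 2>&1
}

case "$1" in
    start)
        db2_running || su - "$INSTANCE" -c "db2start"
        db2_running && exit 0 || exit 1   # 0 = OCF_SUCCESS, 1 = OCF_ERR_GENERIC
        ;;
    stop)
        db2_running && su - "$INSTANCE" -c "db2stop force"
        exit 0                            # stop must report success if already stopped
        ;;
    monitor)
        db2_running && exit 0 || exit 7   # 7 = OCF_NOT_RUNNING
        ;;
    meta-data)
        exit 0                            # a real agent prints its XML metadata here
        ;;
    *)
        exit 3                            # OCF_ERR_UNIMPLEMENTED
        ;;
esac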

Please find the attachments.

Thanks,
Harish P

Sent from Mail for Windows 10

From: Reid Wahl <[email protected]>
Sent: 02 December 2020 15:55
To: Cluster Labs - All topics related to open-source clustering welcomed <[email protected]>
Subject: Re: [ClusterLabs] Question on restart of resource during fail over

On Wed, Dec 2, 2020 at 2:16 AM Harishkumar Pathangay
<[email protected]> wrote:
>
> Just got the issue resolved.

Nice work!

> In any case, I will send the cib.xml and my custom db2 resource agent.
>
> The existing resource agent is for an HADR database, where there are two
> databases: one running as primary and the other as standby.

HADR is only one option. There's also a standard mode:
  - https://github.com/oalbrigt/resource-agents/blob/master/heartbeat/db2#L64-L69

I don't know much about DB2, so I'm not sure whether that would meet
your needs. Based on the metadata, standard mode appears to manage a
single instance (with the databases you select) on one node at a time.
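
If standard mode does fit, the resource would be configured as a plain primitive rather than a promotable (master/slave) clone. As an untested sketch (the instance and dblist parameter names come from the agent's metadata; the values here are placeholders):

pcs resource create db2_standard ocf:heartbeat:db2 \
    instance=db2inst1 dblist="SAMPLE" \
    op monitor interval=30s timeout=60s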

> I have created a script which will start/stop DB2 instances with a single
> database on a shared logical volume [HA-LVM] exclusively activated on one node.
>
>
>
> Will mail you shortly.
>
>
>
> Thanks,
>
> Harish P
>
>
>
> Sent from Mail for Windows 10
>
>
>
> From: Reid Wahl
> Sent: 02 December 2020 12:46
> To: Cluster Labs - All topics related to open-source clustering welcomed
> Subject: Re: [ClusterLabs] Question on restart of resource during fail over
>
>
>
> Can you share your pacemaker configuration (i.e.,
> /var/lib/pacemaker/cib/cib.xml)? If you're concerned about quorum,
> then also share your /etc/corosync/corosync.conf just in case.
>
> Also there's a db2 resource agent already written, if you're interested:
> - https://github.com/oalbrigt/resource-agents/blob/master/heartbeat/db2
>
> On Tue, Dec 1, 2020 at 9:50 AM Harishkumar Pathangay
> <[email protected]> wrote:
> >
> > Hi,
> >
> > I have a DB2 resource agent that I scripted myself.
> >
> > It is working fine with a small glitch.
> >
> >
> >
> > I have node1 and node2 in the cluster. No stonith is enabled, as I don't need
> > one. The environment is for learning purposes only.
> >
> >
> >
> > If node one is down [power off], it is starting the resource on the other
> > node, which is good. My custom resource agent is doing its job. Let us say
> > DB2 is running with PID 4567.
> >
> >
> >
> > Now, the original node which went down is back again. I issue “pcs cluster
> > start” on the node. The node comes online. The resource also stays on the
> > other node, which is again good. That way unnecessary movement of resources
> > is avoided, exactly what I want. Good, but there is an issue.
> >
> > On the other node it is restarting the DB2 resource, so the PID of DB2
> > changes to 3452.
> >
> > This is an unnecessary restart of the resource, which I want to avoid.
> >
> > How do I get this working?
> >
> >
> >
> > I am very new to Pacemaker clustering.
> >
> > Please help me so that I can create a working DB2 cluster for my learning
> > purposes.
> >
> > Also, I will be blogging on my YouTube channel, DB2LUWACADEMY.
> >
> > Any help is of great significance to me.
> >
> >
> >
> > I think it could be a quorum issue, but I don't know for sure, because there
> > are only two nodes and the DB2 resource needs to be active on only one node.
> >
> >
> >
> > How do I get this configured?
> >
> >
> >
> > Thanks.
> >
> > Harish P
> >
> >
> >
> >
> >
> > Sent from Mail for Windows 10
> >
> >
> >
>
>
>
> --
> Regards,
>
> Reid Wahl, RHCA
> Senior Software Maintenance Engineer, Red Hat
> CEE - Platform Support Delivery - ClusterHA
>
> _______________________________________________
> Manage your subscription:
> https://lists.clusterlabs.org/mailman/listinfo/users
>
> ClusterLabs home: https://www.clusterlabs.org/



--
Regards,

Reid Wahl, RHCA
Senior Software Maintenance Engineer, Red Hat
CEE - Platform Support Delivery - ClusterHA

_______________________________________________
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/

<cib crm_feature_set="3.0.14" validate-with="pacemaker-2.10" epoch="26" num_updates="0" admin_epoch="0" cib-last-written="Wed Dec  2 15:35:47 2020" update-origin="tiger" update-client="cibadmin" update-user="root" have-quorum="1" dc-uuid="2">
  <configuration>
    <crm_config>
      <cluster_property_set id="cib-bootstrap-options">
        <nvpair id="cib-bootstrap-options-have-watchdog" name="have-watchdog" value="false"/>
        <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.23-1.el7-9acf116022"/>
        <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync"/>
        <nvpair id="cib-bootstrap-options-cluster-name" name="cluster-name" value="db2hacl"/>
        <nvpair id="cib-bootstrap-options-stonith-enabled" name="stonith-enabled" value="false"/>
        <nvpair id="cib-bootstrap-options-no-quorum-policy" name="no-quorum-policy" value="ignore"/>
      </cluster_property_set>
    </crm_config>
    <nodes>
      <node id="1" uname="tiger">
        <instance_attributes id="nodes-1"/>
      </node>
      <node id="2" uname="dragon">
        <instance_attributes id="nodes-2"/>
      </node>
    </nodes>
    <resources>
      <group id="halvm">
        <primitive class="ocf" id="ClusterIP" provider="heartbeat" type="IPaddr2">
          <instance_attributes id="ClusterIP-instance_attributes">
            <nvpair id="ClusterIP-instance_attributes-cidr_netmask" name="cidr_netmask" value="24"/>
            <nvpair id="ClusterIP-instance_attributes-ip" name="ip" value="192.168.96.101"/>
          </instance_attributes>
          <operations>
            <op id="ClusterIP-monitor-interval-30s" interval="30s" name="monitor"/>
            <op id="ClusterIP-start-interval-0s" interval="0s" name="start" timeout="20s"/>
            <op id="ClusterIP-stop-interval-0s" interval="0s" name="stop" timeout="20s"/>
          </operations>
        </primitive>
        <primitive class="ocf" id="halvmd" provider="heartbeat" type="LVM">
          <instance_attributes id="halvmd-instance_attributes">
            <nvpair id="halvmd-instance_attributes-exclusive" name="exclusive" value="true"/>
            <nvpair id="halvmd-instance_attributes-volgrpname" name="volgrpname" value="vgcluster"/>
          </instance_attributes>
          <operations>
            <op id="halvmd-methods-interval-0s" interval="0s" name="methods" timeout="5s"/>
            <op id="halvmd-monitor-interval-10s" interval="10s" name="monitor" timeout="30s"/>
            <op id="halvmd-start-interval-0s" interval="0s" name="start" timeout="30s"/>
            <op id="halvmd-stop-interval-0s" interval="0s" name="stop" timeout="30s"/>
          </operations>
        </primitive>
        <primitive class="ocf" id="clxfs" provider="heartbeat" type="Filesystem">
          <instance_attributes id="clxfs-instance_attributes">
            <nvpair id="clxfs-instance_attributes-device" name="device" value="/dev/vgcluster/clxfs"/>
            <nvpair id="clxfs-instance_attributes-directory" name="directory" value="/db2data"/>
            <nvpair id="clxfs-instance_attributes-fstype" name="fstype" value="xfs"/>
          </instance_attributes>
          <operations>
            <op id="clxfs-monitor-interval-20s" interval="20s" name="monitor" timeout="40s"/>
            <op id="clxfs-notify-interval-0s" interval="0s" name="notify" timeout="60s"/>
            <op id="clxfs-start-interval-0s" interval="0s" name="start" timeout="60s"/>
            <op id="clxfs-stop-interval-0s" interval="0s" name="stop" timeout="60s"/>
          </operations>
        </primitive>
        <primitive class="ocf" id="db2inst" provider="db2luwacademy" type="db2server">
          <meta_attributes id="db2inst-meta_attributes"/>
          <instance_attributes id="db2inst-instance_attributes">
            <nvpair id="db2inst-instance_attributes-instance" name="instance" value="db2inst1"/>
          </instance_attributes>
          <operations>
            <op id="db2inst-monitor-interval-30s" interval="30s" name="monitor" timeout="30s"/>
            <op id="db2inst-start-interval-0s" interval="0s" name="start" timeout="60s"/>
            <op id="db2inst-stop-interval-0s" interval="0s" name="stop" timeout="60s"/>
          </operations>
        </primitive>
      </group>
    </resources>
    <constraints>
      <rsc_colocation id="colocation_set_dthdcs" score="INFINITY">
        <resource_set id="colocation_set_dthdcs_set">
          <resource_ref id="db2inst"/>
          <resource_ref id="halvmd"/>
          <resource_ref id="clxfs"/>
          <resource_ref id="ClusterIP"/>
        </resource_set>
      </rsc_colocation>
    </constraints>
    <rsc_defaults>
      <meta_attributes id="rsc_defaults-options">
        <nvpair id="rsc_defaults-options-resource-stickiness" name="resource-stickiness" value="100"/>
      </meta_attributes>
    </rsc_defaults>
  </configuration>
</cib>
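
For reference, the configuration above corresponds roughly to the following pcs commands. This is a reconstruction from the cib.xml rather than the exact command history, so treat it as a sketch:

pcs property set stonith-enabled=false no-quorum-policy=ignore
pcs resource defaults resource-stickiness=100

pcs resource create ClusterIP ocf:heartbeat:IPaddr2 \
    ip=192.168.96.101 cidr_netmask=24 \
    op monitor interval=30s --group halvm
pcs resource create halvmd ocf:heartbeat:LVM \
    volgrpname=vgcluster exclusive=true \
    op monitor interval=10s timeout=30s --group halvm
pcs resource create clxfs ocf:heartbeat:Filesystem \
    device=/dev/vgcluster/clxfs directory=/db2data fstype=xfs \
    op monitor interval=20s timeout=40s --group halvm
pcs resource create db2inst ocf:db2luwacademy:db2server \
    instance=db2inst1 \
    op monitor interval=30s timeout=30s --group halvm

pcs constraint colocation set db2inst halvmd clxfs ClusterIP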

Attachment: db2server
Description: db2server

_______________________________________________
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/
